Test Report: KVM_Linux_crio 19370

dd51e72d60a15da3a1a4a8c267729efa6313a896:2024-08-06:35671

Failed tests (29/326)

Order  Failed test  Duration (s)
43 TestAddons/parallel/Ingress 153.25
45 TestAddons/parallel/MetricsServer 299.68
54 TestAddons/StoppedEnableDisable 154.41
173 TestMultiControlPlane/serial/StopSecondaryNode 141.86
175 TestMultiControlPlane/serial/RestartSecondaryNode 59.92
177 TestMultiControlPlane/serial/RestartClusterKeepsNodes 376.43
180 TestMultiControlPlane/serial/StopCluster 141.63
240 TestMultiNode/serial/RestartKeepsNodes 327.81
242 TestMultiNode/serial/StopMultiNode 141.44
249 TestPreload 278.7
257 TestKubernetesUpgrade 464.53
329 TestStartStop/group/old-k8s-version/serial/FirstStart 283.66
354 TestStartStop/group/no-preload/serial/Stop 139.03
357 TestStartStop/group/embed-certs/serial/Stop 139.09
360 TestStartStop/group/default-k8s-diff-port/serial/Stop 139.11
361 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 12.39
363 TestStartStop/group/old-k8s-version/serial/DeployApp 0.46
364 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 116.74
365 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 12.38
367 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 12.38
371 TestStartStop/group/old-k8s-version/serial/SecondStart 709.44
372 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 544.42
373 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 544.44
374 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 544.45
375 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 543.66
376 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 474.52
377 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 460.37
378 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 328.79
379 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 145.28
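
Each failure below can be re-run in isolation with go test's -run filter. A minimal sketch, assuming minikube's test/integration layout (the addons_test.go and helpers_test.go files cited in the logs live there) and a kvm2/crio host comparable to this job; the -timeout value is illustrative:

	go test -v -timeout 60m ./test/integration -run "TestAddons/parallel/Ingress"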
TestAddons/parallel/Ingress (153.25s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-505457 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-505457 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-505457 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [7fced162-c099-4abd-a6ab-20626a136899] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [7fced162-c099-4abd-a6ab-20626a136899] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 12.004102497s
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-505457 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-505457 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m9.442245941s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-505457 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-505457 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.39.6
addons_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p addons-505457 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-linux-amd64 -p addons-505457 addons disable ingress-dns --alsologtostderr -v=1: (1.10516366s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-amd64 -p addons-505457 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-amd64 -p addons-505457 addons disable ingress --alsologtostderr -v=1: (7.723709683s)
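
The remote command's exit status 28 is most likely curl's timeout code (CURLE_OPERATION_TIMEDOUT) surfacing through minikube ssh, i.e. the request to the ingress controller never got a response within the retry window. A minimal sketch of re-checking the same path by hand, assuming the addons-505457 profile is still up; the --max-time value is illustrative and the pod selector is the one used by the wait step above:

	out/minikube-linux-amd64 -p addons-505457 ssh "curl -sv --max-time 30 -H 'Host: nginx.example.com' http://127.0.0.1/"
	kubectl --context addons-505457 -n ingress-nginx get pods -l app.kubernetes.io/component=controller -o wide
	kubectl --context addons-505457 -n ingress-nginx logs -l app.kubernetes.io/component=controller --tail=100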
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-505457 -n addons-505457
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-505457 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-505457 logs -n 25: (1.204081246s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-257964                                                                     | download-only-257964 | jenkins | v1.33.1 | 06 Aug 24 07:06 UTC | 06 Aug 24 07:06 UTC |
	| delete  | -p download-only-460250                                                                     | download-only-460250 | jenkins | v1.33.1 | 06 Aug 24 07:06 UTC | 06 Aug 24 07:06 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-030771 | jenkins | v1.33.1 | 06 Aug 24 07:06 UTC |                     |
	|         | binary-mirror-030771                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:41953                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-030771                                                                     | binary-mirror-030771 | jenkins | v1.33.1 | 06 Aug 24 07:06 UTC | 06 Aug 24 07:06 UTC |
	| addons  | enable dashboard -p                                                                         | addons-505457        | jenkins | v1.33.1 | 06 Aug 24 07:06 UTC |                     |
	|         | addons-505457                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-505457        | jenkins | v1.33.1 | 06 Aug 24 07:06 UTC |                     |
	|         | addons-505457                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-505457 --wait=true                                                                | addons-505457        | jenkins | v1.33.1 | 06 Aug 24 07:06 UTC | 06 Aug 24 07:09 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | addons-505457 addons disable                                                                | addons-505457        | jenkins | v1.33.1 | 06 Aug 24 07:09 UTC | 06 Aug 24 07:09 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-505457 addons disable                                                                | addons-505457        | jenkins | v1.33.1 | 06 Aug 24 07:09 UTC | 06 Aug 24 07:09 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| ip      | addons-505457 ip                                                                            | addons-505457        | jenkins | v1.33.1 | 06 Aug 24 07:09 UTC | 06 Aug 24 07:09 UTC |
	| addons  | addons-505457 addons disable                                                                | addons-505457        | jenkins | v1.33.1 | 06 Aug 24 07:09 UTC | 06 Aug 24 07:09 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-505457        | jenkins | v1.33.1 | 06 Aug 24 07:09 UTC | 06 Aug 24 07:09 UTC |
	|         | addons-505457                                                                               |                      |         |         |                     |                     |
	| ssh     | addons-505457 ssh curl -s                                                                   | addons-505457        | jenkins | v1.33.1 | 06 Aug 24 07:10 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| ssh     | addons-505457 ssh cat                                                                       | addons-505457        | jenkins | v1.33.1 | 06 Aug 24 07:10 UTC | 06 Aug 24 07:10 UTC |
	|         | /opt/local-path-provisioner/pvc-20492753-e965-4561-96c4-8676548fb006_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-505457 addons disable                                                                | addons-505457        | jenkins | v1.33.1 | 06 Aug 24 07:10 UTC | 06 Aug 24 07:10 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-505457 addons                                                                        | addons-505457        | jenkins | v1.33.1 | 06 Aug 24 07:10 UTC | 06 Aug 24 07:10 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-505457 addons                                                                        | addons-505457        | jenkins | v1.33.1 | 06 Aug 24 07:10 UTC | 06 Aug 24 07:10 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-505457        | jenkins | v1.33.1 | 06 Aug 24 07:10 UTC | 06 Aug 24 07:10 UTC |
	|         | -p addons-505457                                                                            |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-505457        | jenkins | v1.33.1 | 06 Aug 24 07:10 UTC | 06 Aug 24 07:10 UTC |
	|         | addons-505457                                                                               |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-505457        | jenkins | v1.33.1 | 06 Aug 24 07:10 UTC | 06 Aug 24 07:11 UTC |
	|         | -p addons-505457                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-505457 addons disable                                                                | addons-505457        | jenkins | v1.33.1 | 06 Aug 24 07:11 UTC | 06 Aug 24 07:11 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-505457 addons disable                                                                | addons-505457        | jenkins | v1.33.1 | 06 Aug 24 07:11 UTC | 06 Aug 24 07:11 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ip      | addons-505457 ip                                                                            | addons-505457        | jenkins | v1.33.1 | 06 Aug 24 07:12 UTC | 06 Aug 24 07:12 UTC |
	| addons  | addons-505457 addons disable                                                                | addons-505457        | jenkins | v1.33.1 | 06 Aug 24 07:12 UTC | 06 Aug 24 07:12 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-505457 addons disable                                                                | addons-505457        | jenkins | v1.33.1 | 06 Aug 24 07:12 UTC | 06 Aug 24 07:12 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/06 07:06:45
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0806 07:06:45.744920   21485 out.go:291] Setting OutFile to fd 1 ...
	I0806 07:06:45.745137   21485 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 07:06:45.745150   21485 out.go:304] Setting ErrFile to fd 2...
	I0806 07:06:45.745156   21485 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 07:06:45.745355   21485 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19370-13057/.minikube/bin
	I0806 07:06:45.745930   21485 out.go:298] Setting JSON to false
	I0806 07:06:45.746731   21485 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2952,"bootTime":1722925054,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0806 07:06:45.746787   21485 start.go:139] virtualization: kvm guest
	I0806 07:06:45.748749   21485 out.go:177] * [addons-505457] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0806 07:06:45.749991   21485 notify.go:220] Checking for updates...
	I0806 07:06:45.750014   21485 out.go:177]   - MINIKUBE_LOCATION=19370
	I0806 07:06:45.751355   21485 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0806 07:06:45.752770   21485 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19370-13057/kubeconfig
	I0806 07:06:45.754015   21485 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19370-13057/.minikube
	I0806 07:06:45.755246   21485 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0806 07:06:45.756420   21485 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0806 07:06:45.758058   21485 driver.go:392] Setting default libvirt URI to qemu:///system
	I0806 07:06:45.789749   21485 out.go:177] * Using the kvm2 driver based on user configuration
	I0806 07:06:45.791133   21485 start.go:297] selected driver: kvm2
	I0806 07:06:45.791148   21485 start.go:901] validating driver "kvm2" against <nil>
	I0806 07:06:45.791159   21485 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0806 07:06:45.791816   21485 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 07:06:45.791907   21485 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19370-13057/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0806 07:06:45.806561   21485 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0806 07:06:45.806630   21485 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0806 07:06:45.806906   21485 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0806 07:06:45.806941   21485 cni.go:84] Creating CNI manager for ""
	I0806 07:06:45.806951   21485 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0806 07:06:45.806965   21485 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0806 07:06:45.807054   21485 start.go:340] cluster config:
	{Name:addons-505457 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-505457 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 07:06:45.807199   21485 iso.go:125] acquiring lock: {Name:mke58d71de28babe305c5eb0c31c274e9713bb3f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 07:06:45.809402   21485 out.go:177] * Starting "addons-505457" primary control-plane node in "addons-505457" cluster
	I0806 07:06:45.810968   21485 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0806 07:06:45.811026   21485 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19370-13057/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0806 07:06:45.811047   21485 cache.go:56] Caching tarball of preloaded images
	I0806 07:06:45.811135   21485 preload.go:172] Found /home/jenkins/minikube-integration/19370-13057/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0806 07:06:45.811149   21485 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0806 07:06:45.811459   21485 profile.go:143] Saving config to /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/addons-505457/config.json ...
	I0806 07:06:45.811482   21485 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/addons-505457/config.json: {Name:mk989ff9cc1dd8ed035392b6461550a03a68e416 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 07:06:45.811677   21485 start.go:360] acquireMachinesLock for addons-505457: {Name:mkc602ac9c8cc3360dcc0fcb8104303837aaab61 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0806 07:06:45.811748   21485 start.go:364] duration metric: took 52.327µs to acquireMachinesLock for "addons-505457"
	I0806 07:06:45.811767   21485 start.go:93] Provisioning new machine with config: &{Name:addons-505457 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-505457 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0806 07:06:45.811836   21485 start.go:125] createHost starting for "" (driver="kvm2")
	I0806 07:06:45.813505   21485 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0806 07:06:45.813646   21485 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:06:45.813692   21485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:06:45.827860   21485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41229
	I0806 07:06:45.828259   21485 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:06:45.828786   21485 main.go:141] libmachine: Using API Version  1
	I0806 07:06:45.828810   21485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:06:45.829104   21485 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:06:45.829350   21485 main.go:141] libmachine: (addons-505457) Calling .GetMachineName
	I0806 07:06:45.829483   21485 main.go:141] libmachine: (addons-505457) Calling .DriverName
	I0806 07:06:45.829692   21485 start.go:159] libmachine.API.Create for "addons-505457" (driver="kvm2")
	I0806 07:06:45.829721   21485 client.go:168] LocalClient.Create starting
	I0806 07:06:45.829757   21485 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem
	I0806 07:06:45.913833   21485 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/cert.pem
	I0806 07:06:46.025286   21485 main.go:141] libmachine: Running pre-create checks...
	I0806 07:06:46.025308   21485 main.go:141] libmachine: (addons-505457) Calling .PreCreateCheck
	I0806 07:06:46.025762   21485 main.go:141] libmachine: (addons-505457) Calling .GetConfigRaw
	I0806 07:06:46.026193   21485 main.go:141] libmachine: Creating machine...
	I0806 07:06:46.026212   21485 main.go:141] libmachine: (addons-505457) Calling .Create
	I0806 07:06:46.026340   21485 main.go:141] libmachine: (addons-505457) Creating KVM machine...
	I0806 07:06:46.027455   21485 main.go:141] libmachine: (addons-505457) DBG | found existing default KVM network
	I0806 07:06:46.028114   21485 main.go:141] libmachine: (addons-505457) DBG | I0806 07:06:46.027973   21507 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ad0}
	I0806 07:06:46.028146   21485 main.go:141] libmachine: (addons-505457) DBG | created network xml: 
	I0806 07:06:46.028227   21485 main.go:141] libmachine: (addons-505457) DBG | <network>
	I0806 07:06:46.028247   21485 main.go:141] libmachine: (addons-505457) DBG |   <name>mk-addons-505457</name>
	I0806 07:06:46.028265   21485 main.go:141] libmachine: (addons-505457) DBG |   <dns enable='no'/>
	I0806 07:06:46.028289   21485 main.go:141] libmachine: (addons-505457) DBG |   
	I0806 07:06:46.028302   21485 main.go:141] libmachine: (addons-505457) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0806 07:06:46.028309   21485 main.go:141] libmachine: (addons-505457) DBG |     <dhcp>
	I0806 07:06:46.028324   21485 main.go:141] libmachine: (addons-505457) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0806 07:06:46.028346   21485 main.go:141] libmachine: (addons-505457) DBG |     </dhcp>
	I0806 07:06:46.028359   21485 main.go:141] libmachine: (addons-505457) DBG |   </ip>
	I0806 07:06:46.028381   21485 main.go:141] libmachine: (addons-505457) DBG |   
	I0806 07:06:46.028393   21485 main.go:141] libmachine: (addons-505457) DBG | </network>
	I0806 07:06:46.028402   21485 main.go:141] libmachine: (addons-505457) DBG | 
	I0806 07:06:46.033352   21485 main.go:141] libmachine: (addons-505457) DBG | trying to create private KVM network mk-addons-505457 192.168.39.0/24...
	I0806 07:06:46.096189   21485 main.go:141] libmachine: (addons-505457) Setting up store path in /home/jenkins/minikube-integration/19370-13057/.minikube/machines/addons-505457 ...
	I0806 07:06:46.096241   21485 main.go:141] libmachine: (addons-505457) Building disk image from file:///home/jenkins/minikube-integration/19370-13057/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
	I0806 07:06:46.096255   21485 main.go:141] libmachine: (addons-505457) DBG | private KVM network mk-addons-505457 192.168.39.0/24 created
	I0806 07:06:46.096275   21485 main.go:141] libmachine: (addons-505457) DBG | I0806 07:06:46.096120   21507 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19370-13057/.minikube
	I0806 07:06:46.096304   21485 main.go:141] libmachine: (addons-505457) Downloading /home/jenkins/minikube-integration/19370-13057/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19370-13057/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0806 07:06:46.361297   21485 main.go:141] libmachine: (addons-505457) DBG | I0806 07:06:46.361138   21507 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19370-13057/.minikube/machines/addons-505457/id_rsa...
	I0806 07:06:46.491500   21485 main.go:141] libmachine: (addons-505457) DBG | I0806 07:06:46.491355   21507 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19370-13057/.minikube/machines/addons-505457/addons-505457.rawdisk...
	I0806 07:06:46.491530   21485 main.go:141] libmachine: (addons-505457) DBG | Writing magic tar header
	I0806 07:06:46.491544   21485 main.go:141] libmachine: (addons-505457) DBG | Writing SSH key tar header
	I0806 07:06:46.491557   21485 main.go:141] libmachine: (addons-505457) DBG | I0806 07:06:46.491467   21507 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19370-13057/.minikube/machines/addons-505457 ...
	I0806 07:06:46.491631   21485 main.go:141] libmachine: (addons-505457) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19370-13057/.minikube/machines/addons-505457
	I0806 07:06:46.491668   21485 main.go:141] libmachine: (addons-505457) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19370-13057/.minikube/machines
	I0806 07:06:46.491681   21485 main.go:141] libmachine: (addons-505457) Setting executable bit set on /home/jenkins/minikube-integration/19370-13057/.minikube/machines/addons-505457 (perms=drwx------)
	I0806 07:06:46.491695   21485 main.go:141] libmachine: (addons-505457) Setting executable bit set on /home/jenkins/minikube-integration/19370-13057/.minikube/machines (perms=drwxr-xr-x)
	I0806 07:06:46.491705   21485 main.go:141] libmachine: (addons-505457) Setting executable bit set on /home/jenkins/minikube-integration/19370-13057/.minikube (perms=drwxr-xr-x)
	I0806 07:06:46.491719   21485 main.go:141] libmachine: (addons-505457) Setting executable bit set on /home/jenkins/minikube-integration/19370-13057 (perms=drwxrwxr-x)
	I0806 07:06:46.491733   21485 main.go:141] libmachine: (addons-505457) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0806 07:06:46.491748   21485 main.go:141] libmachine: (addons-505457) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0806 07:06:46.491764   21485 main.go:141] libmachine: (addons-505457) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19370-13057/.minikube
	I0806 07:06:46.491774   21485 main.go:141] libmachine: (addons-505457) Creating domain...
	I0806 07:06:46.491790   21485 main.go:141] libmachine: (addons-505457) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19370-13057
	I0806 07:06:46.491802   21485 main.go:141] libmachine: (addons-505457) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0806 07:06:46.491815   21485 main.go:141] libmachine: (addons-505457) DBG | Checking permissions on dir: /home/jenkins
	I0806 07:06:46.491826   21485 main.go:141] libmachine: (addons-505457) DBG | Checking permissions on dir: /home
	I0806 07:06:46.491839   21485 main.go:141] libmachine: (addons-505457) DBG | Skipping /home - not owner
	I0806 07:06:46.492873   21485 main.go:141] libmachine: (addons-505457) define libvirt domain using xml: 
	I0806 07:06:46.492900   21485 main.go:141] libmachine: (addons-505457) <domain type='kvm'>
	I0806 07:06:46.492915   21485 main.go:141] libmachine: (addons-505457)   <name>addons-505457</name>
	I0806 07:06:46.492924   21485 main.go:141] libmachine: (addons-505457)   <memory unit='MiB'>4000</memory>
	I0806 07:06:46.492938   21485 main.go:141] libmachine: (addons-505457)   <vcpu>2</vcpu>
	I0806 07:06:46.492943   21485 main.go:141] libmachine: (addons-505457)   <features>
	I0806 07:06:46.492949   21485 main.go:141] libmachine: (addons-505457)     <acpi/>
	I0806 07:06:46.492959   21485 main.go:141] libmachine: (addons-505457)     <apic/>
	I0806 07:06:46.492966   21485 main.go:141] libmachine: (addons-505457)     <pae/>
	I0806 07:06:46.492974   21485 main.go:141] libmachine: (addons-505457)     
	I0806 07:06:46.492985   21485 main.go:141] libmachine: (addons-505457)   </features>
	I0806 07:06:46.492999   21485 main.go:141] libmachine: (addons-505457)   <cpu mode='host-passthrough'>
	I0806 07:06:46.493007   21485 main.go:141] libmachine: (addons-505457)   
	I0806 07:06:46.493016   21485 main.go:141] libmachine: (addons-505457)   </cpu>
	I0806 07:06:46.493024   21485 main.go:141] libmachine: (addons-505457)   <os>
	I0806 07:06:46.493029   21485 main.go:141] libmachine: (addons-505457)     <type>hvm</type>
	I0806 07:06:46.493035   21485 main.go:141] libmachine: (addons-505457)     <boot dev='cdrom'/>
	I0806 07:06:46.493048   21485 main.go:141] libmachine: (addons-505457)     <boot dev='hd'/>
	I0806 07:06:46.493061   21485 main.go:141] libmachine: (addons-505457)     <bootmenu enable='no'/>
	I0806 07:06:46.493083   21485 main.go:141] libmachine: (addons-505457)   </os>
	I0806 07:06:46.493095   21485 main.go:141] libmachine: (addons-505457)   <devices>
	I0806 07:06:46.493103   21485 main.go:141] libmachine: (addons-505457)     <disk type='file' device='cdrom'>
	I0806 07:06:46.493116   21485 main.go:141] libmachine: (addons-505457)       <source file='/home/jenkins/minikube-integration/19370-13057/.minikube/machines/addons-505457/boot2docker.iso'/>
	I0806 07:06:46.493124   21485 main.go:141] libmachine: (addons-505457)       <target dev='hdc' bus='scsi'/>
	I0806 07:06:46.493129   21485 main.go:141] libmachine: (addons-505457)       <readonly/>
	I0806 07:06:46.493138   21485 main.go:141] libmachine: (addons-505457)     </disk>
	I0806 07:06:46.493152   21485 main.go:141] libmachine: (addons-505457)     <disk type='file' device='disk'>
	I0806 07:06:46.493169   21485 main.go:141] libmachine: (addons-505457)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0806 07:06:46.493184   21485 main.go:141] libmachine: (addons-505457)       <source file='/home/jenkins/minikube-integration/19370-13057/.minikube/machines/addons-505457/addons-505457.rawdisk'/>
	I0806 07:06:46.493195   21485 main.go:141] libmachine: (addons-505457)       <target dev='hda' bus='virtio'/>
	I0806 07:06:46.493206   21485 main.go:141] libmachine: (addons-505457)     </disk>
	I0806 07:06:46.493214   21485 main.go:141] libmachine: (addons-505457)     <interface type='network'>
	I0806 07:06:46.493221   21485 main.go:141] libmachine: (addons-505457)       <source network='mk-addons-505457'/>
	I0806 07:06:46.493231   21485 main.go:141] libmachine: (addons-505457)       <model type='virtio'/>
	I0806 07:06:46.493242   21485 main.go:141] libmachine: (addons-505457)     </interface>
	I0806 07:06:46.493254   21485 main.go:141] libmachine: (addons-505457)     <interface type='network'>
	I0806 07:06:46.493265   21485 main.go:141] libmachine: (addons-505457)       <source network='default'/>
	I0806 07:06:46.493277   21485 main.go:141] libmachine: (addons-505457)       <model type='virtio'/>
	I0806 07:06:46.493286   21485 main.go:141] libmachine: (addons-505457)     </interface>
	I0806 07:06:46.493344   21485 main.go:141] libmachine: (addons-505457)     <serial type='pty'>
	I0806 07:06:46.493361   21485 main.go:141] libmachine: (addons-505457)       <target port='0'/>
	I0806 07:06:46.493377   21485 main.go:141] libmachine: (addons-505457)     </serial>
	I0806 07:06:46.493390   21485 main.go:141] libmachine: (addons-505457)     <console type='pty'>
	I0806 07:06:46.493403   21485 main.go:141] libmachine: (addons-505457)       <target type='serial' port='0'/>
	I0806 07:06:46.493411   21485 main.go:141] libmachine: (addons-505457)     </console>
	I0806 07:06:46.493431   21485 main.go:141] libmachine: (addons-505457)     <rng model='virtio'>
	I0806 07:06:46.493443   21485 main.go:141] libmachine: (addons-505457)       <backend model='random'>/dev/random</backend>
	I0806 07:06:46.493461   21485 main.go:141] libmachine: (addons-505457)     </rng>
	I0806 07:06:46.493471   21485 main.go:141] libmachine: (addons-505457)     
	I0806 07:06:46.493479   21485 main.go:141] libmachine: (addons-505457)     
	I0806 07:06:46.493489   21485 main.go:141] libmachine: (addons-505457)   </devices>
	I0806 07:06:46.493498   21485 main.go:141] libmachine: (addons-505457) </domain>
	I0806 07:06:46.493505   21485 main.go:141] libmachine: (addons-505457) 
	I0806 07:06:46.500107   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined MAC address 52:54:00:1a:ac:0c in network default
	I0806 07:06:46.500697   21485 main.go:141] libmachine: (addons-505457) Ensuring networks are active...
	I0806 07:06:46.500724   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:06:46.501398   21485 main.go:141] libmachine: (addons-505457) Ensuring network default is active
	I0806 07:06:46.501766   21485 main.go:141] libmachine: (addons-505457) Ensuring network mk-addons-505457 is active
	I0806 07:06:46.502335   21485 main.go:141] libmachine: (addons-505457) Getting domain xml...
	I0806 07:06:46.502988   21485 main.go:141] libmachine: (addons-505457) Creating domain...
	I0806 07:06:47.913123   21485 main.go:141] libmachine: (addons-505457) Waiting to get IP...
	I0806 07:06:47.913921   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:06:47.914198   21485 main.go:141] libmachine: (addons-505457) DBG | unable to find current IP address of domain addons-505457 in network mk-addons-505457
	I0806 07:06:47.914237   21485 main.go:141] libmachine: (addons-505457) DBG | I0806 07:06:47.914183   21507 retry.go:31] will retry after 303.819298ms: waiting for machine to come up
	I0806 07:06:48.219834   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:06:48.220268   21485 main.go:141] libmachine: (addons-505457) DBG | unable to find current IP address of domain addons-505457 in network mk-addons-505457
	I0806 07:06:48.220314   21485 main.go:141] libmachine: (addons-505457) DBG | I0806 07:06:48.220265   21507 retry.go:31] will retry after 289.016505ms: waiting for machine to come up
	I0806 07:06:48.510799   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:06:48.511227   21485 main.go:141] libmachine: (addons-505457) DBG | unable to find current IP address of domain addons-505457 in network mk-addons-505457
	I0806 07:06:48.511251   21485 main.go:141] libmachine: (addons-505457) DBG | I0806 07:06:48.511179   21507 retry.go:31] will retry after 313.891709ms: waiting for machine to come up
	I0806 07:06:48.826515   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:06:48.827049   21485 main.go:141] libmachine: (addons-505457) DBG | unable to find current IP address of domain addons-505457 in network mk-addons-505457
	I0806 07:06:48.827071   21485 main.go:141] libmachine: (addons-505457) DBG | I0806 07:06:48.827018   21507 retry.go:31] will retry after 378.126122ms: waiting for machine to come up
	I0806 07:06:49.206422   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:06:49.206870   21485 main.go:141] libmachine: (addons-505457) DBG | unable to find current IP address of domain addons-505457 in network mk-addons-505457
	I0806 07:06:49.206896   21485 main.go:141] libmachine: (addons-505457) DBG | I0806 07:06:49.206834   21507 retry.go:31] will retry after 633.845271ms: waiting for machine to come up
	I0806 07:06:49.842758   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:06:49.843234   21485 main.go:141] libmachine: (addons-505457) DBG | unable to find current IP address of domain addons-505457 in network mk-addons-505457
	I0806 07:06:49.843263   21485 main.go:141] libmachine: (addons-505457) DBG | I0806 07:06:49.843175   21507 retry.go:31] will retry after 776.815657ms: waiting for machine to come up
	I0806 07:06:50.620976   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:06:50.621424   21485 main.go:141] libmachine: (addons-505457) DBG | unable to find current IP address of domain addons-505457 in network mk-addons-505457
	I0806 07:06:50.621450   21485 main.go:141] libmachine: (addons-505457) DBG | I0806 07:06:50.621376   21507 retry.go:31] will retry after 1.059906345s: waiting for machine to come up
	I0806 07:06:51.683023   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:06:51.683531   21485 main.go:141] libmachine: (addons-505457) DBG | unable to find current IP address of domain addons-505457 in network mk-addons-505457
	I0806 07:06:51.683553   21485 main.go:141] libmachine: (addons-505457) DBG | I0806 07:06:51.683482   21507 retry.go:31] will retry after 1.356188516s: waiting for machine to come up
	I0806 07:06:53.043011   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:06:53.043468   21485 main.go:141] libmachine: (addons-505457) DBG | unable to find current IP address of domain addons-505457 in network mk-addons-505457
	I0806 07:06:53.043489   21485 main.go:141] libmachine: (addons-505457) DBG | I0806 07:06:53.043433   21507 retry.go:31] will retry after 1.355832507s: waiting for machine to come up
	I0806 07:06:54.401010   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:06:54.401401   21485 main.go:141] libmachine: (addons-505457) DBG | unable to find current IP address of domain addons-505457 in network mk-addons-505457
	I0806 07:06:54.401431   21485 main.go:141] libmachine: (addons-505457) DBG | I0806 07:06:54.401345   21507 retry.go:31] will retry after 1.545085748s: waiting for machine to come up
	I0806 07:06:55.947981   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:06:55.948495   21485 main.go:141] libmachine: (addons-505457) DBG | unable to find current IP address of domain addons-505457 in network mk-addons-505457
	I0806 07:06:55.948523   21485 main.go:141] libmachine: (addons-505457) DBG | I0806 07:06:55.948456   21507 retry.go:31] will retry after 2.636113139s: waiting for machine to come up
	I0806 07:06:58.588161   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:06:58.588721   21485 main.go:141] libmachine: (addons-505457) DBG | unable to find current IP address of domain addons-505457 in network mk-addons-505457
	I0806 07:06:58.588749   21485 main.go:141] libmachine: (addons-505457) DBG | I0806 07:06:58.588683   21507 retry.go:31] will retry after 3.160002907s: waiting for machine to come up
	I0806 07:07:01.750669   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:07:01.751223   21485 main.go:141] libmachine: (addons-505457) DBG | unable to find current IP address of domain addons-505457 in network mk-addons-505457
	I0806 07:07:01.751248   21485 main.go:141] libmachine: (addons-505457) DBG | I0806 07:07:01.751194   21507 retry.go:31] will retry after 3.715915148s: waiting for machine to come up
	I0806 07:07:05.470996   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:07:05.471282   21485 main.go:141] libmachine: (addons-505457) DBG | unable to find current IP address of domain addons-505457 in network mk-addons-505457
	I0806 07:07:05.471308   21485 main.go:141] libmachine: (addons-505457) DBG | I0806 07:07:05.471243   21507 retry.go:31] will retry after 4.206631754s: waiting for machine to come up
	I0806 07:07:09.682654   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:07:09.683157   21485 main.go:141] libmachine: (addons-505457) Found IP for machine: 192.168.39.6
	I0806 07:07:09.683197   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has current primary IP address 192.168.39.6 and MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:07:09.683211   21485 main.go:141] libmachine: (addons-505457) Reserving static IP address...
	I0806 07:07:09.683636   21485 main.go:141] libmachine: (addons-505457) DBG | unable to find host DHCP lease matching {name: "addons-505457", mac: "52:54:00:d5:7a:7b", ip: "192.168.39.6"} in network mk-addons-505457
	I0806 07:07:09.753707   21485 main.go:141] libmachine: (addons-505457) DBG | Getting to WaitForSSH function...
	I0806 07:07:09.753736   21485 main.go:141] libmachine: (addons-505457) Reserved static IP address: 192.168.39.6
	I0806 07:07:09.753782   21485 main.go:141] libmachine: (addons-505457) Waiting for SSH to be available...
	I0806 07:07:09.756309   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:07:09.756812   21485 main.go:141] libmachine: (addons-505457) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:7a:7b", ip: ""} in network mk-addons-505457: {Iface:virbr1 ExpiryTime:2024-08-06 08:07:01 +0000 UTC Type:0 Mac:52:54:00:d5:7a:7b Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:minikube Clientid:01:52:54:00:d5:7a:7b}
	I0806 07:07:09.756845   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined IP address 192.168.39.6 and MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:07:09.757022   21485 main.go:141] libmachine: (addons-505457) DBG | Using SSH client type: external
	I0806 07:07:09.757043   21485 main.go:141] libmachine: (addons-505457) DBG | Using SSH private key: /home/jenkins/minikube-integration/19370-13057/.minikube/machines/addons-505457/id_rsa (-rw-------)
	I0806 07:07:09.757079   21485 main.go:141] libmachine: (addons-505457) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.6 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19370-13057/.minikube/machines/addons-505457/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0806 07:07:09.757093   21485 main.go:141] libmachine: (addons-505457) DBG | About to run SSH command:
	I0806 07:07:09.757105   21485 main.go:141] libmachine: (addons-505457) DBG | exit 0
	I0806 07:07:09.896534   21485 main.go:141] libmachine: (addons-505457) DBG | SSH cmd err, output: <nil>: 
	I0806 07:07:09.896811   21485 main.go:141] libmachine: (addons-505457) KVM machine creation complete!
	I0806 07:07:09.897099   21485 main.go:141] libmachine: (addons-505457) Calling .GetConfigRaw
	I0806 07:07:09.897614   21485 main.go:141] libmachine: (addons-505457) Calling .DriverName
	I0806 07:07:09.897870   21485 main.go:141] libmachine: (addons-505457) Calling .DriverName
	I0806 07:07:09.898059   21485 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0806 07:07:09.898075   21485 main.go:141] libmachine: (addons-505457) Calling .GetState
	I0806 07:07:09.899322   21485 main.go:141] libmachine: Detecting operating system of created instance...
	I0806 07:07:09.899336   21485 main.go:141] libmachine: Waiting for SSH to be available...
	I0806 07:07:09.899342   21485 main.go:141] libmachine: Getting to WaitForSSH function...
	I0806 07:07:09.899347   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHHostname
	I0806 07:07:09.901405   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:07:09.901763   21485 main.go:141] libmachine: (addons-505457) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:7a:7b", ip: ""} in network mk-addons-505457: {Iface:virbr1 ExpiryTime:2024-08-06 08:07:01 +0000 UTC Type:0 Mac:52:54:00:d5:7a:7b Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:addons-505457 Clientid:01:52:54:00:d5:7a:7b}
	I0806 07:07:09.901805   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined IP address 192.168.39.6 and MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:07:09.901925   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHPort
	I0806 07:07:09.902106   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHKeyPath
	I0806 07:07:09.902268   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHKeyPath
	I0806 07:07:09.902412   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHUsername
	I0806 07:07:09.902594   21485 main.go:141] libmachine: Using SSH client type: native
	I0806 07:07:09.902762   21485 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0806 07:07:09.902773   21485 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0806 07:07:10.015723   21485 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0806 07:07:10.015747   21485 main.go:141] libmachine: Detecting the provisioner...
	I0806 07:07:10.015759   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHHostname
	I0806 07:07:10.018158   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:07:10.018530   21485 main.go:141] libmachine: (addons-505457) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:7a:7b", ip: ""} in network mk-addons-505457: {Iface:virbr1 ExpiryTime:2024-08-06 08:07:01 +0000 UTC Type:0 Mac:52:54:00:d5:7a:7b Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:addons-505457 Clientid:01:52:54:00:d5:7a:7b}
	I0806 07:07:10.018567   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined IP address 192.168.39.6 and MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:07:10.018675   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHPort
	I0806 07:07:10.018861   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHKeyPath
	I0806 07:07:10.019005   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHKeyPath
	I0806 07:07:10.019132   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHUsername
	I0806 07:07:10.019303   21485 main.go:141] libmachine: Using SSH client type: native
	I0806 07:07:10.019478   21485 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0806 07:07:10.019491   21485 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0806 07:07:10.133113   21485 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0806 07:07:10.133187   21485 main.go:141] libmachine: found compatible host: buildroot
	I0806 07:07:10.133198   21485 main.go:141] libmachine: Provisioning with buildroot...
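The provisioner detection above works by running "cat /etc/os-release" over SSH and matching the reported ID. A minimal Go sketch of that matching step, written for illustration (the package and function names are hypothetical, not minikube's own):

	// detectProvisioner mirrors the os-release check logged above: parse the
	// KEY=VALUE lines from "cat /etc/os-release" and match on ID.
	package osdetect

	import (
		"bufio"
		"strings"
	)

	func detectProvisioner(osRelease string) string {
		vals := map[string]string{}
		sc := bufio.NewScanner(strings.NewReader(osRelease))
		for sc.Scan() {
			if k, v, ok := strings.Cut(sc.Text(), "="); ok {
				vals[k] = strings.Trim(v, `"`)
			}
		}
		if vals["ID"] == "buildroot" {
			return "buildroot" // the compatible host found in this run
		}
		return "unknown"
	}

With the output shown above (ID=buildroot, VERSION_ID=2023.02.9) this returns "buildroot", which matches the "found compatible host" line.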
	I0806 07:07:10.133207   21485 main.go:141] libmachine: (addons-505457) Calling .GetMachineName
	I0806 07:07:10.133472   21485 buildroot.go:166] provisioning hostname "addons-505457"
	I0806 07:07:10.133501   21485 main.go:141] libmachine: (addons-505457) Calling .GetMachineName
	I0806 07:07:10.133736   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHHostname
	I0806 07:07:10.136322   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:07:10.136596   21485 main.go:141] libmachine: (addons-505457) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:7a:7b", ip: ""} in network mk-addons-505457: {Iface:virbr1 ExpiryTime:2024-08-06 08:07:01 +0000 UTC Type:0 Mac:52:54:00:d5:7a:7b Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:addons-505457 Clientid:01:52:54:00:d5:7a:7b}
	I0806 07:07:10.136635   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined IP address 192.168.39.6 and MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:07:10.136849   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHPort
	I0806 07:07:10.137048   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHKeyPath
	I0806 07:07:10.137179   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHKeyPath
	I0806 07:07:10.137343   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHUsername
	I0806 07:07:10.137505   21485 main.go:141] libmachine: Using SSH client type: native
	I0806 07:07:10.137664   21485 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0806 07:07:10.137676   21485 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-505457 && echo "addons-505457" | sudo tee /etc/hostname
	I0806 07:07:10.267193   21485 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-505457
	
	I0806 07:07:10.267215   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHHostname
	I0806 07:07:10.269789   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:07:10.270099   21485 main.go:141] libmachine: (addons-505457) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:7a:7b", ip: ""} in network mk-addons-505457: {Iface:virbr1 ExpiryTime:2024-08-06 08:07:01 +0000 UTC Type:0 Mac:52:54:00:d5:7a:7b Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:addons-505457 Clientid:01:52:54:00:d5:7a:7b}
	I0806 07:07:10.270127   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined IP address 192.168.39.6 and MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:07:10.270274   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHPort
	I0806 07:07:10.270445   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHKeyPath
	I0806 07:07:10.270626   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHKeyPath
	I0806 07:07:10.270771   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHUsername
	I0806 07:07:10.271035   21485 main.go:141] libmachine: Using SSH client type: native
	I0806 07:07:10.271219   21485 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0806 07:07:10.271233   21485 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-505457' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-505457/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-505457' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0806 07:07:10.393141   21485 main.go:141] libmachine: SSH cmd err, output: <nil>: 
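The SSH command above keeps /etc/hosts consistent with the freshly set hostname. A rough Go equivalent of that idempotent update, as a hypothetical helper (minikube performs it via the shell snippet shown, not this code):

	// ensureHostsEntry mirrors the shell logic logged above: if /etc/hosts has no
	// line ending in the hostname, rewrite (or append) the 127.0.1.1 entry.
	package main

	import (
		"fmt"
		"os"
		"regexp"
	)

	func ensureHostsEntry(path, hostname string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		if regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(hostname) + `$`).Match(data) {
			return nil // already present
		}
		loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
		entry := "127.0.1.1 " + hostname
		if loopback.Match(data) {
			data = loopback.ReplaceAll(data, []byte(entry))
		} else {
			data = append(data, []byte(entry+"\n")...)
		}
		return os.WriteFile(path, data, 0644)
	}

	func main() {
		if err := ensureHostsEntry("/etc/hosts", "addons-505457"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}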
	I0806 07:07:10.393174   21485 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19370-13057/.minikube CaCertPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19370-13057/.minikube}
	I0806 07:07:10.393214   21485 buildroot.go:174] setting up certificates
	I0806 07:07:10.393228   21485 provision.go:84] configureAuth start
	I0806 07:07:10.393237   21485 main.go:141] libmachine: (addons-505457) Calling .GetMachineName
	I0806 07:07:10.393536   21485 main.go:141] libmachine: (addons-505457) Calling .GetIP
	I0806 07:07:10.396273   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:07:10.396769   21485 main.go:141] libmachine: (addons-505457) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:7a:7b", ip: ""} in network mk-addons-505457: {Iface:virbr1 ExpiryTime:2024-08-06 08:07:01 +0000 UTC Type:0 Mac:52:54:00:d5:7a:7b Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:addons-505457 Clientid:01:52:54:00:d5:7a:7b}
	I0806 07:07:10.396797   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined IP address 192.168.39.6 and MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:07:10.397001   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHHostname
	I0806 07:07:10.399088   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:07:10.399417   21485 main.go:141] libmachine: (addons-505457) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:7a:7b", ip: ""} in network mk-addons-505457: {Iface:virbr1 ExpiryTime:2024-08-06 08:07:01 +0000 UTC Type:0 Mac:52:54:00:d5:7a:7b Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:addons-505457 Clientid:01:52:54:00:d5:7a:7b}
	I0806 07:07:10.399441   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined IP address 192.168.39.6 and MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:07:10.399582   21485 provision.go:143] copyHostCerts
	I0806 07:07:10.399650   21485 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19370-13057/.minikube/ca.pem (1078 bytes)
	I0806 07:07:10.399771   21485 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19370-13057/.minikube/cert.pem (1123 bytes)
	I0806 07:07:10.399857   21485 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19370-13057/.minikube/key.pem (1675 bytes)
	I0806 07:07:10.399916   21485 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19370-13057/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca-key.pem org=jenkins.addons-505457 san=[127.0.0.1 192.168.39.6 addons-505457 localhost minikube]
	I0806 07:07:10.558240   21485 provision.go:177] copyRemoteCerts
	I0806 07:07:10.558299   21485 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0806 07:07:10.558332   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHHostname
	I0806 07:07:10.561302   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:07:10.561613   21485 main.go:141] libmachine: (addons-505457) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:7a:7b", ip: ""} in network mk-addons-505457: {Iface:virbr1 ExpiryTime:2024-08-06 08:07:01 +0000 UTC Type:0 Mac:52:54:00:d5:7a:7b Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:addons-505457 Clientid:01:52:54:00:d5:7a:7b}
	I0806 07:07:10.561638   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined IP address 192.168.39.6 and MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:07:10.561847   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHPort
	I0806 07:07:10.562072   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHKeyPath
	I0806 07:07:10.562271   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHUsername
	I0806 07:07:10.562397   21485 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/addons-505457/id_rsa Username:docker}
	I0806 07:07:10.651363   21485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0806 07:07:10.676106   21485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0806 07:07:10.700880   21485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0806 07:07:10.725132   21485 provision.go:87] duration metric: took 331.892152ms to configureAuth
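configureAuth generates a server certificate whose SANs are listed in the provision.go line above (127.0.0.1, 192.168.39.6, addons-505457, localhost, minikube) and then copies ca.pem, server.pem and server-key.pem to /etc/docker on the guest. A sketch of issuing such a certificate with Go's crypto/x509, assuming a CA cert and key are already loaded; this is illustrative, not minikube's implementation:

	// issueServerCert creates a server certificate carrying the SANs logged above,
	// signed by the provided CA.
	package certs

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"math/big"
		"net"
		"time"
	)

	func issueServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			return nil, nil, err
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(time.Now().UnixNano()),
			Subject:      pkix.Name{Organization: []string{"jenkins.addons-505457"}},
			NotBefore:    time.Now().Add(-time.Hour),
			NotAfter:     time.Now().Add(26280 * time.Hour), // matches the CertExpiration in the cluster config
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"addons-505457", "localhost", "minikube"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.6")},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
		return der, key, err
	}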
	I0806 07:07:10.725265   21485 buildroot.go:189] setting minikube options for container-runtime
	I0806 07:07:10.725455   21485 config.go:182] Loaded profile config "addons-505457": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0806 07:07:10.725566   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHHostname
	I0806 07:07:10.728296   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:07:10.728666   21485 main.go:141] libmachine: (addons-505457) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:7a:7b", ip: ""} in network mk-addons-505457: {Iface:virbr1 ExpiryTime:2024-08-06 08:07:01 +0000 UTC Type:0 Mac:52:54:00:d5:7a:7b Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:addons-505457 Clientid:01:52:54:00:d5:7a:7b}
	I0806 07:07:10.728695   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined IP address 192.168.39.6 and MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:07:10.728880   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHPort
	I0806 07:07:10.729049   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHKeyPath
	I0806 07:07:10.729198   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHKeyPath
	I0806 07:07:10.729294   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHUsername
	I0806 07:07:10.729399   21485 main.go:141] libmachine: Using SSH client type: native
	I0806 07:07:10.729548   21485 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0806 07:07:10.729563   21485 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0806 07:07:11.019239   21485 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0806 07:07:11.019261   21485 main.go:141] libmachine: Checking connection to Docker...
	I0806 07:07:11.019269   21485 main.go:141] libmachine: (addons-505457) Calling .GetURL
	I0806 07:07:11.020572   21485 main.go:141] libmachine: (addons-505457) DBG | Using libvirt version 6000000
	I0806 07:07:11.022645   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:07:11.022912   21485 main.go:141] libmachine: (addons-505457) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:7a:7b", ip: ""} in network mk-addons-505457: {Iface:virbr1 ExpiryTime:2024-08-06 08:07:01 +0000 UTC Type:0 Mac:52:54:00:d5:7a:7b Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:addons-505457 Clientid:01:52:54:00:d5:7a:7b}
	I0806 07:07:11.022945   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined IP address 192.168.39.6 and MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:07:11.023140   21485 main.go:141] libmachine: Docker is up and running!
	I0806 07:07:11.023154   21485 main.go:141] libmachine: Reticulating splines...
	I0806 07:07:11.023162   21485 client.go:171] duration metric: took 25.193432238s to LocalClient.Create
	I0806 07:07:11.023188   21485 start.go:167] duration metric: took 25.193495675s to libmachine.API.Create "addons-505457"
	I0806 07:07:11.023199   21485 start.go:293] postStartSetup for "addons-505457" (driver="kvm2")
	I0806 07:07:11.023213   21485 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0806 07:07:11.023237   21485 main.go:141] libmachine: (addons-505457) Calling .DriverName
	I0806 07:07:11.023463   21485 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0806 07:07:11.023485   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHHostname
	I0806 07:07:11.025659   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:07:11.025946   21485 main.go:141] libmachine: (addons-505457) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:7a:7b", ip: ""} in network mk-addons-505457: {Iface:virbr1 ExpiryTime:2024-08-06 08:07:01 +0000 UTC Type:0 Mac:52:54:00:d5:7a:7b Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:addons-505457 Clientid:01:52:54:00:d5:7a:7b}
	I0806 07:07:11.025972   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined IP address 192.168.39.6 and MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:07:11.026072   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHPort
	I0806 07:07:11.026248   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHKeyPath
	I0806 07:07:11.026415   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHUsername
	I0806 07:07:11.026541   21485 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/addons-505457/id_rsa Username:docker}
	I0806 07:07:11.115415   21485 ssh_runner.go:195] Run: cat /etc/os-release
	I0806 07:07:11.119552   21485 info.go:137] Remote host: Buildroot 2023.02.9
	I0806 07:07:11.119573   21485 filesync.go:126] Scanning /home/jenkins/minikube-integration/19370-13057/.minikube/addons for local assets ...
	I0806 07:07:11.119682   21485 filesync.go:126] Scanning /home/jenkins/minikube-integration/19370-13057/.minikube/files for local assets ...
	I0806 07:07:11.119723   21485 start.go:296] duration metric: took 96.516549ms for postStartSetup
	I0806 07:07:11.119764   21485 main.go:141] libmachine: (addons-505457) Calling .GetConfigRaw
	I0806 07:07:11.120384   21485 main.go:141] libmachine: (addons-505457) Calling .GetIP
	I0806 07:07:11.122996   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:07:11.123369   21485 main.go:141] libmachine: (addons-505457) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:7a:7b", ip: ""} in network mk-addons-505457: {Iface:virbr1 ExpiryTime:2024-08-06 08:07:01 +0000 UTC Type:0 Mac:52:54:00:d5:7a:7b Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:addons-505457 Clientid:01:52:54:00:d5:7a:7b}
	I0806 07:07:11.123396   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined IP address 192.168.39.6 and MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:07:11.123595   21485 profile.go:143] Saving config to /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/addons-505457/config.json ...
	I0806 07:07:11.123822   21485 start.go:128] duration metric: took 25.311976507s to createHost
	I0806 07:07:11.123846   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHHostname
	I0806 07:07:11.126008   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:07:11.126284   21485 main.go:141] libmachine: (addons-505457) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:7a:7b", ip: ""} in network mk-addons-505457: {Iface:virbr1 ExpiryTime:2024-08-06 08:07:01 +0000 UTC Type:0 Mac:52:54:00:d5:7a:7b Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:addons-505457 Clientid:01:52:54:00:d5:7a:7b}
	I0806 07:07:11.126309   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined IP address 192.168.39.6 and MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:07:11.126504   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHPort
	I0806 07:07:11.126653   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHKeyPath
	I0806 07:07:11.126760   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHKeyPath
	I0806 07:07:11.126915   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHUsername
	I0806 07:07:11.127065   21485 main.go:141] libmachine: Using SSH client type: native
	I0806 07:07:11.127262   21485 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0806 07:07:11.127273   21485 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0806 07:07:11.241267   21485 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722928031.217396244
	
	I0806 07:07:11.241309   21485 fix.go:216] guest clock: 1722928031.217396244
	I0806 07:07:11.241319   21485 fix.go:229] Guest: 2024-08-06 07:07:11.217396244 +0000 UTC Remote: 2024-08-06 07:07:11.123835233 +0000 UTC m=+25.411716346 (delta=93.561011ms)
	I0806 07:07:11.241363   21485 fix.go:200] guest clock delta is within tolerance: 93.561011ms
	I0806 07:07:11.241372   21485 start.go:83] releasing machines lock for "addons-505457", held for 25.429613061s
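The fix.go lines above compare the guest clock (read via "date +%s.%N") against the host clock and accept the ~93ms delta as within tolerance. A small Go sketch of that check; the helper names and the tolerance used in the comment are assumptions:

	// guestClockDelta mirrors the check logged above: parse the guest's
	// "date +%s.%N" output (assumed to carry the full 9-digit nanosecond field)
	// and compare it with the host clock.
	package clockcheck

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	func guestClockDelta(guestOutput string, host time.Time) (time.Duration, error) {
		parts := strings.SplitN(strings.TrimSpace(guestOutput), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return 0, fmt.Errorf("parsing seconds: %w", err)
		}
		var nsec int64
		if len(parts) == 2 {
			if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
				return 0, fmt.Errorf("parsing nanoseconds: %w", err)
			}
		}
		return time.Unix(sec, nsec).Sub(host), nil
	}

	// withinTolerance reports whether the delta is acceptable; the ~93ms delta
	// logged above passes easily for any tolerance of a second or more.
	func withinTolerance(delta, tolerance time.Duration) bool {
		if delta < 0 {
			delta = -delta
		}
		return delta <= tolerance
	}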
	I0806 07:07:11.241405   21485 main.go:141] libmachine: (addons-505457) Calling .DriverName
	I0806 07:07:11.241691   21485 main.go:141] libmachine: (addons-505457) Calling .GetIP
	I0806 07:07:11.245016   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:07:11.245431   21485 main.go:141] libmachine: (addons-505457) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:7a:7b", ip: ""} in network mk-addons-505457: {Iface:virbr1 ExpiryTime:2024-08-06 08:07:01 +0000 UTC Type:0 Mac:52:54:00:d5:7a:7b Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:addons-505457 Clientid:01:52:54:00:d5:7a:7b}
	I0806 07:07:11.245457   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined IP address 192.168.39.6 and MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:07:11.245663   21485 main.go:141] libmachine: (addons-505457) Calling .DriverName
	I0806 07:07:11.246337   21485 main.go:141] libmachine: (addons-505457) Calling .DriverName
	I0806 07:07:11.246560   21485 main.go:141] libmachine: (addons-505457) Calling .DriverName
	I0806 07:07:11.246655   21485 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0806 07:07:11.246686   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHHostname
	I0806 07:07:11.246795   21485 ssh_runner.go:195] Run: cat /version.json
	I0806 07:07:11.246821   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHHostname
	I0806 07:07:11.249742   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:07:11.249801   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:07:11.250227   21485 main.go:141] libmachine: (addons-505457) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:7a:7b", ip: ""} in network mk-addons-505457: {Iface:virbr1 ExpiryTime:2024-08-06 08:07:01 +0000 UTC Type:0 Mac:52:54:00:d5:7a:7b Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:addons-505457 Clientid:01:52:54:00:d5:7a:7b}
	I0806 07:07:11.250248   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined IP address 192.168.39.6 and MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:07:11.250272   21485 main.go:141] libmachine: (addons-505457) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:7a:7b", ip: ""} in network mk-addons-505457: {Iface:virbr1 ExpiryTime:2024-08-06 08:07:01 +0000 UTC Type:0 Mac:52:54:00:d5:7a:7b Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:addons-505457 Clientid:01:52:54:00:d5:7a:7b}
	I0806 07:07:11.250287   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined IP address 192.168.39.6 and MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:07:11.250402   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHPort
	I0806 07:07:11.250541   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHPort
	I0806 07:07:11.250695   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHKeyPath
	I0806 07:07:11.250707   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHKeyPath
	I0806 07:07:11.250857   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHUsername
	I0806 07:07:11.250882   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHUsername
	I0806 07:07:11.251014   21485 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/addons-505457/id_rsa Username:docker}
	I0806 07:07:11.251018   21485 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/addons-505457/id_rsa Username:docker}
	I0806 07:07:11.356388   21485 ssh_runner.go:195] Run: systemctl --version
	I0806 07:07:11.362561   21485 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0806 07:07:11.521417   21485 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0806 07:07:11.527258   21485 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0806 07:07:11.527328   21485 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0806 07:07:11.546564   21485 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
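The find/mv step above disables any bridge or podman CNI configs by renaming them with a .mk_disabled suffix, so only the CNI that minikube later writes stays active. A hypothetical Go version of the same rename pass:

	// disableBridgeCNI renames bridge/podman configs under /etc/cni/net.d, as the
	// logged find/mv command does, and returns the files it touched.
	package main

	import (
		"fmt"
		"os"
		"path/filepath"
		"strings"
	)

	func disableBridgeCNI(dir string) ([]string, error) {
		entries, err := os.ReadDir(dir)
		if err != nil {
			return nil, err
		}
		var disabled []string
		for _, e := range entries {
			name := e.Name()
			if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
				continue
			}
			if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
				src := filepath.Join(dir, name)
				if err := os.Rename(src, src+".mk_disabled"); err != nil {
					return disabled, err
				}
				disabled = append(disabled, src)
			}
		}
		return disabled, nil
	}

	func main() {
		files, err := disableBridgeCNI("/etc/cni/net.d")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
		fmt.Println("disabled:", files)
	}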
	I0806 07:07:11.546598   21485 start.go:495] detecting cgroup driver to use...
	I0806 07:07:11.546661   21485 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0806 07:07:11.563089   21485 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0806 07:07:11.577568   21485 docker.go:217] disabling cri-docker service (if available) ...
	I0806 07:07:11.577635   21485 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0806 07:07:11.591904   21485 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0806 07:07:11.606015   21485 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0806 07:07:11.717025   21485 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0806 07:07:11.857443   21485 docker.go:233] disabling docker service ...
	I0806 07:07:11.857523   21485 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0806 07:07:11.875227   21485 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0806 07:07:11.889022   21485 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0806 07:07:12.041847   21485 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0806 07:07:12.167764   21485 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0806 07:07:12.182018   21485 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0806 07:07:12.200600   21485 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0806 07:07:12.200673   21485 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 07:07:12.211140   21485 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0806 07:07:12.211209   21485 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 07:07:12.221989   21485 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 07:07:12.232576   21485 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 07:07:12.242915   21485 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0806 07:07:12.254666   21485 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 07:07:12.265714   21485 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 07:07:12.283601   21485 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
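The sed commands above edit /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, force the cgroupfs cgroup manager, and open unprivileged ports via default_sysctls. A compact Go sketch of the two main substitutions (illustrative only; the real flow stays in shell as logged):

	// configureCrio rewrites the pause_image and cgroup_manager keys in the
	// CRI-O drop-in, matching the sed edits logged above.
	package crioconf

	import (
		"os"
		"regexp"
	)

	func configureCrio(path string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))
		data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
		return os.WriteFile(path, data, 0644)
	}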
	I0806 07:07:12.294216   21485 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0806 07:07:12.303945   21485 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0806 07:07:12.304000   21485 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0806 07:07:12.316706   21485 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0806 07:07:12.326696   21485 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 07:07:12.441365   21485 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0806 07:07:12.584552   21485 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0806 07:07:12.584637   21485 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0806 07:07:12.589764   21485 start.go:563] Will wait 60s for crictl version
	I0806 07:07:12.589848   21485 ssh_runner.go:195] Run: which crictl
	I0806 07:07:12.593691   21485 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0806 07:07:12.630920   21485 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0806 07:07:12.631014   21485 ssh_runner.go:195] Run: crio --version
	I0806 07:07:12.658831   21485 ssh_runner.go:195] Run: crio --version
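Before querying crictl, the start logic waits up to 60s for /var/run/crio/crio.sock to appear. A minimal Go sketch of that poll loop, with hypothetical names:

	// waitForSocket polls until the CRI-O socket exists or the deadline passes,
	// as in the "Will wait 60s for socket path" step above.
	package main

	import (
		"fmt"
		"os"
		"time"
	)

	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			if _, err := os.Stat(path); err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out waiting for %s", path)
			}
			time.Sleep(500 * time.Millisecond)
		}
	}

	func main() {
		if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}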
	I0806 07:07:12.690543   21485 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0806 07:07:12.692016   21485 main.go:141] libmachine: (addons-505457) Calling .GetIP
	I0806 07:07:12.694965   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:07:12.695270   21485 main.go:141] libmachine: (addons-505457) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:7a:7b", ip: ""} in network mk-addons-505457: {Iface:virbr1 ExpiryTime:2024-08-06 08:07:01 +0000 UTC Type:0 Mac:52:54:00:d5:7a:7b Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:addons-505457 Clientid:01:52:54:00:d5:7a:7b}
	I0806 07:07:12.695294   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined IP address 192.168.39.6 and MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:07:12.695515   21485 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0806 07:07:12.699698   21485 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0806 07:07:12.712115   21485 kubeadm.go:883] updating cluster {Name:addons-505457 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-505457 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...

	I0806 07:07:12.712256   21485 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0806 07:07:12.712310   21485 ssh_runner.go:195] Run: sudo crictl images --output json
	I0806 07:07:12.744020   21485 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0806 07:07:12.744084   21485 ssh_runner.go:195] Run: which lz4
	I0806 07:07:12.748495   21485 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0806 07:07:12.752915   21485 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0806 07:07:12.752947   21485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0806 07:07:14.100542   21485 crio.go:462] duration metric: took 1.35214106s to copy over tarball
	I0806 07:07:14.100636   21485 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0806 07:07:16.444828   21485 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.344155382s)
	I0806 07:07:16.444868   21485 crio.go:469] duration metric: took 2.344294513s to extract the tarball
	I0806 07:07:16.444876   21485 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0806 07:07:16.483687   21485 ssh_runner.go:195] Run: sudo crictl images --output json
	I0806 07:07:16.533613   21485 crio.go:514] all images are preloaded for cri-o runtime.
	I0806 07:07:16.533640   21485 cache_images.go:84] Images are preloaded, skipping loading
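The preload decision above hinges on whether the pinned control-plane images are already in CRI-O's store: the first "crictl images --output json" run lacks kube-apiserver:v1.30.3, so the tarball is copied and extracted; the second run finds everything preloaded. A sketch of that check, assuming the crictl JSON uses the CRI "images"/"repoTags" field names:

	// hasImage mirrors the preload gate above: decode "crictl images --output json"
	// and report whether an image tag with the wanted prefix is present.
	package preload

	import (
		"encoding/json"
		"strings"
	)

	type crictlImages struct {
		Images []struct {
			RepoTags []string `json:"repoTags"`
		} `json:"images"`
	}

	func hasImage(crictlJSON []byte, want string) (bool, error) {
		var out crictlImages
		if err := json.Unmarshal(crictlJSON, &out); err != nil {
			return false, err
		}
		for _, img := range out.Images {
			for _, tag := range img.RepoTags {
				if strings.HasPrefix(tag, want) {
					return true, nil
				}
			}
		}
		return false, nil
	}

When this returns false, the flow above copies preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 to /preloaded.tar.lz4 and extracts it with "tar --xattrs -I lz4 -C /var -xf".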
	I0806 07:07:16.533660   21485 kubeadm.go:934] updating node { 192.168.39.6 8443 v1.30.3 crio true true} ...
	I0806 07:07:16.533764   21485 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-505457 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.6
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:addons-505457 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0806 07:07:16.533837   21485 ssh_runner.go:195] Run: crio config
	I0806 07:07:16.583416   21485 cni.go:84] Creating CNI manager for ""
	I0806 07:07:16.583439   21485 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0806 07:07:16.583447   21485 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0806 07:07:16.583468   21485 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.6 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-505457 NodeName:addons-505457 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.6"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.6 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0806 07:07:16.583588   21485 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.6
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-505457"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.6
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.6"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0806 07:07:16.583649   21485 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0806 07:07:16.594082   21485 binaries.go:44] Found k8s binaries, skipping transfer
	I0806 07:07:16.594157   21485 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0806 07:07:16.604306   21485 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0806 07:07:16.620616   21485 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0806 07:07:16.637142   21485 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
	I0806 07:07:16.654373   21485 ssh_runner.go:195] Run: grep 192.168.39.6	control-plane.minikube.internal$ /etc/hosts
	I0806 07:07:16.658371   21485 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.6	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0806 07:07:16.671832   21485 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 07:07:16.789769   21485 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0806 07:07:16.807525   21485 certs.go:68] Setting up /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/addons-505457 for IP: 192.168.39.6
	I0806 07:07:16.807554   21485 certs.go:194] generating shared ca certs ...
	I0806 07:07:16.807589   21485 certs.go:226] acquiring lock for ca certs: {Name:mkf5c5aed91f4108054d9f2c742cad66c86afbed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 07:07:16.807758   21485 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19370-13057/.minikube/ca.key
	I0806 07:07:16.959954   21485 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19370-13057/.minikube/ca.crt ...
	I0806 07:07:16.959985   21485 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-13057/.minikube/ca.crt: {Name:mk3e1a3a83bc9726eedf3bb6b14c66e1d057eef7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 07:07:16.960175   21485 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19370-13057/.minikube/ca.key ...
	I0806 07:07:16.960190   21485 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-13057/.minikube/ca.key: {Name:mk03682aa44f086e8b8699e058741f3f6e561f04 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 07:07:16.960283   21485 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.key
	I0806 07:07:17.430458   21485 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.crt ...
	I0806 07:07:17.430489   21485 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.crt: {Name:mk403c4b5dfc06b347e4e3bd6ec9130c8187c172 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 07:07:17.430644   21485 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.key ...
	I0806 07:07:17.430654   21485 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.key: {Name:mk7d37cf4d4ae9a8d1ef98ba079a9a37709655e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 07:07:17.430718   21485 certs.go:256] generating profile certs ...
	I0806 07:07:17.430769   21485 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/addons-505457/client.key
	I0806 07:07:17.430779   21485 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/addons-505457/client.crt with IP's: []
	I0806 07:07:18.001607   21485 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/addons-505457/client.crt ...
	I0806 07:07:18.001637   21485 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/addons-505457/client.crt: {Name:mk1a65d31a4aa6d8d190f854e2e2bbb382cce155 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 07:07:18.001795   21485 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/addons-505457/client.key ...
	I0806 07:07:18.001805   21485 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/addons-505457/client.key: {Name:mk3830dfb911cd9bfd155214549d8758ddc9d4d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 07:07:18.001872   21485 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/addons-505457/apiserver.key.97df24c3
	I0806 07:07:18.001890   21485 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/addons-505457/apiserver.crt.97df24c3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.6]
	I0806 07:07:18.134522   21485 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/addons-505457/apiserver.crt.97df24c3 ...
	I0806 07:07:18.134552   21485 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/addons-505457/apiserver.crt.97df24c3: {Name:mkddc23c4834db15fa89cfacac311a5e4c07ac65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 07:07:18.134707   21485 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/addons-505457/apiserver.key.97df24c3 ...
	I0806 07:07:18.134719   21485 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/addons-505457/apiserver.key.97df24c3: {Name:mk96c9748b8cee91e0a7e02d7a9be63a9310ac88 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 07:07:18.134793   21485 certs.go:381] copying /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/addons-505457/apiserver.crt.97df24c3 -> /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/addons-505457/apiserver.crt
	I0806 07:07:18.134861   21485 certs.go:385] copying /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/addons-505457/apiserver.key.97df24c3 -> /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/addons-505457/apiserver.key
	I0806 07:07:18.134911   21485 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/addons-505457/proxy-client.key
	I0806 07:07:18.134927   21485 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/addons-505457/proxy-client.crt with IP's: []
	I0806 07:07:18.275587   21485 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/addons-505457/proxy-client.crt ...
	I0806 07:07:18.275620   21485 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/addons-505457/proxy-client.crt: {Name:mkcbfe8acad38f1d392edc65bf3721ae5beac0a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 07:07:18.275834   21485 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/addons-505457/proxy-client.key ...
	I0806 07:07:18.275855   21485 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/addons-505457/proxy-client.key: {Name:mkd2c740d57e918062af740fabeb121d8a5b731c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 07:07:18.276100   21485 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca-key.pem (1675 bytes)
	I0806 07:07:18.276143   21485 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem (1078 bytes)
	I0806 07:07:18.276181   21485 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/cert.pem (1123 bytes)
	I0806 07:07:18.276218   21485 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/key.pem (1675 bytes)
	I0806 07:07:18.276875   21485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0806 07:07:18.306210   21485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0806 07:07:18.331354   21485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0806 07:07:18.355074   21485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0806 07:07:18.380976   21485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/addons-505457/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0806 07:07:18.405330   21485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/addons-505457/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0806 07:07:18.428526   21485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/addons-505457/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0806 07:07:18.452442   21485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/addons-505457/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0806 07:07:18.477264   21485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0806 07:07:18.500601   21485 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0806 07:07:18.516830   21485 ssh_runner.go:195] Run: openssl version
	I0806 07:07:18.522587   21485 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0806 07:07:18.533750   21485 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0806 07:07:18.538067   21485 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  6 07:07 /usr/share/ca-certificates/minikubeCA.pem
	I0806 07:07:18.538127   21485 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0806 07:07:18.543875   21485 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
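The two openssl/ln steps above install the minikube CA into the system trust store: compute the certificate's subject hash and symlink the PEM as <hash>.0 under /etc/ssl/certs so OpenSSL-based clients trust it. A hypothetical Go helper doing the same:

	// linkCAByHash asks openssl for the subject hash of the CA PEM and symlinks
	// the file as <hash>.0 in the trust directory, as the logged commands do.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	func linkCAByHash(certPath, certsDir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out)) // e.g. "b5213941", as seen in the log
		link := filepath.Join(certsDir, hash+".0")
		_ = os.Remove(link) // replace any stale link
		return os.Symlink(certPath, link)
	}

	func main() {
		if err := linkCAByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}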
	I0806 07:07:18.555097   21485 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0806 07:07:18.559259   21485 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0806 07:07:18.559316   21485 kubeadm.go:392] StartCluster: {Name:addons-505457 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-505457 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 07:07:18.559399   21485 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0806 07:07:18.559462   21485 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0806 07:07:18.600114   21485 cri.go:89] found id: ""
	I0806 07:07:18.600188   21485 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0806 07:07:18.611548   21485 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0806 07:07:18.622157   21485 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0806 07:07:18.632871   21485 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0806 07:07:18.632890   21485 kubeadm.go:157] found existing configuration files:
	
	I0806 07:07:18.632929   21485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0806 07:07:18.642369   21485 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0806 07:07:18.642438   21485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0806 07:07:18.652097   21485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0806 07:07:18.661721   21485 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0806 07:07:18.661774   21485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0806 07:07:18.671817   21485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0806 07:07:18.681993   21485 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0806 07:07:18.682056   21485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0806 07:07:18.692700   21485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0806 07:07:18.702466   21485 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0806 07:07:18.702539   21485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
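	(Editor's note: the block above is minikube's stale-kubeconfig cleanup. For each of admin.conf, kubelet.conf, controller-manager.conf and scheduler.conf it greps for the expected control-plane endpoint and, when the grep fails -- here with exit status 2 because the files do not yet exist on a fresh node -- removes the file so kubeadm can regenerate it. A minimal shell sketch of that check-and-remove pattern; the endpoint and file names are taken from the log, but the loop itself is illustrative and is not minikube's actual implementation, which issues the equivalent commands one by one over SSH:

	    endpoint="https://control-plane.minikube.internal:8443"
	    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	      # illustrative: drop any kubeconfig that does not reference the expected endpoint
	      if ! sudo grep -q "$endpoint" "/etc/kubernetes/$f"; then
	        sudo rm -f "/etc/kubernetes/$f"
	      fi
	    done
	)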
	I0806 07:07:18.713056   21485 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0806 07:07:18.774764   21485 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0806 07:07:18.774854   21485 kubeadm.go:310] [preflight] Running pre-flight checks
	I0806 07:07:18.911622   21485 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0806 07:07:18.911777   21485 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0806 07:07:18.911911   21485 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0806 07:07:19.137987   21485 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0806 07:07:19.328964   21485 out.go:204]   - Generating certificates and keys ...
	I0806 07:07:19.329056   21485 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0806 07:07:19.329138   21485 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0806 07:07:19.329232   21485 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0806 07:07:19.336003   21485 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0806 07:07:19.409839   21485 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0806 07:07:19.634639   21485 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0806 07:07:19.703596   21485 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0806 07:07:19.703710   21485 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-505457 localhost] and IPs [192.168.39.6 127.0.0.1 ::1]
	I0806 07:07:19.834637   21485 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0806 07:07:19.834787   21485 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-505457 localhost] and IPs [192.168.39.6 127.0.0.1 ::1]
	I0806 07:07:19.923168   21485 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0806 07:07:20.136623   21485 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0806 07:07:20.375620   21485 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0806 07:07:20.375919   21485 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0806 07:07:20.571258   21485 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0806 07:07:20.776735   21485 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0806 07:07:21.099140   21485 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0806 07:07:21.182126   21485 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0806 07:07:21.391199   21485 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0806 07:07:21.391843   21485 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0806 07:07:21.394719   21485 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0806 07:07:21.396813   21485 out.go:204]   - Booting up control plane ...
	I0806 07:07:21.396911   21485 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0806 07:07:21.397003   21485 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0806 07:07:21.399913   21485 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0806 07:07:21.417712   21485 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0806 07:07:21.418624   21485 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0806 07:07:21.418678   21485 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0806 07:07:21.555513   21485 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0806 07:07:21.555594   21485 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0806 07:07:22.556412   21485 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001622868s
	I0806 07:07:22.556504   21485 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0806 07:07:27.557223   21485 kubeadm.go:310] [api-check] The API server is healthy after 5.001751666s
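	(Editor's note: the [kubelet-check] and [api-check] phases above poll until the kubelet and the API server report healthy, here after roughly 1s and 5s. An equivalent manual probe against the node address shown in this log would be the following; this is illustrative only, using the standard Kubernetes /livez health endpoint rather than anything kubeadm prints here:

	    # expect "ok" once the API server on the node from the log is serving
	    curl -k https://192.168.39.6:8443/livez
	)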
	I0806 07:07:27.568314   21485 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0806 07:07:28.085164   21485 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0806 07:07:28.115735   21485 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0806 07:07:28.115965   21485 kubeadm.go:310] [mark-control-plane] Marking the node addons-505457 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0806 07:07:28.132722   21485 kubeadm.go:310] [bootstrap-token] Using token: 7h6mq1.bmdp83lhi0wmiffm
	I0806 07:07:28.134249   21485 out.go:204]   - Configuring RBAC rules ...
	I0806 07:07:28.134367   21485 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0806 07:07:28.144141   21485 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0806 07:07:28.153965   21485 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0806 07:07:28.157380   21485 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0806 07:07:28.163979   21485 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0806 07:07:28.167111   21485 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0806 07:07:28.282115   21485 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0806 07:07:28.715511   21485 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0806 07:07:29.280195   21485 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0806 07:07:29.280221   21485 kubeadm.go:310] 
	I0806 07:07:29.280292   21485 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0806 07:07:29.280299   21485 kubeadm.go:310] 
	I0806 07:07:29.280383   21485 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0806 07:07:29.280395   21485 kubeadm.go:310] 
	I0806 07:07:29.280427   21485 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0806 07:07:29.280529   21485 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0806 07:07:29.280620   21485 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0806 07:07:29.280631   21485 kubeadm.go:310] 
	I0806 07:07:29.280707   21485 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0806 07:07:29.280719   21485 kubeadm.go:310] 
	I0806 07:07:29.280756   21485 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0806 07:07:29.280762   21485 kubeadm.go:310] 
	I0806 07:07:29.280821   21485 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0806 07:07:29.280922   21485 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0806 07:07:29.280998   21485 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0806 07:07:29.281007   21485 kubeadm.go:310] 
	I0806 07:07:29.281138   21485 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0806 07:07:29.281210   21485 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0806 07:07:29.281223   21485 kubeadm.go:310] 
	I0806 07:07:29.281318   21485 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 7h6mq1.bmdp83lhi0wmiffm \
	I0806 07:07:29.281459   21485 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d36e5c1dfe989c7d561c809d7c06e0fc13926ac8f6d664cc5d6865ea5f79931b \
	I0806 07:07:29.281494   21485 kubeadm.go:310] 	--control-plane 
	I0806 07:07:29.281500   21485 kubeadm.go:310] 
	I0806 07:07:29.281623   21485 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0806 07:07:29.281644   21485 kubeadm.go:310] 
	I0806 07:07:29.281748   21485 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 7h6mq1.bmdp83lhi0wmiffm \
	I0806 07:07:29.281867   21485 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d36e5c1dfe989c7d561c809d7c06e0fc13926ac8f6d664cc5d6865ea5f79931b 
	I0806 07:07:29.282270   21485 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0806 07:07:29.282293   21485 cni.go:84] Creating CNI manager for ""
	I0806 07:07:29.282300   21485 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0806 07:07:29.284317   21485 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0806 07:07:29.285828   21485 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0806 07:07:29.299491   21485 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
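	(Editor's note: the 496-byte file copied to /etc/cni/net.d/1-k8s.conflist is the bridge CNI configuration the log recommends for the kvm2 driver + crio runtime combination. The sketch below writes a generic bridge + host-local + portmap conflist of that kind; it is an assumed, representative example only, not the exact contents of minikube's file, which this log does not reproduce:

	    # illustrative bridge CNI config; plugin options and subnet are assumptions
	    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
	    {
	      "cniVersion": "0.3.1",
	      "name": "bridge",
	      "plugins": [
	        { "type": "bridge", "bridge": "bridge", "isDefaultGateway": true,
	          "ipMasq": true, "hairpinMode": true,
	          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
	        { "type": "portmap", "capabilities": { "portMappings": true } }
	      ]
	    }
	    EOF
	)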
	I0806 07:07:29.319896   21485 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0806 07:07:29.320009   21485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-505457 minikube.k8s.io/updated_at=2024_08_06T07_07_29_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=e92cb06692f5ea1ba801d10d148e5e92e807f9c8 minikube.k8s.io/name=addons-505457 minikube.k8s.io/primary=true
	I0806 07:07:29.320015   21485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 07:07:29.349014   21485 ops.go:34] apiserver oom_adj: -16
	I0806 07:07:29.442739   21485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 07:07:29.943354   21485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 07:07:30.443455   21485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 07:07:30.943739   21485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 07:07:31.442843   21485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 07:07:31.943503   21485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 07:07:32.443798   21485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 07:07:32.943163   21485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 07:07:33.442842   21485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 07:07:33.943250   21485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 07:07:34.443105   21485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 07:07:34.942850   21485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 07:07:35.443012   21485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 07:07:35.942805   21485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 07:07:36.443209   21485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 07:07:36.943684   21485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 07:07:37.443420   21485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 07:07:37.943184   21485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 07:07:38.443379   21485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 07:07:38.943369   21485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 07:07:39.443575   21485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 07:07:39.943030   21485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 07:07:40.443253   21485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 07:07:40.943780   21485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 07:07:41.442874   21485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 07:07:41.541122   21485 kubeadm.go:1113] duration metric: took 12.221193425s to wait for elevateKubeSystemPrivileges
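	(Editor's note: the repeated 'kubectl get sa default' calls above, issued roughly every 500ms from 07:07:29 to 07:07:41, are minikube waiting for the default service account to exist before it treats kube-system privilege elevation as complete; the minikube-rbac clusterrolebinding itself was created at 07:07:29. A rough shell equivalent of that wait loop follows; minikube implements this in Go, so the loop is illustrative only:

	    # poll until the controller manager has created the "default" service account
	    until sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default \
	          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	      sleep 0.5
	    done
	)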
	I0806 07:07:41.541148   21485 kubeadm.go:394] duration metric: took 22.981835919s to StartCluster
	I0806 07:07:41.541163   21485 settings.go:142] acquiring lock: {Name:mkb0bfbcae3b4205ef43487a9de84cfd8afd2e5e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 07:07:41.541278   21485 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19370-13057/kubeconfig
	I0806 07:07:41.541632   21485 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-13057/kubeconfig: {Name:mk5c066914acf8e36556b29792f75b98076ca34a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 07:07:41.541814   21485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0806 07:07:41.541850   21485 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0806 07:07:41.541932   21485 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0806 07:07:41.542038   21485 addons.go:69] Setting yakd=true in profile "addons-505457"
	I0806 07:07:41.542080   21485 addons.go:234] Setting addon yakd=true in "addons-505457"
	I0806 07:07:41.542095   21485 addons.go:69] Setting inspektor-gadget=true in profile "addons-505457"
	I0806 07:07:41.542112   21485 config.go:182] Loaded profile config "addons-505457": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0806 07:07:41.542122   21485 addons.go:69] Setting storage-provisioner=true in profile "addons-505457"
	I0806 07:07:41.542137   21485 addons.go:234] Setting addon inspektor-gadget=true in "addons-505457"
	I0806 07:07:41.542146   21485 addons.go:69] Setting volumesnapshots=true in profile "addons-505457"
	I0806 07:07:41.542153   21485 addons.go:234] Setting addon storage-provisioner=true in "addons-505457"
	I0806 07:07:41.542121   21485 host.go:66] Checking if "addons-505457" exists ...
	I0806 07:07:41.542170   21485 host.go:66] Checking if "addons-505457" exists ...
	I0806 07:07:41.542175   21485 addons.go:69] Setting metrics-server=true in profile "addons-505457"
	I0806 07:07:41.542179   21485 host.go:66] Checking if "addons-505457" exists ...
	I0806 07:07:41.542181   21485 addons.go:69] Setting registry=true in profile "addons-505457"
	I0806 07:07:41.542192   21485 addons.go:234] Setting addon metrics-server=true in "addons-505457"
	I0806 07:07:41.542200   21485 addons.go:69] Setting default-storageclass=true in profile "addons-505457"
	I0806 07:07:41.542212   21485 addons.go:69] Setting cloud-spanner=true in profile "addons-505457"
	I0806 07:07:41.542219   21485 addons.go:69] Setting ingress=true in profile "addons-505457"
	I0806 07:07:41.542227   21485 addons.go:69] Setting helm-tiller=true in profile "addons-505457"
	I0806 07:07:41.542239   21485 addons.go:234] Setting addon ingress=true in "addons-505457"
	I0806 07:07:41.542204   21485 addons.go:234] Setting addon registry=true in "addons-505457"
	I0806 07:07:41.542249   21485 addons.go:234] Setting addon helm-tiller=true in "addons-505457"
	I0806 07:07:41.542252   21485 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-505457"
	I0806 07:07:41.542264   21485 host.go:66] Checking if "addons-505457" exists ...
	I0806 07:07:41.542270   21485 host.go:66] Checking if "addons-505457" exists ...
	I0806 07:07:41.542274   21485 host.go:66] Checking if "addons-505457" exists ...
	I0806 07:07:41.542394   21485 addons.go:69] Setting ingress-dns=true in profile "addons-505457"
	I0806 07:07:41.542418   21485 addons.go:234] Setting addon ingress-dns=true in "addons-505457"
	I0806 07:07:41.542448   21485 host.go:66] Checking if "addons-505457" exists ...
	I0806 07:07:41.542130   21485 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-505457"
	I0806 07:07:41.542621   21485 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:07:41.542634   21485 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:07:41.542240   21485 addons.go:234] Setting addon cloud-spanner=true in "addons-505457"
	I0806 07:07:41.542657   21485 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:07:41.542664   21485 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:07:41.542675   21485 host.go:66] Checking if "addons-505457" exists ...
	I0806 07:07:41.542621   21485 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:07:41.542759   21485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:07:41.542770   21485 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:07:41.542799   21485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:07:41.542679   21485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:07:41.542936   21485 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:07:41.542983   21485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:07:41.542138   21485 addons.go:69] Setting volcano=true in profile "addons-505457"
	I0806 07:07:41.543187   21485 addons.go:234] Setting addon volcano=true in "addons-505457"
	I0806 07:07:41.543212   21485 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:07:41.542667   21485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:07:41.543226   21485 host.go:66] Checking if "addons-505457" exists ...
	I0806 07:07:41.542191   21485 addons.go:69] Setting gcp-auth=true in profile "addons-505457"
	I0806 07:07:41.543501   21485 mustload.go:65] Loading cluster: addons-505457
	I0806 07:07:41.543585   21485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:07:41.543666   21485 config.go:182] Loaded profile config "addons-505457": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0806 07:07:41.543804   21485 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:07:41.543846   21485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:07:41.544108   21485 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:07:41.544165   21485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:07:41.542195   21485 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-505457"
	I0806 07:07:41.544408   21485 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-505457"
	I0806 07:07:41.544436   21485 host.go:66] Checking if "addons-505457" exists ...
	I0806 07:07:41.544792   21485 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:07:41.544810   21485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:07:41.542164   21485 addons.go:234] Setting addon volumesnapshots=true in "addons-505457"
	I0806 07:07:41.542219   21485 host.go:66] Checking if "addons-505457" exists ...
	I0806 07:07:41.545036   21485 host.go:66] Checking if "addons-505457" exists ...
	I0806 07:07:41.542625   21485 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-505457"
	I0806 07:07:41.545383   21485 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:07:41.545402   21485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:07:41.545428   21485 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:07:41.545455   21485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:07:41.542681   21485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:07:41.542651   21485 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:07:41.545947   21485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:07:41.542691   21485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:07:41.545914   21485 out.go:177] * Verifying Kubernetes components...
	I0806 07:07:41.542157   21485 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-505457"
	I0806 07:07:41.551630   21485 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-505457"
	I0806 07:07:41.551672   21485 host.go:66] Checking if "addons-505457" exists ...
	I0806 07:07:41.552053   21485 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:07:41.552090   21485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:07:41.553024   21485 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 07:07:41.563666   21485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39891
	I0806 07:07:41.563871   21485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39921
	I0806 07:07:41.564319   21485 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:07:41.564430   21485 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:07:41.564990   21485 main.go:141] libmachine: Using API Version  1
	I0806 07:07:41.564998   21485 main.go:141] libmachine: Using API Version  1
	I0806 07:07:41.565008   21485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:07:41.565017   21485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:07:41.565097   21485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37547
	I0806 07:07:41.565134   21485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37995
	I0806 07:07:41.565513   21485 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:07:41.565790   21485 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:07:41.565883   21485 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:07:41.566160   21485 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:07:41.566198   21485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:07:41.566411   21485 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:07:41.566452   21485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:07:41.566665   21485 main.go:141] libmachine: Using API Version  1
	I0806 07:07:41.566694   21485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:07:41.566750   21485 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:07:41.567061   21485 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:07:41.567428   21485 main.go:141] libmachine: Using API Version  1
	I0806 07:07:41.567444   21485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:07:41.567579   21485 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:07:41.567624   21485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:07:41.567818   21485 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:07:41.580894   21485 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:07:41.580936   21485 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:07:41.580949   21485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:07:41.580974   21485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:07:41.583527   21485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45161
	I0806 07:07:41.584469   21485 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:07:41.585198   21485 main.go:141] libmachine: Using API Version  1
	I0806 07:07:41.585217   21485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:07:41.586452   21485 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:07:41.587120   21485 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:07:41.587164   21485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:07:41.590582   21485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40539
	I0806 07:07:41.590598   21485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41753
	I0806 07:07:41.590662   21485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41675
	I0806 07:07:41.591179   21485 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:07:41.591273   21485 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:07:41.591716   21485 main.go:141] libmachine: Using API Version  1
	I0806 07:07:41.591740   21485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:07:41.592081   21485 main.go:141] libmachine: Using API Version  1
	I0806 07:07:41.592098   21485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:07:41.592163   21485 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:07:41.592214   21485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38655
	I0806 07:07:41.592391   21485 main.go:141] libmachine: (addons-505457) Calling .GetState
	I0806 07:07:41.592788   21485 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:07:41.593308   21485 main.go:141] libmachine: Using API Version  1
	I0806 07:07:41.593324   21485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:07:41.593465   21485 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:07:41.594095   21485 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:07:41.594122   21485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:07:41.594357   21485 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:07:41.594586   21485 main.go:141] libmachine: (addons-505457) Calling .GetState
	I0806 07:07:41.594980   21485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44035
	I0806 07:07:41.595090   21485 main.go:141] libmachine: (addons-505457) Calling .DriverName
	I0806 07:07:41.595196   21485 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:07:41.595421   21485 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:07:41.595912   21485 main.go:141] libmachine: Using API Version  1
	I0806 07:07:41.595928   21485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:07:41.596486   21485 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:07:41.596539   21485 main.go:141] libmachine: (addons-505457) Calling .DriverName
	I0806 07:07:41.596872   21485 main.go:141] libmachine: (addons-505457) Calling .GetState
	I0806 07:07:41.597188   21485 main.go:141] libmachine: Using API Version  1
	I0806 07:07:41.597325   21485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:07:41.597969   21485 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.1
	I0806 07:07:41.598528   21485 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:07:41.598848   21485 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0806 07:07:41.599027   21485 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:07:41.599277   21485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:07:41.600219   21485 host.go:66] Checking if "addons-505457" exists ...
	I0806 07:07:41.600608   21485 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0806 07:07:41.600624   21485 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0806 07:07:41.600641   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHHostname
	I0806 07:07:41.601175   21485 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:07:41.601216   21485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:07:41.601663   21485 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0806 07:07:41.602745   21485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36999
	I0806 07:07:41.603353   21485 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:07:41.604072   21485 main.go:141] libmachine: Using API Version  1
	I0806 07:07:41.604087   21485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:07:41.604160   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:07:41.604591   21485 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:07:41.604712   21485 main.go:141] libmachine: (addons-505457) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:7a:7b", ip: ""} in network mk-addons-505457: {Iface:virbr1 ExpiryTime:2024-08-06 08:07:01 +0000 UTC Type:0 Mac:52:54:00:d5:7a:7b Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:addons-505457 Clientid:01:52:54:00:d5:7a:7b}
	I0806 07:07:41.604727   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined IP address 192.168.39.6 and MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:07:41.604927   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHPort
	I0806 07:07:41.605211   21485 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0806 07:07:41.605296   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHKeyPath
	I0806 07:07:41.605429   21485 main.go:141] libmachine: (addons-505457) Calling .GetState
	I0806 07:07:41.605469   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHUsername
	I0806 07:07:41.605644   21485 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/addons-505457/id_rsa Username:docker}
	I0806 07:07:41.607410   21485 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0806 07:07:41.607428   21485 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0806 07:07:41.607445   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHHostname
	I0806 07:07:41.608468   21485 main.go:141] libmachine: (addons-505457) Calling .DriverName
	I0806 07:07:41.610110   21485 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0806 07:07:41.610641   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:07:41.611001   21485 main.go:141] libmachine: (addons-505457) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:7a:7b", ip: ""} in network mk-addons-505457: {Iface:virbr1 ExpiryTime:2024-08-06 08:07:01 +0000 UTC Type:0 Mac:52:54:00:d5:7a:7b Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:addons-505457 Clientid:01:52:54:00:d5:7a:7b}
	I0806 07:07:41.611020   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined IP address 192.168.39.6 and MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:07:41.611307   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHPort
	I0806 07:07:41.611504   21485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34085
	I0806 07:07:41.611781   21485 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0806 07:07:41.611795   21485 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0806 07:07:41.611813   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHHostname
	I0806 07:07:41.612455   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHKeyPath
	I0806 07:07:41.612648   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHUsername
	I0806 07:07:41.612931   21485 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/addons-505457/id_rsa Username:docker}
	I0806 07:07:41.615080   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:07:41.615446   21485 main.go:141] libmachine: (addons-505457) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:7a:7b", ip: ""} in network mk-addons-505457: {Iface:virbr1 ExpiryTime:2024-08-06 08:07:01 +0000 UTC Type:0 Mac:52:54:00:d5:7a:7b Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:addons-505457 Clientid:01:52:54:00:d5:7a:7b}
	I0806 07:07:41.615473   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined IP address 192.168.39.6 and MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:07:41.615733   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHPort
	I0806 07:07:41.615844   21485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45369
	I0806 07:07:41.615935   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHKeyPath
	I0806 07:07:41.616091   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHUsername
	I0806 07:07:41.616235   21485 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/addons-505457/id_rsa Username:docker}
	I0806 07:07:41.616551   21485 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:07:41.616639   21485 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:07:41.618585   21485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46569
	I0806 07:07:41.618730   21485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43195
	I0806 07:07:41.619007   21485 main.go:141] libmachine: Using API Version  1
	I0806 07:07:41.619028   21485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:07:41.619043   21485 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:07:41.619461   21485 main.go:141] libmachine: Using API Version  1
	I0806 07:07:41.619486   21485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:07:41.619538   21485 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:07:41.619574   21485 main.go:141] libmachine: Using API Version  1
	I0806 07:07:41.619588   21485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:07:41.620143   21485 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:07:41.620179   21485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:07:41.620431   21485 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:07:41.620533   21485 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:07:41.620583   21485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42341
	I0806 07:07:41.620669   21485 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:07:41.620728   21485 main.go:141] libmachine: (addons-505457) Calling .GetState
	I0806 07:07:41.620861   21485 main.go:141] libmachine: (addons-505457) Calling .GetState
	I0806 07:07:41.620970   21485 main.go:141] libmachine: Using API Version  1
	I0806 07:07:41.620991   21485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:07:41.621719   21485 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:07:41.621853   21485 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:07:41.622275   21485 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:07:41.622316   21485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:07:41.622493   21485 main.go:141] libmachine: Using API Version  1
	I0806 07:07:41.622514   21485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:07:41.623018   21485 main.go:141] libmachine: (addons-505457) Calling .DriverName
	I0806 07:07:41.624753   21485 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:07:41.625870   21485 addons.go:234] Setting addon default-storageclass=true in "addons-505457"
	I0806 07:07:41.625906   21485 host.go:66] Checking if "addons-505457" exists ...
	I0806 07:07:41.626273   21485 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:07:41.626303   21485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:07:41.627735   21485 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.30.0
	I0806 07:07:41.629159   21485 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:07:41.629203   21485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:07:41.629506   21485 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0806 07:07:41.629527   21485 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0806 07:07:41.629550   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHHostname
	I0806 07:07:41.632920   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:07:41.633351   21485 main.go:141] libmachine: (addons-505457) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:7a:7b", ip: ""} in network mk-addons-505457: {Iface:virbr1 ExpiryTime:2024-08-06 08:07:01 +0000 UTC Type:0 Mac:52:54:00:d5:7a:7b Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:addons-505457 Clientid:01:52:54:00:d5:7a:7b}
	I0806 07:07:41.633370   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined IP address 192.168.39.6 and MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:07:41.633666   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHPort
	I0806 07:07:41.633877   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHKeyPath
	I0806 07:07:41.634042   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHUsername
	I0806 07:07:41.634157   21485 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/addons-505457/id_rsa Username:docker}
	I0806 07:07:41.634512   21485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39553
	I0806 07:07:41.634987   21485 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:07:41.635077   21485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41101
	I0806 07:07:41.635520   21485 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:07:41.635696   21485 main.go:141] libmachine: Using API Version  1
	I0806 07:07:41.635708   21485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:07:41.636192   21485 main.go:141] libmachine: Using API Version  1
	I0806 07:07:41.636207   21485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:07:41.636537   21485 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:07:41.637074   21485 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:07:41.637109   21485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:07:41.637606   21485 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:07:41.638174   21485 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:07:41.638196   21485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:07:41.642462   21485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40551
	I0806 07:07:41.642879   21485 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:07:41.643389   21485 main.go:141] libmachine: Using API Version  1
	I0806 07:07:41.643411   21485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:07:41.643713   21485 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:07:41.643878   21485 main.go:141] libmachine: (addons-505457) Calling .DriverName
	I0806 07:07:41.647261   21485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35203
	I0806 07:07:41.647860   21485 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:07:41.648446   21485 main.go:141] libmachine: Using API Version  1
	I0806 07:07:41.648464   21485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:07:41.649025   21485 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:07:41.650841   21485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44253
	I0806 07:07:41.651142   21485 main.go:141] libmachine: (addons-505457) Calling .GetState
	I0806 07:07:41.651521   21485 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:07:41.652186   21485 main.go:141] libmachine: Using API Version  1
	I0806 07:07:41.652203   21485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:07:41.652743   21485 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:07:41.652953   21485 main.go:141] libmachine: (addons-505457) Calling .GetState
	I0806 07:07:41.653312   21485 main.go:141] libmachine: (addons-505457) Calling .DriverName
	I0806 07:07:41.654547   21485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35531
	I0806 07:07:41.654661   21485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32941
	I0806 07:07:41.655024   21485 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:07:41.655326   21485 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0806 07:07:41.655507   21485 main.go:141] libmachine: Using API Version  1
	I0806 07:07:41.655527   21485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:07:41.655556   21485 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:07:41.655757   21485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46429
	I0806 07:07:41.655914   21485 main.go:141] libmachine: (addons-505457) Calling .DriverName
	I0806 07:07:41.655940   21485 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:07:41.656082   21485 main.go:141] libmachine: Using API Version  1
	I0806 07:07:41.656097   21485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:07:41.656598   21485 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:07:41.656914   21485 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:07:41.656954   21485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:07:41.657079   21485 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0806 07:07:41.657094   21485 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0806 07:07:41.657111   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHHostname
	I0806 07:07:41.657539   21485 main.go:141] libmachine: (addons-505457) Calling .GetState
	I0806 07:07:41.657572   21485 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:07:41.657858   21485 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.1
	I0806 07:07:41.658553   21485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34269
	I0806 07:07:41.659123   21485 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0806 07:07:41.658136   21485 main.go:141] libmachine: Using API Version  1
	I0806 07:07:41.659147   21485 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0806 07:07:41.659276   21485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:07:41.659289   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHHostname
	I0806 07:07:41.659886   21485 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:07:41.660422   21485 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:07:41.660533   21485 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-505457"
	I0806 07:07:41.660574   21485 host.go:66] Checking if "addons-505457" exists ...
	I0806 07:07:41.660996   21485 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:07:41.661029   21485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:07:41.661094   21485 main.go:141] libmachine: Using API Version  1
	I0806 07:07:41.661112   21485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:07:41.661208   21485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43677
	I0806 07:07:41.661383   21485 main.go:141] libmachine: (addons-505457) Calling .GetState
	I0806 07:07:41.661476   21485 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:07:41.662033   21485 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:07:41.662075   21485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:07:41.662597   21485 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:07:41.663697   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:07:41.663819   21485 main.go:141] libmachine: (addons-505457) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:7a:7b", ip: ""} in network mk-addons-505457: {Iface:virbr1 ExpiryTime:2024-08-06 08:07:01 +0000 UTC Type:0 Mac:52:54:00:d5:7a:7b Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:addons-505457 Clientid:01:52:54:00:d5:7a:7b}
	I0806 07:07:41.663831   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined IP address 192.168.39.6 and MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:07:41.664130   21485 main.go:141] libmachine: (addons-505457) Calling .DriverName
	I0806 07:07:41.664399   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHPort
	I0806 07:07:41.664618   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHKeyPath
	I0806 07:07:41.664812   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHUsername
	I0806 07:07:41.665011   21485 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/addons-505457/id_rsa Username:docker}
	I0806 07:07:41.666554   21485 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0806 07:07:41.666754   21485 main.go:141] libmachine: Using API Version  1
	I0806 07:07:41.666772   21485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:07:41.667114   21485 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:07:41.667264   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:07:41.667484   21485 main.go:141] libmachine: (addons-505457) Calling .GetState
	I0806 07:07:41.668580   21485 main.go:141] libmachine: (addons-505457) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:7a:7b", ip: ""} in network mk-addons-505457: {Iface:virbr1 ExpiryTime:2024-08-06 08:07:01 +0000 UTC Type:0 Mac:52:54:00:d5:7a:7b Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:addons-505457 Clientid:01:52:54:00:d5:7a:7b}
	I0806 07:07:41.668606   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined IP address 192.168.39.6 and MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:07:41.669586   21485 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0806 07:07:41.669735   21485 main.go:141] libmachine: (addons-505457) Calling .DriverName
	I0806 07:07:41.669798   21485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41645
	I0806 07:07:41.670154   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHPort
	I0806 07:07:41.670299   21485 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:07:41.670311   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHKeyPath
	I0806 07:07:41.670452   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHUsername
	I0806 07:07:41.670592   21485 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/addons-505457/id_rsa Username:docker}
	I0806 07:07:41.670697   21485 main.go:141] libmachine: Using API Version  1
	I0806 07:07:41.670715   21485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:07:41.671044   21485 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:07:41.671261   21485 main.go:141] libmachine: (addons-505457) Calling .GetState
	I0806 07:07:41.671375   21485 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0806 07:07:41.672837   21485 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0806 07:07:41.672858   21485 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0806 07:07:41.672885   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHHostname
	I0806 07:07:41.672950   21485 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0806 07:07:41.673806   21485 main.go:141] libmachine: (addons-505457) Calling .DriverName
	I0806 07:07:41.674434   21485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44747
	I0806 07:07:41.675077   21485 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:07:41.675175   21485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40973
	I0806 07:07:41.675923   21485 main.go:141] libmachine: Using API Version  1
	I0806 07:07:41.675941   21485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:07:41.676014   21485 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:07:41.676081   21485 out.go:177]   - Using image docker.io/registry:2.8.3
	I0806 07:07:41.676533   21485 main.go:141] libmachine: Using API Version  1
	I0806 07:07:41.676553   21485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:07:41.676613   21485 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:07:41.676774   21485 main.go:141] libmachine: (addons-505457) Calling .GetState
	I0806 07:07:41.676843   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:07:41.676969   21485 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0806 07:07:41.677038   21485 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:07:41.677089   21485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46549
	I0806 07:07:41.677713   21485 main.go:141] libmachine: (addons-505457) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:7a:7b", ip: ""} in network mk-addons-505457: {Iface:virbr1 ExpiryTime:2024-08-06 08:07:01 +0000 UTC Type:0 Mac:52:54:00:d5:7a:7b Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:addons-505457 Clientid:01:52:54:00:d5:7a:7b}
	I0806 07:07:41.677733   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined IP address 192.168.39.6 and MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:07:41.677871   21485 main.go:141] libmachine: (addons-505457) Calling .GetState
	I0806 07:07:41.677933   21485 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:07:41.677996   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHPort
	I0806 07:07:41.678159   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHKeyPath
	I0806 07:07:41.678285   21485 main.go:141] libmachine: Using API Version  1
	I0806 07:07:41.678444   21485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:07:41.678533   21485 main.go:141] libmachine: (addons-505457) Calling .DriverName
	I0806 07:07:41.678592   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHUsername
	I0806 07:07:41.678743   21485 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/addons-505457/id_rsa Username:docker}
	I0806 07:07:41.679024   21485 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:07:41.679398   21485 main.go:141] libmachine: (addons-505457) Calling .GetState
	I0806 07:07:41.679586   21485 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0806 07:07:41.680495   21485 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.22
	I0806 07:07:41.680560   21485 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0806 07:07:41.680978   21485 main.go:141] libmachine: (addons-505457) Calling .DriverName
	I0806 07:07:41.681039   21485 main.go:141] libmachine: (addons-505457) Calling .DriverName
	I0806 07:07:41.681226   21485 main.go:141] libmachine: Making call to close driver server
	I0806 07:07:41.681237   21485 main.go:141] libmachine: (addons-505457) Calling .Close
	I0806 07:07:41.681393   21485 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0806 07:07:41.681412   21485 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0806 07:07:41.681429   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHHostname
	I0806 07:07:41.681530   21485 main.go:141] libmachine: (addons-505457) DBG | Closing plugin on server side
	I0806 07:07:41.681545   21485 main.go:141] libmachine: Successfully made call to close driver server
	I0806 07:07:41.681556   21485 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 07:07:41.681572   21485 main.go:141] libmachine: Making call to close driver server
	I0806 07:07:41.681579   21485 main.go:141] libmachine: (addons-505457) Calling .Close
	I0806 07:07:41.682704   21485 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0806 07:07:41.682778   21485 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0806 07:07:41.682790   21485 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0806 07:07:41.682805   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHHostname
	I0806 07:07:41.683400   21485 main.go:141] libmachine: Successfully made call to close driver server
	I0806 07:07:41.683414   21485 main.go:141] libmachine: Making call to close connection to plugin binary
	W0806 07:07:41.683476   21485 out.go:239] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0806 07:07:41.684141   21485 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0806 07:07:41.684160   21485 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0806 07:07:41.684174   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHHostname
	I0806 07:07:41.684707   21485 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0806 07:07:41.685118   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:07:41.685612   21485 main.go:141] libmachine: (addons-505457) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:7a:7b", ip: ""} in network mk-addons-505457: {Iface:virbr1 ExpiryTime:2024-08-06 08:07:01 +0000 UTC Type:0 Mac:52:54:00:d5:7a:7b Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:addons-505457 Clientid:01:52:54:00:d5:7a:7b}
	I0806 07:07:41.685634   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined IP address 192.168.39.6 and MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:07:41.686151   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHPort
	I0806 07:07:41.686338   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:07:41.686373   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHKeyPath
	I0806 07:07:41.686511   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHUsername
	I0806 07:07:41.686654   21485 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/addons-505457/id_rsa Username:docker}
	I0806 07:07:41.687129   21485 main.go:141] libmachine: (addons-505457) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:7a:7b", ip: ""} in network mk-addons-505457: {Iface:virbr1 ExpiryTime:2024-08-06 08:07:01 +0000 UTC Type:0 Mac:52:54:00:d5:7a:7b Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:addons-505457 Clientid:01:52:54:00:d5:7a:7b}
	I0806 07:07:41.687149   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined IP address 192.168.39.6 and MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:07:41.687310   21485 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0806 07:07:41.687342   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHPort
	I0806 07:07:41.687535   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHKeyPath
	I0806 07:07:41.687670   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHUsername
	I0806 07:07:41.687800   21485 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/addons-505457/id_rsa Username:docker}
	I0806 07:07:41.688037   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:07:41.688227   21485 main.go:141] libmachine: (addons-505457) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:7a:7b", ip: ""} in network mk-addons-505457: {Iface:virbr1 ExpiryTime:2024-08-06 08:07:01 +0000 UTC Type:0 Mac:52:54:00:d5:7a:7b Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:addons-505457 Clientid:01:52:54:00:d5:7a:7b}
	I0806 07:07:41.688243   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined IP address 192.168.39.6 and MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:07:41.688411   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHPort
	I0806 07:07:41.688576   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHKeyPath
	I0806 07:07:41.688741   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHUsername
	I0806 07:07:41.689056   21485 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/addons-505457/id_rsa Username:docker}
	I0806 07:07:41.690206   21485 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0806 07:07:41.691580   21485 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0806 07:07:41.691601   21485 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0806 07:07:41.691618   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHHostname
	I0806 07:07:41.692846   21485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44127
	I0806 07:07:41.693496   21485 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:07:41.694185   21485 main.go:141] libmachine: Using API Version  1
	I0806 07:07:41.694205   21485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:07:41.694500   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:07:41.694716   21485 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:07:41.694984   21485 main.go:141] libmachine: (addons-505457) Calling .GetState
	I0806 07:07:41.695044   21485 main.go:141] libmachine: (addons-505457) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:7a:7b", ip: ""} in network mk-addons-505457: {Iface:virbr1 ExpiryTime:2024-08-06 08:07:01 +0000 UTC Type:0 Mac:52:54:00:d5:7a:7b Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:addons-505457 Clientid:01:52:54:00:d5:7a:7b}
	I0806 07:07:41.695060   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined IP address 192.168.39.6 and MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:07:41.695149   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHPort
	I0806 07:07:41.695715   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHKeyPath
	I0806 07:07:41.695924   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHUsername
	I0806 07:07:41.696100   21485 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/addons-505457/id_rsa Username:docker}
	I0806 07:07:41.696729   21485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37875
	I0806 07:07:41.697050   21485 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:07:41.697263   21485 main.go:141] libmachine: (addons-505457) Calling .DriverName
	I0806 07:07:41.697620   21485 main.go:141] libmachine: Using API Version  1
	I0806 07:07:41.697636   21485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:07:41.698012   21485 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:07:41.698475   21485 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:07:41.698512   21485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:07:41.699107   21485 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0806 07:07:41.700268   21485 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0806 07:07:41.700285   21485 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0806 07:07:41.700304   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHHostname
	I0806 07:07:41.706154   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:07:41.706529   21485 main.go:141] libmachine: (addons-505457) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:7a:7b", ip: ""} in network mk-addons-505457: {Iface:virbr1 ExpiryTime:2024-08-06 08:07:01 +0000 UTC Type:0 Mac:52:54:00:d5:7a:7b Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:addons-505457 Clientid:01:52:54:00:d5:7a:7b}
	I0806 07:07:41.706554   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined IP address 192.168.39.6 and MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:07:41.706665   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHPort
	I0806 07:07:41.706857   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHKeyPath
	I0806 07:07:41.707047   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHUsername
	I0806 07:07:41.707209   21485 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/addons-505457/id_rsa Username:docker}
	I0806 07:07:41.710833   21485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39249
	I0806 07:07:41.711312   21485 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:07:41.711922   21485 main.go:141] libmachine: Using API Version  1
	I0806 07:07:41.711945   21485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:07:41.712300   21485 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:07:41.712649   21485 main.go:141] libmachine: (addons-505457) Calling .GetState
	I0806 07:07:41.714120   21485 main.go:141] libmachine: (addons-505457) Calling .DriverName
	I0806 07:07:41.714382   21485 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0806 07:07:41.714396   21485 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0806 07:07:41.714414   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHHostname
	I0806 07:07:41.715574   21485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37033
	I0806 07:07:41.716421   21485 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:07:41.717048   21485 main.go:141] libmachine: Using API Version  1
	I0806 07:07:41.717076   21485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:07:41.717562   21485 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:07:41.717777   21485 main.go:141] libmachine: (addons-505457) Calling .GetState
	I0806 07:07:41.718177   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:07:41.718611   21485 main.go:141] libmachine: (addons-505457) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:7a:7b", ip: ""} in network mk-addons-505457: {Iface:virbr1 ExpiryTime:2024-08-06 08:07:01 +0000 UTC Type:0 Mac:52:54:00:d5:7a:7b Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:addons-505457 Clientid:01:52:54:00:d5:7a:7b}
	I0806 07:07:41.718629   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined IP address 192.168.39.6 and MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:07:41.718774   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHPort
	I0806 07:07:41.718964   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHKeyPath
	I0806 07:07:41.719137   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHUsername
	I0806 07:07:41.719326   21485 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/addons-505457/id_rsa Username:docker}
	I0806 07:07:41.720045   21485 main.go:141] libmachine: (addons-505457) Calling .DriverName
	I0806 07:07:41.721954   21485 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0806 07:07:41.723316   21485 out.go:177]   - Using image docker.io/busybox:stable
	I0806 07:07:41.724558   21485 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0806 07:07:41.724577   21485 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0806 07:07:41.724595   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHHostname
	I0806 07:07:41.727292   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:07:41.727628   21485 main.go:141] libmachine: (addons-505457) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:7a:7b", ip: ""} in network mk-addons-505457: {Iface:virbr1 ExpiryTime:2024-08-06 08:07:01 +0000 UTC Type:0 Mac:52:54:00:d5:7a:7b Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:addons-505457 Clientid:01:52:54:00:d5:7a:7b}
	I0806 07:07:41.727652   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined IP address 192.168.39.6 and MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:07:41.727804   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHPort
	I0806 07:07:41.727966   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHKeyPath
	I0806 07:07:41.728102   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHUsername
	I0806 07:07:41.728398   21485 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/addons-505457/id_rsa Username:docker}
	W0806 07:07:41.729691   21485 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:53744->192.168.39.6:22: read: connection reset by peer
	I0806 07:07:41.729720   21485 retry.go:31] will retry after 290.754069ms: ssh: handshake failed: read tcp 192.168.39.1:53744->192.168.39.6:22: read: connection reset by peer
	I0806 07:07:42.044424   21485 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0806 07:07:42.080531   21485 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0806 07:07:42.080553   21485 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0806 07:07:42.083204   21485 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0806 07:07:42.133505   21485 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0806 07:07:42.133527   21485 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0806 07:07:42.173625   21485 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0806 07:07:42.173654   21485 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0806 07:07:42.190073   21485 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0806 07:07:42.190097   21485 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0806 07:07:42.212534   21485 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0806 07:07:42.266356   21485 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0806 07:07:42.273161   21485 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0806 07:07:42.273180   21485 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0806 07:07:42.277220   21485 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0806 07:07:42.277236   21485 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0806 07:07:42.295112   21485 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0806 07:07:42.295137   21485 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0806 07:07:42.335476   21485 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0806 07:07:42.352718   21485 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0806 07:07:42.352748   21485 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0806 07:07:42.375897   21485 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0806 07:07:42.375924   21485 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0806 07:07:42.385412   21485 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0806 07:07:42.385438   21485 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0806 07:07:42.398619   21485 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0806 07:07:42.398675   21485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0806 07:07:42.417048   21485 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0806 07:07:42.423670   21485 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0806 07:07:42.423691   21485 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0806 07:07:42.426028   21485 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0806 07:07:42.520124   21485 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0806 07:07:42.520160   21485 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0806 07:07:42.573247   21485 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0806 07:07:42.573275   21485 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0806 07:07:42.593604   21485 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0806 07:07:42.593627   21485 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0806 07:07:42.608034   21485 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0806 07:07:42.608067   21485 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0806 07:07:42.614380   21485 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0806 07:07:42.614404   21485 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0806 07:07:42.638282   21485 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0806 07:07:42.638316   21485 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0806 07:07:42.680320   21485 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0806 07:07:42.790023   21485 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0806 07:07:42.790053   21485 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0806 07:07:42.814619   21485 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0806 07:07:42.814645   21485 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0806 07:07:42.819943   21485 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0806 07:07:42.819962   21485 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0806 07:07:42.855550   21485 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0806 07:07:42.855569   21485 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0806 07:07:42.883794   21485 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0806 07:07:42.884161   21485 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0806 07:07:42.884183   21485 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0806 07:07:42.996274   21485 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0806 07:07:42.996294   21485 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0806 07:07:43.035349   21485 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0806 07:07:43.035373   21485 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0806 07:07:43.069499   21485 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0806 07:07:43.085520   21485 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0806 07:07:43.085553   21485 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0806 07:07:43.107768   21485 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0806 07:07:43.107795   21485 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0806 07:07:43.201204   21485 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0806 07:07:43.201227   21485 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0806 07:07:43.206067   21485 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0806 07:07:43.206091   21485 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0806 07:07:43.356151   21485 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0806 07:07:43.405575   21485 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0806 07:07:43.405598   21485 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0806 07:07:43.424518   21485 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0806 07:07:43.497902   21485 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0806 07:07:43.497924   21485 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0806 07:07:43.539251   21485 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0806 07:07:43.539275   21485 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0806 07:07:43.730367   21485 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.685909801s)
	I0806 07:07:43.730419   21485 main.go:141] libmachine: Making call to close driver server
	I0806 07:07:43.730429   21485 main.go:141] libmachine: (addons-505457) Calling .Close
	I0806 07:07:43.730765   21485 main.go:141] libmachine: Successfully made call to close driver server
	I0806 07:07:43.730786   21485 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 07:07:43.730796   21485 main.go:141] libmachine: Making call to close driver server
	I0806 07:07:43.730804   21485 main.go:141] libmachine: (addons-505457) Calling .Close
	I0806 07:07:43.731073   21485 main.go:141] libmachine: Successfully made call to close driver server
	I0806 07:07:43.731090   21485 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 07:07:43.951920   21485 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0806 07:07:43.951945   21485 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0806 07:07:43.960513   21485 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0806 07:07:44.268677   21485 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0806 07:07:44.268702   21485 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0806 07:07:44.588498   21485 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0806 07:07:46.326281   21485 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.11371277s)
	I0806 07:07:46.326281   21485 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.243048043s)
	I0806 07:07:46.326337   21485 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.059951011s)
	I0806 07:07:46.326336   21485 main.go:141] libmachine: Making call to close driver server
	I0806 07:07:46.326382   21485 main.go:141] libmachine: Making call to close driver server
	I0806 07:07:46.326405   21485 main.go:141] libmachine: (addons-505457) Calling .Close
	I0806 07:07:46.326425   21485 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (3.927777565s)
	I0806 07:07:46.326461   21485 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.927766408s)
	I0806 07:07:46.326481   21485 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0806 07:07:46.326385   21485 main.go:141] libmachine: (addons-505457) Calling .Close
	I0806 07:07:46.326494   21485 main.go:141] libmachine: Making call to close driver server
	I0806 07:07:46.326512   21485 main.go:141] libmachine: (addons-505457) Calling .Close
	I0806 07:07:46.326536   21485 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (3.90946426s)
	I0806 07:07:46.326389   21485 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.990886702s)
	I0806 07:07:46.326601   21485 main.go:141] libmachine: Making call to close driver server
	I0806 07:07:46.326615   21485 main.go:141] libmachine: (addons-505457) Calling .Close
	I0806 07:07:46.326558   21485 main.go:141] libmachine: Making call to close driver server
	I0806 07:07:46.326674   21485 main.go:141] libmachine: (addons-505457) Calling .Close
	I0806 07:07:46.327649   21485 node_ready.go:35] waiting up to 6m0s for node "addons-505457" to be "Ready" ...
	I0806 07:07:46.327828   21485 main.go:141] libmachine: Successfully made call to close driver server
	I0806 07:07:46.327838   21485 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 07:07:46.327813   21485 main.go:141] libmachine: (addons-505457) DBG | Closing plugin on server side
	I0806 07:07:46.327863   21485 main.go:141] libmachine: (addons-505457) DBG | Closing plugin on server side
	I0806 07:07:46.327880   21485 main.go:141] libmachine: (addons-505457) DBG | Closing plugin on server side
	I0806 07:07:46.327881   21485 main.go:141] libmachine: Successfully made call to close driver server
	I0806 07:07:46.327890   21485 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 07:07:46.327899   21485 main.go:141] libmachine: Making call to close driver server
	I0806 07:07:46.327900   21485 main.go:141] libmachine: Successfully made call to close driver server
	I0806 07:07:46.327907   21485 main.go:141] libmachine: (addons-505457) Calling .Close
	I0806 07:07:46.327907   21485 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 07:07:46.327916   21485 main.go:141] libmachine: Making call to close driver server
	I0806 07:07:46.327922   21485 main.go:141] libmachine: (addons-505457) Calling .Close
	I0806 07:07:46.328019   21485 main.go:141] libmachine: Successfully made call to close driver server
	I0806 07:07:46.328031   21485 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 07:07:46.328040   21485 main.go:141] libmachine: Making call to close driver server
	I0806 07:07:46.328048   21485 main.go:141] libmachine: (addons-505457) Calling .Close
	I0806 07:07:46.328136   21485 main.go:141] libmachine: (addons-505457) DBG | Closing plugin on server side
	I0806 07:07:46.328160   21485 main.go:141] libmachine: Successfully made call to close driver server
	I0806 07:07:46.328167   21485 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 07:07:46.328173   21485 main.go:141] libmachine: (addons-505457) DBG | Closing plugin on server side
	I0806 07:07:46.328208   21485 main.go:141] libmachine: Successfully made call to close driver server
	I0806 07:07:46.328215   21485 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 07:07:46.328222   21485 main.go:141] libmachine: Making call to close driver server
	I0806 07:07:46.328230   21485 main.go:141] libmachine: (addons-505457) Calling .Close
	I0806 07:07:46.327846   21485 main.go:141] libmachine: Making call to close driver server
	I0806 07:07:46.328264   21485 main.go:141] libmachine: (addons-505457) Calling .Close
	I0806 07:07:46.329169   21485 main.go:141] libmachine: (addons-505457) DBG | Closing plugin on server side
	I0806 07:07:46.329206   21485 main.go:141] libmachine: (addons-505457) DBG | Closing plugin on server side
	I0806 07:07:46.329236   21485 main.go:141] libmachine: Successfully made call to close driver server
	I0806 07:07:46.329244   21485 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 07:07:46.329415   21485 main.go:141] libmachine: Successfully made call to close driver server
	I0806 07:07:46.329424   21485 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 07:07:46.329457   21485 main.go:141] libmachine: (addons-505457) DBG | Closing plugin on server side
	I0806 07:07:46.329480   21485 main.go:141] libmachine: Successfully made call to close driver server
	I0806 07:07:46.329490   21485 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 07:07:46.329499   21485 addons.go:475] Verifying addon registry=true in "addons-505457"
	I0806 07:07:46.329701   21485 main.go:141] libmachine: (addons-505457) DBG | Closing plugin on server side
	I0806 07:07:46.329726   21485 main.go:141] libmachine: Successfully made call to close driver server
	I0806 07:07:46.329733   21485 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 07:07:46.331471   21485 out.go:177] * Verifying registry addon...
	I0806 07:07:46.333944   21485 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0806 07:07:46.360587   21485 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0806 07:07:46.360615   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:07:46.368956   21485 node_ready.go:49] node "addons-505457" has status "Ready":"True"
	I0806 07:07:46.368983   21485 node_ready.go:38] duration metric: took 41.307309ms for node "addons-505457" to be "Ready" ...
	I0806 07:07:46.368993   21485 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0806 07:07:46.421962   21485 main.go:141] libmachine: Making call to close driver server
	I0806 07:07:46.421986   21485 main.go:141] libmachine: (addons-505457) Calling .Close
	I0806 07:07:46.422408   21485 main.go:141] libmachine: (addons-505457) DBG | Closing plugin on server side
	I0806 07:07:46.422440   21485 main.go:141] libmachine: Successfully made call to close driver server
	I0806 07:07:46.422458   21485 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 07:07:46.425315   21485 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-k5d8d" in "kube-system" namespace to be "Ready" ...
	I0806 07:07:46.930734   21485 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-505457" context rescaled to 1 replicas
	I0806 07:07:46.944892   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:07:47.368864   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:07:47.918575   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:07:48.448482   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:07:48.556719   21485 pod_ready.go:92] pod "coredns-7db6d8ff4d-k5d8d" in "kube-system" namespace has status "Ready":"True"
	I0806 07:07:48.556742   21485 pod_ready.go:81] duration metric: took 2.131404444s for pod "coredns-7db6d8ff4d-k5d8d" in "kube-system" namespace to be "Ready" ...
	I0806 07:07:48.556755   21485 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-l2x2b" in "kube-system" namespace to be "Ready" ...
	I0806 07:07:48.591054   21485 pod_ready.go:92] pod "coredns-7db6d8ff4d-l2x2b" in "kube-system" namespace has status "Ready":"True"
	I0806 07:07:48.591075   21485 pod_ready.go:81] duration metric: took 34.311425ms for pod "coredns-7db6d8ff4d-l2x2b" in "kube-system" namespace to be "Ready" ...
	I0806 07:07:48.591088   21485 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-505457" in "kube-system" namespace to be "Ready" ...
	I0806 07:07:48.634099   21485 pod_ready.go:92] pod "etcd-addons-505457" in "kube-system" namespace has status "Ready":"True"
	I0806 07:07:48.634122   21485 pod_ready.go:81] duration metric: took 43.026472ms for pod "etcd-addons-505457" in "kube-system" namespace to be "Ready" ...
	I0806 07:07:48.634141   21485 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-505457" in "kube-system" namespace to be "Ready" ...
	I0806 07:07:48.661157   21485 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0806 07:07:48.661193   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHHostname
	I0806 07:07:48.664411   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:07:48.664931   21485 main.go:141] libmachine: (addons-505457) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:7a:7b", ip: ""} in network mk-addons-505457: {Iface:virbr1 ExpiryTime:2024-08-06 08:07:01 +0000 UTC Type:0 Mac:52:54:00:d5:7a:7b Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:addons-505457 Clientid:01:52:54:00:d5:7a:7b}
	I0806 07:07:48.664989   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined IP address 192.168.39.6 and MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:07:48.665156   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHPort
	I0806 07:07:48.665361   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHKeyPath
	I0806 07:07:48.665565   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHUsername
	I0806 07:07:48.665745   21485 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/addons-505457/id_rsa Username:docker}
	I0806 07:07:48.669109   21485 pod_ready.go:92] pod "kube-apiserver-addons-505457" in "kube-system" namespace has status "Ready":"True"
	I0806 07:07:48.669126   21485 pod_ready.go:81] duration metric: took 34.978654ms for pod "kube-apiserver-addons-505457" in "kube-system" namespace to be "Ready" ...
	I0806 07:07:48.669136   21485 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-505457" in "kube-system" namespace to be "Ready" ...
	I0806 07:07:48.680720   21485 pod_ready.go:92] pod "kube-controller-manager-addons-505457" in "kube-system" namespace has status "Ready":"True"
	I0806 07:07:48.680741   21485 pod_ready.go:81] duration metric: took 11.598929ms for pod "kube-controller-manager-addons-505457" in "kube-system" namespace to be "Ready" ...
	I0806 07:07:48.680752   21485 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-t7djg" in "kube-system" namespace to be "Ready" ...
	I0806 07:07:48.834605   21485 pod_ready.go:92] pod "kube-proxy-t7djg" in "kube-system" namespace has status "Ready":"True"
	I0806 07:07:48.834624   21485 pod_ready.go:81] duration metric: took 153.865794ms for pod "kube-proxy-t7djg" in "kube-system" namespace to be "Ready" ...
	I0806 07:07:48.834632   21485 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-505457" in "kube-system" namespace to be "Ready" ...
	I0806 07:07:48.845666   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:07:49.287006   21485 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0806 07:07:49.357744   21485 pod_ready.go:92] pod "kube-scheduler-addons-505457" in "kube-system" namespace has status "Ready":"True"
	I0806 07:07:49.357767   21485 pod_ready.go:81] duration metric: took 523.128862ms for pod "kube-scheduler-addons-505457" in "kube-system" namespace to be "Ready" ...
	I0806 07:07:49.357777   21485 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-bxmng" in "kube-system" namespace to be "Ready" ...
	I0806 07:07:49.381509   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:07:49.546771   21485 addons.go:234] Setting addon gcp-auth=true in "addons-505457"
	I0806 07:07:49.546826   21485 host.go:66] Checking if "addons-505457" exists ...
	I0806 07:07:49.547274   21485 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:07:49.547309   21485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:07:49.562294   21485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40609
	I0806 07:07:49.562703   21485 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:07:49.563152   21485 main.go:141] libmachine: Using API Version  1
	I0806 07:07:49.563172   21485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:07:49.563482   21485 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:07:49.564032   21485 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:07:49.564078   21485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:07:49.579548   21485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33635
	I0806 07:07:49.579943   21485 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:07:49.580453   21485 main.go:141] libmachine: Using API Version  1
	I0806 07:07:49.580485   21485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:07:49.580811   21485 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:07:49.581025   21485 main.go:141] libmachine: (addons-505457) Calling .GetState
	I0806 07:07:49.582517   21485 main.go:141] libmachine: (addons-505457) Calling .DriverName
	I0806 07:07:49.582726   21485 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0806 07:07:49.582752   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHHostname
	I0806 07:07:49.585398   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:07:49.585869   21485 main.go:141] libmachine: (addons-505457) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:7a:7b", ip: ""} in network mk-addons-505457: {Iface:virbr1 ExpiryTime:2024-08-06 08:07:01 +0000 UTC Type:0 Mac:52:54:00:d5:7a:7b Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:addons-505457 Clientid:01:52:54:00:d5:7a:7b}
	I0806 07:07:49.585895   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined IP address 192.168.39.6 and MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:07:49.586008   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHPort
	I0806 07:07:49.586179   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHKeyPath
	I0806 07:07:49.586342   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHUsername
	I0806 07:07:49.586478   21485 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/addons-505457/id_rsa Username:docker}
	I0806 07:07:49.851246   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:07:50.371776   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:07:50.460951   21485 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.034886464s)
	I0806 07:07:50.461010   21485 main.go:141] libmachine: Making call to close driver server
	I0806 07:07:50.461021   21485 main.go:141] libmachine: (addons-505457) Calling .Close
	I0806 07:07:50.461022   21485 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.780669681s)
	I0806 07:07:50.461072   21485 main.go:141] libmachine: Making call to close driver server
	I0806 07:07:50.461129   21485 main.go:141] libmachine: (addons-505457) Calling .Close
	I0806 07:07:50.461151   21485 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (7.577331206s)
	I0806 07:07:50.461187   21485 main.go:141] libmachine: Making call to close driver server
	I0806 07:07:50.461205   21485 main.go:141] libmachine: (addons-505457) Calling .Close
	I0806 07:07:50.461233   21485 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.391692933s)
	I0806 07:07:50.461266   21485 main.go:141] libmachine: Making call to close driver server
	I0806 07:07:50.461279   21485 main.go:141] libmachine: (addons-505457) Calling .Close
	I0806 07:07:50.461318   21485 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (7.105128466s)
	I0806 07:07:50.461348   21485 main.go:141] libmachine: Making call to close driver server
	I0806 07:07:50.461359   21485 main.go:141] libmachine: (addons-505457) Calling .Close
	I0806 07:07:50.461369   21485 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.036825363s)
	I0806 07:07:50.461388   21485 main.go:141] libmachine: Making call to close driver server
	I0806 07:07:50.461396   21485 main.go:141] libmachine: (addons-505457) Calling .Close
	I0806 07:07:50.461527   21485 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.500943175s)
	W0806 07:07:50.461562   21485 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0806 07:07:50.461589   21485 main.go:141] libmachine: Successfully made call to close driver server
	I0806 07:07:50.461625   21485 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 07:07:50.461663   21485 main.go:141] libmachine: Making call to close driver server
	I0806 07:07:50.461692   21485 main.go:141] libmachine: (addons-505457) DBG | Closing plugin on server side
	I0806 07:07:50.461702   21485 main.go:141] libmachine: (addons-505457) Calling .Close
	I0806 07:07:50.461720   21485 main.go:141] libmachine: Successfully made call to close driver server
	I0806 07:07:50.461727   21485 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 07:07:50.461735   21485 main.go:141] libmachine: Making call to close driver server
	I0806 07:07:50.461741   21485 main.go:141] libmachine: (addons-505457) Calling .Close
	I0806 07:07:50.461862   21485 main.go:141] libmachine: (addons-505457) DBG | Closing plugin on server side
	I0806 07:07:50.461886   21485 main.go:141] libmachine: Successfully made call to close driver server
	I0806 07:07:50.461903   21485 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 07:07:50.461910   21485 main.go:141] libmachine: Making call to close driver server
	I0806 07:07:50.461671   21485 main.go:141] libmachine: Successfully made call to close driver server
	I0806 07:07:50.461932   21485 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 07:07:50.461942   21485 main.go:141] libmachine: Making call to close driver server
	I0806 07:07:50.461949   21485 main.go:141] libmachine: (addons-505457) Calling .Close
	I0806 07:07:50.461969   21485 main.go:141] libmachine: (addons-505457) DBG | Closing plugin on server side
	I0806 07:07:50.461917   21485 main.go:141] libmachine: (addons-505457) Calling .Close
	I0806 07:07:50.461999   21485 main.go:141] libmachine: Successfully made call to close driver server
	I0806 07:07:50.462016   21485 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 07:07:50.461594   21485 retry.go:31] will retry after 220.627211ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
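	The failed apply and the retry above are the usual CRD-establishment race: the VolumeSnapshotClass object is applied in the same kubectl invocation as the CRDs that define it, so the first attempt fails with "ensure CRDs are installed first" and the addon installer retries after a short backoff (and, a little later in this log, with `kubectl apply --force`). The Go sketch below only illustrates that retry-with-backoff shape; it is not minikube's actual addons/retry code, and the function and variable names are made up for the example. At the kubectl level the equivalent fix would be to apply the CRDs first and wait for them to be established before applying the VolumeSnapshotClass.

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// retryWithBackoff retries fn up to maxAttempts times, doubling the delay
// between attempts -- the same shape as the "will retry after 220.627211ms"
// line in the log above. Illustrative only.
func retryWithBackoff(fn func() error, maxAttempts int, initial time.Duration) error {
	delay := initial
	var err error
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		if err = fn(); err == nil {
			return nil
		}
		fmt.Printf("attempt %d failed: %v; retrying in %v\n", attempt, err, delay)
		time.Sleep(delay)
		delay *= 2
	}
	return fmt.Errorf("gave up after %d attempts: %w", maxAttempts, err)
}

func main() {
	attempts := 0
	// Stand-in for the kubectl apply of csi-hostpath-snapshotclass.yaml:
	// it fails while the CRD is still being established (here, just once).
	apply := func() error {
		attempts++
		if attempts < 2 {
			return errors.New(`no matches for kind "VolumeSnapshotClass"`)
		}
		return nil
	}
	if err := retryWithBackoff(apply, 5, 200*time.Millisecond); err != nil {
		fmt.Println(err)
	}
}
```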
	I0806 07:07:50.462025   21485 addons.go:475] Verifying addon metrics-server=true in "addons-505457"
	I0806 07:07:50.461639   21485 main.go:141] libmachine: (addons-505457) DBG | Closing plugin on server side
	I0806 07:07:50.462196   21485 main.go:141] libmachine: (addons-505457) DBG | Closing plugin on server side
	I0806 07:07:50.462221   21485 main.go:141] libmachine: Successfully made call to close driver server
	I0806 07:07:50.462228   21485 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 07:07:50.462257   21485 main.go:141] libmachine: (addons-505457) DBG | Closing plugin on server side
	I0806 07:07:50.462282   21485 main.go:141] libmachine: Successfully made call to close driver server
	I0806 07:07:50.462289   21485 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 07:07:50.462451   21485 main.go:141] libmachine: (addons-505457) DBG | Closing plugin on server side
	I0806 07:07:50.462482   21485 main.go:141] libmachine: Successfully made call to close driver server
	I0806 07:07:50.462509   21485 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 07:07:50.462517   21485 main.go:141] libmachine: Making call to close driver server
	I0806 07:07:50.462525   21485 main.go:141] libmachine: (addons-505457) Calling .Close
	I0806 07:07:50.462618   21485 main.go:141] libmachine: (addons-505457) DBG | Closing plugin on server side
	I0806 07:07:50.462643   21485 main.go:141] libmachine: Successfully made call to close driver server
	I0806 07:07:50.462649   21485 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 07:07:50.464123   21485 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-505457 service yakd-dashboard -n yakd-dashboard
	
	I0806 07:07:50.464691   21485 main.go:141] libmachine: (addons-505457) DBG | Closing plugin on server side
	I0806 07:07:50.464713   21485 main.go:141] libmachine: Successfully made call to close driver server
	I0806 07:07:50.464731   21485 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 07:07:50.464732   21485 main.go:141] libmachine: (addons-505457) DBG | Closing plugin on server side
	I0806 07:07:50.464741   21485 main.go:141] libmachine: Making call to close driver server
	I0806 07:07:50.464749   21485 main.go:141] libmachine: (addons-505457) Calling .Close
	I0806 07:07:50.464758   21485 main.go:141] libmachine: Successfully made call to close driver server
	I0806 07:07:50.464766   21485 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 07:07:50.464775   21485 addons.go:475] Verifying addon ingress=true in "addons-505457"
	I0806 07:07:50.465116   21485 main.go:141] libmachine: Successfully made call to close driver server
	I0806 07:07:50.465393   21485 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 07:07:50.465138   21485 main.go:141] libmachine: (addons-505457) DBG | Closing plugin on server side
	I0806 07:07:50.466500   21485 out.go:177] * Verifying ingress addon...
	I0806 07:07:50.468520   21485 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0806 07:07:50.497836   21485 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0806 07:07:50.497858   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
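	The kapi.go lines that dominate the rest of this log are a readiness poll: list the pods matching a label selector, report the current phase, and loop until every matching pod reports Ready. The sketch below shows that same loop written against client-go; it is a minimal stand-in under the assumption of a reachable kubeconfig at the default path, not minikube's kapi implementation.

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Assumes a kubeconfig at the default ~/.kube/config location.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
	defer cancel()

	selector := "app.kubernetes.io/name=ingress-nginx"
	for {
		pods, err := client.CoreV1().Pods("ingress-nginx").List(ctx,
			metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			panic(err)
		}
		ready := 0
		for i := range pods.Items {
			if podReady(&pods.Items[i]) {
				ready++
			}
		}
		fmt.Printf("%d/%d pods ready for %q\n", ready, len(pods.Items), selector)
		if len(pods.Items) > 0 && ready == len(pods.Items) {
			return
		}
		select {
		case <-ctx.Done():
			panic(ctx.Err())
		case <-time.After(500 * time.Millisecond):
		}
	}
}
```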
	I0806 07:07:50.521103   21485 main.go:141] libmachine: Making call to close driver server
	I0806 07:07:50.521138   21485 main.go:141] libmachine: (addons-505457) Calling .Close
	I0806 07:07:50.521457   21485 main.go:141] libmachine: Successfully made call to close driver server
	I0806 07:07:50.521473   21485 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 07:07:50.683631   21485 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0806 07:07:50.851560   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:07:50.979163   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:07:51.369147   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:07:51.468552   21485 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-bxmng" in "kube-system" namespace has status "Ready":"False"
	I0806 07:07:51.487668   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:07:51.800272   21485 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.21172001s)
	I0806 07:07:51.800300   21485 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.217552899s)
	I0806 07:07:51.800325   21485 main.go:141] libmachine: Making call to close driver server
	I0806 07:07:51.800338   21485 main.go:141] libmachine: (addons-505457) Calling .Close
	I0806 07:07:51.800618   21485 main.go:141] libmachine: Successfully made call to close driver server
	I0806 07:07:51.800636   21485 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 07:07:51.800646   21485 main.go:141] libmachine: Making call to close driver server
	I0806 07:07:51.800654   21485 main.go:141] libmachine: (addons-505457) Calling .Close
	I0806 07:07:51.800941   21485 main.go:141] libmachine: (addons-505457) DBG | Closing plugin on server side
	I0806 07:07:51.800946   21485 main.go:141] libmachine: Successfully made call to close driver server
	I0806 07:07:51.800957   21485 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 07:07:51.800967   21485 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-505457"
	I0806 07:07:51.802636   21485 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0806 07:07:51.802648   21485 out.go:177] * Verifying csi-hostpath-driver addon...
	I0806 07:07:51.804627   21485 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0806 07:07:51.805228   21485 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0806 07:07:51.805892   21485 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0806 07:07:51.805905   21485 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0806 07:07:51.822936   21485 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0806 07:07:51.822955   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:07:51.854494   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:07:51.899694   21485 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0806 07:07:51.899716   21485 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0806 07:07:51.938391   21485 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0806 07:07:51.938414   21485 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0806 07:07:51.973996   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:07:51.988953   21485 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0806 07:07:52.310684   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:07:52.338860   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:07:52.480745   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:07:52.513654   21485 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.829973895s)
	I0806 07:07:52.513701   21485 main.go:141] libmachine: Making call to close driver server
	I0806 07:07:52.513718   21485 main.go:141] libmachine: (addons-505457) Calling .Close
	I0806 07:07:52.514023   21485 main.go:141] libmachine: Successfully made call to close driver server
	I0806 07:07:52.514047   21485 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 07:07:52.514065   21485 main.go:141] libmachine: Making call to close driver server
	I0806 07:07:52.514074   21485 main.go:141] libmachine: (addons-505457) Calling .Close
	I0806 07:07:52.514306   21485 main.go:141] libmachine: (addons-505457) DBG | Closing plugin on server side
	I0806 07:07:52.514340   21485 main.go:141] libmachine: Successfully made call to close driver server
	I0806 07:07:52.514354   21485 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 07:07:52.813656   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:07:52.855946   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:07:52.943152   21485 main.go:141] libmachine: Making call to close driver server
	I0806 07:07:52.943181   21485 main.go:141] libmachine: (addons-505457) Calling .Close
	I0806 07:07:52.943478   21485 main.go:141] libmachine: Successfully made call to close driver server
	I0806 07:07:52.943498   21485 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 07:07:52.943507   21485 main.go:141] libmachine: Making call to close driver server
	I0806 07:07:52.943515   21485 main.go:141] libmachine: (addons-505457) Calling .Close
	I0806 07:07:52.943521   21485 main.go:141] libmachine: (addons-505457) DBG | Closing plugin on server side
	I0806 07:07:52.943761   21485 main.go:141] libmachine: (addons-505457) DBG | Closing plugin on server side
	I0806 07:07:52.943808   21485 main.go:141] libmachine: Successfully made call to close driver server
	I0806 07:07:52.943827   21485 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 07:07:52.945686   21485 addons.go:475] Verifying addon gcp-auth=true in "addons-505457"
	I0806 07:07:52.947511   21485 out.go:177] * Verifying gcp-auth addon...
	I0806 07:07:52.949472   21485 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0806 07:07:52.962311   21485 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0806 07:07:52.962329   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:07:52.980932   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:07:53.311899   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:07:53.338760   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:07:53.452597   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:07:53.478635   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:07:53.813371   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:07:53.839501   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:07:53.863931   21485 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-bxmng" in "kube-system" namespace has status "Ready":"False"
	I0806 07:07:53.953947   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:07:53.972707   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:07:54.311448   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:07:54.341745   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:07:54.453642   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:07:54.473348   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:07:54.811602   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:07:54.839275   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:07:54.955131   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:07:54.979042   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:07:55.311378   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:07:55.339117   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:07:55.453933   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:07:55.473277   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:07:55.810459   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:07:55.839203   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:07:55.953682   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:07:55.973269   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:07:56.310822   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:07:56.339632   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:07:56.367924   21485 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-bxmng" in "kube-system" namespace has status "Ready":"False"
	I0806 07:07:56.454727   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:07:56.772990   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:07:56.810956   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:07:56.838627   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:07:56.953679   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:07:56.973779   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:07:57.312588   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:07:57.339136   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:07:57.455228   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:07:57.473132   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:07:57.811952   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:07:57.839238   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:07:57.953762   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:07:57.974199   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:07:58.311575   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:07:58.347089   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:07:58.453344   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:07:58.473535   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:07:58.811336   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:07:58.839265   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:07:58.867104   21485 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-bxmng" in "kube-system" namespace has status "Ready":"False"
	I0806 07:07:58.953842   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:07:58.974861   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:07:59.311080   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:07:59.339431   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:07:59.453058   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:07:59.473186   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:07:59.810926   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:07:59.839124   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:07:59.953330   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:07:59.973095   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:00.310724   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:00.339395   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:08:00.363726   21485 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-bxmng" in "kube-system" namespace has status "Ready":"True"
	I0806 07:08:00.363753   21485 pod_ready.go:81] duration metric: took 11.005968814s for pod "nvidia-device-plugin-daemonset-bxmng" in "kube-system" namespace to be "Ready" ...
	I0806 07:08:00.363769   21485 pod_ready.go:38] duration metric: took 13.994766516s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0806 07:08:00.363782   21485 api_server.go:52] waiting for apiserver process to appear ...
	I0806 07:08:00.363830   21485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 07:08:00.381009   21485 api_server.go:72] duration metric: took 18.839123062s to wait for apiserver process to appear ...
	I0806 07:08:00.381033   21485 api_server.go:88] waiting for apiserver healthz status ...
	I0806 07:08:00.381057   21485 api_server.go:253] Checking apiserver healthz at https://192.168.39.6:8443/healthz ...
	I0806 07:08:00.385284   21485 api_server.go:279] https://192.168.39.6:8443/healthz returned 200:
	ok
	I0806 07:08:00.386308   21485 api_server.go:141] control plane version: v1.30.3
	I0806 07:08:00.386325   21485 api_server.go:131] duration metric: took 5.285561ms to wait for apiserver health ...
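	The healthz step above boils down to polling the apiserver's /healthz endpoint until it answers 200 "ok", then reading the reported control-plane version. A rough Go sketch of that wait loop follows; it skips TLS verification purely to stay short (an assumption for the example), whereas the real check authenticates with the credentials from the kubeconfig.

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or the deadline passes.
// TLS verification is skipped only to keep the sketch self-contained.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver not healthy within %v", timeout)
}

func main() {
	url := "https://192.168.39.6:8443/healthz" // endpoint seen in the log above
	if err := waitForHealthz(url, time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("ok")
}
```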
	I0806 07:08:00.386333   21485 system_pods.go:43] waiting for kube-system pods to appear ...
	I0806 07:08:00.395028   21485 system_pods.go:59] 18 kube-system pods found
	I0806 07:08:00.395054   21485 system_pods.go:61] "coredns-7db6d8ff4d-k5d8d" [9205d3f6-ae5e-473a-8760-1e9bb9cff77d] Running
	I0806 07:08:00.395062   21485 system_pods.go:61] "csi-hostpath-attacher-0" [82460523-4e86-4f8c-a043-d6075df14822] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0806 07:08:00.395068   21485 system_pods.go:61] "csi-hostpath-resizer-0" [c481f083-8e0b-4fc2-935e-770096142e18] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0806 07:08:00.395082   21485 system_pods.go:61] "csi-hostpathplugin-jv7h6" [289bff98-7069-4ae9-bc1e-af9315c603cb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0806 07:08:00.395090   21485 system_pods.go:61] "etcd-addons-505457" [d5cbe284-1ded-4e91-8525-e27ad1df2767] Running
	I0806 07:08:00.395094   21485 system_pods.go:61] "kube-apiserver-addons-505457" [dfd3ca17-01fc-405b-a1e9-ba0c93da4327] Running
	I0806 07:08:00.395098   21485 system_pods.go:61] "kube-controller-manager-addons-505457" [1450ab97-3dd4-47f7-9bf1-0eaeb31520f2] Running
	I0806 07:08:00.395103   21485 system_pods.go:61] "kube-ingress-dns-minikube" [a14eefbb-7e96-4b00-bce1-8cbeb64e7f88] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0806 07:08:00.395109   21485 system_pods.go:61] "kube-proxy-t7djg" [dbaeea31-08de-4723-9ec5-f51d376650b5] Running
	I0806 07:08:00.395113   21485 system_pods.go:61] "kube-scheduler-addons-505457" [c045203a-aae6-462a-b8d0-70d051cbf2f6] Running
	I0806 07:08:00.395120   21485 system_pods.go:61] "metrics-server-c59844bb4-dghj8" [a73b51d1-3f19-4608-b37f-2d92f56fec02] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0806 07:08:00.395124   21485 system_pods.go:61] "nvidia-device-plugin-daemonset-bxmng" [84dbfa29-9673-477b-a4b1-2e17cc5b745b] Running
	I0806 07:08:00.395131   21485 system_pods.go:61] "registry-698f998955-4xwg8" [d2496fc4-46dd-4a87-b8e6-51cc90641be4] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0806 07:08:00.395136   21485 system_pods.go:61] "registry-proxy-rpkh9" [273a082a-adbe-45cf-93b4-da0d67d0c071] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0806 07:08:00.395145   21485 system_pods.go:61] "snapshot-controller-745499f584-c2sg5" [154847d3-3f68-46bd-b9a2-f94537000c5d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0806 07:08:00.395154   21485 system_pods.go:61] "snapshot-controller-745499f584-xz4nt" [71e5e5ee-6e5b-409b-a1af-5e9d590e7609] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0806 07:08:00.395158   21485 system_pods.go:61] "storage-provisioner" [9631af58-26e8-4f64-a332-ab89aa6934ff] Running
	I0806 07:08:00.395166   21485 system_pods.go:61] "tiller-deploy-6677d64bcd-n2x6k" [d1db42b6-6642-4121-ae56-573e9f26e727] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0806 07:08:00.395171   21485 system_pods.go:74] duration metric: took 8.833455ms to wait for pod list to return data ...
	I0806 07:08:00.395180   21485 default_sa.go:34] waiting for default service account to be created ...
	I0806 07:08:00.397298   21485 default_sa.go:45] found service account: "default"
	I0806 07:08:00.397312   21485 default_sa.go:55] duration metric: took 2.127726ms for default service account to be created ...
	I0806 07:08:00.397319   21485 system_pods.go:116] waiting for k8s-apps to be running ...
	I0806 07:08:00.405261   21485 system_pods.go:86] 18 kube-system pods found
	I0806 07:08:00.405281   21485 system_pods.go:89] "coredns-7db6d8ff4d-k5d8d" [9205d3f6-ae5e-473a-8760-1e9bb9cff77d] Running
	I0806 07:08:00.405289   21485 system_pods.go:89] "csi-hostpath-attacher-0" [82460523-4e86-4f8c-a043-d6075df14822] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0806 07:08:00.405295   21485 system_pods.go:89] "csi-hostpath-resizer-0" [c481f083-8e0b-4fc2-935e-770096142e18] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0806 07:08:00.405304   21485 system_pods.go:89] "csi-hostpathplugin-jv7h6" [289bff98-7069-4ae9-bc1e-af9315c603cb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0806 07:08:00.405310   21485 system_pods.go:89] "etcd-addons-505457" [d5cbe284-1ded-4e91-8525-e27ad1df2767] Running
	I0806 07:08:00.405316   21485 system_pods.go:89] "kube-apiserver-addons-505457" [dfd3ca17-01fc-405b-a1e9-ba0c93da4327] Running
	I0806 07:08:00.405321   21485 system_pods.go:89] "kube-controller-manager-addons-505457" [1450ab97-3dd4-47f7-9bf1-0eaeb31520f2] Running
	I0806 07:08:00.405330   21485 system_pods.go:89] "kube-ingress-dns-minikube" [a14eefbb-7e96-4b00-bce1-8cbeb64e7f88] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0806 07:08:00.405334   21485 system_pods.go:89] "kube-proxy-t7djg" [dbaeea31-08de-4723-9ec5-f51d376650b5] Running
	I0806 07:08:00.405339   21485 system_pods.go:89] "kube-scheduler-addons-505457" [c045203a-aae6-462a-b8d0-70d051cbf2f6] Running
	I0806 07:08:00.405347   21485 system_pods.go:89] "metrics-server-c59844bb4-dghj8" [a73b51d1-3f19-4608-b37f-2d92f56fec02] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0806 07:08:00.405351   21485 system_pods.go:89] "nvidia-device-plugin-daemonset-bxmng" [84dbfa29-9673-477b-a4b1-2e17cc5b745b] Running
	I0806 07:08:00.405361   21485 system_pods.go:89] "registry-698f998955-4xwg8" [d2496fc4-46dd-4a87-b8e6-51cc90641be4] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0806 07:08:00.405368   21485 system_pods.go:89] "registry-proxy-rpkh9" [273a082a-adbe-45cf-93b4-da0d67d0c071] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0806 07:08:00.405375   21485 system_pods.go:89] "snapshot-controller-745499f584-c2sg5" [154847d3-3f68-46bd-b9a2-f94537000c5d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0806 07:08:00.405384   21485 system_pods.go:89] "snapshot-controller-745499f584-xz4nt" [71e5e5ee-6e5b-409b-a1af-5e9d590e7609] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0806 07:08:00.405391   21485 system_pods.go:89] "storage-provisioner" [9631af58-26e8-4f64-a332-ab89aa6934ff] Running
	I0806 07:08:00.405397   21485 system_pods.go:89] "tiller-deploy-6677d64bcd-n2x6k" [d1db42b6-6642-4121-ae56-573e9f26e727] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0806 07:08:00.405403   21485 system_pods.go:126] duration metric: took 8.080452ms to wait for k8s-apps to be running ...
	I0806 07:08:00.405410   21485 system_svc.go:44] waiting for kubelet service to be running ....
	I0806 07:08:00.405480   21485 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0806 07:08:00.420585   21485 system_svc.go:56] duration metric: took 15.169517ms WaitForService to wait for kubelet
	I0806 07:08:00.420610   21485 kubeadm.go:582] duration metric: took 18.878726694s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0806 07:08:00.420628   21485 node_conditions.go:102] verifying NodePressure condition ...
	I0806 07:08:00.423903   21485 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0806 07:08:00.423931   21485 node_conditions.go:123] node cpu capacity is 2
	I0806 07:08:00.423944   21485 node_conditions.go:105] duration metric: took 3.312018ms to run NodePressure ...
	I0806 07:08:00.423954   21485 start.go:241] waiting for startup goroutines ...
	I0806 07:08:00.452838   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:00.472896   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:00.811059   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:00.840993   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:08:00.953522   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:00.973116   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:01.311615   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:01.339164   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:08:01.453606   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:01.473201   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:01.810986   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:01.838643   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:08:01.953717   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:01.974212   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:02.311187   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:02.339450   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:08:02.453222   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:02.473200   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:02.811125   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:02.839354   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:08:02.953054   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:02.972518   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:03.325464   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:03.339022   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:08:03.453719   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:03.473098   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:03.812895   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:03.841060   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:08:03.952950   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:03.972848   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:04.311697   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:04.339572   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:08:04.454362   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:04.473683   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:04.810299   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:04.839114   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:08:04.953770   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:04.973350   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:05.565924   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:08:05.568667   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:05.569011   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:05.569203   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:05.810130   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:05.839051   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:08:05.953677   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:05.972962   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:06.311237   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:06.339184   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:08:06.454749   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:06.474213   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:06.811128   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:06.839518   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:08:06.953747   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:06.973784   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:07.311440   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:07.339572   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:08:07.453503   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:07.474016   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:07.810452   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:07.839287   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:08:07.953159   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:07.972576   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:08.310743   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:08.338829   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:08:08.453713   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:08.474180   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:08.813014   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:08.846924   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:08:08.954298   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:08.972721   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:09.310126   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:09.338576   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:08:09.453381   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:09.472759   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:09.810628   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:09.838698   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:08:09.953998   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:09.972700   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:10.310030   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:10.338269   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:08:10.453591   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:10.473217   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:10.812216   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:10.838643   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:08:10.960349   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:10.977498   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:11.310667   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:11.339909   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:08:11.453084   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:11.474459   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:11.811335   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:11.839233   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:08:11.955541   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:11.973230   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:12.311659   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:12.338552   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:08:12.453111   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:12.472770   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:12.811639   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:12.838801   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:08:12.954434   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:12.973989   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:13.309997   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:13.341598   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:08:13.453586   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:13.475156   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:13.811446   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:13.844640   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:08:13.954669   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:13.973161   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:14.310464   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:14.338635   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:08:14.454167   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:14.472717   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:14.810316   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:14.839481   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:08:14.953666   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:14.973190   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:15.310776   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:15.338353   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:08:15.453354   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:15.473643   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:15.810578   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:15.839367   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:08:15.952826   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:15.973454   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:16.311492   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:16.339282   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:08:16.454142   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:16.473417   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:16.811370   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:16.838639   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:08:16.953829   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:16.973239   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:17.311702   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:17.341585   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:08:17.452692   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:17.472939   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:17.811206   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:17.839337   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:08:17.956305   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:17.973300   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:18.319385   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:18.338422   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:08:18.453744   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:18.474225   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:18.899456   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:08:18.904515   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:18.953060   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:18.972505   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:19.312336   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:19.338971   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:08:19.454136   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:19.473339   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:19.810831   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:19.850534   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:08:19.953828   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:19.972606   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:20.312401   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:20.338831   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:08:20.453298   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:20.473211   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:20.811352   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:20.838597   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:08:20.953087   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:20.972937   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:21.310642   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:21.355525   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:08:21.453968   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:21.473138   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:21.813975   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:21.839971   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:08:21.953662   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:21.973108   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:22.310932   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:22.338358   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:08:22.453025   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:22.473512   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:22.810674   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:22.839326   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:08:22.953994   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:22.972896   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:23.311857   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:23.338941   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:08:23.452680   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:23.474492   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:23.815228   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:23.839230   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:08:24.235612   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:24.235684   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:24.310750   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:24.338981   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:08:24.452942   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:24.472957   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:24.810784   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:24.838325   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:08:24.952970   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:24.972777   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:25.317198   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:25.351173   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:08:25.453590   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:25.474232   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:25.810019   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:25.838863   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:08:25.953229   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:25.973144   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:26.313787   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:26.339291   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:08:26.453079   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:26.472801   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:26.811946   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:26.839224   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:08:26.953936   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:26.974571   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:27.311482   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:27.338829   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:08:27.452828   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:27.472717   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:27.810206   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:27.838557   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:08:27.953386   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:27.972965   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:28.310251   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:28.338785   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:08:28.452601   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:28.472808   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:28.810761   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:28.838030   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:08:28.952931   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:28.973661   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:29.311274   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:29.340439   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:08:29.453311   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:29.473116   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:29.810736   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:29.838764   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:08:30.337383   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:30.337585   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:30.337798   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:30.339898   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:08:30.452999   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:30.472867   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:30.811575   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:30.839219   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:08:30.953481   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:30.973345   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:31.310945   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:31.339101   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:08:31.454440   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:31.475057   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:31.810886   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:31.838294   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:08:31.958319   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:31.973399   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:32.311094   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:32.338768   21485 kapi.go:107] duration metric: took 46.004823729s to wait for kubernetes.io/minikube-addons=registry ...
	I0806 07:08:32.453728   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:32.473412   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:32.811193   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:32.952943   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:32.973368   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:33.311669   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:33.453801   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:33.472436   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:33.811173   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:33.954203   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:33.972949   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:34.310541   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:34.452953   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:34.472678   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:34.810454   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:34.953889   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:34.973937   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:35.310521   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:35.453332   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:35.472821   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:35.811120   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:35.953809   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:35.972607   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:36.311763   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:36.453355   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:36.473139   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:36.811439   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:36.953868   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:36.972528   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:37.311562   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:37.452449   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:37.472961   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:37.811536   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:37.952614   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:37.973272   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:38.311032   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:38.453566   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:38.473207   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:38.812377   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:38.953566   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:38.973414   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:39.314114   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:39.453715   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:39.473433   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:39.811255   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:39.953050   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:39.972825   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:40.311090   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:40.453592   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:40.473791   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:40.810583   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:40.953484   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:40.973434   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:41.310674   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:41.453530   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:41.473025   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:41.811270   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:41.953567   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:41.973679   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:42.312013   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:42.453749   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:42.473878   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:42.810614   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:42.952980   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:42.973096   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:43.310805   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:43.454843   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:43.472687   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:43.815491   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:43.952551   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:43.973139   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:44.314470   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:44.453210   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:44.473149   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:44.811388   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:44.953339   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:44.973618   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:45.310782   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:45.454754   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:45.472696   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:45.816741   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:45.954702   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:45.973561   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:46.311775   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:46.458146   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:46.493168   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:46.811114   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:46.953582   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:46.973625   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:47.313301   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:47.453487   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:47.473585   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:47.817051   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:47.958766   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:47.979731   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:48.311209   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:48.458867   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:48.478222   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:48.810646   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:48.952814   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:48.972499   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:49.311288   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:49.454269   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:49.475968   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:49.811234   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:49.952778   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:49.973908   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:50.311316   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:50.457195   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:50.486281   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:51.205999   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:51.212478   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:51.213032   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:51.311304   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:51.462174   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:51.472731   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:51.811274   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:51.952970   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:51.972646   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:52.314441   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:52.452613   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:52.473298   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:52.815186   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:52.956525   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:52.974175   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:53.310388   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:53.453095   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:53.483943   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:53.810022   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:53.953671   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:53.973565   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:54.311328   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:54.453685   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:54.473496   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:54.810865   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:54.953806   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:54.973680   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:55.311292   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:55.453747   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:55.473223   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:55.810427   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:55.953377   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:55.973854   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:56.311253   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:56.452566   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:56.472933   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:56.810767   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:56.966554   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:56.980122   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:57.317844   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:57.453130   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:57.478271   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:57.816962   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:57.954021   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:57.975251   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:58.343137   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:58.453787   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:58.473337   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:58.812265   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:58.961727   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:58.974622   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:59.312580   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:59.455645   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:59.474168   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:59.810416   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:59.952690   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:59.973506   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:09:00.310993   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:09:00.454499   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:09:00.481734   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:09:00.812013   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:09:00.954649   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:09:00.973562   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:09:01.311008   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:09:01.454899   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:09:01.474609   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:09:01.813004   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:09:01.953802   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:09:01.974133   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:09:02.311852   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:09:02.453596   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:09:02.474307   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:09:02.810356   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:09:02.952876   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:09:02.973110   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:09:03.310392   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:09:03.453473   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:09:03.474309   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:09:03.822283   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:09:03.954282   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:09:03.973036   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:09:04.317275   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:09:04.454132   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:09:04.474449   21485 kapi.go:107] duration metric: took 1m14.005924522s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0806 07:09:04.811853   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:09:04.954758   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:09:05.312002   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:09:05.453947   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:09:05.810590   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:09:05.954631   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:09:06.311283   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:09:06.452847   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:09:06.811454   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:09:06.953584   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:09:07.310644   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:09:07.453522   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:09:07.810603   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:09:07.954652   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:09:08.420187   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:09:08.453451   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:09:08.817525   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:09:08.953486   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:09:09.321942   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:09:09.453521   21485 kapi.go:107] duration metric: took 1m16.504043823s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0806 07:09:09.455540   21485 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-505457 cluster.
	I0806 07:09:09.457101   21485 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0806 07:09:09.458584   21485 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0806 07:09:09.810654   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:09:10.310727   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:09:10.810354   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:09:11.310146   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:09:11.813554   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:09:12.322052   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:09:12.810716   21485 kapi.go:107] duration metric: took 1m21.005480553s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0806 07:09:12.812639   21485 out.go:177] * Enabled addons: nvidia-device-plugin, ingress-dns, cloud-spanner, storage-provisioner, default-storageclass, metrics-server, helm-tiller, inspektor-gadget, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0806 07:09:12.813793   21485 addons.go:510] duration metric: took 1m31.27186756s for enable addons: enabled=[nvidia-device-plugin ingress-dns cloud-spanner storage-provisioner default-storageclass metrics-server helm-tiller inspektor-gadget yakd storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0806 07:09:12.813829   21485 start.go:246] waiting for cluster config update ...
	I0806 07:09:12.813850   21485 start.go:255] writing updated cluster config ...
	I0806 07:09:12.814088   21485 ssh_runner.go:195] Run: rm -f paused
	I0806 07:09:12.866843   21485 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0806 07:09:12.868866   21485 out.go:177] * Done! kubectl is now configured to use "addons-505457" cluster and "default" namespace by default
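
The long run of kapi.go:96/107 lines above is minikube's addon readiness wait: it repeatedly lists pods by a label selector (for example kubernetes.io/minikube-addons=csi-hostpath-driver) until they report Running, then logs the elapsed duration. Below is a minimal client-go sketch of that polling pattern under stated assumptions; it is not minikube's own kapi.go, and the function name, poll interval, namespace, and timeout are illustrative.

	// waitpods.go: a minimal sketch of a label-selector readiness wait (not minikube's kapi.go).
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitForPodsBySelector polls until every pod matching selector in namespace
	// reports phase Running, or the timeout expires, and prints the elapsed time.
	func waitForPodsBySelector(ctx context.Context, cs kubernetes.Interface, namespace, selector string, timeout time.Duration) error {
		start := time.Now()
		err := wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true, func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods(namespace).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				return false, nil // treat transient API errors as "keep polling"
			}
			if len(pods.Items) == 0 {
				fmt.Printf("waiting for pod %q, current state: Pending\n", selector)
				return false, nil
			}
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
					return false, nil
				}
			}
			return true, nil
		})
		if err != nil {
			return err
		}
		fmt.Printf("took %s to wait for %s\n", time.Since(start), selector)
		return nil
	}

	func main() {
		// Assumes a reachable cluster via the default kubeconfig location.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		if err := waitForPodsBySelector(context.Background(), cs, "kube-system",
			"kubernetes.io/minikube-addons=csi-hostpath-driver", 6*time.Minute); err != nil {
			panic(err)
		}
	}
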
	
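The gcp-auth messages in the log above note that credentials are mounted into every pod unless the pod carries the gcp-auth-skip-secret label. The sketch below is only illustrative: the pod name, image, and label value are hypothetical assumptions, and only the label key comes from the log message; it builds the pod with client-go types and prints it as YAML.

	// skiplabel.go: hypothetical example of a pod opting out of the gcp-auth credential mount.
	package main

	import (
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"sigs.k8s.io/yaml"
	)

	func main() {
		pod := corev1.Pod{
			TypeMeta: metav1.TypeMeta{Kind: "Pod", APIVersion: "v1"},
			ObjectMeta: metav1.ObjectMeta{
				Name:      "no-gcp-creds", // hypothetical name
				Namespace: "default",
				// Label key taken from the log message; the value here is assumed.
				Labels: map[string]string{"gcp-auth-skip-secret": "true"},
			},
			Spec: corev1.PodSpec{
				Containers: []corev1.Container{{
					Name:    "app",
					Image:   "gcr.io/k8s-minikube/busybox", // hypothetical image
					Command: []string{"sleep", "3600"},
				}},
			},
		}
		out, err := yaml.Marshal(pod)
		if err != nil {
			panic(err)
		}
		fmt.Println(string(out))
	}
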
	
	==> CRI-O <==
	Aug 06 07:12:20 addons-505457 crio[684]: time="2024-08-06 07:12:20.682617121Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722928340682591886,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:589584,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=250db983-2f6c-4deb-820f-9e8a56f128fb name=/runtime.v1.ImageService/ImageFsInfo
	Aug 06 07:12:20 addons-505457 crio[684]: time="2024-08-06 07:12:20.683491992Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=39e5f951-3cac-4f09-9366-03cb65b6bf19 name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 07:12:20 addons-505457 crio[684]: time="2024-08-06 07:12:20.683572315Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=39e5f951-3cac-4f09-9366-03cb65b6bf19 name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 07:12:20 addons-505457 crio[684]: time="2024-08-06 07:12:20.683945244Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1bb169456345b80326bbe03ea5deb2194034ea3b0e9ae73d767584a86175ef8d,PodSandboxId:0ba76e457d98d104119a038a71a5ebea9e3a0887dc0d8436ac016292a3ce1b32,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1722928333818262584,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-xg2dt,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: da8616b2-cdae-4436-8bb8-2fd558ba9b6c,},Annotations:map[string]string{io.kubernetes.container.hash: fc3dd633,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5716fd9d0a7957f1d3d821ea36f879dd1040843efd37b7e07f2c5271a19e2c43,PodSandboxId:48323e9942a07b1b4d41f2c930f7d80f4b5d5b58862bb529b4b84889c3a8118e,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1722928193681598452,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7fced162-c099-4abd-a6ab-20626a136899,},Annotations:map[string]string{io.kubernet
es.container.hash: acaffa5a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9776dddcced3ea27152001b269947202e4192c03c7b1afc5a43e5848f4749ef7,PodSandboxId:6e6b579adaf6bae36feaba849d1dfd126bebce03ab89556635405bbf8ceb54d0,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722928156293700671,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ae3e743b-c04d-4206-a
40c-70c96ca55dea,},Annotations:map[string]string{io.kubernetes.container.hash: d096c7f0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc63ed54019f6622542a4f78554ac1d61e333d9425e6933f71750e404ae4c498,PodSandboxId:b9c5a368647ba026206288b1729dd234647b32708ee8c41514b5e80a329a9a03,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1722928128367061773,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-p8gdd,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 9801f40a-24ab-4520-9b21-9a6dceda4450,},Anno
tations:map[string]string{io.kubernetes.container.hash: cde42411,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a60360e8ba07430103531158c7a4c8b2a18bee0bffe23935876e12eeea3daf54,PodSandboxId:ef6d9da8ab2fa845565d8fa73b9333d72f874431df49fdb611b88884dbaf4ad6,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1722928127778125029,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-xwgk9,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 51f89
63a-02a1-4e65-95f5-ebff33be12a4,},Annotations:map[string]string{io.kubernetes.container.hash: 4c93d00a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e9ca59c5d5d44d3fb6e570442f4c5915f02bafd78a7dabf81c1eec5c26b3b9e,PodSandboxId:507a32da5aa23a6fdf8198b07f8c9b4ab3888e4f3d2357ad5f0c22447d314e48,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1722928093784437366,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-dghj8,io.kubernetes.pod.namespace
: kube-system,io.kubernetes.pod.uid: a73b51d1-3f19-4608-b37f-2d92f56fec02,},Annotations:map[string]string{io.kubernetes.container.hash: 8b558cb9,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7a623480af4afbb181b874f946e3af94aca0c310fbb8b0a43223db5c3f102b1,PodSandboxId:af6ab42fed69ba47b2ce7c9a00e8098e134066605db39b09e0dcd412fe7e097c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722928069099504187,Labels:map[string]string{io.kubernetes.container.name: stora
ge-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9631af58-26e8-4f64-a332-ab89aa6934ff,},Annotations:map[string]string{io.kubernetes.container.hash: 361e54aa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:027ea1c110b7f5dfde7f8887ce55fc8c76b57576dd8e0b413a859d725bc0f45e,PodSandboxId:de0ded25e176ea853db6796ed89e6315e0859501717484ecef9c0c0456cd09e4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722928065518161478,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.nam
e: coredns-7db6d8ff4d-k5d8d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9205d3f6-ae5e-473a-8760-1e9bb9cff77d,},Annotations:map[string]string{io.kubernetes.container.hash: 68374ea1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5080e5fce2c6cf845092e92aebf13550fc5412567f7c21694410eaebc73d84d,PodSandboxId:56f279ca84ed0636221eb9b70b3b986c6ba6f7e791473eedc394ac4175e1fbf5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb0
25d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722928063132630344,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-t7djg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbaeea31-08de-4723-9ec5-f51d376650b5,},Annotations:map[string]string{io.kubernetes.container.hash: 45c04c54,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e84eb866dc0b3dd79f91399a5714b5b3bbb8eb0ac746133af657c88755e4b88,PodSandboxId:ed75660e1f6f9eb40d6a1e0d06d5f737a0f9cd77ad3556ba4b43f73b61849fc5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacb
d422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722928042832090751,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-505457,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d3a4c70f5f1a98e373ce2b1bd632569,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd494e38d6a560269121e4eb69c609c6db24bea9862163ad00e525dc7e486a9a,PodSandboxId:16986c599fc34a7842ad40cbb800162bc70784c708c03a68dc81766b286122a8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a
899,State:CONTAINER_RUNNING,CreatedAt:1722928042786306734,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-505457,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9445896d2c8822cafd25b745df86bb0b,},Annotations:map[string]string{io.kubernetes.container.hash: d5831ede,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe12a18afe3dccc7a77e9b91dea2873a822a39628e809273739532bd630c0459,PodSandboxId:2648f6a872da28fbc81c2bea3a82c35080febf469813983949da621f3e8f3f8b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,
CreatedAt:1722928042760368393,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-505457,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 503a3788790fb79ef323dff2c5ec6ce1,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04bc771142517172d3d6312f52f17040d2235a82f36f5e627dea6620331173a9,PodSandboxId:c44c5af015412e22c8b5050d0749c16c27744783d64dd9df518b059fc6b699d3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING
,CreatedAt:1722928042755640737,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-505457,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba2495c51f84135afe3ac6ebc47db99e,},Annotations:map[string]string{io.kubernetes.container.hash: 974ca483,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=39e5f951-3cac-4f09-9366-03cb65b6bf19 name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 07:12:20 addons-505457 crio[684]: time="2024-08-06 07:12:20.726188968Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d889c442-3bfd-4957-bb58-3e88fa36fbf7 name=/runtime.v1.RuntimeService/Version
	Aug 06 07:12:20 addons-505457 crio[684]: time="2024-08-06 07:12:20.726294620Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d889c442-3bfd-4957-bb58-3e88fa36fbf7 name=/runtime.v1.RuntimeService/Version
	Aug 06 07:12:20 addons-505457 crio[684]: time="2024-08-06 07:12:20.727291531Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c230ca9d-8a8f-4785-acf8-c84ccc8d24a1 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 06 07:12:20 addons-505457 crio[684]: time="2024-08-06 07:12:20.728979114Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722928340728949756,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:589584,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c230ca9d-8a8f-4785-acf8-c84ccc8d24a1 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 06 07:12:20 addons-505457 crio[684]: time="2024-08-06 07:12:20.729551330Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8327303d-1c17-48e6-b53c-05bc69aad8be name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 07:12:20 addons-505457 crio[684]: time="2024-08-06 07:12:20.729627339Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8327303d-1c17-48e6-b53c-05bc69aad8be name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 07:12:20 addons-505457 crio[684]: time="2024-08-06 07:12:20.729977855Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1bb169456345b80326bbe03ea5deb2194034ea3b0e9ae73d767584a86175ef8d,PodSandboxId:0ba76e457d98d104119a038a71a5ebea9e3a0887dc0d8436ac016292a3ce1b32,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1722928333818262584,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-xg2dt,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: da8616b2-cdae-4436-8bb8-2fd558ba9b6c,},Annotations:map[string]string{io.kubernetes.container.hash: fc3dd633,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5716fd9d0a7957f1d3d821ea36f879dd1040843efd37b7e07f2c5271a19e2c43,PodSandboxId:48323e9942a07b1b4d41f2c930f7d80f4b5d5b58862bb529b4b84889c3a8118e,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1722928193681598452,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7fced162-c099-4abd-a6ab-20626a136899,},Annotations:map[string]string{io.kubernet
es.container.hash: acaffa5a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9776dddcced3ea27152001b269947202e4192c03c7b1afc5a43e5848f4749ef7,PodSandboxId:6e6b579adaf6bae36feaba849d1dfd126bebce03ab89556635405bbf8ceb54d0,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722928156293700671,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ae3e743b-c04d-4206-a
40c-70c96ca55dea,},Annotations:map[string]string{io.kubernetes.container.hash: d096c7f0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc63ed54019f6622542a4f78554ac1d61e333d9425e6933f71750e404ae4c498,PodSandboxId:b9c5a368647ba026206288b1729dd234647b32708ee8c41514b5e80a329a9a03,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1722928128367061773,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-p8gdd,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 9801f40a-24ab-4520-9b21-9a6dceda4450,},Anno
tations:map[string]string{io.kubernetes.container.hash: cde42411,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a60360e8ba07430103531158c7a4c8b2a18bee0bffe23935876e12eeea3daf54,PodSandboxId:ef6d9da8ab2fa845565d8fa73b9333d72f874431df49fdb611b88884dbaf4ad6,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1722928127778125029,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-xwgk9,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 51f89
63a-02a1-4e65-95f5-ebff33be12a4,},Annotations:map[string]string{io.kubernetes.container.hash: 4c93d00a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e9ca59c5d5d44d3fb6e570442f4c5915f02bafd78a7dabf81c1eec5c26b3b9e,PodSandboxId:507a32da5aa23a6fdf8198b07f8c9b4ab3888e4f3d2357ad5f0c22447d314e48,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1722928093784437366,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-dghj8,io.kubernetes.pod.namespace
: kube-system,io.kubernetes.pod.uid: a73b51d1-3f19-4608-b37f-2d92f56fec02,},Annotations:map[string]string{io.kubernetes.container.hash: 8b558cb9,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7a623480af4afbb181b874f946e3af94aca0c310fbb8b0a43223db5c3f102b1,PodSandboxId:af6ab42fed69ba47b2ce7c9a00e8098e134066605db39b09e0dcd412fe7e097c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722928069099504187,Labels:map[string]string{io.kubernetes.container.name: stora
ge-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9631af58-26e8-4f64-a332-ab89aa6934ff,},Annotations:map[string]string{io.kubernetes.container.hash: 361e54aa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:027ea1c110b7f5dfde7f8887ce55fc8c76b57576dd8e0b413a859d725bc0f45e,PodSandboxId:de0ded25e176ea853db6796ed89e6315e0859501717484ecef9c0c0456cd09e4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722928065518161478,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.nam
e: coredns-7db6d8ff4d-k5d8d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9205d3f6-ae5e-473a-8760-1e9bb9cff77d,},Annotations:map[string]string{io.kubernetes.container.hash: 68374ea1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5080e5fce2c6cf845092e92aebf13550fc5412567f7c21694410eaebc73d84d,PodSandboxId:56f279ca84ed0636221eb9b70b3b986c6ba6f7e791473eedc394ac4175e1fbf5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb0
25d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722928063132630344,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-t7djg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbaeea31-08de-4723-9ec5-f51d376650b5,},Annotations:map[string]string{io.kubernetes.container.hash: 45c04c54,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e84eb866dc0b3dd79f91399a5714b5b3bbb8eb0ac746133af657c88755e4b88,PodSandboxId:ed75660e1f6f9eb40d6a1e0d06d5f737a0f9cd77ad3556ba4b43f73b61849fc5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacb
d422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722928042832090751,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-505457,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d3a4c70f5f1a98e373ce2b1bd632569,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd494e38d6a560269121e4eb69c609c6db24bea9862163ad00e525dc7e486a9a,PodSandboxId:16986c599fc34a7842ad40cbb800162bc70784c708c03a68dc81766b286122a8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a
899,State:CONTAINER_RUNNING,CreatedAt:1722928042786306734,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-505457,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9445896d2c8822cafd25b745df86bb0b,},Annotations:map[string]string{io.kubernetes.container.hash: d5831ede,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe12a18afe3dccc7a77e9b91dea2873a822a39628e809273739532bd630c0459,PodSandboxId:2648f6a872da28fbc81c2bea3a82c35080febf469813983949da621f3e8f3f8b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,
CreatedAt:1722928042760368393,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-505457,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 503a3788790fb79ef323dff2c5ec6ce1,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04bc771142517172d3d6312f52f17040d2235a82f36f5e627dea6620331173a9,PodSandboxId:c44c5af015412e22c8b5050d0749c16c27744783d64dd9df518b059fc6b699d3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING
,CreatedAt:1722928042755640737,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-505457,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba2495c51f84135afe3ac6ebc47db99e,},Annotations:map[string]string{io.kubernetes.container.hash: 974ca483,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8327303d-1c17-48e6-b53c-05bc69aad8be name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 07:12:20 addons-505457 crio[684]: time="2024-08-06 07:12:20.769661134Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=48e7c710-d376-4555-875b-46624fde3e07 name=/runtime.v1.RuntimeService/Version
	Aug 06 07:12:20 addons-505457 crio[684]: time="2024-08-06 07:12:20.769782545Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=48e7c710-d376-4555-875b-46624fde3e07 name=/runtime.v1.RuntimeService/Version
	Aug 06 07:12:20 addons-505457 crio[684]: time="2024-08-06 07:12:20.770859817Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3f45dbba-f6ff-4711-a704-4283bf154c9d name=/runtime.v1.ImageService/ImageFsInfo
	Aug 06 07:12:20 addons-505457 crio[684]: time="2024-08-06 07:12:20.772050743Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722928340772026909,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:589584,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3f45dbba-f6ff-4711-a704-4283bf154c9d name=/runtime.v1.ImageService/ImageFsInfo
	Aug 06 07:12:20 addons-505457 crio[684]: time="2024-08-06 07:12:20.772627139Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8c19c5ff-4621-49ee-95f8-bdbb69d8fc15 name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 07:12:20 addons-505457 crio[684]: time="2024-08-06 07:12:20.772702181Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8c19c5ff-4621-49ee-95f8-bdbb69d8fc15 name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 07:12:20 addons-505457 crio[684]: time="2024-08-06 07:12:20.773072531Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1bb169456345b80326bbe03ea5deb2194034ea3b0e9ae73d767584a86175ef8d,PodSandboxId:0ba76e457d98d104119a038a71a5ebea9e3a0887dc0d8436ac016292a3ce1b32,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1722928333818262584,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-xg2dt,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: da8616b2-cdae-4436-8bb8-2fd558ba9b6c,},Annotations:map[string]string{io.kubernetes.container.hash: fc3dd633,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5716fd9d0a7957f1d3d821ea36f879dd1040843efd37b7e07f2c5271a19e2c43,PodSandboxId:48323e9942a07b1b4d41f2c930f7d80f4b5d5b58862bb529b4b84889c3a8118e,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1722928193681598452,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7fced162-c099-4abd-a6ab-20626a136899,},Annotations:map[string]string{io.kubernet
es.container.hash: acaffa5a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9776dddcced3ea27152001b269947202e4192c03c7b1afc5a43e5848f4749ef7,PodSandboxId:6e6b579adaf6bae36feaba849d1dfd126bebce03ab89556635405bbf8ceb54d0,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722928156293700671,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ae3e743b-c04d-4206-a
40c-70c96ca55dea,},Annotations:map[string]string{io.kubernetes.container.hash: d096c7f0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc63ed54019f6622542a4f78554ac1d61e333d9425e6933f71750e404ae4c498,PodSandboxId:b9c5a368647ba026206288b1729dd234647b32708ee8c41514b5e80a329a9a03,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1722928128367061773,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-p8gdd,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 9801f40a-24ab-4520-9b21-9a6dceda4450,},Anno
tations:map[string]string{io.kubernetes.container.hash: cde42411,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a60360e8ba07430103531158c7a4c8b2a18bee0bffe23935876e12eeea3daf54,PodSandboxId:ef6d9da8ab2fa845565d8fa73b9333d72f874431df49fdb611b88884dbaf4ad6,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1722928127778125029,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-xwgk9,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 51f89
63a-02a1-4e65-95f5-ebff33be12a4,},Annotations:map[string]string{io.kubernetes.container.hash: 4c93d00a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e9ca59c5d5d44d3fb6e570442f4c5915f02bafd78a7dabf81c1eec5c26b3b9e,PodSandboxId:507a32da5aa23a6fdf8198b07f8c9b4ab3888e4f3d2357ad5f0c22447d314e48,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1722928093784437366,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-dghj8,io.kubernetes.pod.namespace
: kube-system,io.kubernetes.pod.uid: a73b51d1-3f19-4608-b37f-2d92f56fec02,},Annotations:map[string]string{io.kubernetes.container.hash: 8b558cb9,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7a623480af4afbb181b874f946e3af94aca0c310fbb8b0a43223db5c3f102b1,PodSandboxId:af6ab42fed69ba47b2ce7c9a00e8098e134066605db39b09e0dcd412fe7e097c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722928069099504187,Labels:map[string]string{io.kubernetes.container.name: stora
ge-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9631af58-26e8-4f64-a332-ab89aa6934ff,},Annotations:map[string]string{io.kubernetes.container.hash: 361e54aa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:027ea1c110b7f5dfde7f8887ce55fc8c76b57576dd8e0b413a859d725bc0f45e,PodSandboxId:de0ded25e176ea853db6796ed89e6315e0859501717484ecef9c0c0456cd09e4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722928065518161478,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.nam
e: coredns-7db6d8ff4d-k5d8d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9205d3f6-ae5e-473a-8760-1e9bb9cff77d,},Annotations:map[string]string{io.kubernetes.container.hash: 68374ea1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5080e5fce2c6cf845092e92aebf13550fc5412567f7c21694410eaebc73d84d,PodSandboxId:56f279ca84ed0636221eb9b70b3b986c6ba6f7e791473eedc394ac4175e1fbf5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb0
25d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722928063132630344,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-t7djg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbaeea31-08de-4723-9ec5-f51d376650b5,},Annotations:map[string]string{io.kubernetes.container.hash: 45c04c54,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e84eb866dc0b3dd79f91399a5714b5b3bbb8eb0ac746133af657c88755e4b88,PodSandboxId:ed75660e1f6f9eb40d6a1e0d06d5f737a0f9cd77ad3556ba4b43f73b61849fc5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacb
d422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722928042832090751,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-505457,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d3a4c70f5f1a98e373ce2b1bd632569,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd494e38d6a560269121e4eb69c609c6db24bea9862163ad00e525dc7e486a9a,PodSandboxId:16986c599fc34a7842ad40cbb800162bc70784c708c03a68dc81766b286122a8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a
899,State:CONTAINER_RUNNING,CreatedAt:1722928042786306734,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-505457,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9445896d2c8822cafd25b745df86bb0b,},Annotations:map[string]string{io.kubernetes.container.hash: d5831ede,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe12a18afe3dccc7a77e9b91dea2873a822a39628e809273739532bd630c0459,PodSandboxId:2648f6a872da28fbc81c2bea3a82c35080febf469813983949da621f3e8f3f8b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,
CreatedAt:1722928042760368393,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-505457,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 503a3788790fb79ef323dff2c5ec6ce1,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04bc771142517172d3d6312f52f17040d2235a82f36f5e627dea6620331173a9,PodSandboxId:c44c5af015412e22c8b5050d0749c16c27744783d64dd9df518b059fc6b699d3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING
,CreatedAt:1722928042755640737,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-505457,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba2495c51f84135afe3ac6ebc47db99e,},Annotations:map[string]string{io.kubernetes.container.hash: 974ca483,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8c19c5ff-4621-49ee-95f8-bdbb69d8fc15 name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 07:12:20 addons-505457 crio[684]: time="2024-08-06 07:12:20.810165749Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=cb3fdba1-8a77-475a-9ff5-7e19455a457d name=/runtime.v1.RuntimeService/Version
	Aug 06 07:12:20 addons-505457 crio[684]: time="2024-08-06 07:12:20.810259721Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=cb3fdba1-8a77-475a-9ff5-7e19455a457d name=/runtime.v1.RuntimeService/Version
	Aug 06 07:12:20 addons-505457 crio[684]: time="2024-08-06 07:12:20.811624383Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=aa876ba9-a06c-471d-aa8b-25fe0c7e225c name=/runtime.v1.ImageService/ImageFsInfo
	Aug 06 07:12:20 addons-505457 crio[684]: time="2024-08-06 07:12:20.813299798Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722928340813268188,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:589584,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=aa876ba9-a06c-471d-aa8b-25fe0c7e225c name=/runtime.v1.ImageService/ImageFsInfo
	Aug 06 07:12:20 addons-505457 crio[684]: time="2024-08-06 07:12:20.814074116Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5580a0d3-99c1-46be-a921-e8426bf6bbaf name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 07:12:20 addons-505457 crio[684]: time="2024-08-06 07:12:20.814451781Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5580a0d3-99c1-46be-a921-e8426bf6bbaf name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 07:12:20 addons-505457 crio[684]: time="2024-08-06 07:12:20.814864362Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1bb169456345b80326bbe03ea5deb2194034ea3b0e9ae73d767584a86175ef8d,PodSandboxId:0ba76e457d98d104119a038a71a5ebea9e3a0887dc0d8436ac016292a3ce1b32,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1722928333818262584,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-xg2dt,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: da8616b2-cdae-4436-8bb8-2fd558ba9b6c,},Annotations:map[string]string{io.kubernetes.container.hash: fc3dd633,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5716fd9d0a7957f1d3d821ea36f879dd1040843efd37b7e07f2c5271a19e2c43,PodSandboxId:48323e9942a07b1b4d41f2c930f7d80f4b5d5b58862bb529b4b84889c3a8118e,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1722928193681598452,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7fced162-c099-4abd-a6ab-20626a136899,},Annotations:map[string]string{io.kubernet
es.container.hash: acaffa5a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9776dddcced3ea27152001b269947202e4192c03c7b1afc5a43e5848f4749ef7,PodSandboxId:6e6b579adaf6bae36feaba849d1dfd126bebce03ab89556635405bbf8ceb54d0,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722928156293700671,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ae3e743b-c04d-4206-a
40c-70c96ca55dea,},Annotations:map[string]string{io.kubernetes.container.hash: d096c7f0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc63ed54019f6622542a4f78554ac1d61e333d9425e6933f71750e404ae4c498,PodSandboxId:b9c5a368647ba026206288b1729dd234647b32708ee8c41514b5e80a329a9a03,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1722928128367061773,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-p8gdd,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 9801f40a-24ab-4520-9b21-9a6dceda4450,},Anno
tations:map[string]string{io.kubernetes.container.hash: cde42411,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a60360e8ba07430103531158c7a4c8b2a18bee0bffe23935876e12eeea3daf54,PodSandboxId:ef6d9da8ab2fa845565d8fa73b9333d72f874431df49fdb611b88884dbaf4ad6,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1722928127778125029,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-xwgk9,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 51f89
63a-02a1-4e65-95f5-ebff33be12a4,},Annotations:map[string]string{io.kubernetes.container.hash: 4c93d00a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e9ca59c5d5d44d3fb6e570442f4c5915f02bafd78a7dabf81c1eec5c26b3b9e,PodSandboxId:507a32da5aa23a6fdf8198b07f8c9b4ab3888e4f3d2357ad5f0c22447d314e48,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1722928093784437366,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-dghj8,io.kubernetes.pod.namespace
: kube-system,io.kubernetes.pod.uid: a73b51d1-3f19-4608-b37f-2d92f56fec02,},Annotations:map[string]string{io.kubernetes.container.hash: 8b558cb9,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7a623480af4afbb181b874f946e3af94aca0c310fbb8b0a43223db5c3f102b1,PodSandboxId:af6ab42fed69ba47b2ce7c9a00e8098e134066605db39b09e0dcd412fe7e097c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722928069099504187,Labels:map[string]string{io.kubernetes.container.name: stora
ge-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9631af58-26e8-4f64-a332-ab89aa6934ff,},Annotations:map[string]string{io.kubernetes.container.hash: 361e54aa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:027ea1c110b7f5dfde7f8887ce55fc8c76b57576dd8e0b413a859d725bc0f45e,PodSandboxId:de0ded25e176ea853db6796ed89e6315e0859501717484ecef9c0c0456cd09e4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722928065518161478,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.nam
e: coredns-7db6d8ff4d-k5d8d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9205d3f6-ae5e-473a-8760-1e9bb9cff77d,},Annotations:map[string]string{io.kubernetes.container.hash: 68374ea1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5080e5fce2c6cf845092e92aebf13550fc5412567f7c21694410eaebc73d84d,PodSandboxId:56f279ca84ed0636221eb9b70b3b986c6ba6f7e791473eedc394ac4175e1fbf5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb0
25d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722928063132630344,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-t7djg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbaeea31-08de-4723-9ec5-f51d376650b5,},Annotations:map[string]string{io.kubernetes.container.hash: 45c04c54,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e84eb866dc0b3dd79f91399a5714b5b3bbb8eb0ac746133af657c88755e4b88,PodSandboxId:ed75660e1f6f9eb40d6a1e0d06d5f737a0f9cd77ad3556ba4b43f73b61849fc5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacb
d422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722928042832090751,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-505457,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d3a4c70f5f1a98e373ce2b1bd632569,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd494e38d6a560269121e4eb69c609c6db24bea9862163ad00e525dc7e486a9a,PodSandboxId:16986c599fc34a7842ad40cbb800162bc70784c708c03a68dc81766b286122a8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a
899,State:CONTAINER_RUNNING,CreatedAt:1722928042786306734,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-505457,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9445896d2c8822cafd25b745df86bb0b,},Annotations:map[string]string{io.kubernetes.container.hash: d5831ede,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe12a18afe3dccc7a77e9b91dea2873a822a39628e809273739532bd630c0459,PodSandboxId:2648f6a872da28fbc81c2bea3a82c35080febf469813983949da621f3e8f3f8b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,
CreatedAt:1722928042760368393,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-505457,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 503a3788790fb79ef323dff2c5ec6ce1,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04bc771142517172d3d6312f52f17040d2235a82f36f5e627dea6620331173a9,PodSandboxId:c44c5af015412e22c8b5050d0749c16c27744783d64dd9df518b059fc6b699d3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING
,CreatedAt:1722928042755640737,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-505457,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba2495c51f84135afe3ac6ebc47db99e,},Annotations:map[string]string{io.kubernetes.container.hash: 974ca483,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5580a0d3-99c1-46be-a921-e8426bf6bbaf name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	1bb169456345b       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                        7 seconds ago       Running             hello-world-app           0                   0ba76e457d98d       hello-world-app-6778b5fc9f-xg2dt
	5716fd9d0a795       docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9                              2 minutes ago       Running             nginx                     0                   48323e9942a07       nginx
	9776dddcced3e       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          3 minutes ago       Running             busybox                   0                   6e6b579adaf6b       busybox
	dc63ed54019f6       684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66                                                             3 minutes ago       Exited              patch                     1                   b9c5a368647ba       ingress-nginx-admission-patch-p8gdd
	a60360e8ba074       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2   3 minutes ago       Exited              create                    0                   ef6d9da8ab2fa       ingress-nginx-admission-create-xwgk9
	5e9ca59c5d5d4       registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872        4 minutes ago       Running             metrics-server            0                   507a32da5aa23       metrics-server-c59844bb4-dghj8
	e7a623480af4a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago       Running             storage-provisioner       0                   af6ab42fed69b       storage-provisioner
	027ea1c110b7f       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                             4 minutes ago       Running             coredns                   0                   de0ded25e176e       coredns-7db6d8ff4d-k5d8d
	a5080e5fce2c6       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                                             4 minutes ago       Running             kube-proxy                0                   56f279ca84ed0       kube-proxy-t7djg
	9e84eb866dc0b       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                                             4 minutes ago       Running             kube-scheduler            0                   ed75660e1f6f9       kube-scheduler-addons-505457
	fd494e38d6a56       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                                             4 minutes ago       Running             etcd                      0                   16986c599fc34       etcd-addons-505457
	fe12a18afe3dc       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                                             4 minutes ago       Running             kube-controller-manager   0                   2648f6a872da2       kube-controller-manager-addons-505457
	04bc771142517       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                                             4 minutes ago       Running             kube-apiserver            0                   c44c5af015412       kube-apiserver-addons-505457
	
	
	==> coredns [027ea1c110b7f5dfde7f8887ce55fc8c76b57576dd8e0b413a859d725bc0f45e] <==
	[INFO] 10.244.0.6:45223 - 17083 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00009545s
	[INFO] 10.244.0.6:55889 - 15484 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000061371s
	[INFO] 10.244.0.6:55889 - 34938 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000137638s
	[INFO] 10.244.0.6:49821 - 6750 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000126139s
	[INFO] 10.244.0.6:49821 - 10332 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000121551s
	[INFO] 10.244.0.6:38435 - 52788 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000088756s
	[INFO] 10.244.0.6:38435 - 19285 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000094909s
	[INFO] 10.244.0.6:37065 - 53195 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000081967s
	[INFO] 10.244.0.6:37065 - 23496 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000177732s
	[INFO] 10.244.0.6:33496 - 49199 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000042554s
	[INFO] 10.244.0.6:33496 - 39725 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000094066s
	[INFO] 10.244.0.6:57368 - 27338 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000084884s
	[INFO] 10.244.0.6:57368 - 16584 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000072126s
	[INFO] 10.244.0.6:59234 - 18212 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000078351s
	[INFO] 10.244.0.6:59234 - 56613 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000129033s
	[INFO] 10.244.0.22:42783 - 24608 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000394637s
	[INFO] 10.244.0.22:55478 - 3385 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000090641s
	[INFO] 10.244.0.22:34909 - 6297 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000135472s
	[INFO] 10.244.0.22:45266 - 44235 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00011482s
	[INFO] 10.244.0.22:37960 - 23912 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000090019s
	[INFO] 10.244.0.22:54657 - 9934 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000131092s
	[INFO] 10.244.0.22:48901 - 42081 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 496 0.000715231s
	[INFO] 10.244.0.22:59032 - 41980 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000554558s
	[INFO] 10.244.0.24:38626 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000293895s
	[INFO] 10.244.0.24:58900 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000173161s
	
	
	==> describe nodes <==
	Name:               addons-505457
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-505457
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e92cb06692f5ea1ba801d10d148e5e92e807f9c8
	                    minikube.k8s.io/name=addons-505457
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_06T07_07_29_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-505457
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 06 Aug 2024 07:07:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-505457
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 06 Aug 2024 07:12:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 06 Aug 2024 07:11:34 +0000   Tue, 06 Aug 2024 07:07:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 06 Aug 2024 07:11:34 +0000   Tue, 06 Aug 2024 07:07:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 06 Aug 2024 07:11:34 +0000   Tue, 06 Aug 2024 07:07:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 06 Aug 2024 07:11:34 +0000   Tue, 06 Aug 2024 07:07:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.6
	  Hostname:    addons-505457
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 eebd60decccf4374b4c2eb3bc40a4c03
	  System UUID:                eebd60de-cccf-4374-b4c2-eb3bc40a4c03
	  Boot ID:                    2b63510c-2480-4f43-b2ca-32ce6bc2e06f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m8s
	  default                     hello-world-app-6778b5fc9f-xg2dt         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m32s
	  kube-system                 coredns-7db6d8ff4d-k5d8d                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m40s
	  kube-system                 etcd-addons-505457                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m53s
	  kube-system                 kube-apiserver-addons-505457             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m53s
	  kube-system                 kube-controller-manager-addons-505457    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m53s
	  kube-system                 kube-proxy-t7djg                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m40s
	  kube-system                 kube-scheduler-addons-505457             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m53s
	  kube-system                 metrics-server-c59844bb4-dghj8           100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         4m34s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m35s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             370Mi (9%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m36s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  4m59s (x8 over 4m59s)  kubelet          Node addons-505457 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m59s (x8 over 4m59s)  kubelet          Node addons-505457 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m59s (x7 over 4m59s)  kubelet          Node addons-505457 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m59s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 4m53s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m53s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m53s                  kubelet          Node addons-505457 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m53s                  kubelet          Node addons-505457 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m53s                  kubelet          Node addons-505457 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m52s                  kubelet          Node addons-505457 status is now: NodeReady
	  Normal  RegisteredNode           4m41s                  node-controller  Node addons-505457 event: Registered Node addons-505457 in Controller
	
	
	==> dmesg <==
	[  +8.257190] kauditd_printk_skb: 56 callbacks suppressed
	[Aug 6 07:08] kauditd_printk_skb: 2 callbacks suppressed
	[ +17.615453] kauditd_printk_skb: 25 callbacks suppressed
	[ +12.359288] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.589367] kauditd_printk_skb: 20 callbacks suppressed
	[  +5.156882] kauditd_printk_skb: 61 callbacks suppressed
	[Aug 6 07:09] kauditd_printk_skb: 48 callbacks suppressed
	[  +5.680626] kauditd_printk_skb: 9 callbacks suppressed
	[  +7.007253] kauditd_printk_skb: 18 callbacks suppressed
	[ +10.275422] kauditd_printk_skb: 31 callbacks suppressed
	[ +11.779606] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.010598] kauditd_printk_skb: 3 callbacks suppressed
	[  +5.092813] kauditd_printk_skb: 46 callbacks suppressed
	[  +5.206837] kauditd_printk_skb: 29 callbacks suppressed
	[  +5.131887] kauditd_printk_skb: 9 callbacks suppressed
	[Aug 6 07:10] kauditd_printk_skb: 15 callbacks suppressed
	[  +6.081335] kauditd_printk_skb: 20 callbacks suppressed
	[ +14.428360] kauditd_printk_skb: 2 callbacks suppressed
	[ +13.727466] kauditd_printk_skb: 15 callbacks suppressed
	[ +14.216837] kauditd_printk_skb: 33 callbacks suppressed
	[  +5.172191] kauditd_printk_skb: 12 callbacks suppressed
	[Aug 6 07:11] kauditd_printk_skb: 18 callbacks suppressed
	[  +5.262435] kauditd_printk_skb: 16 callbacks suppressed
	[Aug 6 07:12] kauditd_printk_skb: 20 callbacks suppressed
	[  +5.027138] kauditd_printk_skb: 19 callbacks suppressed
	
	
	==> etcd [fd494e38d6a560269121e4eb69c609c6db24bea9862163ad00e525dc7e486a9a] <==
	{"level":"info","ts":"2024-08-06T07:08:29.195209Z","caller":"traceutil/trace.go:171","msg":"trace[118727515] transaction","detail":"{read_only:false; response_revision:971; number_of_response:1; }","duration":"106.593798ms","start":"2024-08-06T07:08:29.08859Z","end":"2024-08-06T07:08:29.195184Z","steps":["trace[118727515] 'process raft request'  (duration: 106.491635ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-06T07:08:30.325106Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"380.921377ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:11155"}
	{"level":"info","ts":"2024-08-06T07:08:30.325257Z","caller":"traceutil/trace.go:171","msg":"trace[212798188] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:977; }","duration":"381.029138ms","start":"2024-08-06T07:08:29.944129Z","end":"2024-08-06T07:08:30.325159Z","steps":["trace[212798188] 'range keys from in-memory index tree'  (duration: 380.802911ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-06T07:08:30.325292Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-06T07:08:29.944113Z","time spent":"381.170472ms","remote":"127.0.0.1:46178","response type":"/etcdserverpb.KV/Range","request count":0,"request size":52,"response count":3,"response size":11177,"request content":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" "}
	{"level":"warn","ts":"2024-08-06T07:08:30.325485Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"362.5004ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:14065"}
	{"level":"info","ts":"2024-08-06T07:08:30.325522Z","caller":"traceutil/trace.go:171","msg":"trace[520414956] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:977; }","duration":"362.568443ms","start":"2024-08-06T07:08:29.962947Z","end":"2024-08-06T07:08:30.325515Z","steps":["trace[520414956] 'range keys from in-memory index tree'  (duration: 362.39537ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-06T07:08:30.32554Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-06T07:08:29.962924Z","time spent":"362.61104ms","remote":"127.0.0.1:46178","response type":"/etcdserverpb.KV/Range","request count":0,"request size":62,"response count":3,"response size":14087,"request content":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" "}
	{"level":"info","ts":"2024-08-06T07:08:51.177387Z","caller":"traceutil/trace.go:171","msg":"trace[468014521] transaction","detail":"{read_only:false; response_revision:1069; number_of_response:1; }","duration":"573.06147ms","start":"2024-08-06T07:08:50.604256Z","end":"2024-08-06T07:08:51.177318Z","steps":["trace[468014521] 'process raft request'  (duration: 572.94634ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-06T07:08:51.177605Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-06T07:08:50.604238Z","time spent":"573.296944ms","remote":"127.0.0.1:46232","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4349,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/jobs/ingress-nginx/ingress-nginx-admission-create\" mod_revision:739 > success:<request_put:<key:\"/registry/jobs/ingress-nginx/ingress-nginx-admission-create\" value_size:4282 >> failure:<request_range:<key:\"/registry/jobs/ingress-nginx/ingress-nginx-admission-create\" > >"}
	{"level":"warn","ts":"2024-08-06T07:08:51.178362Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"380.854709ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:85465"}
	{"level":"info","ts":"2024-08-06T07:08:51.178496Z","caller":"traceutil/trace.go:171","msg":"trace[1138503758] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1069; }","duration":"381.013693ms","start":"2024-08-06T07:08:50.797473Z","end":"2024-08-06T07:08:51.178487Z","steps":["trace[1138503758] 'agreement among raft nodes before linearized reading'  (duration: 380.707204ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-06T07:08:51.178578Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-06T07:08:50.797459Z","time spent":"381.075348ms","remote":"127.0.0.1:46178","response type":"/etcdserverpb.KV/Range","request count":0,"request size":58,"response count":18,"response size":85487,"request content":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" "}
	{"level":"info","ts":"2024-08-06T07:08:51.178123Z","caller":"traceutil/trace.go:171","msg":"trace[511861839] linearizableReadLoop","detail":"{readStateIndex:1103; appliedIndex:1103; }","duration":"380.613012ms","start":"2024-08-06T07:08:50.797498Z","end":"2024-08-06T07:08:51.178111Z","steps":["trace[511861839] 'read index received'  (duration: 380.58321ms)","trace[511861839] 'applied index is now lower than readState.Index'  (duration: 28.906µs)"],"step_count":2}
	{"level":"warn","ts":"2024-08-06T07:08:51.179306Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"218.447366ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:14552"}
	{"level":"info","ts":"2024-08-06T07:08:51.188903Z","caller":"traceutil/trace.go:171","msg":"trace[1328550162] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1069; }","duration":"228.038199ms","start":"2024-08-06T07:08:50.960851Z","end":"2024-08-06T07:08:51.188889Z","steps":["trace[1328550162] 'agreement among raft nodes before linearized reading'  (duration: 218.3629ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-06T07:08:51.189199Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"236.613628ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:11155"}
	{"level":"info","ts":"2024-08-06T07:08:51.18923Z","caller":"traceutil/trace.go:171","msg":"trace[837689716] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1069; }","duration":"246.789901ms","start":"2024-08-06T07:08:50.942431Z","end":"2024-08-06T07:08:51.189221Z","steps":["trace[837689716] 'agreement among raft nodes before linearized reading'  (duration: 236.613673ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-06T07:08:58.32804Z","caller":"traceutil/trace.go:171","msg":"trace[401382471] transaction","detail":"{read_only:false; response_revision:1139; number_of_response:1; }","duration":"301.976614ms","start":"2024-08-06T07:08:58.026047Z","end":"2024-08-06T07:08:58.328024Z","steps":["trace[401382471] 'process raft request'  (duration: 301.872041ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-06T07:08:58.328144Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-06T07:08:58.026027Z","time spent":"302.056122ms","remote":"127.0.0.1:46160","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1122 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2024-08-06T07:09:00.790383Z","caller":"traceutil/trace.go:171","msg":"trace[1821302382] transaction","detail":"{read_only:false; response_revision:1145; number_of_response:1; }","duration":"271.24571ms","start":"2024-08-06T07:09:00.519109Z","end":"2024-08-06T07:09:00.790354Z","steps":["trace[1821302382] 'process raft request'  (duration: 271.151944ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-06T07:09:08.404606Z","caller":"traceutil/trace.go:171","msg":"trace[755152330] linearizableReadLoop","detail":"{readStateIndex:1212; appliedIndex:1211; }","duration":"107.344571ms","start":"2024-08-06T07:09:08.297203Z","end":"2024-08-06T07:09:08.404547Z","steps":["trace[755152330] 'read index received'  (duration: 107.185396ms)","trace[755152330] 'applied index is now lower than readState.Index'  (duration: 158.764µs)"],"step_count":2}
	{"level":"info","ts":"2024-08-06T07:09:08.404816Z","caller":"traceutil/trace.go:171","msg":"trace[1294783742] transaction","detail":"{read_only:false; response_revision:1174; number_of_response:1; }","duration":"277.002001ms","start":"2024-08-06T07:09:08.127804Z","end":"2024-08-06T07:09:08.404806Z","steps":["trace[1294783742] 'process raft request'  (duration: 276.625015ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-06T07:09:08.405203Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"107.95536ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:85516"}
	{"level":"info","ts":"2024-08-06T07:09:08.406363Z","caller":"traceutil/trace.go:171","msg":"trace[2103059547] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1174; }","duration":"109.171725ms","start":"2024-08-06T07:09:08.297177Z","end":"2024-08-06T07:09:08.406348Z","steps":["trace[2103059547] 'agreement among raft nodes before linearized reading'  (duration: 107.817011ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-06T07:09:24.182231Z","caller":"traceutil/trace.go:171","msg":"trace[821864348] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1262; }","duration":"208.278475ms","start":"2024-08-06T07:09:23.973934Z","end":"2024-08-06T07:09:24.182212Z","steps":["trace[821864348] 'process raft request'  (duration: 208.123671ms)"],"step_count":1}
	
	
	==> kernel <==
	 07:12:21 up 5 min,  0 users,  load average: 0.63, 1.00, 0.54
	Linux addons-505457 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [04bc771142517172d3d6312f52f17040d2235a82f36f5e627dea6620331173a9] <==
	E0806 07:09:18.531704       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.97.57.22:443/apis/metrics.k8s.io/v1beta1: Get "https://10.97.57.22:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.97.57.22:443: connect: connection refused
	I0806 07:09:18.602719       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0806 07:09:23.328501       1 conn.go:339] Error on socket receive: read tcp 192.168.39.6:8443->192.168.39.1:59332: use of closed network connection
	E0806 07:09:23.529077       1 conn.go:339] Error on socket receive: read tcp 192.168.39.6:8443->192.168.39.1:59356: use of closed network connection
	I0806 07:09:48.917153       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0806 07:09:49.117619       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.105.159.40"}
	I0806 07:09:50.301277       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0806 07:09:51.340281       1 cacher.go:168] Terminating all watchers from cacher traces.gadget.kinvolk.io
	E0806 07:10:25.257108       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0806 07:10:33.241010       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0806 07:10:54.000176       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0806 07:10:54.000495       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0806 07:10:54.017557       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0806 07:10:54.018072       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0806 07:10:54.038520       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0806 07:10:54.038576       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0806 07:10:54.051181       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0806 07:10:54.051243       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0806 07:10:54.070226       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0806 07:10:54.070289       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0806 07:10:55.038992       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0806 07:10:55.070631       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0806 07:10:55.105427       1 cacher.go:168] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0806 07:11:00.751837       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.105.159.121"}
	I0806 07:12:10.880148       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.107.227.173"}
	
	
	==> kube-controller-manager [fe12a18afe3dccc7a77e9b91dea2873a822a39628e809273739532bd630c0459] <==
	W0806 07:11:14.130991       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0806 07:11:14.131059       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0806 07:11:22.476185       1 namespace_controller.go:182] "Namespace has been deleted" logger="namespace-controller" namespace="headlamp"
	W0806 07:11:25.202923       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0806 07:11:25.203042       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0806 07:11:31.284446       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0806 07:11:31.284508       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0806 07:11:35.789662       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0806 07:11:35.789861       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0806 07:11:42.984606       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0806 07:11:42.984656       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0806 07:11:52.959722       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0806 07:11:52.959951       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0806 07:12:10.716050       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="24.907411ms"
	I0806 07:12:10.736514       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="20.205552ms"
	I0806 07:12:10.736684       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="40.533µs"
	I0806 07:12:10.736800       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="84.038µs"
	I0806 07:12:10.753060       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="131.103µs"
	I0806 07:12:12.850664       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-6d9bd977d4" duration="4.274µs"
	I0806 07:12:12.851027       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create"
	I0806 07:12:12.858855       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch"
	I0806 07:12:14.211933       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="10.212548ms"
	I0806 07:12:14.212805       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="90.754µs"
	W0806 07:12:19.617282       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0806 07:12:19.617411       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	
	==> kube-proxy [a5080e5fce2c6cf845092e92aebf13550fc5412567f7c21694410eaebc73d84d] <==
	I0806 07:07:44.052551       1 server_linux.go:69] "Using iptables proxy"
	I0806 07:07:44.072152       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.6"]
	I0806 07:07:44.163100       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0806 07:07:44.163152       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0806 07:07:44.163169       1 server_linux.go:165] "Using iptables Proxier"
	I0806 07:07:44.165614       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0806 07:07:44.165912       1 server.go:872] "Version info" version="v1.30.3"
	I0806 07:07:44.165940       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0806 07:07:44.167042       1 config.go:192] "Starting service config controller"
	I0806 07:07:44.167074       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0806 07:07:44.167098       1 config.go:101] "Starting endpoint slice config controller"
	I0806 07:07:44.167101       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0806 07:07:44.167515       1 config.go:319] "Starting node config controller"
	I0806 07:07:44.167545       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0806 07:07:44.267923       1 shared_informer.go:320] Caches are synced for node config
	I0806 07:07:44.267967       1 shared_informer.go:320] Caches are synced for service config
	I0806 07:07:44.267987       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [9e84eb866dc0b3dd79f91399a5714b5b3bbb8eb0ac746133af657c88755e4b88] <==
	W0806 07:07:25.506413       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0806 07:07:25.506450       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0806 07:07:25.507256       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0806 07:07:25.507295       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0806 07:07:26.502861       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0806 07:07:26.502909       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0806 07:07:26.507117       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0806 07:07:26.507193       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0806 07:07:26.584806       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0806 07:07:26.584958       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0806 07:07:26.633651       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0806 07:07:26.633725       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0806 07:07:26.640480       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0806 07:07:26.640560       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0806 07:07:26.656068       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0806 07:07:26.656155       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0806 07:07:26.664937       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0806 07:07:26.664981       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0806 07:07:26.666702       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0806 07:07:26.666799       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0806 07:07:26.692636       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0806 07:07:26.692855       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0806 07:07:26.737250       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0806 07:07:26.737436       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0806 07:07:29.801936       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 06 07:12:10 addons-505457 kubelet[1282]: I0806 07:12:10.721004    1282 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c380ed2-c9c3-4bb8-bc87-93140e9aa99d" containerName="headlamp"
	Aug 06 07:12:10 addons-505457 kubelet[1282]: I0806 07:12:10.721009    1282 memory_manager.go:354] "RemoveStaleState removing state" podUID="593abc57-452f-4481-a6d6-3d7d7d32e1cb" containerName="helm-test"
	Aug 06 07:12:10 addons-505457 kubelet[1282]: I0806 07:12:10.835368    1282 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zd99\" (UniqueName: \"kubernetes.io/projected/da8616b2-cdae-4436-8bb8-2fd558ba9b6c-kube-api-access-8zd99\") pod \"hello-world-app-6778b5fc9f-xg2dt\" (UID: \"da8616b2-cdae-4436-8bb8-2fd558ba9b6c\") " pod="default/hello-world-app-6778b5fc9f-xg2dt"
	Aug 06 07:12:11 addons-505457 kubelet[1282]: I0806 07:12:11.843264    1282 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gh9nm\" (UniqueName: \"kubernetes.io/projected/a14eefbb-7e96-4b00-bce1-8cbeb64e7f88-kube-api-access-gh9nm\") pod \"a14eefbb-7e96-4b00-bce1-8cbeb64e7f88\" (UID: \"a14eefbb-7e96-4b00-bce1-8cbeb64e7f88\") "
	Aug 06 07:12:11 addons-505457 kubelet[1282]: I0806 07:12:11.846643    1282 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a14eefbb-7e96-4b00-bce1-8cbeb64e7f88-kube-api-access-gh9nm" (OuterVolumeSpecName: "kube-api-access-gh9nm") pod "a14eefbb-7e96-4b00-bce1-8cbeb64e7f88" (UID: "a14eefbb-7e96-4b00-bce1-8cbeb64e7f88"). InnerVolumeSpecName "kube-api-access-gh9nm". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 06 07:12:11 addons-505457 kubelet[1282]: I0806 07:12:11.944307    1282 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-gh9nm\" (UniqueName: \"kubernetes.io/projected/a14eefbb-7e96-4b00-bce1-8cbeb64e7f88-kube-api-access-gh9nm\") on node \"addons-505457\" DevicePath \"\""
	Aug 06 07:12:12 addons-505457 kubelet[1282]: I0806 07:12:12.166229    1282 scope.go:117] "RemoveContainer" containerID="8a8d7d60bbd4d73d6176e7c271554a7d743caa7d996bc47d5f3ffe1674efa5aa"
	Aug 06 07:12:12 addons-505457 kubelet[1282]: I0806 07:12:12.200888    1282 scope.go:117] "RemoveContainer" containerID="8a8d7d60bbd4d73d6176e7c271554a7d743caa7d996bc47d5f3ffe1674efa5aa"
	Aug 06 07:12:12 addons-505457 kubelet[1282]: E0806 07:12:12.201809    1282 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a8d7d60bbd4d73d6176e7c271554a7d743caa7d996bc47d5f3ffe1674efa5aa\": container with ID starting with 8a8d7d60bbd4d73d6176e7c271554a7d743caa7d996bc47d5f3ffe1674efa5aa not found: ID does not exist" containerID="8a8d7d60bbd4d73d6176e7c271554a7d743caa7d996bc47d5f3ffe1674efa5aa"
	Aug 06 07:12:12 addons-505457 kubelet[1282]: I0806 07:12:12.201860    1282 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a8d7d60bbd4d73d6176e7c271554a7d743caa7d996bc47d5f3ffe1674efa5aa"} err="failed to get container status \"8a8d7d60bbd4d73d6176e7c271554a7d743caa7d996bc47d5f3ffe1674efa5aa\": rpc error: code = NotFound desc = could not find container \"8a8d7d60bbd4d73d6176e7c271554a7d743caa7d996bc47d5f3ffe1674efa5aa\": container with ID starting with 8a8d7d60bbd4d73d6176e7c271554a7d743caa7d996bc47d5f3ffe1674efa5aa not found: ID does not exist"
	Aug 06 07:12:12 addons-505457 kubelet[1282]: I0806 07:12:12.620120    1282 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a14eefbb-7e96-4b00-bce1-8cbeb64e7f88" path="/var/lib/kubelet/pods/a14eefbb-7e96-4b00-bce1-8cbeb64e7f88/volumes"
	Aug 06 07:12:14 addons-505457 kubelet[1282]: I0806 07:12:14.198925    1282 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-6778b5fc9f-xg2dt" podStartSLOduration=1.674053104 podStartE2EDuration="4.198903135s" podCreationTimestamp="2024-08-06 07:12:10 +0000 UTC" firstStartedPulling="2024-08-06 07:12:11.283373862 +0000 UTC m=+282.824831584" lastFinishedPulling="2024-08-06 07:12:13.808223891 +0000 UTC m=+285.349681615" observedRunningTime="2024-08-06 07:12:14.197992466 +0000 UTC m=+285.739450203" watchObservedRunningTime="2024-08-06 07:12:14.198903135 +0000 UTC m=+285.740360877"
	Aug 06 07:12:14 addons-505457 kubelet[1282]: I0806 07:12:14.617283    1282 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51f8963a-02a1-4e65-95f5-ebff33be12a4" path="/var/lib/kubelet/pods/51f8963a-02a1-4e65-95f5-ebff33be12a4/volumes"
	Aug 06 07:12:14 addons-505457 kubelet[1282]: I0806 07:12:14.617809    1282 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9801f40a-24ab-4520-9b21-9a6dceda4450" path="/var/lib/kubelet/pods/9801f40a-24ab-4520-9b21-9a6dceda4450/volumes"
	Aug 06 07:12:16 addons-505457 kubelet[1282]: I0806 07:12:16.078781    1282 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/251be54e-83c5-40a7-8cbe-d9111a8a8db4-webhook-cert\") pod \"251be54e-83c5-40a7-8cbe-d9111a8a8db4\" (UID: \"251be54e-83c5-40a7-8cbe-d9111a8a8db4\") "
	Aug 06 07:12:16 addons-505457 kubelet[1282]: I0806 07:12:16.078859    1282 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-krmmg\" (UniqueName: \"kubernetes.io/projected/251be54e-83c5-40a7-8cbe-d9111a8a8db4-kube-api-access-krmmg\") pod \"251be54e-83c5-40a7-8cbe-d9111a8a8db4\" (UID: \"251be54e-83c5-40a7-8cbe-d9111a8a8db4\") "
	Aug 06 07:12:16 addons-505457 kubelet[1282]: I0806 07:12:16.082630    1282 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/251be54e-83c5-40a7-8cbe-d9111a8a8db4-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "251be54e-83c5-40a7-8cbe-d9111a8a8db4" (UID: "251be54e-83c5-40a7-8cbe-d9111a8a8db4"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Aug 06 07:12:16 addons-505457 kubelet[1282]: I0806 07:12:16.085033    1282 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/251be54e-83c5-40a7-8cbe-d9111a8a8db4-kube-api-access-krmmg" (OuterVolumeSpecName: "kube-api-access-krmmg") pod "251be54e-83c5-40a7-8cbe-d9111a8a8db4" (UID: "251be54e-83c5-40a7-8cbe-d9111a8a8db4"). InnerVolumeSpecName "kube-api-access-krmmg". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 06 07:12:16 addons-505457 kubelet[1282]: I0806 07:12:16.179570    1282 reconciler_common.go:289] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/251be54e-83c5-40a7-8cbe-d9111a8a8db4-webhook-cert\") on node \"addons-505457\" DevicePath \"\""
	Aug 06 07:12:16 addons-505457 kubelet[1282]: I0806 07:12:16.179620    1282 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-krmmg\" (UniqueName: \"kubernetes.io/projected/251be54e-83c5-40a7-8cbe-d9111a8a8db4-kube-api-access-krmmg\") on node \"addons-505457\" DevicePath \"\""
	Aug 06 07:12:16 addons-505457 kubelet[1282]: I0806 07:12:16.195665    1282 scope.go:117] "RemoveContainer" containerID="11201a4af03b5eb8b0943e47bb1968f82dfcd6b920d6028f2835391931fbac96"
	Aug 06 07:12:16 addons-505457 kubelet[1282]: I0806 07:12:16.216940    1282 scope.go:117] "RemoveContainer" containerID="11201a4af03b5eb8b0943e47bb1968f82dfcd6b920d6028f2835391931fbac96"
	Aug 06 07:12:16 addons-505457 kubelet[1282]: E0806 07:12:16.217351    1282 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"11201a4af03b5eb8b0943e47bb1968f82dfcd6b920d6028f2835391931fbac96\": container with ID starting with 11201a4af03b5eb8b0943e47bb1968f82dfcd6b920d6028f2835391931fbac96 not found: ID does not exist" containerID="11201a4af03b5eb8b0943e47bb1968f82dfcd6b920d6028f2835391931fbac96"
	Aug 06 07:12:16 addons-505457 kubelet[1282]: I0806 07:12:16.217401    1282 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11201a4af03b5eb8b0943e47bb1968f82dfcd6b920d6028f2835391931fbac96"} err="failed to get container status \"11201a4af03b5eb8b0943e47bb1968f82dfcd6b920d6028f2835391931fbac96\": rpc error: code = NotFound desc = could not find container \"11201a4af03b5eb8b0943e47bb1968f82dfcd6b920d6028f2835391931fbac96\": container with ID starting with 11201a4af03b5eb8b0943e47bb1968f82dfcd6b920d6028f2835391931fbac96 not found: ID does not exist"
	Aug 06 07:12:16 addons-505457 kubelet[1282]: I0806 07:12:16.617042    1282 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="251be54e-83c5-40a7-8cbe-d9111a8a8db4" path="/var/lib/kubelet/pods/251be54e-83c5-40a7-8cbe-d9111a8a8db4/volumes"
	
	
	==> storage-provisioner [e7a623480af4afbb181b874f946e3af94aca0c310fbb8b0a43223db5c3f102b1] <==
	I0806 07:07:50.577913       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0806 07:07:50.669439       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0806 07:07:50.669619       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0806 07:07:50.812822       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0806 07:07:50.830620       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6a5fb6f7-352f-4ca7-a941-e98024868ec4", APIVersion:"v1", ResourceVersion:"776", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-505457_9eebe1de-93a4-4fff-8399-d241320c1a27 became leader
	I0806 07:07:50.831054       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-505457_9eebe1de-93a4-4fff-8399-d241320c1a27!
	I0806 07:07:51.041058       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-505457_9eebe1de-93a4-4fff-8399-d241320c1a27!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-505457 -n addons-505457
helpers_test.go:261: (dbg) Run:  kubectl --context addons-505457 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (153.25s)
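A minimal manual-debugging sketch for an ingress timeout like the one above, assuming the addons-505457 profile is still running and kubectl is pointed at its context; the ingress-nginx namespace and the ingress-nginx-controller deployment name come from the controller-manager log above, everything else is illustrative rather than part of the test:

	# hypothetical follow-up checks; standard kubectl commands, resource names taken from the logs above
	kubectl --context addons-505457 -n ingress-nginx get pods -o wide
	kubectl --context addons-505457 get ingress,svc,endpoints -n default
	kubectl --context addons-505457 -n ingress-nginx logs deploy/ingress-nginx-controller --since=15m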

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (299.68s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 4.59825ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-dghj8" [a73b51d1-3f19-4608-b37f-2d92f56fec02] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.004868646s
addons_test.go:417: (dbg) Run:  kubectl --context addons-505457 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-505457 top pods -n kube-system: exit status 1 (86.120816ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/etcd-addons-505457, age: 2m10.004837814s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-505457 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-505457 top pods -n kube-system: exit status 1 (65.155192ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-k5d8d, age: 2m0.457952804s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-505457 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-505457 top pods -n kube-system: exit status 1 (65.990663ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-k5d8d, age: 2m6.823358574s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-505457 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-505457 top pods -n kube-system: exit status 1 (78.73236ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-k5d8d, age: 2m11.526184323s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-505457 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-505457 top pods -n kube-system: exit status 1 (65.52005ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-k5d8d, age: 2m23.804755688s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-505457 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-505457 top pods -n kube-system: exit status 1 (68.928066ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-k5d8d, age: 2m33.465933124s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-505457 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-505457 top pods -n kube-system: exit status 1 (66.125726ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-k5d8d, age: 3m4.403826003s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-505457 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-505457 top pods -n kube-system: exit status 1 (62.916484ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-k5d8d, age: 3m43.743966754s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-505457 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-505457 top pods -n kube-system: exit status 1 (66.139347ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-k5d8d, age: 4m43.23708903s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-505457 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-505457 top pods -n kube-system: exit status 1 (64.052298ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-k5d8d, age: 5m34.361849184s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-505457 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-505457 top pods -n kube-system: exit status 1 (66.965334ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-k5d8d, age: 6m47.913372375s

                                                
                                                
** /stderr **
addons_test.go:431: failed checking metric server: exit status 1
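A minimal sketch of verifying metrics-server by hand after a failure like this, assuming the profile is still up; the v1beta1.metrics.k8s.io API service name and the k8s-app=metrics-server label appear in the apiserver log and test output above, the rest is illustrative:

	# hypothetical checks, not part of the test
	kubectl --context addons-505457 get apiservice v1beta1.metrics.k8s.io
	kubectl --context addons-505457 -n kube-system get pods -l k8s-app=metrics-server
	kubectl --context addons-505457 -n kube-system logs -l k8s-app=metrics-server --tail=50
	kubectl --context addons-505457 get --raw /apis/metrics.k8s.io/v1beta1/nodes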
addons_test.go:434: (dbg) Run:  out/minikube-linux-amd64 -p addons-505457 addons disable metrics-server --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-505457 -n addons-505457
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-505457 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-505457 logs -n 25: (1.24097572s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-460250                                                                     | download-only-460250 | jenkins | v1.33.1 | 06 Aug 24 07:06 UTC | 06 Aug 24 07:06 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-030771 | jenkins | v1.33.1 | 06 Aug 24 07:06 UTC |                     |
	|         | binary-mirror-030771                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:41953                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-030771                                                                     | binary-mirror-030771 | jenkins | v1.33.1 | 06 Aug 24 07:06 UTC | 06 Aug 24 07:06 UTC |
	| addons  | enable dashboard -p                                                                         | addons-505457        | jenkins | v1.33.1 | 06 Aug 24 07:06 UTC |                     |
	|         | addons-505457                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-505457        | jenkins | v1.33.1 | 06 Aug 24 07:06 UTC |                     |
	|         | addons-505457                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-505457 --wait=true                                                                | addons-505457        | jenkins | v1.33.1 | 06 Aug 24 07:06 UTC | 06 Aug 24 07:09 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | addons-505457 addons disable                                                                | addons-505457        | jenkins | v1.33.1 | 06 Aug 24 07:09 UTC | 06 Aug 24 07:09 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-505457 addons disable                                                                | addons-505457        | jenkins | v1.33.1 | 06 Aug 24 07:09 UTC | 06 Aug 24 07:09 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| ip      | addons-505457 ip                                                                            | addons-505457        | jenkins | v1.33.1 | 06 Aug 24 07:09 UTC | 06 Aug 24 07:09 UTC |
	| addons  | addons-505457 addons disable                                                                | addons-505457        | jenkins | v1.33.1 | 06 Aug 24 07:09 UTC | 06 Aug 24 07:09 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-505457        | jenkins | v1.33.1 | 06 Aug 24 07:09 UTC | 06 Aug 24 07:09 UTC |
	|         | addons-505457                                                                               |                      |         |         |                     |                     |
	| ssh     | addons-505457 ssh curl -s                                                                   | addons-505457        | jenkins | v1.33.1 | 06 Aug 24 07:10 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| ssh     | addons-505457 ssh cat                                                                       | addons-505457        | jenkins | v1.33.1 | 06 Aug 24 07:10 UTC | 06 Aug 24 07:10 UTC |
	|         | /opt/local-path-provisioner/pvc-20492753-e965-4561-96c4-8676548fb006_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-505457 addons disable                                                                | addons-505457        | jenkins | v1.33.1 | 06 Aug 24 07:10 UTC | 06 Aug 24 07:10 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-505457 addons                                                                        | addons-505457        | jenkins | v1.33.1 | 06 Aug 24 07:10 UTC | 06 Aug 24 07:10 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-505457 addons                                                                        | addons-505457        | jenkins | v1.33.1 | 06 Aug 24 07:10 UTC | 06 Aug 24 07:10 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-505457        | jenkins | v1.33.1 | 06 Aug 24 07:10 UTC | 06 Aug 24 07:10 UTC |
	|         | -p addons-505457                                                                            |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-505457        | jenkins | v1.33.1 | 06 Aug 24 07:10 UTC | 06 Aug 24 07:10 UTC |
	|         | addons-505457                                                                               |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-505457        | jenkins | v1.33.1 | 06 Aug 24 07:10 UTC | 06 Aug 24 07:11 UTC |
	|         | -p addons-505457                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-505457 addons disable                                                                | addons-505457        | jenkins | v1.33.1 | 06 Aug 24 07:11 UTC | 06 Aug 24 07:11 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-505457 addons disable                                                                | addons-505457        | jenkins | v1.33.1 | 06 Aug 24 07:11 UTC | 06 Aug 24 07:11 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ip      | addons-505457 ip                                                                            | addons-505457        | jenkins | v1.33.1 | 06 Aug 24 07:12 UTC | 06 Aug 24 07:12 UTC |
	| addons  | addons-505457 addons disable                                                                | addons-505457        | jenkins | v1.33.1 | 06 Aug 24 07:12 UTC | 06 Aug 24 07:12 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-505457 addons disable                                                                | addons-505457        | jenkins | v1.33.1 | 06 Aug 24 07:12 UTC | 06 Aug 24 07:12 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-505457 addons                                                                        | addons-505457        | jenkins | v1.33.1 | 06 Aug 24 07:14 UTC | 06 Aug 24 07:14 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/06 07:06:45
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0806 07:06:45.744920   21485 out.go:291] Setting OutFile to fd 1 ...
	I0806 07:06:45.745137   21485 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 07:06:45.745150   21485 out.go:304] Setting ErrFile to fd 2...
	I0806 07:06:45.745156   21485 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 07:06:45.745355   21485 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19370-13057/.minikube/bin
	I0806 07:06:45.745930   21485 out.go:298] Setting JSON to false
	I0806 07:06:45.746731   21485 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2952,"bootTime":1722925054,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0806 07:06:45.746787   21485 start.go:139] virtualization: kvm guest
	I0806 07:06:45.748749   21485 out.go:177] * [addons-505457] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0806 07:06:45.749991   21485 notify.go:220] Checking for updates...
	I0806 07:06:45.750014   21485 out.go:177]   - MINIKUBE_LOCATION=19370
	I0806 07:06:45.751355   21485 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0806 07:06:45.752770   21485 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19370-13057/kubeconfig
	I0806 07:06:45.754015   21485 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19370-13057/.minikube
	I0806 07:06:45.755246   21485 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0806 07:06:45.756420   21485 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0806 07:06:45.758058   21485 driver.go:392] Setting default libvirt URI to qemu:///system
	I0806 07:06:45.789749   21485 out.go:177] * Using the kvm2 driver based on user configuration
	I0806 07:06:45.791133   21485 start.go:297] selected driver: kvm2
	I0806 07:06:45.791148   21485 start.go:901] validating driver "kvm2" against <nil>
	I0806 07:06:45.791159   21485 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0806 07:06:45.791816   21485 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 07:06:45.791907   21485 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19370-13057/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0806 07:06:45.806561   21485 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0806 07:06:45.806630   21485 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0806 07:06:45.806906   21485 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0806 07:06:45.806941   21485 cni.go:84] Creating CNI manager for ""
	I0806 07:06:45.806951   21485 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0806 07:06:45.806965   21485 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0806 07:06:45.807054   21485 start.go:340] cluster config:
	{Name:addons-505457 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-505457 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 07:06:45.807199   21485 iso.go:125] acquiring lock: {Name:mke58d71de28babe305c5eb0c31c274e9713bb3f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 07:06:45.809402   21485 out.go:177] * Starting "addons-505457" primary control-plane node in "addons-505457" cluster
	I0806 07:06:45.810968   21485 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0806 07:06:45.811026   21485 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19370-13057/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0806 07:06:45.811047   21485 cache.go:56] Caching tarball of preloaded images
	I0806 07:06:45.811135   21485 preload.go:172] Found /home/jenkins/minikube-integration/19370-13057/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0806 07:06:45.811149   21485 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0806 07:06:45.811459   21485 profile.go:143] Saving config to /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/addons-505457/config.json ...
	I0806 07:06:45.811482   21485 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/addons-505457/config.json: {Name:mk989ff9cc1dd8ed035392b6461550a03a68e416 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 07:06:45.811677   21485 start.go:360] acquireMachinesLock for addons-505457: {Name:mkc602ac9c8cc3360dcc0fcb8104303837aaab61 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0806 07:06:45.811748   21485 start.go:364] duration metric: took 52.327µs to acquireMachinesLock for "addons-505457"
	I0806 07:06:45.811767   21485 start.go:93] Provisioning new machine with config: &{Name:addons-505457 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-505457 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0806 07:06:45.811836   21485 start.go:125] createHost starting for "" (driver="kvm2")
	I0806 07:06:45.813505   21485 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0806 07:06:45.813646   21485 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:06:45.813692   21485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:06:45.827860   21485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41229
	I0806 07:06:45.828259   21485 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:06:45.828786   21485 main.go:141] libmachine: Using API Version  1
	I0806 07:06:45.828810   21485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:06:45.829104   21485 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:06:45.829350   21485 main.go:141] libmachine: (addons-505457) Calling .GetMachineName
	I0806 07:06:45.829483   21485 main.go:141] libmachine: (addons-505457) Calling .DriverName
	I0806 07:06:45.829692   21485 start.go:159] libmachine.API.Create for "addons-505457" (driver="kvm2")
	I0806 07:06:45.829721   21485 client.go:168] LocalClient.Create starting
	I0806 07:06:45.829757   21485 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem
	I0806 07:06:45.913833   21485 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/cert.pem
	I0806 07:06:46.025286   21485 main.go:141] libmachine: Running pre-create checks...
	I0806 07:06:46.025308   21485 main.go:141] libmachine: (addons-505457) Calling .PreCreateCheck
	I0806 07:06:46.025762   21485 main.go:141] libmachine: (addons-505457) Calling .GetConfigRaw
	I0806 07:06:46.026193   21485 main.go:141] libmachine: Creating machine...
	I0806 07:06:46.026212   21485 main.go:141] libmachine: (addons-505457) Calling .Create
	I0806 07:06:46.026340   21485 main.go:141] libmachine: (addons-505457) Creating KVM machine...
	I0806 07:06:46.027455   21485 main.go:141] libmachine: (addons-505457) DBG | found existing default KVM network
	I0806 07:06:46.028114   21485 main.go:141] libmachine: (addons-505457) DBG | I0806 07:06:46.027973   21507 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ad0}
	I0806 07:06:46.028146   21485 main.go:141] libmachine: (addons-505457) DBG | created network xml: 
	I0806 07:06:46.028227   21485 main.go:141] libmachine: (addons-505457) DBG | <network>
	I0806 07:06:46.028247   21485 main.go:141] libmachine: (addons-505457) DBG |   <name>mk-addons-505457</name>
	I0806 07:06:46.028265   21485 main.go:141] libmachine: (addons-505457) DBG |   <dns enable='no'/>
	I0806 07:06:46.028289   21485 main.go:141] libmachine: (addons-505457) DBG |   
	I0806 07:06:46.028302   21485 main.go:141] libmachine: (addons-505457) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0806 07:06:46.028309   21485 main.go:141] libmachine: (addons-505457) DBG |     <dhcp>
	I0806 07:06:46.028324   21485 main.go:141] libmachine: (addons-505457) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0806 07:06:46.028346   21485 main.go:141] libmachine: (addons-505457) DBG |     </dhcp>
	I0806 07:06:46.028359   21485 main.go:141] libmachine: (addons-505457) DBG |   </ip>
	I0806 07:06:46.028381   21485 main.go:141] libmachine: (addons-505457) DBG |   
	I0806 07:06:46.028393   21485 main.go:141] libmachine: (addons-505457) DBG | </network>
	I0806 07:06:46.028402   21485 main.go:141] libmachine: (addons-505457) DBG | 
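The XML above describes the isolated `mk-addons-505457` network (gateway 192.168.39.1, DHCP range 192.168.39.2-253). The kvm2 driver applies it through libvirt; purely as an illustration, the same definition could be applied from Go by shelling out to virsh (a sketch under that assumption, not the driver's actual code path):

// Hedged illustration only: the kvm2 driver uses libvirt directly, but the same
// network XML shown above could be applied with the virsh CLI like this.
package main

import (
	"log"
	"os"
	"os/exec"
)

const networkXML = `<network>
  <name>mk-addons-505457</name>
  <dns enable='no'/>
  <ip address='192.168.39.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.39.2' end='192.168.39.253'/>
    </dhcp>
  </ip>
</network>`

func main() {
	// Write the XML to a temp file and hand it to virsh.
	f, err := os.CreateTemp("", "mk-net-*.xml")
	if err != nil {
		log.Fatal(err)
	}
	defer os.Remove(f.Name())
	if _, err := f.WriteString(networkXML); err != nil {
		log.Fatal(err)
	}
	f.Close()

	// Define the persistent network, start it, and mark it autostart.
	for _, args := range [][]string{
		{"net-define", f.Name()},
		{"net-start", "mk-addons-505457"},
		{"net-autostart", "mk-addons-505457"},
	} {
		cmd := exec.Command("virsh", append([]string{"--connect", "qemu:///system"}, args...)...)
		if out, err := cmd.CombinedOutput(); err != nil {
			log.Fatalf("virsh %v: %v\n%s", args, err, out)
		}
	}
}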
	I0806 07:06:46.033352   21485 main.go:141] libmachine: (addons-505457) DBG | trying to create private KVM network mk-addons-505457 192.168.39.0/24...
	I0806 07:06:46.096189   21485 main.go:141] libmachine: (addons-505457) Setting up store path in /home/jenkins/minikube-integration/19370-13057/.minikube/machines/addons-505457 ...
	I0806 07:06:46.096241   21485 main.go:141] libmachine: (addons-505457) Building disk image from file:///home/jenkins/minikube-integration/19370-13057/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
	I0806 07:06:46.096255   21485 main.go:141] libmachine: (addons-505457) DBG | private KVM network mk-addons-505457 192.168.39.0/24 created
	I0806 07:06:46.096275   21485 main.go:141] libmachine: (addons-505457) DBG | I0806 07:06:46.096120   21507 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19370-13057/.minikube
	I0806 07:06:46.096304   21485 main.go:141] libmachine: (addons-505457) Downloading /home/jenkins/minikube-integration/19370-13057/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19370-13057/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0806 07:06:46.361297   21485 main.go:141] libmachine: (addons-505457) DBG | I0806 07:06:46.361138   21507 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19370-13057/.minikube/machines/addons-505457/id_rsa...
	I0806 07:06:46.491500   21485 main.go:141] libmachine: (addons-505457) DBG | I0806 07:06:46.491355   21507 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19370-13057/.minikube/machines/addons-505457/addons-505457.rawdisk...
	I0806 07:06:46.491530   21485 main.go:141] libmachine: (addons-505457) DBG | Writing magic tar header
	I0806 07:06:46.491544   21485 main.go:141] libmachine: (addons-505457) DBG | Writing SSH key tar header
	I0806 07:06:46.491557   21485 main.go:141] libmachine: (addons-505457) DBG | I0806 07:06:46.491467   21507 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19370-13057/.minikube/machines/addons-505457 ...
	I0806 07:06:46.491631   21485 main.go:141] libmachine: (addons-505457) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19370-13057/.minikube/machines/addons-505457
	I0806 07:06:46.491668   21485 main.go:141] libmachine: (addons-505457) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19370-13057/.minikube/machines
	I0806 07:06:46.491681   21485 main.go:141] libmachine: (addons-505457) Setting executable bit set on /home/jenkins/minikube-integration/19370-13057/.minikube/machines/addons-505457 (perms=drwx------)
	I0806 07:06:46.491695   21485 main.go:141] libmachine: (addons-505457) Setting executable bit set on /home/jenkins/minikube-integration/19370-13057/.minikube/machines (perms=drwxr-xr-x)
	I0806 07:06:46.491705   21485 main.go:141] libmachine: (addons-505457) Setting executable bit set on /home/jenkins/minikube-integration/19370-13057/.minikube (perms=drwxr-xr-x)
	I0806 07:06:46.491719   21485 main.go:141] libmachine: (addons-505457) Setting executable bit set on /home/jenkins/minikube-integration/19370-13057 (perms=drwxrwxr-x)
	I0806 07:06:46.491733   21485 main.go:141] libmachine: (addons-505457) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0806 07:06:46.491748   21485 main.go:141] libmachine: (addons-505457) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0806 07:06:46.491764   21485 main.go:141] libmachine: (addons-505457) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19370-13057/.minikube
	I0806 07:06:46.491774   21485 main.go:141] libmachine: (addons-505457) Creating domain...
	I0806 07:06:46.491790   21485 main.go:141] libmachine: (addons-505457) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19370-13057
	I0806 07:06:46.491802   21485 main.go:141] libmachine: (addons-505457) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0806 07:06:46.491815   21485 main.go:141] libmachine: (addons-505457) DBG | Checking permissions on dir: /home/jenkins
	I0806 07:06:46.491826   21485 main.go:141] libmachine: (addons-505457) DBG | Checking permissions on dir: /home
	I0806 07:06:46.491839   21485 main.go:141] libmachine: (addons-505457) DBG | Skipping /home - not owner
	I0806 07:06:46.492873   21485 main.go:141] libmachine: (addons-505457) define libvirt domain using xml: 
	I0806 07:06:46.492900   21485 main.go:141] libmachine: (addons-505457) <domain type='kvm'>
	I0806 07:06:46.492915   21485 main.go:141] libmachine: (addons-505457)   <name>addons-505457</name>
	I0806 07:06:46.492924   21485 main.go:141] libmachine: (addons-505457)   <memory unit='MiB'>4000</memory>
	I0806 07:06:46.492938   21485 main.go:141] libmachine: (addons-505457)   <vcpu>2</vcpu>
	I0806 07:06:46.492943   21485 main.go:141] libmachine: (addons-505457)   <features>
	I0806 07:06:46.492949   21485 main.go:141] libmachine: (addons-505457)     <acpi/>
	I0806 07:06:46.492959   21485 main.go:141] libmachine: (addons-505457)     <apic/>
	I0806 07:06:46.492966   21485 main.go:141] libmachine: (addons-505457)     <pae/>
	I0806 07:06:46.492974   21485 main.go:141] libmachine: (addons-505457)     
	I0806 07:06:46.492985   21485 main.go:141] libmachine: (addons-505457)   </features>
	I0806 07:06:46.492999   21485 main.go:141] libmachine: (addons-505457)   <cpu mode='host-passthrough'>
	I0806 07:06:46.493007   21485 main.go:141] libmachine: (addons-505457)   
	I0806 07:06:46.493016   21485 main.go:141] libmachine: (addons-505457)   </cpu>
	I0806 07:06:46.493024   21485 main.go:141] libmachine: (addons-505457)   <os>
	I0806 07:06:46.493029   21485 main.go:141] libmachine: (addons-505457)     <type>hvm</type>
	I0806 07:06:46.493035   21485 main.go:141] libmachine: (addons-505457)     <boot dev='cdrom'/>
	I0806 07:06:46.493048   21485 main.go:141] libmachine: (addons-505457)     <boot dev='hd'/>
	I0806 07:06:46.493061   21485 main.go:141] libmachine: (addons-505457)     <bootmenu enable='no'/>
	I0806 07:06:46.493083   21485 main.go:141] libmachine: (addons-505457)   </os>
	I0806 07:06:46.493095   21485 main.go:141] libmachine: (addons-505457)   <devices>
	I0806 07:06:46.493103   21485 main.go:141] libmachine: (addons-505457)     <disk type='file' device='cdrom'>
	I0806 07:06:46.493116   21485 main.go:141] libmachine: (addons-505457)       <source file='/home/jenkins/minikube-integration/19370-13057/.minikube/machines/addons-505457/boot2docker.iso'/>
	I0806 07:06:46.493124   21485 main.go:141] libmachine: (addons-505457)       <target dev='hdc' bus='scsi'/>
	I0806 07:06:46.493129   21485 main.go:141] libmachine: (addons-505457)       <readonly/>
	I0806 07:06:46.493138   21485 main.go:141] libmachine: (addons-505457)     </disk>
	I0806 07:06:46.493152   21485 main.go:141] libmachine: (addons-505457)     <disk type='file' device='disk'>
	I0806 07:06:46.493169   21485 main.go:141] libmachine: (addons-505457)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0806 07:06:46.493184   21485 main.go:141] libmachine: (addons-505457)       <source file='/home/jenkins/minikube-integration/19370-13057/.minikube/machines/addons-505457/addons-505457.rawdisk'/>
	I0806 07:06:46.493195   21485 main.go:141] libmachine: (addons-505457)       <target dev='hda' bus='virtio'/>
	I0806 07:06:46.493206   21485 main.go:141] libmachine: (addons-505457)     </disk>
	I0806 07:06:46.493214   21485 main.go:141] libmachine: (addons-505457)     <interface type='network'>
	I0806 07:06:46.493221   21485 main.go:141] libmachine: (addons-505457)       <source network='mk-addons-505457'/>
	I0806 07:06:46.493231   21485 main.go:141] libmachine: (addons-505457)       <model type='virtio'/>
	I0806 07:06:46.493242   21485 main.go:141] libmachine: (addons-505457)     </interface>
	I0806 07:06:46.493254   21485 main.go:141] libmachine: (addons-505457)     <interface type='network'>
	I0806 07:06:46.493265   21485 main.go:141] libmachine: (addons-505457)       <source network='default'/>
	I0806 07:06:46.493277   21485 main.go:141] libmachine: (addons-505457)       <model type='virtio'/>
	I0806 07:06:46.493286   21485 main.go:141] libmachine: (addons-505457)     </interface>
	I0806 07:06:46.493344   21485 main.go:141] libmachine: (addons-505457)     <serial type='pty'>
	I0806 07:06:46.493361   21485 main.go:141] libmachine: (addons-505457)       <target port='0'/>
	I0806 07:06:46.493377   21485 main.go:141] libmachine: (addons-505457)     </serial>
	I0806 07:06:46.493390   21485 main.go:141] libmachine: (addons-505457)     <console type='pty'>
	I0806 07:06:46.493403   21485 main.go:141] libmachine: (addons-505457)       <target type='serial' port='0'/>
	I0806 07:06:46.493411   21485 main.go:141] libmachine: (addons-505457)     </console>
	I0806 07:06:46.493431   21485 main.go:141] libmachine: (addons-505457)     <rng model='virtio'>
	I0806 07:06:46.493443   21485 main.go:141] libmachine: (addons-505457)       <backend model='random'>/dev/random</backend>
	I0806 07:06:46.493461   21485 main.go:141] libmachine: (addons-505457)     </rng>
	I0806 07:06:46.493471   21485 main.go:141] libmachine: (addons-505457)     
	I0806 07:06:46.493479   21485 main.go:141] libmachine: (addons-505457)     
	I0806 07:06:46.493489   21485 main.go:141] libmachine: (addons-505457)   </devices>
	I0806 07:06:46.493498   21485 main.go:141] libmachine: (addons-505457) </domain>
	I0806 07:06:46.493505   21485 main.go:141] libmachine: (addons-505457) 
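With the domain XML above in hand, the next two steps in the log are defining the domain and then creating (starting) it. A sketch of that sequence, assuming the libvirt Go bindings (the module path `libvirt.org/go/libvirt` is an assumption here, and this is not minikube's own code):

// Hedged sketch, assuming the libvirt Go bindings: define the persistent domain
// from XML like the dump above, then start it.
package main

import (
	"log"
	"os"

	libvirt "libvirt.org/go/libvirt"
)

func main() {
	xml, err := os.ReadFile("addons-505457.xml") // the domain XML printed above
	if err != nil {
		log.Fatal(err)
	}

	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	dom, err := conn.DomainDefineXML(string(xml)) // "define libvirt domain using xml"
	if err != nil {
		log.Fatal(err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil { // start the freshly defined domain
		log.Fatal(err)
	}
	log.Println("domain addons-505457 defined and started")
}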
	I0806 07:06:46.500107   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined MAC address 52:54:00:1a:ac:0c in network default
	I0806 07:06:46.500697   21485 main.go:141] libmachine: (addons-505457) Ensuring networks are active...
	I0806 07:06:46.500724   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:06:46.501398   21485 main.go:141] libmachine: (addons-505457) Ensuring network default is active
	I0806 07:06:46.501766   21485 main.go:141] libmachine: (addons-505457) Ensuring network mk-addons-505457 is active
	I0806 07:06:46.502335   21485 main.go:141] libmachine: (addons-505457) Getting domain xml...
	I0806 07:06:46.502988   21485 main.go:141] libmachine: (addons-505457) Creating domain...
	I0806 07:06:47.913123   21485 main.go:141] libmachine: (addons-505457) Waiting to get IP...
	I0806 07:06:47.913921   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:06:47.914198   21485 main.go:141] libmachine: (addons-505457) DBG | unable to find current IP address of domain addons-505457 in network mk-addons-505457
	I0806 07:06:47.914237   21485 main.go:141] libmachine: (addons-505457) DBG | I0806 07:06:47.914183   21507 retry.go:31] will retry after 303.819298ms: waiting for machine to come up
	I0806 07:06:48.219834   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:06:48.220268   21485 main.go:141] libmachine: (addons-505457) DBG | unable to find current IP address of domain addons-505457 in network mk-addons-505457
	I0806 07:06:48.220314   21485 main.go:141] libmachine: (addons-505457) DBG | I0806 07:06:48.220265   21507 retry.go:31] will retry after 289.016505ms: waiting for machine to come up
	I0806 07:06:48.510799   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:06:48.511227   21485 main.go:141] libmachine: (addons-505457) DBG | unable to find current IP address of domain addons-505457 in network mk-addons-505457
	I0806 07:06:48.511251   21485 main.go:141] libmachine: (addons-505457) DBG | I0806 07:06:48.511179   21507 retry.go:31] will retry after 313.891709ms: waiting for machine to come up
	I0806 07:06:48.826515   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:06:48.827049   21485 main.go:141] libmachine: (addons-505457) DBG | unable to find current IP address of domain addons-505457 in network mk-addons-505457
	I0806 07:06:48.827071   21485 main.go:141] libmachine: (addons-505457) DBG | I0806 07:06:48.827018   21507 retry.go:31] will retry after 378.126122ms: waiting for machine to come up
	I0806 07:06:49.206422   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:06:49.206870   21485 main.go:141] libmachine: (addons-505457) DBG | unable to find current IP address of domain addons-505457 in network mk-addons-505457
	I0806 07:06:49.206896   21485 main.go:141] libmachine: (addons-505457) DBG | I0806 07:06:49.206834   21507 retry.go:31] will retry after 633.845271ms: waiting for machine to come up
	I0806 07:06:49.842758   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:06:49.843234   21485 main.go:141] libmachine: (addons-505457) DBG | unable to find current IP address of domain addons-505457 in network mk-addons-505457
	I0806 07:06:49.843263   21485 main.go:141] libmachine: (addons-505457) DBG | I0806 07:06:49.843175   21507 retry.go:31] will retry after 776.815657ms: waiting for machine to come up
	I0806 07:06:50.620976   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:06:50.621424   21485 main.go:141] libmachine: (addons-505457) DBG | unable to find current IP address of domain addons-505457 in network mk-addons-505457
	I0806 07:06:50.621450   21485 main.go:141] libmachine: (addons-505457) DBG | I0806 07:06:50.621376   21507 retry.go:31] will retry after 1.059906345s: waiting for machine to come up
	I0806 07:06:51.683023   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:06:51.683531   21485 main.go:141] libmachine: (addons-505457) DBG | unable to find current IP address of domain addons-505457 in network mk-addons-505457
	I0806 07:06:51.683553   21485 main.go:141] libmachine: (addons-505457) DBG | I0806 07:06:51.683482   21507 retry.go:31] will retry after 1.356188516s: waiting for machine to come up
	I0806 07:06:53.043011   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:06:53.043468   21485 main.go:141] libmachine: (addons-505457) DBG | unable to find current IP address of domain addons-505457 in network mk-addons-505457
	I0806 07:06:53.043489   21485 main.go:141] libmachine: (addons-505457) DBG | I0806 07:06:53.043433   21507 retry.go:31] will retry after 1.355832507s: waiting for machine to come up
	I0806 07:06:54.401010   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:06:54.401401   21485 main.go:141] libmachine: (addons-505457) DBG | unable to find current IP address of domain addons-505457 in network mk-addons-505457
	I0806 07:06:54.401431   21485 main.go:141] libmachine: (addons-505457) DBG | I0806 07:06:54.401345   21507 retry.go:31] will retry after 1.545085748s: waiting for machine to come up
	I0806 07:06:55.947981   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:06:55.948495   21485 main.go:141] libmachine: (addons-505457) DBG | unable to find current IP address of domain addons-505457 in network mk-addons-505457
	I0806 07:06:55.948523   21485 main.go:141] libmachine: (addons-505457) DBG | I0806 07:06:55.948456   21507 retry.go:31] will retry after 2.636113139s: waiting for machine to come up
	I0806 07:06:58.588161   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:06:58.588721   21485 main.go:141] libmachine: (addons-505457) DBG | unable to find current IP address of domain addons-505457 in network mk-addons-505457
	I0806 07:06:58.588749   21485 main.go:141] libmachine: (addons-505457) DBG | I0806 07:06:58.588683   21507 retry.go:31] will retry after 3.160002907s: waiting for machine to come up
	I0806 07:07:01.750669   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:07:01.751223   21485 main.go:141] libmachine: (addons-505457) DBG | unable to find current IP address of domain addons-505457 in network mk-addons-505457
	I0806 07:07:01.751248   21485 main.go:141] libmachine: (addons-505457) DBG | I0806 07:07:01.751194   21507 retry.go:31] will retry after 3.715915148s: waiting for machine to come up
	I0806 07:07:05.470996   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:07:05.471282   21485 main.go:141] libmachine: (addons-505457) DBG | unable to find current IP address of domain addons-505457 in network mk-addons-505457
	I0806 07:07:05.471308   21485 main.go:141] libmachine: (addons-505457) DBG | I0806 07:07:05.471243   21507 retry.go:31] will retry after 4.206631754s: waiting for machine to come up
	I0806 07:07:09.682654   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:07:09.683157   21485 main.go:141] libmachine: (addons-505457) Found IP for machine: 192.168.39.6
	I0806 07:07:09.683197   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has current primary IP address 192.168.39.6 and MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
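The preceding block is a poll loop: each "will retry after ..." line sleeps a growing, jittered interval and then re-checks the network's DHCP leases for the VM's MAC address until an IP appears. A generic sketch of such a backoff poll, with a placeholder lookup function (names and intervals are illustrative, not minikube's retry helper):

// Hedged sketch of the growing-interval retry visible above ("will retry after ...").
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP polls lookup() until it returns an address or the deadline passes,
// sleeping a randomized, growing interval between attempts.
func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	wait := 300 * time.Millisecond
	for attempt := 1; time.Now().Before(deadline); attempt++ {
		if ip, err := lookup(); err == nil && ip != "" {
			return ip, nil
		}
		sleep := wait + time.Duration(rand.Int63n(int64(wait)/2)) // add jitter
		fmt.Printf("attempt %d: will retry after %s: waiting for machine to come up\n", attempt, sleep)
		time.Sleep(sleep)
		wait = wait * 3 / 2 // grow the interval, roughly as in the log
	}
	return "", errors.New("machine never reported an IP address")
}

func main() {
	ip, err := waitForIP(func() (string, error) {
		// Placeholder: a real lookup would parse the DHCP leases of the
		// mk-addons-505457 network for MAC 52:54:00:d5:7a:7b.
		return "", errors.New("no lease yet")
	}, 5*time.Second)
	fmt.Println(ip, err)
}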
	I0806 07:07:09.683211   21485 main.go:141] libmachine: (addons-505457) Reserving static IP address...
	I0806 07:07:09.683636   21485 main.go:141] libmachine: (addons-505457) DBG | unable to find host DHCP lease matching {name: "addons-505457", mac: "52:54:00:d5:7a:7b", ip: "192.168.39.6"} in network mk-addons-505457
	I0806 07:07:09.753707   21485 main.go:141] libmachine: (addons-505457) DBG | Getting to WaitForSSH function...
	I0806 07:07:09.753736   21485 main.go:141] libmachine: (addons-505457) Reserved static IP address: 192.168.39.6
	I0806 07:07:09.753782   21485 main.go:141] libmachine: (addons-505457) Waiting for SSH to be available...
	I0806 07:07:09.756309   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:07:09.756812   21485 main.go:141] libmachine: (addons-505457) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:7a:7b", ip: ""} in network mk-addons-505457: {Iface:virbr1 ExpiryTime:2024-08-06 08:07:01 +0000 UTC Type:0 Mac:52:54:00:d5:7a:7b Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:minikube Clientid:01:52:54:00:d5:7a:7b}
	I0806 07:07:09.756845   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined IP address 192.168.39.6 and MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:07:09.757022   21485 main.go:141] libmachine: (addons-505457) DBG | Using SSH client type: external
	I0806 07:07:09.757043   21485 main.go:141] libmachine: (addons-505457) DBG | Using SSH private key: /home/jenkins/minikube-integration/19370-13057/.minikube/machines/addons-505457/id_rsa (-rw-------)
	I0806 07:07:09.757079   21485 main.go:141] libmachine: (addons-505457) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.6 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19370-13057/.minikube/machines/addons-505457/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0806 07:07:09.757093   21485 main.go:141] libmachine: (addons-505457) DBG | About to run SSH command:
	I0806 07:07:09.757105   21485 main.go:141] libmachine: (addons-505457) DBG | exit 0
	I0806 07:07:09.896534   21485 main.go:141] libmachine: (addons-505457) DBG | SSH cmd err, output: <nil>: 
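SSH availability is probed by running `exit 0` through the external ssh client with the options logged above; a zero exit status means the guest is reachable. A minimal sketch of that probe using os/exec (illustrative only, not the driver's WaitForSSH implementation):

// Hedged sketch of the external-ssh probe logged above: run `exit 0` over ssh with
// host-key checking disabled and treat a zero exit status as "SSH available".
package main

import (
	"fmt"
	"os/exec"
)

func sshAvailable(ip, keyPath string) bool {
	cmd := exec.Command("ssh",
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		"docker@"+ip,
		"exit 0",
	)
	return cmd.Run() == nil
}

func main() {
	ok := sshAvailable("192.168.39.6", "/home/jenkins/minikube-integration/19370-13057/.minikube/machines/addons-505457/id_rsa")
	fmt.Println("ssh available:", ok)
}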
	I0806 07:07:09.896811   21485 main.go:141] libmachine: (addons-505457) KVM machine creation complete!
	I0806 07:07:09.897099   21485 main.go:141] libmachine: (addons-505457) Calling .GetConfigRaw
	I0806 07:07:09.897614   21485 main.go:141] libmachine: (addons-505457) Calling .DriverName
	I0806 07:07:09.897870   21485 main.go:141] libmachine: (addons-505457) Calling .DriverName
	I0806 07:07:09.898059   21485 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0806 07:07:09.898075   21485 main.go:141] libmachine: (addons-505457) Calling .GetState
	I0806 07:07:09.899322   21485 main.go:141] libmachine: Detecting operating system of created instance...
	I0806 07:07:09.899336   21485 main.go:141] libmachine: Waiting for SSH to be available...
	I0806 07:07:09.899342   21485 main.go:141] libmachine: Getting to WaitForSSH function...
	I0806 07:07:09.899347   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHHostname
	I0806 07:07:09.901405   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:07:09.901763   21485 main.go:141] libmachine: (addons-505457) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:7a:7b", ip: ""} in network mk-addons-505457: {Iface:virbr1 ExpiryTime:2024-08-06 08:07:01 +0000 UTC Type:0 Mac:52:54:00:d5:7a:7b Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:addons-505457 Clientid:01:52:54:00:d5:7a:7b}
	I0806 07:07:09.901805   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined IP address 192.168.39.6 and MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:07:09.901925   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHPort
	I0806 07:07:09.902106   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHKeyPath
	I0806 07:07:09.902268   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHKeyPath
	I0806 07:07:09.902412   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHUsername
	I0806 07:07:09.902594   21485 main.go:141] libmachine: Using SSH client type: native
	I0806 07:07:09.902762   21485 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0806 07:07:09.902773   21485 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0806 07:07:10.015723   21485 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0806 07:07:10.015747   21485 main.go:141] libmachine: Detecting the provisioner...
	I0806 07:07:10.015759   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHHostname
	I0806 07:07:10.018158   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:07:10.018530   21485 main.go:141] libmachine: (addons-505457) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:7a:7b", ip: ""} in network mk-addons-505457: {Iface:virbr1 ExpiryTime:2024-08-06 08:07:01 +0000 UTC Type:0 Mac:52:54:00:d5:7a:7b Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:addons-505457 Clientid:01:52:54:00:d5:7a:7b}
	I0806 07:07:10.018567   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined IP address 192.168.39.6 and MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:07:10.018675   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHPort
	I0806 07:07:10.018861   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHKeyPath
	I0806 07:07:10.019005   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHKeyPath
	I0806 07:07:10.019132   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHUsername
	I0806 07:07:10.019303   21485 main.go:141] libmachine: Using SSH client type: native
	I0806 07:07:10.019478   21485 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0806 07:07:10.019491   21485 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0806 07:07:10.133113   21485 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0806 07:07:10.133187   21485 main.go:141] libmachine: found compatible host: buildroot
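The provisioner decides how to configure the guest by reading `/etc/os-release` (shown above) and matching the distribution ID, here `buildroot`. A small sketch of that detection step, assuming a simple line scan rather than minikube's own parser:

// Hedged sketch: read the ID field from /etc/os-release, the same information
// the provisioner uses above before choosing the buildroot path.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func osReleaseID(path string) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()

	s := bufio.NewScanner(f)
	for s.Scan() {
		line := s.Text()
		if strings.HasPrefix(line, "ID=") {
			return strings.Trim(strings.TrimPrefix(line, "ID="), `"`), nil
		}
	}
	return "", fmt.Errorf("no ID field in %s", path)
}

func main() {
	id, err := osReleaseID("/etc/os-release")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("detected distribution:", id) // prints "buildroot" on the minikube guest
}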
	I0806 07:07:10.133198   21485 main.go:141] libmachine: Provisioning with buildroot...
	I0806 07:07:10.133207   21485 main.go:141] libmachine: (addons-505457) Calling .GetMachineName
	I0806 07:07:10.133472   21485 buildroot.go:166] provisioning hostname "addons-505457"
	I0806 07:07:10.133501   21485 main.go:141] libmachine: (addons-505457) Calling .GetMachineName
	I0806 07:07:10.133736   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHHostname
	I0806 07:07:10.136322   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:07:10.136596   21485 main.go:141] libmachine: (addons-505457) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:7a:7b", ip: ""} in network mk-addons-505457: {Iface:virbr1 ExpiryTime:2024-08-06 08:07:01 +0000 UTC Type:0 Mac:52:54:00:d5:7a:7b Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:addons-505457 Clientid:01:52:54:00:d5:7a:7b}
	I0806 07:07:10.136635   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined IP address 192.168.39.6 and MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:07:10.136849   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHPort
	I0806 07:07:10.137048   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHKeyPath
	I0806 07:07:10.137179   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHKeyPath
	I0806 07:07:10.137343   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHUsername
	I0806 07:07:10.137505   21485 main.go:141] libmachine: Using SSH client type: native
	I0806 07:07:10.137664   21485 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0806 07:07:10.137676   21485 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-505457 && echo "addons-505457" | sudo tee /etc/hostname
	I0806 07:07:10.267193   21485 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-505457
	
	I0806 07:07:10.267215   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHHostname
	I0806 07:07:10.269789   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:07:10.270099   21485 main.go:141] libmachine: (addons-505457) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:7a:7b", ip: ""} in network mk-addons-505457: {Iface:virbr1 ExpiryTime:2024-08-06 08:07:01 +0000 UTC Type:0 Mac:52:54:00:d5:7a:7b Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:addons-505457 Clientid:01:52:54:00:d5:7a:7b}
	I0806 07:07:10.270127   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined IP address 192.168.39.6 and MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:07:10.270274   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHPort
	I0806 07:07:10.270445   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHKeyPath
	I0806 07:07:10.270626   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHKeyPath
	I0806 07:07:10.270771   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHUsername
	I0806 07:07:10.271035   21485 main.go:141] libmachine: Using SSH client type: native
	I0806 07:07:10.271219   21485 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0806 07:07:10.271233   21485 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-505457' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-505457/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-505457' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0806 07:07:10.393141   21485 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0806 07:07:10.393174   21485 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19370-13057/.minikube CaCertPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19370-13057/.minikube}
	I0806 07:07:10.393214   21485 buildroot.go:174] setting up certificates
	I0806 07:07:10.393228   21485 provision.go:84] configureAuth start
	I0806 07:07:10.393237   21485 main.go:141] libmachine: (addons-505457) Calling .GetMachineName
	I0806 07:07:10.393536   21485 main.go:141] libmachine: (addons-505457) Calling .GetIP
	I0806 07:07:10.396273   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:07:10.396769   21485 main.go:141] libmachine: (addons-505457) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:7a:7b", ip: ""} in network mk-addons-505457: {Iface:virbr1 ExpiryTime:2024-08-06 08:07:01 +0000 UTC Type:0 Mac:52:54:00:d5:7a:7b Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:addons-505457 Clientid:01:52:54:00:d5:7a:7b}
	I0806 07:07:10.396797   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined IP address 192.168.39.6 and MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:07:10.397001   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHHostname
	I0806 07:07:10.399088   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:07:10.399417   21485 main.go:141] libmachine: (addons-505457) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:7a:7b", ip: ""} in network mk-addons-505457: {Iface:virbr1 ExpiryTime:2024-08-06 08:07:01 +0000 UTC Type:0 Mac:52:54:00:d5:7a:7b Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:addons-505457 Clientid:01:52:54:00:d5:7a:7b}
	I0806 07:07:10.399441   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined IP address 192.168.39.6 and MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:07:10.399582   21485 provision.go:143] copyHostCerts
	I0806 07:07:10.399650   21485 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19370-13057/.minikube/ca.pem (1078 bytes)
	I0806 07:07:10.399771   21485 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19370-13057/.minikube/cert.pem (1123 bytes)
	I0806 07:07:10.399857   21485 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19370-13057/.minikube/key.pem (1675 bytes)
	I0806 07:07:10.399916   21485 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19370-13057/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca-key.pem org=jenkins.addons-505457 san=[127.0.0.1 192.168.39.6 addons-505457 localhost minikube]
	I0806 07:07:10.558240   21485 provision.go:177] copyRemoteCerts
	I0806 07:07:10.558299   21485 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0806 07:07:10.558332   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHHostname
	I0806 07:07:10.561302   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:07:10.561613   21485 main.go:141] libmachine: (addons-505457) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:7a:7b", ip: ""} in network mk-addons-505457: {Iface:virbr1 ExpiryTime:2024-08-06 08:07:01 +0000 UTC Type:0 Mac:52:54:00:d5:7a:7b Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:addons-505457 Clientid:01:52:54:00:d5:7a:7b}
	I0806 07:07:10.561638   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined IP address 192.168.39.6 and MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:07:10.561847   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHPort
	I0806 07:07:10.562072   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHKeyPath
	I0806 07:07:10.562271   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHUsername
	I0806 07:07:10.562397   21485 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/addons-505457/id_rsa Username:docker}
	I0806 07:07:10.651363   21485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0806 07:07:10.676106   21485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0806 07:07:10.700880   21485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0806 07:07:10.725132   21485 provision.go:87] duration metric: took 331.892152ms to configureAuth
	I0806 07:07:10.725265   21485 buildroot.go:189] setting minikube options for container-runtime
	I0806 07:07:10.725455   21485 config.go:182] Loaded profile config "addons-505457": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0806 07:07:10.725566   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHHostname
	I0806 07:07:10.728296   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:07:10.728666   21485 main.go:141] libmachine: (addons-505457) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:7a:7b", ip: ""} in network mk-addons-505457: {Iface:virbr1 ExpiryTime:2024-08-06 08:07:01 +0000 UTC Type:0 Mac:52:54:00:d5:7a:7b Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:addons-505457 Clientid:01:52:54:00:d5:7a:7b}
	I0806 07:07:10.728695   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined IP address 192.168.39.6 and MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:07:10.728880   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHPort
	I0806 07:07:10.729049   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHKeyPath
	I0806 07:07:10.729198   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHKeyPath
	I0806 07:07:10.729294   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHUsername
	I0806 07:07:10.729399   21485 main.go:141] libmachine: Using SSH client type: native
	I0806 07:07:10.729548   21485 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0806 07:07:10.729563   21485 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0806 07:07:11.019239   21485 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0806 07:07:11.019261   21485 main.go:141] libmachine: Checking connection to Docker...
	I0806 07:07:11.019269   21485 main.go:141] libmachine: (addons-505457) Calling .GetURL
	I0806 07:07:11.020572   21485 main.go:141] libmachine: (addons-505457) DBG | Using libvirt version 6000000
	I0806 07:07:11.022645   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:07:11.022912   21485 main.go:141] libmachine: (addons-505457) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:7a:7b", ip: ""} in network mk-addons-505457: {Iface:virbr1 ExpiryTime:2024-08-06 08:07:01 +0000 UTC Type:0 Mac:52:54:00:d5:7a:7b Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:addons-505457 Clientid:01:52:54:00:d5:7a:7b}
	I0806 07:07:11.022945   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined IP address 192.168.39.6 and MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:07:11.023140   21485 main.go:141] libmachine: Docker is up and running!
	I0806 07:07:11.023154   21485 main.go:141] libmachine: Reticulating splines...
	I0806 07:07:11.023162   21485 client.go:171] duration metric: took 25.193432238s to LocalClient.Create
	I0806 07:07:11.023188   21485 start.go:167] duration metric: took 25.193495675s to libmachine.API.Create "addons-505457"
	I0806 07:07:11.023199   21485 start.go:293] postStartSetup for "addons-505457" (driver="kvm2")
	I0806 07:07:11.023213   21485 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0806 07:07:11.023237   21485 main.go:141] libmachine: (addons-505457) Calling .DriverName
	I0806 07:07:11.023463   21485 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0806 07:07:11.023485   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHHostname
	I0806 07:07:11.025659   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:07:11.025946   21485 main.go:141] libmachine: (addons-505457) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:7a:7b", ip: ""} in network mk-addons-505457: {Iface:virbr1 ExpiryTime:2024-08-06 08:07:01 +0000 UTC Type:0 Mac:52:54:00:d5:7a:7b Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:addons-505457 Clientid:01:52:54:00:d5:7a:7b}
	I0806 07:07:11.025972   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined IP address 192.168.39.6 and MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:07:11.026072   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHPort
	I0806 07:07:11.026248   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHKeyPath
	I0806 07:07:11.026415   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHUsername
	I0806 07:07:11.026541   21485 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/addons-505457/id_rsa Username:docker}
	I0806 07:07:11.115415   21485 ssh_runner.go:195] Run: cat /etc/os-release
	I0806 07:07:11.119552   21485 info.go:137] Remote host: Buildroot 2023.02.9
	I0806 07:07:11.119573   21485 filesync.go:126] Scanning /home/jenkins/minikube-integration/19370-13057/.minikube/addons for local assets ...
	I0806 07:07:11.119682   21485 filesync.go:126] Scanning /home/jenkins/minikube-integration/19370-13057/.minikube/files for local assets ...
	I0806 07:07:11.119723   21485 start.go:296] duration metric: took 96.516549ms for postStartSetup
	I0806 07:07:11.119764   21485 main.go:141] libmachine: (addons-505457) Calling .GetConfigRaw
	I0806 07:07:11.120384   21485 main.go:141] libmachine: (addons-505457) Calling .GetIP
	I0806 07:07:11.122996   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:07:11.123369   21485 main.go:141] libmachine: (addons-505457) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:7a:7b", ip: ""} in network mk-addons-505457: {Iface:virbr1 ExpiryTime:2024-08-06 08:07:01 +0000 UTC Type:0 Mac:52:54:00:d5:7a:7b Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:addons-505457 Clientid:01:52:54:00:d5:7a:7b}
	I0806 07:07:11.123396   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined IP address 192.168.39.6 and MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:07:11.123595   21485 profile.go:143] Saving config to /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/addons-505457/config.json ...
	I0806 07:07:11.123822   21485 start.go:128] duration metric: took 25.311976507s to createHost
	I0806 07:07:11.123846   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHHostname
	I0806 07:07:11.126008   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:07:11.126284   21485 main.go:141] libmachine: (addons-505457) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:7a:7b", ip: ""} in network mk-addons-505457: {Iface:virbr1 ExpiryTime:2024-08-06 08:07:01 +0000 UTC Type:0 Mac:52:54:00:d5:7a:7b Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:addons-505457 Clientid:01:52:54:00:d5:7a:7b}
	I0806 07:07:11.126309   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined IP address 192.168.39.6 and MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:07:11.126504   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHPort
	I0806 07:07:11.126653   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHKeyPath
	I0806 07:07:11.126760   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHKeyPath
	I0806 07:07:11.126915   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHUsername
	I0806 07:07:11.127065   21485 main.go:141] libmachine: Using SSH client type: native
	I0806 07:07:11.127262   21485 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0806 07:07:11.127273   21485 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0806 07:07:11.241267   21485 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722928031.217396244
	
	I0806 07:07:11.241309   21485 fix.go:216] guest clock: 1722928031.217396244
	I0806 07:07:11.241319   21485 fix.go:229] Guest: 2024-08-06 07:07:11.217396244 +0000 UTC Remote: 2024-08-06 07:07:11.123835233 +0000 UTC m=+25.411716346 (delta=93.561011ms)
	I0806 07:07:11.241363   21485 fix.go:200] guest clock delta is within tolerance: 93.561011ms
	I0806 07:07:11.241372   21485 start.go:83] releasing machines lock for "addons-505457", held for 25.429613061s
	I0806 07:07:11.241405   21485 main.go:141] libmachine: (addons-505457) Calling .DriverName
	I0806 07:07:11.241691   21485 main.go:141] libmachine: (addons-505457) Calling .GetIP
	I0806 07:07:11.245016   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:07:11.245431   21485 main.go:141] libmachine: (addons-505457) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:7a:7b", ip: ""} in network mk-addons-505457: {Iface:virbr1 ExpiryTime:2024-08-06 08:07:01 +0000 UTC Type:0 Mac:52:54:00:d5:7a:7b Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:addons-505457 Clientid:01:52:54:00:d5:7a:7b}
	I0806 07:07:11.245457   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined IP address 192.168.39.6 and MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:07:11.245663   21485 main.go:141] libmachine: (addons-505457) Calling .DriverName
	I0806 07:07:11.246337   21485 main.go:141] libmachine: (addons-505457) Calling .DriverName
	I0806 07:07:11.246560   21485 main.go:141] libmachine: (addons-505457) Calling .DriverName
	I0806 07:07:11.246655   21485 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0806 07:07:11.246686   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHHostname
	I0806 07:07:11.246795   21485 ssh_runner.go:195] Run: cat /version.json
	I0806 07:07:11.246821   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHHostname
	I0806 07:07:11.249742   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:07:11.249801   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:07:11.250227   21485 main.go:141] libmachine: (addons-505457) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:7a:7b", ip: ""} in network mk-addons-505457: {Iface:virbr1 ExpiryTime:2024-08-06 08:07:01 +0000 UTC Type:0 Mac:52:54:00:d5:7a:7b Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:addons-505457 Clientid:01:52:54:00:d5:7a:7b}
	I0806 07:07:11.250248   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined IP address 192.168.39.6 and MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:07:11.250272   21485 main.go:141] libmachine: (addons-505457) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:7a:7b", ip: ""} in network mk-addons-505457: {Iface:virbr1 ExpiryTime:2024-08-06 08:07:01 +0000 UTC Type:0 Mac:52:54:00:d5:7a:7b Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:addons-505457 Clientid:01:52:54:00:d5:7a:7b}
	I0806 07:07:11.250287   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined IP address 192.168.39.6 and MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:07:11.250402   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHPort
	I0806 07:07:11.250541   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHPort
	I0806 07:07:11.250695   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHKeyPath
	I0806 07:07:11.250707   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHKeyPath
	I0806 07:07:11.250857   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHUsername
	I0806 07:07:11.250882   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHUsername
	I0806 07:07:11.251014   21485 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/addons-505457/id_rsa Username:docker}
	I0806 07:07:11.251018   21485 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/addons-505457/id_rsa Username:docker}
	I0806 07:07:11.356388   21485 ssh_runner.go:195] Run: systemctl --version
	I0806 07:07:11.362561   21485 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0806 07:07:11.521417   21485 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0806 07:07:11.527258   21485 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0806 07:07:11.527328   21485 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0806 07:07:11.546564   21485 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0806 07:07:11.546598   21485 start.go:495] detecting cgroup driver to use...
	I0806 07:07:11.546661   21485 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0806 07:07:11.563089   21485 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0806 07:07:11.577568   21485 docker.go:217] disabling cri-docker service (if available) ...
	I0806 07:07:11.577635   21485 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0806 07:07:11.591904   21485 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0806 07:07:11.606015   21485 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0806 07:07:11.717025   21485 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0806 07:07:11.857443   21485 docker.go:233] disabling docker service ...
	I0806 07:07:11.857523   21485 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0806 07:07:11.875227   21485 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0806 07:07:11.889022   21485 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0806 07:07:12.041847   21485 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0806 07:07:12.167764   21485 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0806 07:07:12.182018   21485 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0806 07:07:12.200600   21485 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0806 07:07:12.200673   21485 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 07:07:12.211140   21485 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0806 07:07:12.211209   21485 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 07:07:12.221989   21485 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 07:07:12.232576   21485 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 07:07:12.242915   21485 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0806 07:07:12.254666   21485 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 07:07:12.265714   21485 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 07:07:12.283601   21485 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
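	Taken together, the sed edits above point the CRI-O drop-in at the registry.k8s.io/pause:3.9 pause image, switch the cgroup manager to cgroupfs with a pod-scoped conmon cgroup, and enable the unprivileged-port sysctl. A minimal way to spot-check that end state from a shell inside the guest, assuming the drop-in path shown in the log (illustrative only, not part of the test flow):

	    # grep the keys rewritten by the sed commands above; expected values shown as comments
	    grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	    # pause_image = "registry.k8s.io/pause:3.9"
	    # cgroup_manager = "cgroupfs"
	    # conmon_cgroup = "pod"
	    #   "net.ipv4.ip_unprivileged_port_start=0",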
	I0806 07:07:12.294216   21485 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0806 07:07:12.303945   21485 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0806 07:07:12.304000   21485 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0806 07:07:12.316706   21485 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0806 07:07:12.326696   21485 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 07:07:12.441365   21485 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0806 07:07:12.584552   21485 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0806 07:07:12.584637   21485 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0806 07:07:12.589764   21485 start.go:563] Will wait 60s for crictl version
	I0806 07:07:12.589848   21485 ssh_runner.go:195] Run: which crictl
	I0806 07:07:12.593691   21485 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0806 07:07:12.630920   21485 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0806 07:07:12.631014   21485 ssh_runner.go:195] Run: crio --version
	I0806 07:07:12.658831   21485 ssh_runner.go:195] Run: crio --version
	I0806 07:07:12.690543   21485 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0806 07:07:12.692016   21485 main.go:141] libmachine: (addons-505457) Calling .GetIP
	I0806 07:07:12.694965   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:07:12.695270   21485 main.go:141] libmachine: (addons-505457) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:7a:7b", ip: ""} in network mk-addons-505457: {Iface:virbr1 ExpiryTime:2024-08-06 08:07:01 +0000 UTC Type:0 Mac:52:54:00:d5:7a:7b Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:addons-505457 Clientid:01:52:54:00:d5:7a:7b}
	I0806 07:07:12.695294   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined IP address 192.168.39.6 and MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:07:12.695515   21485 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0806 07:07:12.699698   21485 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0806 07:07:12.712115   21485 kubeadm.go:883] updating cluster {Name:addons-505457 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-505457 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0806 07:07:12.712256   21485 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0806 07:07:12.712310   21485 ssh_runner.go:195] Run: sudo crictl images --output json
	I0806 07:07:12.744020   21485 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0806 07:07:12.744084   21485 ssh_runner.go:195] Run: which lz4
	I0806 07:07:12.748495   21485 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0806 07:07:12.752915   21485 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0806 07:07:12.752947   21485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0806 07:07:14.100542   21485 crio.go:462] duration metric: took 1.35214106s to copy over tarball
	I0806 07:07:14.100636   21485 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0806 07:07:16.444828   21485 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.344155382s)
	I0806 07:07:16.444868   21485 crio.go:469] duration metric: took 2.344294513s to extract the tarball
	I0806 07:07:16.444876   21485 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0806 07:07:16.483687   21485 ssh_runner.go:195] Run: sudo crictl images --output json
	I0806 07:07:16.533613   21485 crio.go:514] all images are preloaded for cri-o runtime.
	I0806 07:07:16.533640   21485 cache_images.go:84] Images are preloaded, skipping loading
	I0806 07:07:16.533660   21485 kubeadm.go:934] updating node { 192.168.39.6 8443 v1.30.3 crio true true} ...
	I0806 07:07:16.533764   21485 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-505457 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.6
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:addons-505457 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0806 07:07:16.533837   21485 ssh_runner.go:195] Run: crio config
	I0806 07:07:16.583416   21485 cni.go:84] Creating CNI manager for ""
	I0806 07:07:16.583439   21485 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0806 07:07:16.583447   21485 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0806 07:07:16.583468   21485 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.6 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-505457 NodeName:addons-505457 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.6"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.6 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0806 07:07:16.583588   21485 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.6
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-505457"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.6
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.6"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
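	The block above is the kubeadm configuration minikube generates and copies to /var/tmp/minikube/kubeadm.yaml.new before running kubeadm init (see the scp and cp steps below). When reproducing this run locally, the generated file can be sanity-checked offline before use; a sketch, assuming the YAML is saved as kubeadm.yaml and a kubeadm v1.30.x binary is on PATH (the validate subcommand only exists on recent kubeadm releases):

	    # offline schema check of the generated config; hypothetical local step, not part of the test flow
	    kubeadm config validate --config kubeadm.yaml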
	I0806 07:07:16.583649   21485 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0806 07:07:16.594082   21485 binaries.go:44] Found k8s binaries, skipping transfer
	I0806 07:07:16.594157   21485 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0806 07:07:16.604306   21485 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0806 07:07:16.620616   21485 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0806 07:07:16.637142   21485 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
	I0806 07:07:16.654373   21485 ssh_runner.go:195] Run: grep 192.168.39.6	control-plane.minikube.internal$ /etc/hosts
	I0806 07:07:16.658371   21485 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.6	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0806 07:07:16.671832   21485 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 07:07:16.789769   21485 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0806 07:07:16.807525   21485 certs.go:68] Setting up /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/addons-505457 for IP: 192.168.39.6
	I0806 07:07:16.807554   21485 certs.go:194] generating shared ca certs ...
	I0806 07:07:16.807589   21485 certs.go:226] acquiring lock for ca certs: {Name:mkf5c5aed91f4108054d9f2c742cad66c86afbed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 07:07:16.807758   21485 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19370-13057/.minikube/ca.key
	I0806 07:07:16.959954   21485 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19370-13057/.minikube/ca.crt ...
	I0806 07:07:16.959985   21485 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-13057/.minikube/ca.crt: {Name:mk3e1a3a83bc9726eedf3bb6b14c66e1d057eef7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 07:07:16.960175   21485 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19370-13057/.minikube/ca.key ...
	I0806 07:07:16.960190   21485 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-13057/.minikube/ca.key: {Name:mk03682aa44f086e8b8699e058741f3f6e561f04 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 07:07:16.960283   21485 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.key
	I0806 07:07:17.430458   21485 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.crt ...
	I0806 07:07:17.430489   21485 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.crt: {Name:mk403c4b5dfc06b347e4e3bd6ec9130c8187c172 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 07:07:17.430644   21485 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.key ...
	I0806 07:07:17.430654   21485 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.key: {Name:mk7d37cf4d4ae9a8d1ef98ba079a9a37709655e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 07:07:17.430718   21485 certs.go:256] generating profile certs ...
	I0806 07:07:17.430769   21485 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/addons-505457/client.key
	I0806 07:07:17.430779   21485 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/addons-505457/client.crt with IP's: []
	I0806 07:07:18.001607   21485 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/addons-505457/client.crt ...
	I0806 07:07:18.001637   21485 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/addons-505457/client.crt: {Name:mk1a65d31a4aa6d8d190f854e2e2bbb382cce155 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 07:07:18.001795   21485 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/addons-505457/client.key ...
	I0806 07:07:18.001805   21485 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/addons-505457/client.key: {Name:mk3830dfb911cd9bfd155214549d8758ddc9d4d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 07:07:18.001872   21485 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/addons-505457/apiserver.key.97df24c3
	I0806 07:07:18.001890   21485 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/addons-505457/apiserver.crt.97df24c3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.6]
	I0806 07:07:18.134522   21485 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/addons-505457/apiserver.crt.97df24c3 ...
	I0806 07:07:18.134552   21485 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/addons-505457/apiserver.crt.97df24c3: {Name:mkddc23c4834db15fa89cfacac311a5e4c07ac65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 07:07:18.134707   21485 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/addons-505457/apiserver.key.97df24c3 ...
	I0806 07:07:18.134719   21485 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/addons-505457/apiserver.key.97df24c3: {Name:mk96c9748b8cee91e0a7e02d7a9be63a9310ac88 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 07:07:18.134793   21485 certs.go:381] copying /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/addons-505457/apiserver.crt.97df24c3 -> /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/addons-505457/apiserver.crt
	I0806 07:07:18.134861   21485 certs.go:385] copying /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/addons-505457/apiserver.key.97df24c3 -> /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/addons-505457/apiserver.key
	I0806 07:07:18.134911   21485 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/addons-505457/proxy-client.key
	I0806 07:07:18.134927   21485 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/addons-505457/proxy-client.crt with IP's: []
	I0806 07:07:18.275587   21485 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/addons-505457/proxy-client.crt ...
	I0806 07:07:18.275620   21485 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/addons-505457/proxy-client.crt: {Name:mkcbfe8acad38f1d392edc65bf3721ae5beac0a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 07:07:18.275834   21485 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/addons-505457/proxy-client.key ...
	I0806 07:07:18.275855   21485 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/addons-505457/proxy-client.key: {Name:mkd2c740d57e918062af740fabeb121d8a5b731c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 07:07:18.276100   21485 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca-key.pem (1675 bytes)
	I0806 07:07:18.276143   21485 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem (1078 bytes)
	I0806 07:07:18.276181   21485 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/cert.pem (1123 bytes)
	I0806 07:07:18.276218   21485 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/key.pem (1675 bytes)
	I0806 07:07:18.276875   21485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0806 07:07:18.306210   21485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0806 07:07:18.331354   21485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0806 07:07:18.355074   21485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0806 07:07:18.380976   21485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/addons-505457/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0806 07:07:18.405330   21485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/addons-505457/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0806 07:07:18.428526   21485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/addons-505457/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0806 07:07:18.452442   21485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/addons-505457/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0806 07:07:18.477264   21485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0806 07:07:18.500601   21485 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0806 07:07:18.516830   21485 ssh_runner.go:195] Run: openssl version
	I0806 07:07:18.522587   21485 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0806 07:07:18.533750   21485 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0806 07:07:18.538067   21485 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  6 07:07 /usr/share/ca-certificates/minikubeCA.pem
	I0806 07:07:18.538127   21485 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0806 07:07:18.543875   21485 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0806 07:07:18.555097   21485 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0806 07:07:18.559259   21485 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0806 07:07:18.559316   21485 kubeadm.go:392] StartCluster: {Name:addons-505457 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-505457 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 07:07:18.559399   21485 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0806 07:07:18.559462   21485 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0806 07:07:18.600114   21485 cri.go:89] found id: ""
	I0806 07:07:18.600188   21485 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0806 07:07:18.611548   21485 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0806 07:07:18.622157   21485 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0806 07:07:18.632871   21485 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0806 07:07:18.632890   21485 kubeadm.go:157] found existing configuration files:
	
	I0806 07:07:18.632929   21485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0806 07:07:18.642369   21485 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0806 07:07:18.642438   21485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0806 07:07:18.652097   21485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0806 07:07:18.661721   21485 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0806 07:07:18.661774   21485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0806 07:07:18.671817   21485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0806 07:07:18.681993   21485 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0806 07:07:18.682056   21485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0806 07:07:18.692700   21485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0806 07:07:18.702466   21485 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0806 07:07:18.702539   21485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0806 07:07:18.713056   21485 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0806 07:07:18.774764   21485 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0806 07:07:18.774854   21485 kubeadm.go:310] [preflight] Running pre-flight checks
	I0806 07:07:18.911622   21485 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0806 07:07:18.911777   21485 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0806 07:07:18.911911   21485 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0806 07:07:19.137987   21485 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0806 07:07:19.328964   21485 out.go:204]   - Generating certificates and keys ...
	I0806 07:07:19.329056   21485 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0806 07:07:19.329138   21485 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0806 07:07:19.329232   21485 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0806 07:07:19.336003   21485 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0806 07:07:19.409839   21485 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0806 07:07:19.634639   21485 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0806 07:07:19.703596   21485 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0806 07:07:19.703710   21485 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-505457 localhost] and IPs [192.168.39.6 127.0.0.1 ::1]
	I0806 07:07:19.834637   21485 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0806 07:07:19.834787   21485 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-505457 localhost] and IPs [192.168.39.6 127.0.0.1 ::1]
	I0806 07:07:19.923168   21485 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0806 07:07:20.136623   21485 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0806 07:07:20.375620   21485 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0806 07:07:20.375919   21485 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0806 07:07:20.571258   21485 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0806 07:07:20.776735   21485 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0806 07:07:21.099140   21485 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0806 07:07:21.182126   21485 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0806 07:07:21.391199   21485 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0806 07:07:21.391843   21485 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0806 07:07:21.394719   21485 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0806 07:07:21.396813   21485 out.go:204]   - Booting up control plane ...
	I0806 07:07:21.396911   21485 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0806 07:07:21.397003   21485 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0806 07:07:21.399913   21485 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0806 07:07:21.417712   21485 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0806 07:07:21.418624   21485 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0806 07:07:21.418678   21485 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0806 07:07:21.555513   21485 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0806 07:07:21.555594   21485 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0806 07:07:22.556412   21485 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001622868s
	I0806 07:07:22.556504   21485 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0806 07:07:27.557223   21485 kubeadm.go:310] [api-check] The API server is healthy after 5.001751666s
	I0806 07:07:27.568314   21485 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0806 07:07:28.085164   21485 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0806 07:07:28.115735   21485 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0806 07:07:28.115965   21485 kubeadm.go:310] [mark-control-plane] Marking the node addons-505457 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0806 07:07:28.132722   21485 kubeadm.go:310] [bootstrap-token] Using token: 7h6mq1.bmdp83lhi0wmiffm
	I0806 07:07:28.134249   21485 out.go:204]   - Configuring RBAC rules ...
	I0806 07:07:28.134367   21485 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0806 07:07:28.144141   21485 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0806 07:07:28.153965   21485 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0806 07:07:28.157380   21485 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0806 07:07:28.163979   21485 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0806 07:07:28.167111   21485 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0806 07:07:28.282115   21485 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0806 07:07:28.715511   21485 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0806 07:07:29.280195   21485 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0806 07:07:29.280221   21485 kubeadm.go:310] 
	I0806 07:07:29.280292   21485 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0806 07:07:29.280299   21485 kubeadm.go:310] 
	I0806 07:07:29.280383   21485 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0806 07:07:29.280395   21485 kubeadm.go:310] 
	I0806 07:07:29.280427   21485 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0806 07:07:29.280529   21485 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0806 07:07:29.280620   21485 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0806 07:07:29.280631   21485 kubeadm.go:310] 
	I0806 07:07:29.280707   21485 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0806 07:07:29.280719   21485 kubeadm.go:310] 
	I0806 07:07:29.280756   21485 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0806 07:07:29.280762   21485 kubeadm.go:310] 
	I0806 07:07:29.280821   21485 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0806 07:07:29.280922   21485 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0806 07:07:29.280998   21485 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0806 07:07:29.281007   21485 kubeadm.go:310] 
	I0806 07:07:29.281138   21485 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0806 07:07:29.281210   21485 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0806 07:07:29.281223   21485 kubeadm.go:310] 
	I0806 07:07:29.281318   21485 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 7h6mq1.bmdp83lhi0wmiffm \
	I0806 07:07:29.281459   21485 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d36e5c1dfe989c7d561c809d7c06e0fc13926ac8f6d664cc5d6865ea5f79931b \
	I0806 07:07:29.281494   21485 kubeadm.go:310] 	--control-plane 
	I0806 07:07:29.281500   21485 kubeadm.go:310] 
	I0806 07:07:29.281623   21485 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0806 07:07:29.281644   21485 kubeadm.go:310] 
	I0806 07:07:29.281748   21485 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 7h6mq1.bmdp83lhi0wmiffm \
	I0806 07:07:29.281867   21485 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d36e5c1dfe989c7d561c809d7c06e0fc13926ac8f6d664cc5d6865ea5f79931b 
	I0806 07:07:29.282270   21485 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
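The [WARNING Service-Kubelet] line above is kubeadm's standard reminder that the kubelet systemd unit is not enabled for boot on the guest; the remedy the warning itself names is a single command (spelled out here only to make the warning concrete):

    sudo systemctl enable kubelet.service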
	I0806 07:07:29.282293   21485 cni.go:84] Creating CNI manager for ""
	I0806 07:07:29.282300   21485 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0806 07:07:29.284317   21485 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0806 07:07:29.285828   21485 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0806 07:07:29.299491   21485 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
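The 496-byte file copied to /etc/cni/net.d/1-k8s.conflist above is the bridge CNI configuration minikube generates for the "kvm2" driver + "crio" runtime combination. The log does not show its contents; purely as an illustrative sketch (field values assumed, not the exact file minikube writes), a typical bridge + portmap conflist looks like:

    # Illustrative only -- the real /etc/cni/net.d/1-k8s.conflist written above is 496 bytes and may differ.
    cat <<'EOF' | sudo tee /etc/cni/net.d/1-k8s.conflist
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16",
            "routes": [ { "dst": "0.0.0.0/0" } ]
          }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF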
	I0806 07:07:29.319896   21485 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0806 07:07:29.320009   21485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-505457 minikube.k8s.io/updated_at=2024_08_06T07_07_29_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=e92cb06692f5ea1ba801d10d148e5e92e807f9c8 minikube.k8s.io/name=addons-505457 minikube.k8s.io/primary=true
	I0806 07:07:29.320015   21485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 07:07:29.349014   21485 ops.go:34] apiserver oom_adj: -16
	I0806 07:07:29.442739   21485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 07:07:29.943354   21485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 07:07:30.443455   21485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 07:07:30.943739   21485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 07:07:31.442843   21485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 07:07:31.943503   21485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 07:07:32.443798   21485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 07:07:32.943163   21485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 07:07:33.442842   21485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 07:07:33.943250   21485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 07:07:34.443105   21485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 07:07:34.942850   21485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 07:07:35.443012   21485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 07:07:35.942805   21485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 07:07:36.443209   21485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 07:07:36.943684   21485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 07:07:37.443420   21485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 07:07:37.943184   21485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 07:07:38.443379   21485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 07:07:38.943369   21485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 07:07:39.443575   21485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 07:07:39.943030   21485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 07:07:40.443253   21485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 07:07:40.943780   21485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 07:07:41.442874   21485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 07:07:41.541122   21485 kubeadm.go:1113] duration metric: took 12.221193425s to wait for elevateKubeSystemPrivileges
	I0806 07:07:41.541148   21485 kubeadm.go:394] duration metric: took 22.981835919s to StartCluster
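The burst of identical "kubectl get sa default" calls between 07:07:29 and 07:07:41 above is a readiness poll: minikube retries roughly twice a second until the cluster's default service account exists, and the 12.2s total is what the elevateKubeSystemPrivileges duration metric records. A minimal shell equivalent of that polling pattern (the kubectl invocation is copied verbatim from the log; the loop shape is an assumption, not minikube's actual Go code) would be:

    # Poll until the "default" service account exists in the new cluster.
    until sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done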
	I0806 07:07:41.541163   21485 settings.go:142] acquiring lock: {Name:mkb0bfbcae3b4205ef43487a9de84cfd8afd2e5e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 07:07:41.541278   21485 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19370-13057/kubeconfig
	I0806 07:07:41.541632   21485 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-13057/kubeconfig: {Name:mk5c066914acf8e36556b29792f75b98076ca34a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 07:07:41.541814   21485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0806 07:07:41.541850   21485 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0806 07:07:41.541932   21485 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
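The toEnable map above records which addons this test profile requests (ingress, ingress-dns, metrics-server, registry, csi-hostpath-driver, helm-tiller, volcano, volumesnapshots, yakd, and so on). Outside the test harness, the same toggles are driven through the minikube CLI; for example, against this profile one could run:

    minikube -p addons-505457 addons enable metrics-server
    minikube -p addons-505457 addons list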
	I0806 07:07:41.542038   21485 addons.go:69] Setting yakd=true in profile "addons-505457"
	I0806 07:07:41.542080   21485 addons.go:234] Setting addon yakd=true in "addons-505457"
	I0806 07:07:41.542095   21485 addons.go:69] Setting inspektor-gadget=true in profile "addons-505457"
	I0806 07:07:41.542112   21485 config.go:182] Loaded profile config "addons-505457": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0806 07:07:41.542122   21485 addons.go:69] Setting storage-provisioner=true in profile "addons-505457"
	I0806 07:07:41.542137   21485 addons.go:234] Setting addon inspektor-gadget=true in "addons-505457"
	I0806 07:07:41.542146   21485 addons.go:69] Setting volumesnapshots=true in profile "addons-505457"
	I0806 07:07:41.542153   21485 addons.go:234] Setting addon storage-provisioner=true in "addons-505457"
	I0806 07:07:41.542121   21485 host.go:66] Checking if "addons-505457" exists ...
	I0806 07:07:41.542170   21485 host.go:66] Checking if "addons-505457" exists ...
	I0806 07:07:41.542175   21485 addons.go:69] Setting metrics-server=true in profile "addons-505457"
	I0806 07:07:41.542179   21485 host.go:66] Checking if "addons-505457" exists ...
	I0806 07:07:41.542181   21485 addons.go:69] Setting registry=true in profile "addons-505457"
	I0806 07:07:41.542192   21485 addons.go:234] Setting addon metrics-server=true in "addons-505457"
	I0806 07:07:41.542200   21485 addons.go:69] Setting default-storageclass=true in profile "addons-505457"
	I0806 07:07:41.542212   21485 addons.go:69] Setting cloud-spanner=true in profile "addons-505457"
	I0806 07:07:41.542219   21485 addons.go:69] Setting ingress=true in profile "addons-505457"
	I0806 07:07:41.542227   21485 addons.go:69] Setting helm-tiller=true in profile "addons-505457"
	I0806 07:07:41.542239   21485 addons.go:234] Setting addon ingress=true in "addons-505457"
	I0806 07:07:41.542204   21485 addons.go:234] Setting addon registry=true in "addons-505457"
	I0806 07:07:41.542249   21485 addons.go:234] Setting addon helm-tiller=true in "addons-505457"
	I0806 07:07:41.542252   21485 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-505457"
	I0806 07:07:41.542264   21485 host.go:66] Checking if "addons-505457" exists ...
	I0806 07:07:41.542270   21485 host.go:66] Checking if "addons-505457" exists ...
	I0806 07:07:41.542274   21485 host.go:66] Checking if "addons-505457" exists ...
	I0806 07:07:41.542394   21485 addons.go:69] Setting ingress-dns=true in profile "addons-505457"
	I0806 07:07:41.542418   21485 addons.go:234] Setting addon ingress-dns=true in "addons-505457"
	I0806 07:07:41.542448   21485 host.go:66] Checking if "addons-505457" exists ...
	I0806 07:07:41.542130   21485 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-505457"
	I0806 07:07:41.542621   21485 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:07:41.542634   21485 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:07:41.542240   21485 addons.go:234] Setting addon cloud-spanner=true in "addons-505457"
	I0806 07:07:41.542657   21485 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:07:41.542664   21485 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:07:41.542675   21485 host.go:66] Checking if "addons-505457" exists ...
	I0806 07:07:41.542621   21485 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:07:41.542759   21485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:07:41.542770   21485 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:07:41.542799   21485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:07:41.542679   21485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:07:41.542936   21485 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:07:41.542983   21485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:07:41.542138   21485 addons.go:69] Setting volcano=true in profile "addons-505457"
	I0806 07:07:41.543187   21485 addons.go:234] Setting addon volcano=true in "addons-505457"
	I0806 07:07:41.543212   21485 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:07:41.542667   21485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:07:41.543226   21485 host.go:66] Checking if "addons-505457" exists ...
	I0806 07:07:41.542191   21485 addons.go:69] Setting gcp-auth=true in profile "addons-505457"
	I0806 07:07:41.543501   21485 mustload.go:65] Loading cluster: addons-505457
	I0806 07:07:41.543585   21485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:07:41.543666   21485 config.go:182] Loaded profile config "addons-505457": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0806 07:07:41.543804   21485 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:07:41.543846   21485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:07:41.544108   21485 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:07:41.544165   21485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:07:41.542195   21485 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-505457"
	I0806 07:07:41.544408   21485 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-505457"
	I0806 07:07:41.544436   21485 host.go:66] Checking if "addons-505457" exists ...
	I0806 07:07:41.544792   21485 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:07:41.544810   21485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:07:41.542164   21485 addons.go:234] Setting addon volumesnapshots=true in "addons-505457"
	I0806 07:07:41.542219   21485 host.go:66] Checking if "addons-505457" exists ...
	I0806 07:07:41.545036   21485 host.go:66] Checking if "addons-505457" exists ...
	I0806 07:07:41.542625   21485 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-505457"
	I0806 07:07:41.545383   21485 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:07:41.545402   21485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:07:41.545428   21485 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:07:41.545455   21485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:07:41.542681   21485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:07:41.542651   21485 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:07:41.545947   21485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:07:41.542691   21485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:07:41.545914   21485 out.go:177] * Verifying Kubernetes components...
	I0806 07:07:41.542157   21485 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-505457"
	I0806 07:07:41.551630   21485 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-505457"
	I0806 07:07:41.551672   21485 host.go:66] Checking if "addons-505457" exists ...
	I0806 07:07:41.552053   21485 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:07:41.552090   21485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:07:41.553024   21485 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 07:07:41.563666   21485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39891
	I0806 07:07:41.563871   21485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39921
	I0806 07:07:41.564319   21485 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:07:41.564430   21485 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:07:41.564990   21485 main.go:141] libmachine: Using API Version  1
	I0806 07:07:41.564998   21485 main.go:141] libmachine: Using API Version  1
	I0806 07:07:41.565008   21485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:07:41.565017   21485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:07:41.565097   21485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37547
	I0806 07:07:41.565134   21485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37995
	I0806 07:07:41.565513   21485 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:07:41.565790   21485 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:07:41.565883   21485 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:07:41.566160   21485 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:07:41.566198   21485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:07:41.566411   21485 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:07:41.566452   21485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:07:41.566665   21485 main.go:141] libmachine: Using API Version  1
	I0806 07:07:41.566694   21485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:07:41.566750   21485 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:07:41.567061   21485 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:07:41.567428   21485 main.go:141] libmachine: Using API Version  1
	I0806 07:07:41.567444   21485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:07:41.567579   21485 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:07:41.567624   21485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:07:41.567818   21485 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:07:41.580894   21485 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:07:41.580936   21485 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:07:41.580949   21485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:07:41.580974   21485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:07:41.583527   21485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45161
	I0806 07:07:41.584469   21485 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:07:41.585198   21485 main.go:141] libmachine: Using API Version  1
	I0806 07:07:41.585217   21485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:07:41.586452   21485 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:07:41.587120   21485 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:07:41.587164   21485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:07:41.590582   21485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40539
	I0806 07:07:41.590598   21485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41753
	I0806 07:07:41.590662   21485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41675
	I0806 07:07:41.591179   21485 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:07:41.591273   21485 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:07:41.591716   21485 main.go:141] libmachine: Using API Version  1
	I0806 07:07:41.591740   21485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:07:41.592081   21485 main.go:141] libmachine: Using API Version  1
	I0806 07:07:41.592098   21485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:07:41.592163   21485 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:07:41.592214   21485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38655
	I0806 07:07:41.592391   21485 main.go:141] libmachine: (addons-505457) Calling .GetState
	I0806 07:07:41.592788   21485 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:07:41.593308   21485 main.go:141] libmachine: Using API Version  1
	I0806 07:07:41.593324   21485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:07:41.593465   21485 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:07:41.594095   21485 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:07:41.594122   21485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:07:41.594357   21485 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:07:41.594586   21485 main.go:141] libmachine: (addons-505457) Calling .GetState
	I0806 07:07:41.594980   21485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44035
	I0806 07:07:41.595090   21485 main.go:141] libmachine: (addons-505457) Calling .DriverName
	I0806 07:07:41.595196   21485 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:07:41.595421   21485 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:07:41.595912   21485 main.go:141] libmachine: Using API Version  1
	I0806 07:07:41.595928   21485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:07:41.596486   21485 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:07:41.596539   21485 main.go:141] libmachine: (addons-505457) Calling .DriverName
	I0806 07:07:41.596872   21485 main.go:141] libmachine: (addons-505457) Calling .GetState
	I0806 07:07:41.597188   21485 main.go:141] libmachine: Using API Version  1
	I0806 07:07:41.597325   21485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:07:41.597969   21485 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.1
	I0806 07:07:41.598528   21485 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:07:41.598848   21485 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0806 07:07:41.599027   21485 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:07:41.599277   21485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:07:41.600219   21485 host.go:66] Checking if "addons-505457" exists ...
	I0806 07:07:41.600608   21485 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0806 07:07:41.600624   21485 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0806 07:07:41.600641   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHHostname
	I0806 07:07:41.601175   21485 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:07:41.601216   21485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:07:41.601663   21485 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0806 07:07:41.602745   21485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36999
	I0806 07:07:41.603353   21485 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:07:41.604072   21485 main.go:141] libmachine: Using API Version  1
	I0806 07:07:41.604087   21485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:07:41.604160   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:07:41.604591   21485 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:07:41.604712   21485 main.go:141] libmachine: (addons-505457) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:7a:7b", ip: ""} in network mk-addons-505457: {Iface:virbr1 ExpiryTime:2024-08-06 08:07:01 +0000 UTC Type:0 Mac:52:54:00:d5:7a:7b Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:addons-505457 Clientid:01:52:54:00:d5:7a:7b}
	I0806 07:07:41.604727   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined IP address 192.168.39.6 and MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:07:41.604927   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHPort
	I0806 07:07:41.605211   21485 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0806 07:07:41.605296   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHKeyPath
	I0806 07:07:41.605429   21485 main.go:141] libmachine: (addons-505457) Calling .GetState
	I0806 07:07:41.605469   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHUsername
	I0806 07:07:41.605644   21485 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/addons-505457/id_rsa Username:docker}
	I0806 07:07:41.607410   21485 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0806 07:07:41.607428   21485 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0806 07:07:41.607445   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHHostname
	I0806 07:07:41.608468   21485 main.go:141] libmachine: (addons-505457) Calling .DriverName
	I0806 07:07:41.610110   21485 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0806 07:07:41.610641   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:07:41.611001   21485 main.go:141] libmachine: (addons-505457) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:7a:7b", ip: ""} in network mk-addons-505457: {Iface:virbr1 ExpiryTime:2024-08-06 08:07:01 +0000 UTC Type:0 Mac:52:54:00:d5:7a:7b Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:addons-505457 Clientid:01:52:54:00:d5:7a:7b}
	I0806 07:07:41.611020   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined IP address 192.168.39.6 and MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:07:41.611307   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHPort
	I0806 07:07:41.611504   21485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34085
	I0806 07:07:41.611781   21485 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0806 07:07:41.611795   21485 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0806 07:07:41.611813   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHHostname
	I0806 07:07:41.612455   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHKeyPath
	I0806 07:07:41.612648   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHUsername
	I0806 07:07:41.612931   21485 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/addons-505457/id_rsa Username:docker}
	I0806 07:07:41.615080   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:07:41.615446   21485 main.go:141] libmachine: (addons-505457) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:7a:7b", ip: ""} in network mk-addons-505457: {Iface:virbr1 ExpiryTime:2024-08-06 08:07:01 +0000 UTC Type:0 Mac:52:54:00:d5:7a:7b Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:addons-505457 Clientid:01:52:54:00:d5:7a:7b}
	I0806 07:07:41.615473   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined IP address 192.168.39.6 and MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:07:41.615733   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHPort
	I0806 07:07:41.615844   21485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45369
	I0806 07:07:41.615935   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHKeyPath
	I0806 07:07:41.616091   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHUsername
	I0806 07:07:41.616235   21485 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/addons-505457/id_rsa Username:docker}
	I0806 07:07:41.616551   21485 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:07:41.616639   21485 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:07:41.618585   21485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46569
	I0806 07:07:41.618730   21485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43195
	I0806 07:07:41.619007   21485 main.go:141] libmachine: Using API Version  1
	I0806 07:07:41.619028   21485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:07:41.619043   21485 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:07:41.619461   21485 main.go:141] libmachine: Using API Version  1
	I0806 07:07:41.619486   21485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:07:41.619538   21485 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:07:41.619574   21485 main.go:141] libmachine: Using API Version  1
	I0806 07:07:41.619588   21485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:07:41.620143   21485 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:07:41.620179   21485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:07:41.620431   21485 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:07:41.620533   21485 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:07:41.620583   21485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42341
	I0806 07:07:41.620669   21485 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:07:41.620728   21485 main.go:141] libmachine: (addons-505457) Calling .GetState
	I0806 07:07:41.620861   21485 main.go:141] libmachine: (addons-505457) Calling .GetState
	I0806 07:07:41.620970   21485 main.go:141] libmachine: Using API Version  1
	I0806 07:07:41.620991   21485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:07:41.621719   21485 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:07:41.621853   21485 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:07:41.622275   21485 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:07:41.622316   21485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:07:41.622493   21485 main.go:141] libmachine: Using API Version  1
	I0806 07:07:41.622514   21485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:07:41.623018   21485 main.go:141] libmachine: (addons-505457) Calling .DriverName
	I0806 07:07:41.624753   21485 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:07:41.625870   21485 addons.go:234] Setting addon default-storageclass=true in "addons-505457"
	I0806 07:07:41.625906   21485 host.go:66] Checking if "addons-505457" exists ...
	I0806 07:07:41.626273   21485 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:07:41.626303   21485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:07:41.627735   21485 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.30.0
	I0806 07:07:41.629159   21485 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:07:41.629203   21485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:07:41.629506   21485 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0806 07:07:41.629527   21485 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0806 07:07:41.629550   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHHostname
	I0806 07:07:41.632920   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:07:41.633351   21485 main.go:141] libmachine: (addons-505457) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:7a:7b", ip: ""} in network mk-addons-505457: {Iface:virbr1 ExpiryTime:2024-08-06 08:07:01 +0000 UTC Type:0 Mac:52:54:00:d5:7a:7b Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:addons-505457 Clientid:01:52:54:00:d5:7a:7b}
	I0806 07:07:41.633370   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined IP address 192.168.39.6 and MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:07:41.633666   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHPort
	I0806 07:07:41.633877   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHKeyPath
	I0806 07:07:41.634042   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHUsername
	I0806 07:07:41.634157   21485 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/addons-505457/id_rsa Username:docker}
	I0806 07:07:41.634512   21485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39553
	I0806 07:07:41.634987   21485 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:07:41.635077   21485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41101
	I0806 07:07:41.635520   21485 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:07:41.635696   21485 main.go:141] libmachine: Using API Version  1
	I0806 07:07:41.635708   21485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:07:41.636192   21485 main.go:141] libmachine: Using API Version  1
	I0806 07:07:41.636207   21485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:07:41.636537   21485 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:07:41.637074   21485 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:07:41.637109   21485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:07:41.637606   21485 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:07:41.638174   21485 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:07:41.638196   21485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:07:41.642462   21485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40551
	I0806 07:07:41.642879   21485 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:07:41.643389   21485 main.go:141] libmachine: Using API Version  1
	I0806 07:07:41.643411   21485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:07:41.643713   21485 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:07:41.643878   21485 main.go:141] libmachine: (addons-505457) Calling .DriverName
	I0806 07:07:41.647261   21485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35203
	I0806 07:07:41.647860   21485 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:07:41.648446   21485 main.go:141] libmachine: Using API Version  1
	I0806 07:07:41.648464   21485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:07:41.649025   21485 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:07:41.650841   21485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44253
	I0806 07:07:41.651142   21485 main.go:141] libmachine: (addons-505457) Calling .GetState
	I0806 07:07:41.651521   21485 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:07:41.652186   21485 main.go:141] libmachine: Using API Version  1
	I0806 07:07:41.652203   21485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:07:41.652743   21485 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:07:41.652953   21485 main.go:141] libmachine: (addons-505457) Calling .GetState
	I0806 07:07:41.653312   21485 main.go:141] libmachine: (addons-505457) Calling .DriverName
	I0806 07:07:41.654547   21485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35531
	I0806 07:07:41.654661   21485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32941
	I0806 07:07:41.655024   21485 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:07:41.655326   21485 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0806 07:07:41.655507   21485 main.go:141] libmachine: Using API Version  1
	I0806 07:07:41.655527   21485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:07:41.655556   21485 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:07:41.655757   21485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46429
	I0806 07:07:41.655914   21485 main.go:141] libmachine: (addons-505457) Calling .DriverName
	I0806 07:07:41.655940   21485 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:07:41.656082   21485 main.go:141] libmachine: Using API Version  1
	I0806 07:07:41.656097   21485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:07:41.656598   21485 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:07:41.656914   21485 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:07:41.656954   21485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:07:41.657079   21485 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0806 07:07:41.657094   21485 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0806 07:07:41.657111   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHHostname
	I0806 07:07:41.657539   21485 main.go:141] libmachine: (addons-505457) Calling .GetState
	I0806 07:07:41.657572   21485 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:07:41.657858   21485 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.1
	I0806 07:07:41.658553   21485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34269
	I0806 07:07:41.659123   21485 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0806 07:07:41.658136   21485 main.go:141] libmachine: Using API Version  1
	I0806 07:07:41.659147   21485 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0806 07:07:41.659276   21485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:07:41.659289   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHHostname
	I0806 07:07:41.659886   21485 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:07:41.660422   21485 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:07:41.660533   21485 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-505457"
	I0806 07:07:41.660574   21485 host.go:66] Checking if "addons-505457" exists ...
	I0806 07:07:41.660996   21485 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:07:41.661029   21485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:07:41.661094   21485 main.go:141] libmachine: Using API Version  1
	I0806 07:07:41.661112   21485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:07:41.661208   21485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43677
	I0806 07:07:41.661383   21485 main.go:141] libmachine: (addons-505457) Calling .GetState
	I0806 07:07:41.661476   21485 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:07:41.662033   21485 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:07:41.662075   21485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:07:41.662597   21485 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:07:41.663697   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:07:41.663819   21485 main.go:141] libmachine: (addons-505457) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:7a:7b", ip: ""} in network mk-addons-505457: {Iface:virbr1 ExpiryTime:2024-08-06 08:07:01 +0000 UTC Type:0 Mac:52:54:00:d5:7a:7b Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:addons-505457 Clientid:01:52:54:00:d5:7a:7b}
	I0806 07:07:41.663831   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined IP address 192.168.39.6 and MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:07:41.664130   21485 main.go:141] libmachine: (addons-505457) Calling .DriverName
	I0806 07:07:41.664399   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHPort
	I0806 07:07:41.664618   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHKeyPath
	I0806 07:07:41.664812   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHUsername
	I0806 07:07:41.665011   21485 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/addons-505457/id_rsa Username:docker}
	I0806 07:07:41.666554   21485 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0806 07:07:41.666754   21485 main.go:141] libmachine: Using API Version  1
	I0806 07:07:41.666772   21485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:07:41.667114   21485 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:07:41.667264   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:07:41.667484   21485 main.go:141] libmachine: (addons-505457) Calling .GetState
	I0806 07:07:41.668580   21485 main.go:141] libmachine: (addons-505457) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:7a:7b", ip: ""} in network mk-addons-505457: {Iface:virbr1 ExpiryTime:2024-08-06 08:07:01 +0000 UTC Type:0 Mac:52:54:00:d5:7a:7b Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:addons-505457 Clientid:01:52:54:00:d5:7a:7b}
	I0806 07:07:41.668606   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined IP address 192.168.39.6 and MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:07:41.669586   21485 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0806 07:07:41.669735   21485 main.go:141] libmachine: (addons-505457) Calling .DriverName
	I0806 07:07:41.669798   21485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41645
	I0806 07:07:41.670154   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHPort
	I0806 07:07:41.670299   21485 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:07:41.670311   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHKeyPath
	I0806 07:07:41.670452   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHUsername
	I0806 07:07:41.670592   21485 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/addons-505457/id_rsa Username:docker}
	I0806 07:07:41.670697   21485 main.go:141] libmachine: Using API Version  1
	I0806 07:07:41.670715   21485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:07:41.671044   21485 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:07:41.671261   21485 main.go:141] libmachine: (addons-505457) Calling .GetState
	I0806 07:07:41.671375   21485 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0806 07:07:41.672837   21485 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0806 07:07:41.672858   21485 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0806 07:07:41.672885   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHHostname
	I0806 07:07:41.672950   21485 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0806 07:07:41.673806   21485 main.go:141] libmachine: (addons-505457) Calling .DriverName
	I0806 07:07:41.674434   21485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44747
	I0806 07:07:41.675077   21485 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:07:41.675175   21485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40973
	I0806 07:07:41.675923   21485 main.go:141] libmachine: Using API Version  1
	I0806 07:07:41.675941   21485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:07:41.676014   21485 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:07:41.676081   21485 out.go:177]   - Using image docker.io/registry:2.8.3
	I0806 07:07:41.676533   21485 main.go:141] libmachine: Using API Version  1
	I0806 07:07:41.676553   21485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:07:41.676613   21485 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:07:41.676774   21485 main.go:141] libmachine: (addons-505457) Calling .GetState
	I0806 07:07:41.676843   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:07:41.676969   21485 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0806 07:07:41.677038   21485 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:07:41.677089   21485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46549
	I0806 07:07:41.677713   21485 main.go:141] libmachine: (addons-505457) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:7a:7b", ip: ""} in network mk-addons-505457: {Iface:virbr1 ExpiryTime:2024-08-06 08:07:01 +0000 UTC Type:0 Mac:52:54:00:d5:7a:7b Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:addons-505457 Clientid:01:52:54:00:d5:7a:7b}
	I0806 07:07:41.677733   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined IP address 192.168.39.6 and MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:07:41.677871   21485 main.go:141] libmachine: (addons-505457) Calling .GetState
	I0806 07:07:41.677933   21485 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:07:41.677996   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHPort
	I0806 07:07:41.678159   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHKeyPath
	I0806 07:07:41.678285   21485 main.go:141] libmachine: Using API Version  1
	I0806 07:07:41.678444   21485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:07:41.678533   21485 main.go:141] libmachine: (addons-505457) Calling .DriverName
	I0806 07:07:41.678592   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHUsername
	I0806 07:07:41.678743   21485 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/addons-505457/id_rsa Username:docker}
	I0806 07:07:41.679024   21485 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:07:41.679398   21485 main.go:141] libmachine: (addons-505457) Calling .GetState
	I0806 07:07:41.679586   21485 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0806 07:07:41.680495   21485 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.22
	I0806 07:07:41.680560   21485 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0806 07:07:41.680978   21485 main.go:141] libmachine: (addons-505457) Calling .DriverName
	I0806 07:07:41.681039   21485 main.go:141] libmachine: (addons-505457) Calling .DriverName
	I0806 07:07:41.681226   21485 main.go:141] libmachine: Making call to close driver server
	I0806 07:07:41.681237   21485 main.go:141] libmachine: (addons-505457) Calling .Close
	I0806 07:07:41.681393   21485 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0806 07:07:41.681412   21485 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0806 07:07:41.681429   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHHostname
	I0806 07:07:41.681530   21485 main.go:141] libmachine: (addons-505457) DBG | Closing plugin on server side
	I0806 07:07:41.681545   21485 main.go:141] libmachine: Successfully made call to close driver server
	I0806 07:07:41.681556   21485 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 07:07:41.681572   21485 main.go:141] libmachine: Making call to close driver server
	I0806 07:07:41.681579   21485 main.go:141] libmachine: (addons-505457) Calling .Close
	I0806 07:07:41.682704   21485 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0806 07:07:41.682778   21485 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0806 07:07:41.682790   21485 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0806 07:07:41.682805   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHHostname
	I0806 07:07:41.683400   21485 main.go:141] libmachine: Successfully made call to close driver server
	I0806 07:07:41.683414   21485 main.go:141] libmachine: Making call to close connection to plugin binary
	W0806 07:07:41.683476   21485 out.go:239] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0806 07:07:41.684141   21485 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0806 07:07:41.684160   21485 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0806 07:07:41.684174   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHHostname
	I0806 07:07:41.684707   21485 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0806 07:07:41.685118   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:07:41.685612   21485 main.go:141] libmachine: (addons-505457) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:7a:7b", ip: ""} in network mk-addons-505457: {Iface:virbr1 ExpiryTime:2024-08-06 08:07:01 +0000 UTC Type:0 Mac:52:54:00:d5:7a:7b Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:addons-505457 Clientid:01:52:54:00:d5:7a:7b}
	I0806 07:07:41.685634   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined IP address 192.168.39.6 and MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:07:41.686151   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHPort
	I0806 07:07:41.686338   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:07:41.686373   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHKeyPath
	I0806 07:07:41.686511   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHUsername
	I0806 07:07:41.686654   21485 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/addons-505457/id_rsa Username:docker}
	I0806 07:07:41.687129   21485 main.go:141] libmachine: (addons-505457) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:7a:7b", ip: ""} in network mk-addons-505457: {Iface:virbr1 ExpiryTime:2024-08-06 08:07:01 +0000 UTC Type:0 Mac:52:54:00:d5:7a:7b Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:addons-505457 Clientid:01:52:54:00:d5:7a:7b}
	I0806 07:07:41.687149   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined IP address 192.168.39.6 and MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:07:41.687310   21485 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0806 07:07:41.687342   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHPort
	I0806 07:07:41.687535   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHKeyPath
	I0806 07:07:41.687670   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHUsername
	I0806 07:07:41.687800   21485 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/addons-505457/id_rsa Username:docker}
	I0806 07:07:41.688037   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:07:41.688227   21485 main.go:141] libmachine: (addons-505457) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:7a:7b", ip: ""} in network mk-addons-505457: {Iface:virbr1 ExpiryTime:2024-08-06 08:07:01 +0000 UTC Type:0 Mac:52:54:00:d5:7a:7b Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:addons-505457 Clientid:01:52:54:00:d5:7a:7b}
	I0806 07:07:41.688243   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined IP address 192.168.39.6 and MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:07:41.688411   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHPort
	I0806 07:07:41.688576   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHKeyPath
	I0806 07:07:41.688741   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHUsername
	I0806 07:07:41.689056   21485 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/addons-505457/id_rsa Username:docker}
	I0806 07:07:41.690206   21485 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0806 07:07:41.691580   21485 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0806 07:07:41.691601   21485 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0806 07:07:41.691618   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHHostname
	I0806 07:07:41.692846   21485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44127
	I0806 07:07:41.693496   21485 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:07:41.694185   21485 main.go:141] libmachine: Using API Version  1
	I0806 07:07:41.694205   21485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:07:41.694500   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:07:41.694716   21485 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:07:41.694984   21485 main.go:141] libmachine: (addons-505457) Calling .GetState
	I0806 07:07:41.695044   21485 main.go:141] libmachine: (addons-505457) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:7a:7b", ip: ""} in network mk-addons-505457: {Iface:virbr1 ExpiryTime:2024-08-06 08:07:01 +0000 UTC Type:0 Mac:52:54:00:d5:7a:7b Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:addons-505457 Clientid:01:52:54:00:d5:7a:7b}
	I0806 07:07:41.695060   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined IP address 192.168.39.6 and MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:07:41.695149   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHPort
	I0806 07:07:41.695715   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHKeyPath
	I0806 07:07:41.695924   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHUsername
	I0806 07:07:41.696100   21485 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/addons-505457/id_rsa Username:docker}
	I0806 07:07:41.696729   21485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37875
	I0806 07:07:41.697050   21485 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:07:41.697263   21485 main.go:141] libmachine: (addons-505457) Calling .DriverName
	I0806 07:07:41.697620   21485 main.go:141] libmachine: Using API Version  1
	I0806 07:07:41.697636   21485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:07:41.698012   21485 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:07:41.698475   21485 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:07:41.698512   21485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:07:41.699107   21485 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0806 07:07:41.700268   21485 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0806 07:07:41.700285   21485 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0806 07:07:41.700304   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHHostname
	I0806 07:07:41.706154   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:07:41.706529   21485 main.go:141] libmachine: (addons-505457) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:7a:7b", ip: ""} in network mk-addons-505457: {Iface:virbr1 ExpiryTime:2024-08-06 08:07:01 +0000 UTC Type:0 Mac:52:54:00:d5:7a:7b Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:addons-505457 Clientid:01:52:54:00:d5:7a:7b}
	I0806 07:07:41.706554   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined IP address 192.168.39.6 and MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:07:41.706665   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHPort
	I0806 07:07:41.706857   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHKeyPath
	I0806 07:07:41.707047   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHUsername
	I0806 07:07:41.707209   21485 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/addons-505457/id_rsa Username:docker}
	I0806 07:07:41.710833   21485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39249
	I0806 07:07:41.711312   21485 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:07:41.711922   21485 main.go:141] libmachine: Using API Version  1
	I0806 07:07:41.711945   21485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:07:41.712300   21485 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:07:41.712649   21485 main.go:141] libmachine: (addons-505457) Calling .GetState
	I0806 07:07:41.714120   21485 main.go:141] libmachine: (addons-505457) Calling .DriverName
	I0806 07:07:41.714382   21485 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0806 07:07:41.714396   21485 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0806 07:07:41.714414   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHHostname
	I0806 07:07:41.715574   21485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37033
	I0806 07:07:41.716421   21485 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:07:41.717048   21485 main.go:141] libmachine: Using API Version  1
	I0806 07:07:41.717076   21485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:07:41.717562   21485 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:07:41.717777   21485 main.go:141] libmachine: (addons-505457) Calling .GetState
	I0806 07:07:41.718177   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:07:41.718611   21485 main.go:141] libmachine: (addons-505457) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:7a:7b", ip: ""} in network mk-addons-505457: {Iface:virbr1 ExpiryTime:2024-08-06 08:07:01 +0000 UTC Type:0 Mac:52:54:00:d5:7a:7b Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:addons-505457 Clientid:01:52:54:00:d5:7a:7b}
	I0806 07:07:41.718629   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined IP address 192.168.39.6 and MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:07:41.718774   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHPort
	I0806 07:07:41.718964   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHKeyPath
	I0806 07:07:41.719137   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHUsername
	I0806 07:07:41.719326   21485 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/addons-505457/id_rsa Username:docker}
	I0806 07:07:41.720045   21485 main.go:141] libmachine: (addons-505457) Calling .DriverName
	I0806 07:07:41.721954   21485 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0806 07:07:41.723316   21485 out.go:177]   - Using image docker.io/busybox:stable
	I0806 07:07:41.724558   21485 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0806 07:07:41.724577   21485 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0806 07:07:41.724595   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHHostname
	I0806 07:07:41.727292   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:07:41.727628   21485 main.go:141] libmachine: (addons-505457) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:7a:7b", ip: ""} in network mk-addons-505457: {Iface:virbr1 ExpiryTime:2024-08-06 08:07:01 +0000 UTC Type:0 Mac:52:54:00:d5:7a:7b Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:addons-505457 Clientid:01:52:54:00:d5:7a:7b}
	I0806 07:07:41.727652   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined IP address 192.168.39.6 and MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:07:41.727804   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHPort
	I0806 07:07:41.727966   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHKeyPath
	I0806 07:07:41.728102   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHUsername
	I0806 07:07:41.728398   21485 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/addons-505457/id_rsa Username:docker}
	W0806 07:07:41.729691   21485 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:53744->192.168.39.6:22: read: connection reset by peer
	I0806 07:07:41.729720   21485 retry.go:31] will retry after 290.754069ms: ssh: handshake failed: read tcp 192.168.39.1:53744->192.168.39.6:22: read: connection reset by peer
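
[editor's note] The two lines above show a transient SSH handshake failure followed by a scheduled retry ("will retry after 290.754069ms"). The pattern is ordinary bounded retry with a jittered delay; the sketch below is a generic illustration of that pattern under assumed names (retry, baseDelay), not minikube's actual retry.go.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retry runs fn up to attempts times, sleeping a jittered delay between
// failures, and returns the last error if every attempt fails.
func retry(attempts int, baseDelay time.Duration, fn func() error) error {
	var lastErr error
	for i := 0; i < attempts; i++ {
		if lastErr = fn(); lastErr == nil {
			return nil
		}
		// Jitter keeps concurrent callers from retrying in lockstep.
		sleep := baseDelay + time.Duration(rand.Int63n(int64(baseDelay)))
		fmt.Printf("will retry after %v: %v\n", sleep, lastErr)
		time.Sleep(sleep)
	}
	return lastErr
}

func main() {
	calls := 0
	err := retry(3, 200*time.Millisecond, func() error {
		calls++
		if calls < 2 {
			return errors.New("ssh: handshake failed")
		}
		return nil
	})
	fmt.Println("result:", err)
}
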
	I0806 07:07:42.044424   21485 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0806 07:07:42.080531   21485 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0806 07:07:42.080553   21485 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0806 07:07:42.083204   21485 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0806 07:07:42.133505   21485 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0806 07:07:42.133527   21485 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0806 07:07:42.173625   21485 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0806 07:07:42.173654   21485 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0806 07:07:42.190073   21485 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0806 07:07:42.190097   21485 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0806 07:07:42.212534   21485 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0806 07:07:42.266356   21485 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0806 07:07:42.273161   21485 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0806 07:07:42.273180   21485 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0806 07:07:42.277220   21485 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0806 07:07:42.277236   21485 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0806 07:07:42.295112   21485 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0806 07:07:42.295137   21485 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0806 07:07:42.335476   21485 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0806 07:07:42.352718   21485 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0806 07:07:42.352748   21485 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0806 07:07:42.375897   21485 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0806 07:07:42.375924   21485 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0806 07:07:42.385412   21485 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0806 07:07:42.385438   21485 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0806 07:07:42.398619   21485 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0806 07:07:42.398675   21485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0806 07:07:42.417048   21485 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0806 07:07:42.423670   21485 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0806 07:07:42.423691   21485 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0806 07:07:42.426028   21485 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0806 07:07:42.520124   21485 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0806 07:07:42.520160   21485 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0806 07:07:42.573247   21485 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0806 07:07:42.573275   21485 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0806 07:07:42.593604   21485 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0806 07:07:42.593627   21485 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0806 07:07:42.608034   21485 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0806 07:07:42.608067   21485 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0806 07:07:42.614380   21485 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0806 07:07:42.614404   21485 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0806 07:07:42.638282   21485 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0806 07:07:42.638316   21485 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0806 07:07:42.680320   21485 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0806 07:07:42.790023   21485 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0806 07:07:42.790053   21485 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0806 07:07:42.814619   21485 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0806 07:07:42.814645   21485 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0806 07:07:42.819943   21485 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0806 07:07:42.819962   21485 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0806 07:07:42.855550   21485 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0806 07:07:42.855569   21485 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0806 07:07:42.883794   21485 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0806 07:07:42.884161   21485 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0806 07:07:42.884183   21485 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0806 07:07:42.996274   21485 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0806 07:07:42.996294   21485 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0806 07:07:43.035349   21485 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0806 07:07:43.035373   21485 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0806 07:07:43.069499   21485 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0806 07:07:43.085520   21485 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0806 07:07:43.085553   21485 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0806 07:07:43.107768   21485 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0806 07:07:43.107795   21485 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0806 07:07:43.201204   21485 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0806 07:07:43.201227   21485 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0806 07:07:43.206067   21485 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0806 07:07:43.206091   21485 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0806 07:07:43.356151   21485 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0806 07:07:43.405575   21485 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0806 07:07:43.405598   21485 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0806 07:07:43.424518   21485 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0806 07:07:43.497902   21485 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0806 07:07:43.497924   21485 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0806 07:07:43.539251   21485 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0806 07:07:43.539275   21485 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0806 07:07:43.730367   21485 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.685909801s)
	I0806 07:07:43.730419   21485 main.go:141] libmachine: Making call to close driver server
	I0806 07:07:43.730429   21485 main.go:141] libmachine: (addons-505457) Calling .Close
	I0806 07:07:43.730765   21485 main.go:141] libmachine: Successfully made call to close driver server
	I0806 07:07:43.730786   21485 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 07:07:43.730796   21485 main.go:141] libmachine: Making call to close driver server
	I0806 07:07:43.730804   21485 main.go:141] libmachine: (addons-505457) Calling .Close
	I0806 07:07:43.731073   21485 main.go:141] libmachine: Successfully made call to close driver server
	I0806 07:07:43.731090   21485 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 07:07:43.951920   21485 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0806 07:07:43.951945   21485 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0806 07:07:43.960513   21485 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0806 07:07:44.268677   21485 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0806 07:07:44.268702   21485 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0806 07:07:44.588498   21485 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0806 07:07:46.326281   21485 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.11371277s)
	I0806 07:07:46.326281   21485 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.243048043s)
	I0806 07:07:46.326337   21485 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.059951011s)
	I0806 07:07:46.326336   21485 main.go:141] libmachine: Making call to close driver server
	I0806 07:07:46.326382   21485 main.go:141] libmachine: Making call to close driver server
	I0806 07:07:46.326405   21485 main.go:141] libmachine: (addons-505457) Calling .Close
	I0806 07:07:46.326425   21485 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (3.927777565s)
	I0806 07:07:46.326461   21485 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.927766408s)
	I0806 07:07:46.326481   21485 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0806 07:07:46.326385   21485 main.go:141] libmachine: (addons-505457) Calling .Close
	I0806 07:07:46.326494   21485 main.go:141] libmachine: Making call to close driver server
	I0806 07:07:46.326512   21485 main.go:141] libmachine: (addons-505457) Calling .Close
	I0806 07:07:46.326536   21485 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (3.90946426s)
	I0806 07:07:46.326389   21485 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.990886702s)
	I0806 07:07:46.326601   21485 main.go:141] libmachine: Making call to close driver server
	I0806 07:07:46.326615   21485 main.go:141] libmachine: (addons-505457) Calling .Close
	I0806 07:07:46.326558   21485 main.go:141] libmachine: Making call to close driver server
	I0806 07:07:46.326674   21485 main.go:141] libmachine: (addons-505457) Calling .Close
	I0806 07:07:46.327649   21485 node_ready.go:35] waiting up to 6m0s for node "addons-505457" to be "Ready" ...
	I0806 07:07:46.327828   21485 main.go:141] libmachine: Successfully made call to close driver server
	I0806 07:07:46.327838   21485 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 07:07:46.327813   21485 main.go:141] libmachine: (addons-505457) DBG | Closing plugin on server side
	I0806 07:07:46.327863   21485 main.go:141] libmachine: (addons-505457) DBG | Closing plugin on server side
	I0806 07:07:46.327880   21485 main.go:141] libmachine: (addons-505457) DBG | Closing plugin on server side
	I0806 07:07:46.327881   21485 main.go:141] libmachine: Successfully made call to close driver server
	I0806 07:07:46.327890   21485 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 07:07:46.327899   21485 main.go:141] libmachine: Making call to close driver server
	I0806 07:07:46.327900   21485 main.go:141] libmachine: Successfully made call to close driver server
	I0806 07:07:46.327907   21485 main.go:141] libmachine: (addons-505457) Calling .Close
	I0806 07:07:46.327907   21485 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 07:07:46.327916   21485 main.go:141] libmachine: Making call to close driver server
	I0806 07:07:46.327922   21485 main.go:141] libmachine: (addons-505457) Calling .Close
	I0806 07:07:46.328019   21485 main.go:141] libmachine: Successfully made call to close driver server
	I0806 07:07:46.328031   21485 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 07:07:46.328040   21485 main.go:141] libmachine: Making call to close driver server
	I0806 07:07:46.328048   21485 main.go:141] libmachine: (addons-505457) Calling .Close
	I0806 07:07:46.328136   21485 main.go:141] libmachine: (addons-505457) DBG | Closing plugin on server side
	I0806 07:07:46.328160   21485 main.go:141] libmachine: Successfully made call to close driver server
	I0806 07:07:46.328167   21485 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 07:07:46.328173   21485 main.go:141] libmachine: (addons-505457) DBG | Closing plugin on server side
	I0806 07:07:46.328208   21485 main.go:141] libmachine: Successfully made call to close driver server
	I0806 07:07:46.328215   21485 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 07:07:46.328222   21485 main.go:141] libmachine: Making call to close driver server
	I0806 07:07:46.328230   21485 main.go:141] libmachine: (addons-505457) Calling .Close
	I0806 07:07:46.327846   21485 main.go:141] libmachine: Making call to close driver server
	I0806 07:07:46.328264   21485 main.go:141] libmachine: (addons-505457) Calling .Close
	I0806 07:07:46.329169   21485 main.go:141] libmachine: (addons-505457) DBG | Closing plugin on server side
	I0806 07:07:46.329206   21485 main.go:141] libmachine: (addons-505457) DBG | Closing plugin on server side
	I0806 07:07:46.329236   21485 main.go:141] libmachine: Successfully made call to close driver server
	I0806 07:07:46.329244   21485 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 07:07:46.329415   21485 main.go:141] libmachine: Successfully made call to close driver server
	I0806 07:07:46.329424   21485 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 07:07:46.329457   21485 main.go:141] libmachine: (addons-505457) DBG | Closing plugin on server side
	I0806 07:07:46.329480   21485 main.go:141] libmachine: Successfully made call to close driver server
	I0806 07:07:46.329490   21485 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 07:07:46.329499   21485 addons.go:475] Verifying addon registry=true in "addons-505457"
	I0806 07:07:46.329701   21485 main.go:141] libmachine: (addons-505457) DBG | Closing plugin on server side
	I0806 07:07:46.329726   21485 main.go:141] libmachine: Successfully made call to close driver server
	I0806 07:07:46.329733   21485 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 07:07:46.331471   21485 out.go:177] * Verifying registry addon...
	I0806 07:07:46.333944   21485 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0806 07:07:46.360587   21485 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0806 07:07:46.360615   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:07:46.368956   21485 node_ready.go:49] node "addons-505457" has status "Ready":"True"
	I0806 07:07:46.368983   21485 node_ready.go:38] duration metric: took 41.307309ms for node "addons-505457" to be "Ready" ...
	I0806 07:07:46.368993   21485 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
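
[editor's note] The node_ready/pod_ready lines above poll the API server until the node and system-critical pods report Ready. A compile-ready Go sketch of the node half of that wait, using client-go, is shown below; waitForNodeReady and its polling interval are assumptions for illustration and may not match minikube's node_ready.go.

package readiness

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForNodeReady polls the named node until its Ready condition is True
// or the timeout expires.
func waitForNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // transient API error: keep polling
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return true, nil
				}
			}
			return false, nil
		})
}

The same loop shape, with a label selector and a pod Ready condition check, covers the "extra waiting ... for all system-critical pods" step logged above.
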
	I0806 07:07:46.421962   21485 main.go:141] libmachine: Making call to close driver server
	I0806 07:07:46.421986   21485 main.go:141] libmachine: (addons-505457) Calling .Close
	I0806 07:07:46.422408   21485 main.go:141] libmachine: (addons-505457) DBG | Closing plugin on server side
	I0806 07:07:46.422440   21485 main.go:141] libmachine: Successfully made call to close driver server
	I0806 07:07:46.422458   21485 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 07:07:46.425315   21485 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-k5d8d" in "kube-system" namespace to be "Ready" ...
	I0806 07:07:46.930734   21485 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-505457" context rescaled to 1 replicas
	I0806 07:07:46.944892   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:07:47.368864   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:07:47.918575   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:07:48.448482   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:07:48.556719   21485 pod_ready.go:92] pod "coredns-7db6d8ff4d-k5d8d" in "kube-system" namespace has status "Ready":"True"
	I0806 07:07:48.556742   21485 pod_ready.go:81] duration metric: took 2.131404444s for pod "coredns-7db6d8ff4d-k5d8d" in "kube-system" namespace to be "Ready" ...
	I0806 07:07:48.556755   21485 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-l2x2b" in "kube-system" namespace to be "Ready" ...
	I0806 07:07:48.591054   21485 pod_ready.go:92] pod "coredns-7db6d8ff4d-l2x2b" in "kube-system" namespace has status "Ready":"True"
	I0806 07:07:48.591075   21485 pod_ready.go:81] duration metric: took 34.311425ms for pod "coredns-7db6d8ff4d-l2x2b" in "kube-system" namespace to be "Ready" ...
	I0806 07:07:48.591088   21485 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-505457" in "kube-system" namespace to be "Ready" ...
	I0806 07:07:48.634099   21485 pod_ready.go:92] pod "etcd-addons-505457" in "kube-system" namespace has status "Ready":"True"
	I0806 07:07:48.634122   21485 pod_ready.go:81] duration metric: took 43.026472ms for pod "etcd-addons-505457" in "kube-system" namespace to be "Ready" ...
	I0806 07:07:48.634141   21485 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-505457" in "kube-system" namespace to be "Ready" ...
	I0806 07:07:48.661157   21485 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0806 07:07:48.661193   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHHostname
	I0806 07:07:48.664411   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:07:48.664931   21485 main.go:141] libmachine: (addons-505457) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:7a:7b", ip: ""} in network mk-addons-505457: {Iface:virbr1 ExpiryTime:2024-08-06 08:07:01 +0000 UTC Type:0 Mac:52:54:00:d5:7a:7b Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:addons-505457 Clientid:01:52:54:00:d5:7a:7b}
	I0806 07:07:48.664989   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined IP address 192.168.39.6 and MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:07:48.665156   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHPort
	I0806 07:07:48.665361   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHKeyPath
	I0806 07:07:48.665565   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHUsername
	I0806 07:07:48.665745   21485 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/addons-505457/id_rsa Username:docker}
	I0806 07:07:48.669109   21485 pod_ready.go:92] pod "kube-apiserver-addons-505457" in "kube-system" namespace has status "Ready":"True"
	I0806 07:07:48.669126   21485 pod_ready.go:81] duration metric: took 34.978654ms for pod "kube-apiserver-addons-505457" in "kube-system" namespace to be "Ready" ...
	I0806 07:07:48.669136   21485 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-505457" in "kube-system" namespace to be "Ready" ...
	I0806 07:07:48.680720   21485 pod_ready.go:92] pod "kube-controller-manager-addons-505457" in "kube-system" namespace has status "Ready":"True"
	I0806 07:07:48.680741   21485 pod_ready.go:81] duration metric: took 11.598929ms for pod "kube-controller-manager-addons-505457" in "kube-system" namespace to be "Ready" ...
	I0806 07:07:48.680752   21485 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-t7djg" in "kube-system" namespace to be "Ready" ...
	I0806 07:07:48.834605   21485 pod_ready.go:92] pod "kube-proxy-t7djg" in "kube-system" namespace has status "Ready":"True"
	I0806 07:07:48.834624   21485 pod_ready.go:81] duration metric: took 153.865794ms for pod "kube-proxy-t7djg" in "kube-system" namespace to be "Ready" ...
	I0806 07:07:48.834632   21485 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-505457" in "kube-system" namespace to be "Ready" ...
	I0806 07:07:48.845666   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:07:49.287006   21485 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0806 07:07:49.357744   21485 pod_ready.go:92] pod "kube-scheduler-addons-505457" in "kube-system" namespace has status "Ready":"True"
	I0806 07:07:49.357767   21485 pod_ready.go:81] duration metric: took 523.128862ms for pod "kube-scheduler-addons-505457" in "kube-system" namespace to be "Ready" ...
	I0806 07:07:49.357777   21485 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-bxmng" in "kube-system" namespace to be "Ready" ...
	I0806 07:07:49.381509   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:07:49.546771   21485 addons.go:234] Setting addon gcp-auth=true in "addons-505457"
	I0806 07:07:49.546826   21485 host.go:66] Checking if "addons-505457" exists ...
	I0806 07:07:49.547274   21485 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:07:49.547309   21485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:07:49.562294   21485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40609
	I0806 07:07:49.562703   21485 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:07:49.563152   21485 main.go:141] libmachine: Using API Version  1
	I0806 07:07:49.563172   21485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:07:49.563482   21485 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:07:49.564032   21485 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:07:49.564078   21485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:07:49.579548   21485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33635
	I0806 07:07:49.579943   21485 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:07:49.580453   21485 main.go:141] libmachine: Using API Version  1
	I0806 07:07:49.580485   21485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:07:49.580811   21485 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:07:49.581025   21485 main.go:141] libmachine: (addons-505457) Calling .GetState
	I0806 07:07:49.582517   21485 main.go:141] libmachine: (addons-505457) Calling .DriverName
	I0806 07:07:49.582726   21485 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0806 07:07:49.582752   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHHostname
	I0806 07:07:49.585398   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:07:49.585869   21485 main.go:141] libmachine: (addons-505457) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:7a:7b", ip: ""} in network mk-addons-505457: {Iface:virbr1 ExpiryTime:2024-08-06 08:07:01 +0000 UTC Type:0 Mac:52:54:00:d5:7a:7b Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:addons-505457 Clientid:01:52:54:00:d5:7a:7b}
	I0806 07:07:49.585895   21485 main.go:141] libmachine: (addons-505457) DBG | domain addons-505457 has defined IP address 192.168.39.6 and MAC address 52:54:00:d5:7a:7b in network mk-addons-505457
	I0806 07:07:49.586008   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHPort
	I0806 07:07:49.586179   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHKeyPath
	I0806 07:07:49.586342   21485 main.go:141] libmachine: (addons-505457) Calling .GetSSHUsername
	I0806 07:07:49.586478   21485 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/addons-505457/id_rsa Username:docker}
	I0806 07:07:49.851246   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:07:50.371776   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:07:50.460951   21485 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.034886464s)
	I0806 07:07:50.461010   21485 main.go:141] libmachine: Making call to close driver server
	I0806 07:07:50.461021   21485 main.go:141] libmachine: (addons-505457) Calling .Close
	I0806 07:07:50.461022   21485 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.780669681s)
	I0806 07:07:50.461072   21485 main.go:141] libmachine: Making call to close driver server
	I0806 07:07:50.461129   21485 main.go:141] libmachine: (addons-505457) Calling .Close
	I0806 07:07:50.461151   21485 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (7.577331206s)
	I0806 07:07:50.461187   21485 main.go:141] libmachine: Making call to close driver server
	I0806 07:07:50.461205   21485 main.go:141] libmachine: (addons-505457) Calling .Close
	I0806 07:07:50.461233   21485 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.391692933s)
	I0806 07:07:50.461266   21485 main.go:141] libmachine: Making call to close driver server
	I0806 07:07:50.461279   21485 main.go:141] libmachine: (addons-505457) Calling .Close
	I0806 07:07:50.461318   21485 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (7.105128466s)
	I0806 07:07:50.461348   21485 main.go:141] libmachine: Making call to close driver server
	I0806 07:07:50.461359   21485 main.go:141] libmachine: (addons-505457) Calling .Close
	I0806 07:07:50.461369   21485 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.036825363s)
	I0806 07:07:50.461388   21485 main.go:141] libmachine: Making call to close driver server
	I0806 07:07:50.461396   21485 main.go:141] libmachine: (addons-505457) Calling .Close
	I0806 07:07:50.461527   21485 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.500943175s)
	W0806 07:07:50.461562   21485 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0806 07:07:50.461589   21485 main.go:141] libmachine: Successfully made call to close driver server
	I0806 07:07:50.461625   21485 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 07:07:50.461663   21485 main.go:141] libmachine: Making call to close driver server
	I0806 07:07:50.461692   21485 main.go:141] libmachine: (addons-505457) DBG | Closing plugin on server side
	I0806 07:07:50.461702   21485 main.go:141] libmachine: (addons-505457) Calling .Close
	I0806 07:07:50.461720   21485 main.go:141] libmachine: Successfully made call to close driver server
	I0806 07:07:50.461727   21485 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 07:07:50.461735   21485 main.go:141] libmachine: Making call to close driver server
	I0806 07:07:50.461741   21485 main.go:141] libmachine: (addons-505457) Calling .Close
	I0806 07:07:50.461862   21485 main.go:141] libmachine: (addons-505457) DBG | Closing plugin on server side
	I0806 07:07:50.461886   21485 main.go:141] libmachine: Successfully made call to close driver server
	I0806 07:07:50.461903   21485 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 07:07:50.461910   21485 main.go:141] libmachine: Making call to close driver server
	I0806 07:07:50.461671   21485 main.go:141] libmachine: Successfully made call to close driver server
	I0806 07:07:50.461932   21485 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 07:07:50.461942   21485 main.go:141] libmachine: Making call to close driver server
	I0806 07:07:50.461949   21485 main.go:141] libmachine: (addons-505457) Calling .Close
	I0806 07:07:50.461969   21485 main.go:141] libmachine: (addons-505457) DBG | Closing plugin on server side
	I0806 07:07:50.461917   21485 main.go:141] libmachine: (addons-505457) Calling .Close
	I0806 07:07:50.461999   21485 main.go:141] libmachine: Successfully made call to close driver server
	I0806 07:07:50.462016   21485 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 07:07:50.461594   21485 retry.go:31] will retry after 220.627211ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
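
[editor's note] The failure above is an ordering issue rather than a broken manifest: the VolumeSnapshotClass object is applied in the same kubectl invocation that creates the snapshot.storage.k8s.io CRDs, so the API server has not yet registered the new kind when the class is submitted ("no matches for kind ... ensure CRDs are installed first"). minikube handles this by retrying, and later re-applies with --force as seen further down. The Go sketch below shows one way to avoid the race by waiting for a CRD's Established condition before applying dependent objects; waitForCRDEstablished is an assumed name for illustration, not minikube's code.

package crdwait

import (
	"context"
	"time"

	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	"k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
)

// waitForCRDEstablished blocks until the named CustomResourceDefinition reports
// Established=True, after which objects of its kind can be applied safely.
func waitForCRDEstablished(ctx context.Context, cs clientset.Interface, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			crd, err := cs.ApiextensionsV1().CustomResourceDefinitions().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // CRD not visible yet: keep polling
			}
			for _, cond := range crd.Status.Conditions {
				if cond.Type == apiextensionsv1.Established && cond.Status == apiextensionsv1.ConditionTrue {
					return true, nil
				}
			}
			return false, nil
		})
}

For the manifests in this log, waiting on "volumesnapshotclasses.snapshot.storage.k8s.io" before applying csi-hostpath-snapshotclass.yaml would remove the need for the retry recorded above.
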
	I0806 07:07:50.462025   21485 addons.go:475] Verifying addon metrics-server=true in "addons-505457"
	I0806 07:07:50.461639   21485 main.go:141] libmachine: (addons-505457) DBG | Closing plugin on server side
	I0806 07:07:50.462196   21485 main.go:141] libmachine: (addons-505457) DBG | Closing plugin on server side
	I0806 07:07:50.462221   21485 main.go:141] libmachine: Successfully made call to close driver server
	I0806 07:07:50.462228   21485 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 07:07:50.462257   21485 main.go:141] libmachine: (addons-505457) DBG | Closing plugin on server side
	I0806 07:07:50.462282   21485 main.go:141] libmachine: Successfully made call to close driver server
	I0806 07:07:50.462289   21485 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 07:07:50.462451   21485 main.go:141] libmachine: (addons-505457) DBG | Closing plugin on server side
	I0806 07:07:50.462482   21485 main.go:141] libmachine: Successfully made call to close driver server
	I0806 07:07:50.462509   21485 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 07:07:50.462517   21485 main.go:141] libmachine: Making call to close driver server
	I0806 07:07:50.462525   21485 main.go:141] libmachine: (addons-505457) Calling .Close
	I0806 07:07:50.462618   21485 main.go:141] libmachine: (addons-505457) DBG | Closing plugin on server side
	I0806 07:07:50.462643   21485 main.go:141] libmachine: Successfully made call to close driver server
	I0806 07:07:50.462649   21485 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 07:07:50.464123   21485 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-505457 service yakd-dashboard -n yakd-dashboard
	
	I0806 07:07:50.464691   21485 main.go:141] libmachine: (addons-505457) DBG | Closing plugin on server side
	I0806 07:07:50.464713   21485 main.go:141] libmachine: Successfully made call to close driver server
	I0806 07:07:50.464731   21485 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 07:07:50.464732   21485 main.go:141] libmachine: (addons-505457) DBG | Closing plugin on server side
	I0806 07:07:50.464741   21485 main.go:141] libmachine: Making call to close driver server
	I0806 07:07:50.464749   21485 main.go:141] libmachine: (addons-505457) Calling .Close
	I0806 07:07:50.464758   21485 main.go:141] libmachine: Successfully made call to close driver server
	I0806 07:07:50.464766   21485 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 07:07:50.464775   21485 addons.go:475] Verifying addon ingress=true in "addons-505457"
	I0806 07:07:50.465116   21485 main.go:141] libmachine: Successfully made call to close driver server
	I0806 07:07:50.465393   21485 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 07:07:50.465138   21485 main.go:141] libmachine: (addons-505457) DBG | Closing plugin on server side
	I0806 07:07:50.466500   21485 out.go:177] * Verifying ingress addon...
	I0806 07:07:50.468520   21485 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0806 07:07:50.497836   21485 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0806 07:07:50.497858   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:07:50.521103   21485 main.go:141] libmachine: Making call to close driver server
	I0806 07:07:50.521138   21485 main.go:141] libmachine: (addons-505457) Calling .Close
	I0806 07:07:50.521457   21485 main.go:141] libmachine: Successfully made call to close driver server
	I0806 07:07:50.521473   21485 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 07:07:50.683631   21485 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0806 07:07:50.851560   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:07:50.979163   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:07:51.369147   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:07:51.468552   21485 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-bxmng" in "kube-system" namespace has status "Ready":"False"
	I0806 07:07:51.487668   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:07:51.800272   21485 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.21172001s)
	I0806 07:07:51.800300   21485 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.217552899s)
	I0806 07:07:51.800325   21485 main.go:141] libmachine: Making call to close driver server
	I0806 07:07:51.800338   21485 main.go:141] libmachine: (addons-505457) Calling .Close
	I0806 07:07:51.800618   21485 main.go:141] libmachine: Successfully made call to close driver server
	I0806 07:07:51.800636   21485 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 07:07:51.800646   21485 main.go:141] libmachine: Making call to close driver server
	I0806 07:07:51.800654   21485 main.go:141] libmachine: (addons-505457) Calling .Close
	I0806 07:07:51.800941   21485 main.go:141] libmachine: (addons-505457) DBG | Closing plugin on server side
	I0806 07:07:51.800946   21485 main.go:141] libmachine: Successfully made call to close driver server
	I0806 07:07:51.800957   21485 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 07:07:51.800967   21485 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-505457"
	I0806 07:07:51.802636   21485 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0806 07:07:51.802648   21485 out.go:177] * Verifying csi-hostpath-driver addon...
	I0806 07:07:51.804627   21485 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0806 07:07:51.805228   21485 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0806 07:07:51.805892   21485 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0806 07:07:51.805905   21485 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0806 07:07:51.822936   21485 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0806 07:07:51.822955   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:07:51.854494   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:07:51.899694   21485 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0806 07:07:51.899716   21485 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0806 07:07:51.938391   21485 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0806 07:07:51.938414   21485 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0806 07:07:51.973996   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:07:51.988953   21485 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0806 07:07:52.310684   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:07:52.338860   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:07:52.480745   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:07:52.513654   21485 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.829973895s)
	I0806 07:07:52.513701   21485 main.go:141] libmachine: Making call to close driver server
	I0806 07:07:52.513718   21485 main.go:141] libmachine: (addons-505457) Calling .Close
	I0806 07:07:52.514023   21485 main.go:141] libmachine: Successfully made call to close driver server
	I0806 07:07:52.514047   21485 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 07:07:52.514065   21485 main.go:141] libmachine: Making call to close driver server
	I0806 07:07:52.514074   21485 main.go:141] libmachine: (addons-505457) Calling .Close
	I0806 07:07:52.514306   21485 main.go:141] libmachine: (addons-505457) DBG | Closing plugin on server side
	I0806 07:07:52.514340   21485 main.go:141] libmachine: Successfully made call to close driver server
	I0806 07:07:52.514354   21485 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 07:07:52.813656   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:07:52.855946   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:07:52.943152   21485 main.go:141] libmachine: Making call to close driver server
	I0806 07:07:52.943181   21485 main.go:141] libmachine: (addons-505457) Calling .Close
	I0806 07:07:52.943478   21485 main.go:141] libmachine: Successfully made call to close driver server
	I0806 07:07:52.943498   21485 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 07:07:52.943507   21485 main.go:141] libmachine: Making call to close driver server
	I0806 07:07:52.943515   21485 main.go:141] libmachine: (addons-505457) Calling .Close
	I0806 07:07:52.943521   21485 main.go:141] libmachine: (addons-505457) DBG | Closing plugin on server side
	I0806 07:07:52.943761   21485 main.go:141] libmachine: (addons-505457) DBG | Closing plugin on server side
	I0806 07:07:52.943808   21485 main.go:141] libmachine: Successfully made call to close driver server
	I0806 07:07:52.943827   21485 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 07:07:52.945686   21485 addons.go:475] Verifying addon gcp-auth=true in "addons-505457"
	I0806 07:07:52.947511   21485 out.go:177] * Verifying gcp-auth addon...
	I0806 07:07:52.949472   21485 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0806 07:07:52.962311   21485 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0806 07:07:52.962329   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:07:52.980932   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:07:53.311899   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:07:53.338760   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:07:53.452597   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:07:53.478635   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:07:53.813371   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:07:53.839501   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:07:53.863931   21485 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-bxmng" in "kube-system" namespace has status "Ready":"False"
	I0806 07:07:53.953947   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:07:53.972707   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:07:54.311448   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:07:54.341745   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:07:54.453642   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:07:54.473348   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:07:54.811602   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:07:54.839275   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:07:54.955131   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:07:54.979042   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:07:55.311378   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:07:55.339117   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:07:55.453933   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:07:55.473277   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:07:55.810459   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:07:55.839203   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:07:55.953682   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:07:55.973269   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:07:56.310822   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:07:56.339632   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:07:56.367924   21485 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-bxmng" in "kube-system" namespace has status "Ready":"False"
	I0806 07:07:56.454727   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:07:56.772990   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:07:56.810956   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:07:56.838627   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:07:56.953679   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:07:56.973779   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:07:57.312588   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:07:57.339136   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:07:57.455228   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:07:57.473132   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:07:57.811952   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:07:57.839238   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:07:57.953762   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:07:57.974199   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:07:58.311575   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:07:58.347089   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:07:58.453344   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:07:58.473535   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:07:58.811336   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:07:58.839265   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:07:58.867104   21485 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-bxmng" in "kube-system" namespace has status "Ready":"False"
	I0806 07:07:58.953842   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:07:58.974861   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:07:59.311080   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:07:59.339431   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:07:59.453058   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:07:59.473186   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:07:59.810926   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:07:59.839124   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:07:59.953330   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:07:59.973095   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:00.310724   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:00.339395   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:08:00.363726   21485 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-bxmng" in "kube-system" namespace has status "Ready":"True"
	I0806 07:08:00.363753   21485 pod_ready.go:81] duration metric: took 11.005968814s for pod "nvidia-device-plugin-daemonset-bxmng" in "kube-system" namespace to be "Ready" ...
	I0806 07:08:00.363769   21485 pod_ready.go:38] duration metric: took 13.994766516s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0806 07:08:00.363782   21485 api_server.go:52] waiting for apiserver process to appear ...
	I0806 07:08:00.363830   21485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 07:08:00.381009   21485 api_server.go:72] duration metric: took 18.839123062s to wait for apiserver process to appear ...
	I0806 07:08:00.381033   21485 api_server.go:88] waiting for apiserver healthz status ...
	I0806 07:08:00.381057   21485 api_server.go:253] Checking apiserver healthz at https://192.168.39.6:8443/healthz ...
	I0806 07:08:00.385284   21485 api_server.go:279] https://192.168.39.6:8443/healthz returned 200:
	ok
	I0806 07:08:00.386308   21485 api_server.go:141] control plane version: v1.30.3
	I0806 07:08:00.386325   21485 api_server.go:131] duration metric: took 5.285561ms to wait for apiserver health ...
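	(The readiness sequence above first confirms a kube-apiserver process exists, then polls the /healthz endpoint until it returns 200 with body "ok". A minimal sketch of reproducing that probe by hand, going through the kubeconfig credentials for this cluster rather than a raw request to 192.168.39.6:8443:)

	# Sketch: query the apiserver health endpoint with kubectl; a healthy control plane responds with "ok".
	kubectl --context addons-505457 get --raw=/healthz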
	I0806 07:08:00.386333   21485 system_pods.go:43] waiting for kube-system pods to appear ...
	I0806 07:08:00.395028   21485 system_pods.go:59] 18 kube-system pods found
	I0806 07:08:00.395054   21485 system_pods.go:61] "coredns-7db6d8ff4d-k5d8d" [9205d3f6-ae5e-473a-8760-1e9bb9cff77d] Running
	I0806 07:08:00.395062   21485 system_pods.go:61] "csi-hostpath-attacher-0" [82460523-4e86-4f8c-a043-d6075df14822] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0806 07:08:00.395068   21485 system_pods.go:61] "csi-hostpath-resizer-0" [c481f083-8e0b-4fc2-935e-770096142e18] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0806 07:08:00.395082   21485 system_pods.go:61] "csi-hostpathplugin-jv7h6" [289bff98-7069-4ae9-bc1e-af9315c603cb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0806 07:08:00.395090   21485 system_pods.go:61] "etcd-addons-505457" [d5cbe284-1ded-4e91-8525-e27ad1df2767] Running
	I0806 07:08:00.395094   21485 system_pods.go:61] "kube-apiserver-addons-505457" [dfd3ca17-01fc-405b-a1e9-ba0c93da4327] Running
	I0806 07:08:00.395098   21485 system_pods.go:61] "kube-controller-manager-addons-505457" [1450ab97-3dd4-47f7-9bf1-0eaeb31520f2] Running
	I0806 07:08:00.395103   21485 system_pods.go:61] "kube-ingress-dns-minikube" [a14eefbb-7e96-4b00-bce1-8cbeb64e7f88] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0806 07:08:00.395109   21485 system_pods.go:61] "kube-proxy-t7djg" [dbaeea31-08de-4723-9ec5-f51d376650b5] Running
	I0806 07:08:00.395113   21485 system_pods.go:61] "kube-scheduler-addons-505457" [c045203a-aae6-462a-b8d0-70d051cbf2f6] Running
	I0806 07:08:00.395120   21485 system_pods.go:61] "metrics-server-c59844bb4-dghj8" [a73b51d1-3f19-4608-b37f-2d92f56fec02] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0806 07:08:00.395124   21485 system_pods.go:61] "nvidia-device-plugin-daemonset-bxmng" [84dbfa29-9673-477b-a4b1-2e17cc5b745b] Running
	I0806 07:08:00.395131   21485 system_pods.go:61] "registry-698f998955-4xwg8" [d2496fc4-46dd-4a87-b8e6-51cc90641be4] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0806 07:08:00.395136   21485 system_pods.go:61] "registry-proxy-rpkh9" [273a082a-adbe-45cf-93b4-da0d67d0c071] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0806 07:08:00.395145   21485 system_pods.go:61] "snapshot-controller-745499f584-c2sg5" [154847d3-3f68-46bd-b9a2-f94537000c5d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0806 07:08:00.395154   21485 system_pods.go:61] "snapshot-controller-745499f584-xz4nt" [71e5e5ee-6e5b-409b-a1af-5e9d590e7609] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0806 07:08:00.395158   21485 system_pods.go:61] "storage-provisioner" [9631af58-26e8-4f64-a332-ab89aa6934ff] Running
	I0806 07:08:00.395166   21485 system_pods.go:61] "tiller-deploy-6677d64bcd-n2x6k" [d1db42b6-6642-4121-ae56-573e9f26e727] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0806 07:08:00.395171   21485 system_pods.go:74] duration metric: took 8.833455ms to wait for pod list to return data ...
	I0806 07:08:00.395180   21485 default_sa.go:34] waiting for default service account to be created ...
	I0806 07:08:00.397298   21485 default_sa.go:45] found service account: "default"
	I0806 07:08:00.397312   21485 default_sa.go:55] duration metric: took 2.127726ms for default service account to be created ...
	I0806 07:08:00.397319   21485 system_pods.go:116] waiting for k8s-apps to be running ...
	I0806 07:08:00.405261   21485 system_pods.go:86] 18 kube-system pods found
	I0806 07:08:00.405281   21485 system_pods.go:89] "coredns-7db6d8ff4d-k5d8d" [9205d3f6-ae5e-473a-8760-1e9bb9cff77d] Running
	I0806 07:08:00.405289   21485 system_pods.go:89] "csi-hostpath-attacher-0" [82460523-4e86-4f8c-a043-d6075df14822] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0806 07:08:00.405295   21485 system_pods.go:89] "csi-hostpath-resizer-0" [c481f083-8e0b-4fc2-935e-770096142e18] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0806 07:08:00.405304   21485 system_pods.go:89] "csi-hostpathplugin-jv7h6" [289bff98-7069-4ae9-bc1e-af9315c603cb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0806 07:08:00.405310   21485 system_pods.go:89] "etcd-addons-505457" [d5cbe284-1ded-4e91-8525-e27ad1df2767] Running
	I0806 07:08:00.405316   21485 system_pods.go:89] "kube-apiserver-addons-505457" [dfd3ca17-01fc-405b-a1e9-ba0c93da4327] Running
	I0806 07:08:00.405321   21485 system_pods.go:89] "kube-controller-manager-addons-505457" [1450ab97-3dd4-47f7-9bf1-0eaeb31520f2] Running
	I0806 07:08:00.405330   21485 system_pods.go:89] "kube-ingress-dns-minikube" [a14eefbb-7e96-4b00-bce1-8cbeb64e7f88] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0806 07:08:00.405334   21485 system_pods.go:89] "kube-proxy-t7djg" [dbaeea31-08de-4723-9ec5-f51d376650b5] Running
	I0806 07:08:00.405339   21485 system_pods.go:89] "kube-scheduler-addons-505457" [c045203a-aae6-462a-b8d0-70d051cbf2f6] Running
	I0806 07:08:00.405347   21485 system_pods.go:89] "metrics-server-c59844bb4-dghj8" [a73b51d1-3f19-4608-b37f-2d92f56fec02] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0806 07:08:00.405351   21485 system_pods.go:89] "nvidia-device-plugin-daemonset-bxmng" [84dbfa29-9673-477b-a4b1-2e17cc5b745b] Running
	I0806 07:08:00.405361   21485 system_pods.go:89] "registry-698f998955-4xwg8" [d2496fc4-46dd-4a87-b8e6-51cc90641be4] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0806 07:08:00.405368   21485 system_pods.go:89] "registry-proxy-rpkh9" [273a082a-adbe-45cf-93b4-da0d67d0c071] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0806 07:08:00.405375   21485 system_pods.go:89] "snapshot-controller-745499f584-c2sg5" [154847d3-3f68-46bd-b9a2-f94537000c5d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0806 07:08:00.405384   21485 system_pods.go:89] "snapshot-controller-745499f584-xz4nt" [71e5e5ee-6e5b-409b-a1af-5e9d590e7609] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0806 07:08:00.405391   21485 system_pods.go:89] "storage-provisioner" [9631af58-26e8-4f64-a332-ab89aa6934ff] Running
	I0806 07:08:00.405397   21485 system_pods.go:89] "tiller-deploy-6677d64bcd-n2x6k" [d1db42b6-6642-4121-ae56-573e9f26e727] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0806 07:08:00.405403   21485 system_pods.go:126] duration metric: took 8.080452ms to wait for k8s-apps to be running ...
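	(The two pod listings above are the same verification run twice: once to see that kube-system pods exist at all, and once to confirm the core components are Running while addon pods may still be Pending. An illustrative way to inspect that state manually, not part of the test itself:)

	# Sketch: show the kube-system pods the verifier enumerates above.
	kubectl --context addons-505457 get pods -n kube-system -o wide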
	I0806 07:08:00.405410   21485 system_svc.go:44] waiting for kubelet service to be running ....
	I0806 07:08:00.405480   21485 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0806 07:08:00.420585   21485 system_svc.go:56] duration metric: took 15.169517ms WaitForService to wait for kubelet
	I0806 07:08:00.420610   21485 kubeadm.go:582] duration metric: took 18.878726694s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0806 07:08:00.420628   21485 node_conditions.go:102] verifying NodePressure condition ...
	I0806 07:08:00.423903   21485 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0806 07:08:00.423931   21485 node_conditions.go:123] node cpu capacity is 2
	I0806 07:08:00.423944   21485 node_conditions.go:105] duration metric: took 3.312018ms to run NodePressure ...
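	(The NodePressure step reads the node's reported capacity, here 17734596Ki of ephemeral storage and 2 CPUs. As an illustrative check only, assuming the minikube node is named after the profile as the pod names above suggest, the same fields can be read with jsonpath:)

	# Sketch: print the capacity values reported in the log above.
	kubectl --context addons-505457 get node addons-505457 -o jsonpath='{.status.capacity.cpu}{"\n"}{.status.capacity.ephemeral-storage}{"\n"}'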
	I0806 07:08:00.423954   21485 start.go:241] waiting for startup goroutines ...
	I0806 07:08:00.452838   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:00.472896   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:00.811059   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:00.840993   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:08:00.953522   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:00.973116   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:01.311615   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:01.339164   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:08:01.453606   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:01.473201   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:01.810986   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:01.838643   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:08:01.953717   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:01.974212   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:02.311187   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:02.339450   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:08:02.453222   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:02.473200   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:02.811125   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:02.839354   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:08:02.953054   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:02.972518   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:03.325464   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:03.339022   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:08:03.453719   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:03.473098   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:03.812895   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:03.841060   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:08:03.952950   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:03.972848   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:04.311697   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:04.339572   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:08:04.454362   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:04.473683   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:04.810299   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:04.839114   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:08:04.953770   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:04.973350   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:05.565924   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:08:05.568667   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:05.569011   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:05.569203   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:05.810130   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:05.839051   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:08:05.953677   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:05.972962   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:06.311237   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:06.339184   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:08:06.454749   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:06.474213   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:06.811128   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:06.839518   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:08:06.953747   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:06.973784   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:07.311440   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:07.339572   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:08:07.453503   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:07.474016   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:07.810452   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:07.839287   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:08:07.953159   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:07.972576   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:08.310743   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:08.338829   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:08:08.453713   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:08.474180   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:08.813014   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:08.846924   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:08:08.954298   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:08.972721   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:09.310126   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:09.338576   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:08:09.453381   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:09.472759   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:09.810628   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:09.838698   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:08:09.953998   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:09.972700   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:10.310030   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:10.338269   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:08:10.453591   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:10.473217   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:10.812216   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:10.838643   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:08:10.960349   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:10.977498   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:11.310667   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:11.339909   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:08:11.453084   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:11.474459   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:11.811335   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:11.839233   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:08:11.955541   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:11.973230   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:12.311659   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:12.338552   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:08:12.453111   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:12.472770   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:12.811639   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:12.838801   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:08:12.954434   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:12.973989   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:13.309997   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:13.341598   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:08:13.453586   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:13.475156   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:13.811446   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:13.844640   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:08:13.954669   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:13.973161   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:14.310464   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:14.338635   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:08:14.454167   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:14.472717   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:14.810316   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:14.839481   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:08:14.953666   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:14.973190   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:15.310776   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:15.338353   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:08:15.453354   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:15.473643   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:15.810578   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:15.839367   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:08:15.952826   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:15.973454   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:16.311492   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:16.339282   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:08:16.454142   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:16.473417   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:16.811370   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:16.838639   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:08:16.953829   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:16.973239   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:17.311702   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:17.341585   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:08:17.452692   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:17.472939   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:17.811206   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:17.839337   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:08:17.956305   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:17.973300   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:18.319385   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:18.338422   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:08:18.453744   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:18.474225   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:18.899456   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:08:18.904515   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:18.953060   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:18.972505   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:19.312336   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:19.338971   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:08:19.454136   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:19.473339   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:19.810831   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:19.850534   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:08:19.953828   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:19.972606   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:20.312401   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:20.338831   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:08:20.453298   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:20.473211   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:20.811352   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:20.838597   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:08:20.953087   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:20.972937   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:21.310642   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:21.355525   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:08:21.453968   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:21.473138   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:21.813975   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:21.839971   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:08:21.953662   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:21.973108   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:22.310932   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:22.338358   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:08:22.453025   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:22.473512   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:22.810674   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:22.839326   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:08:22.953994   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:22.972896   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:23.311857   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:23.338941   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:08:23.452680   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:23.474492   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:23.815228   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:23.839230   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:08:24.235612   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:24.235684   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:24.310750   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:24.338981   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:08:24.452942   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:24.472957   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:24.810784   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:24.838325   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:08:24.952970   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:24.972777   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:25.317198   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:25.351173   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:08:25.453590   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:25.474232   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:25.810019   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:25.838863   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:08:25.953229   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:25.973144   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:26.313787   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:26.339291   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:08:26.453079   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:26.472801   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:26.811946   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:26.839224   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:08:26.953936   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:26.974571   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:27.311482   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:27.338829   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:08:27.452828   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:27.472717   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:27.810206   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:27.838557   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:08:27.953386   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:27.972965   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:28.310251   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:28.338785   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:08:28.452601   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:28.472808   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:28.810761   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:28.838030   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:08:28.952931   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:28.973661   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:29.311274   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:29.340439   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:08:29.453311   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:29.473116   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:29.810736   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:29.838764   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:08:30.337383   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:30.337585   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:30.337798   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:30.339898   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:08:30.452999   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:30.472867   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:30.811575   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:30.839219   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:08:30.953481   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:30.973345   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:31.310945   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:31.339101   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:08:31.454440   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:31.475057   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:31.810886   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:31.838294   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0806 07:08:31.958319   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:31.973399   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:32.311094   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:32.338768   21485 kapi.go:107] duration metric: took 46.004823729s to wait for kubernetes.io/minikube-addons=registry ...
	I0806 07:08:32.453728   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:32.473412   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:32.811193   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:32.952943   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:32.973368   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:33.311669   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:33.453801   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:33.472436   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:33.811173   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:33.954203   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:33.972949   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:34.310541   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:34.452953   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:34.472678   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:34.810454   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:34.953889   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:34.973937   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:35.310521   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:35.453332   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:35.472821   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:35.811120   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:35.953809   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:35.972607   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:36.311763   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:36.453355   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:36.473139   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:36.811439   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:36.953868   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:36.972528   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:37.311562   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:37.452449   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:37.472961   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:37.811536   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:37.952614   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:37.973272   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:38.311032   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:38.453566   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:38.473207   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:38.812377   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:38.953566   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:38.973414   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:39.314114   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:39.453715   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:39.473433   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:39.811255   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:39.953050   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:39.972825   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:40.311090   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:40.453592   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:40.473791   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:40.810583   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:40.953484   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:40.973434   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:41.310674   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:41.453530   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:41.473025   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:41.811270   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:41.953567   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:41.973679   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:42.312013   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:42.453749   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:42.473878   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:42.810614   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:42.952980   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:42.973096   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:43.310805   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:43.454843   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:43.472687   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:43.815491   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:43.952551   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:43.973139   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:44.314470   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:44.453210   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:44.473149   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:44.811388   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:44.953339   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:44.973618   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:45.310782   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:45.454754   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:45.472696   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:45.816741   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:45.954702   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:45.973561   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:46.311775   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:46.458146   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:46.493168   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:46.811114   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:46.953582   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:46.973625   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:47.313301   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:47.453487   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:47.473585   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:47.817051   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:47.958766   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:47.979731   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:48.311209   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:48.458867   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:48.478222   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:48.810646   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:48.952814   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:48.972499   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:49.311288   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:49.454269   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:49.475968   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:49.811234   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:49.952778   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:49.973908   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:50.311316   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:50.457195   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:50.486281   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:51.205999   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:51.212478   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:51.213032   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:51.311304   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:51.462174   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:51.472731   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:51.811274   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:51.952970   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:51.972646   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:52.314441   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:52.452613   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:52.473298   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:52.815186   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:52.956525   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:52.974175   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:53.310388   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:53.453095   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:53.483943   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:53.810022   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:53.953671   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:53.973565   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:54.311328   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:54.453685   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:54.473496   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:54.810865   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:54.953806   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:54.973680   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:55.311292   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:55.453747   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:55.473223   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:55.810427   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:55.953377   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:55.973854   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:56.311253   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:56.452566   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:56.472933   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:56.810767   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:56.966554   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:56.980122   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:57.317844   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:57.453130   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:57.478271   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:57.816962   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:57.954021   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:57.975251   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:58.343137   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:58.453787   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:58.473337   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:58.812265   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:58.961727   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:58.974622   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:59.312580   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:59.455645   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:59.474168   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:08:59.810416   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:08:59.952690   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:08:59.973506   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:09:00.310993   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:09:00.454499   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:09:00.481734   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:09:00.812013   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:09:00.954649   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:09:00.973562   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:09:01.311008   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:09:01.454899   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:09:01.474609   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:09:01.813004   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:09:01.953802   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:09:01.974133   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:09:02.311852   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:09:02.453596   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:09:02.474307   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:09:02.810356   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:09:02.952876   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:09:02.973110   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:09:03.310392   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:09:03.453473   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:09:03.474309   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:09:03.822283   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:09:03.954282   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:09:03.973036   21485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0806 07:09:04.317275   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:09:04.454132   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:09:04.474449   21485 kapi.go:107] duration metric: took 1m14.005924522s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0806 07:09:04.811853   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:09:04.954758   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:09:05.312002   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:09:05.453947   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:09:05.810590   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:09:05.954631   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:09:06.311283   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:09:06.452847   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:09:06.811454   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:09:06.953584   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:09:07.310644   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:09:07.453522   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:09:07.810603   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:09:07.954652   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:09:08.420187   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:09:08.453451   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:09:08.817525   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:09:08.953486   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0806 07:09:09.321942   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:09:09.453521   21485 kapi.go:107] duration metric: took 1m16.504043823s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0806 07:09:09.455540   21485 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-505457 cluster.
	I0806 07:09:09.457101   21485 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0806 07:09:09.458584   21485 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0806 07:09:09.810654   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:09:10.310727   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:09:10.810354   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:09:11.310146   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:09:11.813554   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:09:12.322052   21485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0806 07:09:12.810716   21485 kapi.go:107] duration metric: took 1m21.005480553s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0806 07:09:12.812639   21485 out.go:177] * Enabled addons: nvidia-device-plugin, ingress-dns, cloud-spanner, storage-provisioner, default-storageclass, metrics-server, helm-tiller, inspektor-gadget, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0806 07:09:12.813793   21485 addons.go:510] duration metric: took 1m31.27186756s for enable addons: enabled=[nvidia-device-plugin ingress-dns cloud-spanner storage-provisioner default-storageclass metrics-server helm-tiller inspektor-gadget yakd storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0806 07:09:12.813829   21485 start.go:246] waiting for cluster config update ...
	I0806 07:09:12.813850   21485 start.go:255] writing updated cluster config ...
	I0806 07:09:12.814088   21485 ssh_runner.go:195] Run: rm -f paused
	I0806 07:09:12.866843   21485 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0806 07:09:12.868866   21485 out.go:177] * Done! kubectl is now configured to use "addons-505457" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Aug 06 07:14:30 addons-505457 crio[684]: time="2024-08-06 07:14:30.272895837Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722928470272870040,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:589584,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=027133da-e8c9-43e9-b095-9dd9cd940ad8 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 06 07:14:30 addons-505457 crio[684]: time="2024-08-06 07:14:30.273583401Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cade8b95-d7cb-47b9-86dc-cf3361221548 name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 07:14:30 addons-505457 crio[684]: time="2024-08-06 07:14:30.273638361Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cade8b95-d7cb-47b9-86dc-cf3361221548 name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 07:14:30 addons-505457 crio[684]: time="2024-08-06 07:14:30.274069985Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1bb169456345b80326bbe03ea5deb2194034ea3b0e9ae73d767584a86175ef8d,PodSandboxId:0ba76e457d98d104119a038a71a5ebea9e3a0887dc0d8436ac016292a3ce1b32,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1722928333818262584,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-xg2dt,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: da8616b2-cdae-4436-8bb8-2fd558ba9b6c,},Annotations:map[string]string{io.kubernetes.container.hash: fc3dd633,io.kubernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5716fd9d0a7957f1d3d821ea36f879dd1040843efd37b7e07f2c5271a19e2c43,PodSandboxId:48323e9942a07b1b4d41f2c930f7d80f4b5d5b58862bb529b4b84889c3a8118e,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1722928193681598452,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7fced162-c099-4abd-a6ab-20626a136899,},Annotations:map[string]string{io.kubernetes.container.hash: acaffa5a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9776dddcced3ea27152001b269947202e4192c03c7b1afc5a43e5848f4749ef7,PodSandboxId:6e6b579adaf6bae36feaba849d1dfd126bebce03ab89556635405bbf8ceb54d0,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722928156293700671,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ae3e743b-c04d-4206-a40c-70c96ca55dea,},Annotations:map[string]string{io.kubernetes.container.hash: d096c7f0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e9ca59c5d5d44d3fb6e570442f4c5915f02bafd78a7dabf81c1eec5c26b3b9e,PodSandboxId:507a32da5aa23a6fdf8198b07f8c9b4ab3888e4f3d2357ad5f0c22447d314e48,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1722928093784437366,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-dghj8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a73b51d1-3f19-4608-b37f-2d92f56fec02,},Annotations:map[string]string{io.kubernetes.container.hash: 8b558cb9,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7a623480af4afbb181b874f946e3af94aca0c310fbb8b0a43223db5c3f102b1,PodSandboxId:af6ab42fed69ba47b2ce7c9a00e8098e134066605db39b09e0dcd412fe7e097c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722928069099504187,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9631af58-26e8-4f64-a332-ab89aa6934ff,},Annotations:map[string]string{io.kubernetes.container.hash: 361e54aa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:027ea1c110b7f5dfde7f8887ce55fc8c76b57576dd8e0b413a859d725bc0f45e,PodSandboxId:de0ded25e176ea853db6796ed89e6315e0859501717484ecef9c0c0456cd09e4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722928065518161478,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-k5d8d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9205d3f6-ae5e-473a-8760-1e9bb9cff77d,},Annotations:map[string]string{io.kubernetes.container.hash: 68374ea1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5080e5fce2c6cf845092e92aebf13550fc5412567f7c21694410eaebc73d84d,PodSandboxId:56f279ca84ed0636221eb9b70b3b986c6ba6f7e791473eedc394ac4175e1fbf5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722928063132630344,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-t7djg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbaeea31-08de-4723-9ec5-f51d376650b5,},Annotations:map[string]string{io.kubernetes.container.hash: 45c04c54,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e84eb866dc0b3dd79f91399a5714b5b3bbb8eb0ac746133af657c88755e4b88,PodSandboxId:ed75660e1f6f9eb40d6a1e0d06d5f737a0f9cd77ad3556ba4b43f73b61849fc5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722928042832090751,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-505457,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d3a4c70f5f1a98e373ce2b1bd632569,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd494e38d6a560269121e4eb69c609c6db24bea9862163ad00e525dc7e486a9a,PodSandboxId:16986c599fc34a7842ad40cbb800162bc70784c708c03a68dc81766b286122a8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722928042786306734,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-505457,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9445896d2c8822cafd25b745df86bb0b,},Annotations:map[string]string{io.kubernetes.container.hash: d5831ede,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe12a18afe3dccc7a77e9b91dea2873a822a39628e809273739532bd630c0459,PodSandboxId:2648f6a872da28fbc81c2bea3a82c35080febf469813983949da621f3e8f3f8b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722928042760368393,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-505457,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 503a3788790fb79ef323dff2c5ec6ce1,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04bc771142517172d3d6312f52f17040d2235a82f36f5e627dea6620331173a9,PodSandboxId:c44c5af015412e22c8b5050d0749c16c27744783d64dd9df518b059fc6b699d3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722928042755640737,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-505457,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba2495c51f84135afe3ac6ebc47db99e,},Annotations:map[string]string{io.kubernetes.container.hash: 974ca483,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cade8b95-d7cb-47b9-86dc-cf3361221548 name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 07:14:30 addons-505457 crio[684]: time="2024-08-06 07:14:30.311716687Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7b4256a2-2b3c-4175-900c-66baf8be8049 name=/runtime.v1.RuntimeService/Version
	Aug 06 07:14:30 addons-505457 crio[684]: time="2024-08-06 07:14:30.311861317Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7b4256a2-2b3c-4175-900c-66baf8be8049 name=/runtime.v1.RuntimeService/Version
	Aug 06 07:14:30 addons-505457 crio[684]: time="2024-08-06 07:14:30.314006536Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=dc93aa21-c79c-467d-9fcf-2d0f810c62a3 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 06 07:14:30 addons-505457 crio[684]: time="2024-08-06 07:14:30.317080774Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722928470317050252,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:589584,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=dc93aa21-c79c-467d-9fcf-2d0f810c62a3 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 06 07:14:30 addons-505457 crio[684]: time="2024-08-06 07:14:30.320157171Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ce9098aa-7a84-4c21-b950-b3fc300dda96 name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 07:14:30 addons-505457 crio[684]: time="2024-08-06 07:14:30.320211240Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ce9098aa-7a84-4c21-b950-b3fc300dda96 name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 07:14:30 addons-505457 crio[684]: time="2024-08-06 07:14:30.320452799Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1bb169456345b80326bbe03ea5deb2194034ea3b0e9ae73d767584a86175ef8d,PodSandboxId:0ba76e457d98d104119a038a71a5ebea9e3a0887dc0d8436ac016292a3ce1b32,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1722928333818262584,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-xg2dt,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: da8616b2-cdae-4436-8bb8-2fd558ba9b6c,},Annotations:map[string]string{io.kubernetes.container.hash: fc3dd633,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5716fd9d0a7957f1d3d821ea36f879dd1040843efd37b7e07f2c5271a19e2c43,PodSandboxId:48323e9942a07b1b4d41f2c930f7d80f4b5d5b58862bb529b4b84889c3a8118e,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1722928193681598452,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7fced162-c099-4abd-a6ab-20626a136899,},Annotations:map[string]string{io.kubernet
es.container.hash: acaffa5a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9776dddcced3ea27152001b269947202e4192c03c7b1afc5a43e5848f4749ef7,PodSandboxId:6e6b579adaf6bae36feaba849d1dfd126bebce03ab89556635405bbf8ceb54d0,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722928156293700671,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ae3e743b-c04d-4206-a
40c-70c96ca55dea,},Annotations:map[string]string{io.kubernetes.container.hash: d096c7f0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e9ca59c5d5d44d3fb6e570442f4c5915f02bafd78a7dabf81c1eec5c26b3b9e,PodSandboxId:507a32da5aa23a6fdf8198b07f8c9b4ab3888e4f3d2357ad5f0c22447d314e48,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1722928093784437366,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-dghj8,io.kubernetes.pod.namespace: kube-system,i
o.kubernetes.pod.uid: a73b51d1-3f19-4608-b37f-2d92f56fec02,},Annotations:map[string]string{io.kubernetes.container.hash: 8b558cb9,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7a623480af4afbb181b874f946e3af94aca0c310fbb8b0a43223db5c3f102b1,PodSandboxId:af6ab42fed69ba47b2ce7c9a00e8098e134066605db39b09e0dcd412fe7e097c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722928069099504187,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,
io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9631af58-26e8-4f64-a332-ab89aa6934ff,},Annotations:map[string]string{io.kubernetes.container.hash: 361e54aa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:027ea1c110b7f5dfde7f8887ce55fc8c76b57576dd8e0b413a859d725bc0f45e,PodSandboxId:de0ded25e176ea853db6796ed89e6315e0859501717484ecef9c0c0456cd09e4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722928065518161478,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6
d8ff4d-k5d8d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9205d3f6-ae5e-473a-8760-1e9bb9cff77d,},Annotations:map[string]string{io.kubernetes.container.hash: 68374ea1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5080e5fce2c6cf845092e92aebf13550fc5412567f7c21694410eaebc73d84d,PodSandboxId:56f279ca84ed0636221eb9b70b3b986c6ba6f7e791473eedc394ac4175e1fbf5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381
d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722928063132630344,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-t7djg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbaeea31-08de-4723-9ec5-f51d376650b5,},Annotations:map[string]string{io.kubernetes.container.hash: 45c04c54,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e84eb866dc0b3dd79f91399a5714b5b3bbb8eb0ac746133af657c88755e4b88,PodSandboxId:ed75660e1f6f9eb40d6a1e0d06d5f737a0f9cd77ad3556ba4b43f73b61849fc5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e
5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722928042832090751,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-505457,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d3a4c70f5f1a98e373ce2b1bd632569,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd494e38d6a560269121e4eb69c609c6db24bea9862163ad00e525dc7e486a9a,PodSandboxId:16986c599fc34a7842ad40cbb800162bc70784c708c03a68dc81766b286122a8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTA
INER_RUNNING,CreatedAt:1722928042786306734,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-505457,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9445896d2c8822cafd25b745df86bb0b,},Annotations:map[string]string{io.kubernetes.container.hash: d5831ede,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe12a18afe3dccc7a77e9b91dea2873a822a39628e809273739532bd630c0459,PodSandboxId:2648f6a872da28fbc81c2bea3a82c35080febf469813983949da621f3e8f3f8b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:17229
28042760368393,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-505457,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 503a3788790fb79ef323dff2c5ec6ce1,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04bc771142517172d3d6312f52f17040d2235a82f36f5e627dea6620331173a9,PodSandboxId:c44c5af015412e22c8b5050d0749c16c27744783d64dd9df518b059fc6b699d3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722
928042755640737,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-505457,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba2495c51f84135afe3ac6ebc47db99e,},Annotations:map[string]string{io.kubernetes.container.hash: 974ca483,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ce9098aa-7a84-4c21-b950-b3fc300dda96 name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 07:14:30 addons-505457 crio[684]: time="2024-08-06 07:14:30.361379797Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c836743f-efd0-422b-b86b-e93544ad8129 name=/runtime.v1.RuntimeService/Version
	Aug 06 07:14:30 addons-505457 crio[684]: time="2024-08-06 07:14:30.361487425Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c836743f-efd0-422b-b86b-e93544ad8129 name=/runtime.v1.RuntimeService/Version
	Aug 06 07:14:30 addons-505457 crio[684]: time="2024-08-06 07:14:30.362885839Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=97a0b3be-dfce-4613-8305-e71ea87b99da name=/runtime.v1.ImageService/ImageFsInfo
	Aug 06 07:14:30 addons-505457 crio[684]: time="2024-08-06 07:14:30.364096389Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722928470364071066,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:589584,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=97a0b3be-dfce-4613-8305-e71ea87b99da name=/runtime.v1.ImageService/ImageFsInfo
	Aug 06 07:14:30 addons-505457 crio[684]: time="2024-08-06 07:14:30.364660305Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f78d50d3-28a5-43e6-a3a8-55cb04400eeb name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 07:14:30 addons-505457 crio[684]: time="2024-08-06 07:14:30.364775638Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f78d50d3-28a5-43e6-a3a8-55cb04400eeb name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 07:14:30 addons-505457 crio[684]: time="2024-08-06 07:14:30.365103949Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1bb169456345b80326bbe03ea5deb2194034ea3b0e9ae73d767584a86175ef8d,PodSandboxId:0ba76e457d98d104119a038a71a5ebea9e3a0887dc0d8436ac016292a3ce1b32,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1722928333818262584,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-xg2dt,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: da8616b2-cdae-4436-8bb8-2fd558ba9b6c,},Annotations:map[string]string{io.kubernetes.container.hash: fc3dd633,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5716fd9d0a7957f1d3d821ea36f879dd1040843efd37b7e07f2c5271a19e2c43,PodSandboxId:48323e9942a07b1b4d41f2c930f7d80f4b5d5b58862bb529b4b84889c3a8118e,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1722928193681598452,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7fced162-c099-4abd-a6ab-20626a136899,},Annotations:map[string]string{io.kubernet
es.container.hash: acaffa5a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9776dddcced3ea27152001b269947202e4192c03c7b1afc5a43e5848f4749ef7,PodSandboxId:6e6b579adaf6bae36feaba849d1dfd126bebce03ab89556635405bbf8ceb54d0,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722928156293700671,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ae3e743b-c04d-4206-a
40c-70c96ca55dea,},Annotations:map[string]string{io.kubernetes.container.hash: d096c7f0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e9ca59c5d5d44d3fb6e570442f4c5915f02bafd78a7dabf81c1eec5c26b3b9e,PodSandboxId:507a32da5aa23a6fdf8198b07f8c9b4ab3888e4f3d2357ad5f0c22447d314e48,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1722928093784437366,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-dghj8,io.kubernetes.pod.namespace: kube-system,i
o.kubernetes.pod.uid: a73b51d1-3f19-4608-b37f-2d92f56fec02,},Annotations:map[string]string{io.kubernetes.container.hash: 8b558cb9,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7a623480af4afbb181b874f946e3af94aca0c310fbb8b0a43223db5c3f102b1,PodSandboxId:af6ab42fed69ba47b2ce7c9a00e8098e134066605db39b09e0dcd412fe7e097c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722928069099504187,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,
io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9631af58-26e8-4f64-a332-ab89aa6934ff,},Annotations:map[string]string{io.kubernetes.container.hash: 361e54aa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:027ea1c110b7f5dfde7f8887ce55fc8c76b57576dd8e0b413a859d725bc0f45e,PodSandboxId:de0ded25e176ea853db6796ed89e6315e0859501717484ecef9c0c0456cd09e4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722928065518161478,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6
d8ff4d-k5d8d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9205d3f6-ae5e-473a-8760-1e9bb9cff77d,},Annotations:map[string]string{io.kubernetes.container.hash: 68374ea1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5080e5fce2c6cf845092e92aebf13550fc5412567f7c21694410eaebc73d84d,PodSandboxId:56f279ca84ed0636221eb9b70b3b986c6ba6f7e791473eedc394ac4175e1fbf5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381
d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722928063132630344,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-t7djg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbaeea31-08de-4723-9ec5-f51d376650b5,},Annotations:map[string]string{io.kubernetes.container.hash: 45c04c54,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e84eb866dc0b3dd79f91399a5714b5b3bbb8eb0ac746133af657c88755e4b88,PodSandboxId:ed75660e1f6f9eb40d6a1e0d06d5f737a0f9cd77ad3556ba4b43f73b61849fc5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e
5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722928042832090751,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-505457,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d3a4c70f5f1a98e373ce2b1bd632569,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd494e38d6a560269121e4eb69c609c6db24bea9862163ad00e525dc7e486a9a,PodSandboxId:16986c599fc34a7842ad40cbb800162bc70784c708c03a68dc81766b286122a8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTA
INER_RUNNING,CreatedAt:1722928042786306734,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-505457,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9445896d2c8822cafd25b745df86bb0b,},Annotations:map[string]string{io.kubernetes.container.hash: d5831ede,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe12a18afe3dccc7a77e9b91dea2873a822a39628e809273739532bd630c0459,PodSandboxId:2648f6a872da28fbc81c2bea3a82c35080febf469813983949da621f3e8f3f8b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:17229
28042760368393,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-505457,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 503a3788790fb79ef323dff2c5ec6ce1,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04bc771142517172d3d6312f52f17040d2235a82f36f5e627dea6620331173a9,PodSandboxId:c44c5af015412e22c8b5050d0749c16c27744783d64dd9df518b059fc6b699d3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722
928042755640737,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-505457,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba2495c51f84135afe3ac6ebc47db99e,},Annotations:map[string]string{io.kubernetes.container.hash: 974ca483,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f78d50d3-28a5-43e6-a3a8-55cb04400eeb name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 07:14:30 addons-505457 crio[684]: time="2024-08-06 07:14:30.399057079Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=68cdc7bf-c023-4c6b-b672-2a10fb6dc3a6 name=/runtime.v1.RuntimeService/Version
	Aug 06 07:14:30 addons-505457 crio[684]: time="2024-08-06 07:14:30.399145255Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=68cdc7bf-c023-4c6b-b672-2a10fb6dc3a6 name=/runtime.v1.RuntimeService/Version
	Aug 06 07:14:30 addons-505457 crio[684]: time="2024-08-06 07:14:30.400482947Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d597c281-cbfb-450d-9e6e-f7a447df8773 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 06 07:14:30 addons-505457 crio[684]: time="2024-08-06 07:14:30.401831829Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722928470401791797,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:589584,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d597c281-cbfb-450d-9e6e-f7a447df8773 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 06 07:14:30 addons-505457 crio[684]: time="2024-08-06 07:14:30.402362877Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a7e42976-fb1e-40ee-ab4d-23ea107dc28e name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 07:14:30 addons-505457 crio[684]: time="2024-08-06 07:14:30.402431261Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a7e42976-fb1e-40ee-ab4d-23ea107dc28e name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 07:14:30 addons-505457 crio[684]: time="2024-08-06 07:14:30.402712421Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1bb169456345b80326bbe03ea5deb2194034ea3b0e9ae73d767584a86175ef8d,PodSandboxId:0ba76e457d98d104119a038a71a5ebea9e3a0887dc0d8436ac016292a3ce1b32,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1722928333818262584,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-xg2dt,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: da8616b2-cdae-4436-8bb8-2fd558ba9b6c,},Annotations:map[string]string{io.kubernetes.container.hash: fc3dd633,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5716fd9d0a7957f1d3d821ea36f879dd1040843efd37b7e07f2c5271a19e2c43,PodSandboxId:48323e9942a07b1b4d41f2c930f7d80f4b5d5b58862bb529b4b84889c3a8118e,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1722928193681598452,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7fced162-c099-4abd-a6ab-20626a136899,},Annotations:map[string]string{io.kubernet
es.container.hash: acaffa5a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9776dddcced3ea27152001b269947202e4192c03c7b1afc5a43e5848f4749ef7,PodSandboxId:6e6b579adaf6bae36feaba849d1dfd126bebce03ab89556635405bbf8ceb54d0,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722928156293700671,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ae3e743b-c04d-4206-a
40c-70c96ca55dea,},Annotations:map[string]string{io.kubernetes.container.hash: d096c7f0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e9ca59c5d5d44d3fb6e570442f4c5915f02bafd78a7dabf81c1eec5c26b3b9e,PodSandboxId:507a32da5aa23a6fdf8198b07f8c9b4ab3888e4f3d2357ad5f0c22447d314e48,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1722928093784437366,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-dghj8,io.kubernetes.pod.namespace: kube-system,i
o.kubernetes.pod.uid: a73b51d1-3f19-4608-b37f-2d92f56fec02,},Annotations:map[string]string{io.kubernetes.container.hash: 8b558cb9,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7a623480af4afbb181b874f946e3af94aca0c310fbb8b0a43223db5c3f102b1,PodSandboxId:af6ab42fed69ba47b2ce7c9a00e8098e134066605db39b09e0dcd412fe7e097c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722928069099504187,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,
io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9631af58-26e8-4f64-a332-ab89aa6934ff,},Annotations:map[string]string{io.kubernetes.container.hash: 361e54aa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:027ea1c110b7f5dfde7f8887ce55fc8c76b57576dd8e0b413a859d725bc0f45e,PodSandboxId:de0ded25e176ea853db6796ed89e6315e0859501717484ecef9c0c0456cd09e4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722928065518161478,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6
d8ff4d-k5d8d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9205d3f6-ae5e-473a-8760-1e9bb9cff77d,},Annotations:map[string]string{io.kubernetes.container.hash: 68374ea1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5080e5fce2c6cf845092e92aebf13550fc5412567f7c21694410eaebc73d84d,PodSandboxId:56f279ca84ed0636221eb9b70b3b986c6ba6f7e791473eedc394ac4175e1fbf5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381
d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722928063132630344,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-t7djg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbaeea31-08de-4723-9ec5-f51d376650b5,},Annotations:map[string]string{io.kubernetes.container.hash: 45c04c54,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e84eb866dc0b3dd79f91399a5714b5b3bbb8eb0ac746133af657c88755e4b88,PodSandboxId:ed75660e1f6f9eb40d6a1e0d06d5f737a0f9cd77ad3556ba4b43f73b61849fc5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e
5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722928042832090751,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-505457,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d3a4c70f5f1a98e373ce2b1bd632569,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd494e38d6a560269121e4eb69c609c6db24bea9862163ad00e525dc7e486a9a,PodSandboxId:16986c599fc34a7842ad40cbb800162bc70784c708c03a68dc81766b286122a8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTA
INER_RUNNING,CreatedAt:1722928042786306734,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-505457,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9445896d2c8822cafd25b745df86bb0b,},Annotations:map[string]string{io.kubernetes.container.hash: d5831ede,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe12a18afe3dccc7a77e9b91dea2873a822a39628e809273739532bd630c0459,PodSandboxId:2648f6a872da28fbc81c2bea3a82c35080febf469813983949da621f3e8f3f8b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:17229
28042760368393,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-505457,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 503a3788790fb79ef323dff2c5ec6ce1,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04bc771142517172d3d6312f52f17040d2235a82f36f5e627dea6620331173a9,PodSandboxId:c44c5af015412e22c8b5050d0749c16c27744783d64dd9df518b059fc6b699d3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722
928042755640737,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-505457,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba2495c51f84135afe3ac6ebc47db99e,},Annotations:map[string]string{io.kubernetes.container.hash: 974ca483,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a7e42976-fb1e-40ee-ab4d-23ea107dc28e name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	1bb169456345b       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                   2 minutes ago       Running             hello-world-app           0                   0ba76e457d98d       hello-world-app-6778b5fc9f-xg2dt
	5716fd9d0a795       docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9                         4 minutes ago       Running             nginx                     0                   48323e9942a07       nginx
	9776dddcced3e       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                     5 minutes ago       Running             busybox                   0                   6e6b579adaf6b       busybox
	5e9ca59c5d5d4       registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872   6 minutes ago       Running             metrics-server            0                   507a32da5aa23       metrics-server-c59844bb4-dghj8
	e7a623480af4a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                        6 minutes ago       Running             storage-provisioner       0                   af6ab42fed69b       storage-provisioner
	027ea1c110b7f       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                        6 minutes ago       Running             coredns                   0                   de0ded25e176e       coredns-7db6d8ff4d-k5d8d
	a5080e5fce2c6       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                                        6 minutes ago       Running             kube-proxy                0                   56f279ca84ed0       kube-proxy-t7djg
	9e84eb866dc0b       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                                        7 minutes ago       Running             kube-scheduler            0                   ed75660e1f6f9       kube-scheduler-addons-505457
	fd494e38d6a56       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                                        7 minutes ago       Running             etcd                      0                   16986c599fc34       etcd-addons-505457
	fe12a18afe3dc       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                                        7 minutes ago       Running             kube-controller-manager   0                   2648f6a872da2       kube-controller-manager-addons-505457
	04bc771142517       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                                        7 minutes ago       Running             kube-apiserver            0                   c44c5af015412       kube-apiserver-addons-505457
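	
	The container status table above is the same data behind the Version, ImageFsInfo and ListContainers request/response cycles in the cri-o debug log, which are the runtime side of the kubelet's periodic polling; crictl issues the same RPCs, so the view can be reproduced by hand. A minimal sketch, assuming the addons-505457 profile from this run is still up and following the binary path and invocation style used elsewhere in this report:
	
	  $ out/minikube-linux-amd64 -p addons-505457 ssh "sudo crictl ps -a"
	  $ out/minikube-linux-amd64 -p addons-505457 ssh "sudo crictl inspect 1bb169456345b"   # detail for the hello-world-app container above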
	
	
	==> coredns [027ea1c110b7f5dfde7f8887ce55fc8c76b57576dd8e0b413a859d725bc0f45e] <==
	[INFO] 10.244.0.6:45223 - 17083 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00009545s
	[INFO] 10.244.0.6:55889 - 15484 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000061371s
	[INFO] 10.244.0.6:55889 - 34938 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000137638s
	[INFO] 10.244.0.6:49821 - 6750 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000126139s
	[INFO] 10.244.0.6:49821 - 10332 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000121551s
	[INFO] 10.244.0.6:38435 - 52788 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000088756s
	[INFO] 10.244.0.6:38435 - 19285 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000094909s
	[INFO] 10.244.0.6:37065 - 53195 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000081967s
	[INFO] 10.244.0.6:37065 - 23496 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000177732s
	[INFO] 10.244.0.6:33496 - 49199 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000042554s
	[INFO] 10.244.0.6:33496 - 39725 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000094066s
	[INFO] 10.244.0.6:57368 - 27338 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000084884s
	[INFO] 10.244.0.6:57368 - 16584 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000072126s
	[INFO] 10.244.0.6:59234 - 18212 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000078351s
	[INFO] 10.244.0.6:59234 - 56613 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000129033s
	[INFO] 10.244.0.22:42783 - 24608 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000394637s
	[INFO] 10.244.0.22:55478 - 3385 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000090641s
	[INFO] 10.244.0.22:34909 - 6297 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000135472s
	[INFO] 10.244.0.22:45266 - 44235 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00011482s
	[INFO] 10.244.0.22:37960 - 23912 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000090019s
	[INFO] 10.244.0.22:54657 - 9934 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000131092s
	[INFO] 10.244.0.22:48901 - 42081 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 496 0.000715231s
	[INFO] 10.244.0.22:59032 - 41980 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000554558s
	[INFO] 10.244.0.24:38626 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000293895s
	[INFO] 10.244.0.24:58900 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000173161s
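	
	The NXDOMAIN/NOERROR pairs above are ordinary search-path expansion, not failures: with the default pod resolv.conf (ndots:5 plus the <namespace>.svc.cluster.local, svc.cluster.local and cluster.local search domains), CoreDNS first sees the name with each search suffix appended and answers NXDOMAIN, then answers NOERROR for the bare service FQDN. A quick way to observe the same behaviour from inside the cluster, assuming the busybox pod from this run still exists:
	
	  $ kubectl --context addons-505457 exec busybox -- cat /etc/resolv.conf
	  $ kubectl --context addons-505457 exec busybox -- nslookup registry.kube-system.svc.cluster.local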
	
	
	==> describe nodes <==
	Name:               addons-505457
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-505457
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e92cb06692f5ea1ba801d10d148e5e92e807f9c8
	                    minikube.k8s.io/name=addons-505457
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_06T07_07_29_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-505457
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 06 Aug 2024 07:07:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-505457
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 06 Aug 2024 07:14:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 06 Aug 2024 07:12:35 +0000   Tue, 06 Aug 2024 07:07:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 06 Aug 2024 07:12:35 +0000   Tue, 06 Aug 2024 07:07:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 06 Aug 2024 07:12:35 +0000   Tue, 06 Aug 2024 07:07:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 06 Aug 2024 07:12:35 +0000   Tue, 06 Aug 2024 07:07:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.6
	  Hostname:    addons-505457
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 eebd60decccf4374b4c2eb3bc40a4c03
	  System UUID:                eebd60de-cccf-4374-b4c2-eb3bc40a4c03
	  Boot ID:                    2b63510c-2480-4f43-b2ca-32ce6bc2e06f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m17s
	  default                     hello-world-app-6778b5fc9f-xg2dt         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m20s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m41s
	  kube-system                 coredns-7db6d8ff4d-k5d8d                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     6m49s
	  kube-system                 etcd-addons-505457                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         7m2s
	  kube-system                 kube-apiserver-addons-505457             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m2s
	  kube-system                 kube-controller-manager-addons-505457    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m2s
	  kube-system                 kube-proxy-t7djg                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m49s
	  kube-system                 kube-scheduler-addons-505457             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m2s
	  kube-system                 metrics-server-c59844bb4-dghj8           100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         6m43s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m44s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             370Mi (9%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 6m46s                kube-proxy       
	  Normal  NodeHasSufficientMemory  7m8s (x8 over 7m8s)  kubelet          Node addons-505457 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m8s (x8 over 7m8s)  kubelet          Node addons-505457 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m8s (x7 over 7m8s)  kubelet          Node addons-505457 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m8s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 7m2s                 kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m2s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m2s                 kubelet          Node addons-505457 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m2s                 kubelet          Node addons-505457 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m2s                 kubelet          Node addons-505457 status is now: NodeHasSufficientPID
	  Normal  NodeReady                7m1s                 kubelet          Node addons-505457 status is now: NodeReady
	  Normal  RegisteredNode           6m50s                node-controller  Node addons-505457 event: Registered Node addons-505457 in Controller
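	
	Everything in this node description is available live from the API server; the allocated-resources summary (850m of 2 CPUs and 370Mi of ~3.8Gi memory requested) is the usual first check when an add-on pod stays Pending on this single-node setup. A sketch for pulling the same view, assuming the cluster from this run is still reachable under the addons-505457 context:
	
	  $ kubectl --context addons-505457 describe node addons-505457
	  $ kubectl --context addons-505457 top node    # live usage; needs the metrics-server shown Running above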
	
	
	==> dmesg <==
	[  +8.257190] kauditd_printk_skb: 56 callbacks suppressed
	[Aug 6 07:08] kauditd_printk_skb: 2 callbacks suppressed
	[ +17.615453] kauditd_printk_skb: 25 callbacks suppressed
	[ +12.359288] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.589367] kauditd_printk_skb: 20 callbacks suppressed
	[  +5.156882] kauditd_printk_skb: 61 callbacks suppressed
	[Aug 6 07:09] kauditd_printk_skb: 48 callbacks suppressed
	[  +5.680626] kauditd_printk_skb: 9 callbacks suppressed
	[  +7.007253] kauditd_printk_skb: 18 callbacks suppressed
	[ +10.275422] kauditd_printk_skb: 31 callbacks suppressed
	[ +11.779606] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.010598] kauditd_printk_skb: 3 callbacks suppressed
	[  +5.092813] kauditd_printk_skb: 46 callbacks suppressed
	[  +5.206837] kauditd_printk_skb: 29 callbacks suppressed
	[  +5.131887] kauditd_printk_skb: 9 callbacks suppressed
	[Aug 6 07:10] kauditd_printk_skb: 15 callbacks suppressed
	[  +6.081335] kauditd_printk_skb: 20 callbacks suppressed
	[ +14.428360] kauditd_printk_skb: 2 callbacks suppressed
	[ +13.727466] kauditd_printk_skb: 15 callbacks suppressed
	[ +14.216837] kauditd_printk_skb: 33 callbacks suppressed
	[  +5.172191] kauditd_printk_skb: 12 callbacks suppressed
	[Aug 6 07:11] kauditd_printk_skb: 18 callbacks suppressed
	[  +5.262435] kauditd_printk_skb: 16 callbacks suppressed
	[Aug 6 07:12] kauditd_printk_skb: 20 callbacks suppressed
	[  +5.027138] kauditd_printk_skb: 19 callbacks suppressed
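	
	The dmesg excerpt is dominated by "kauditd_printk_skb: N callbacks suppressed" lines, which only indicate that the kernel rate-limited printing of audit records to the console while CRI-O and the test add-ons created many containers; it is noise rather than a failure signal. To look at the surrounding kernel messages directly, assuming the profile is still up:
	
	  $ out/minikube-linux-amd64 -p addons-505457 ssh "sudo dmesg | tail -n 50"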
	
	
	==> etcd [fd494e38d6a560269121e4eb69c609c6db24bea9862163ad00e525dc7e486a9a] <==
	{"level":"info","ts":"2024-08-06T07:08:29.195209Z","caller":"traceutil/trace.go:171","msg":"trace[118727515] transaction","detail":"{read_only:false; response_revision:971; number_of_response:1; }","duration":"106.593798ms","start":"2024-08-06T07:08:29.08859Z","end":"2024-08-06T07:08:29.195184Z","steps":["trace[118727515] 'process raft request'  (duration: 106.491635ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-06T07:08:30.325106Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"380.921377ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:11155"}
	{"level":"info","ts":"2024-08-06T07:08:30.325257Z","caller":"traceutil/trace.go:171","msg":"trace[212798188] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:977; }","duration":"381.029138ms","start":"2024-08-06T07:08:29.944129Z","end":"2024-08-06T07:08:30.325159Z","steps":["trace[212798188] 'range keys from in-memory index tree'  (duration: 380.802911ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-06T07:08:30.325292Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-06T07:08:29.944113Z","time spent":"381.170472ms","remote":"127.0.0.1:46178","response type":"/etcdserverpb.KV/Range","request count":0,"request size":52,"response count":3,"response size":11177,"request content":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" "}
	{"level":"warn","ts":"2024-08-06T07:08:30.325485Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"362.5004ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:14065"}
	{"level":"info","ts":"2024-08-06T07:08:30.325522Z","caller":"traceutil/trace.go:171","msg":"trace[520414956] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:977; }","duration":"362.568443ms","start":"2024-08-06T07:08:29.962947Z","end":"2024-08-06T07:08:30.325515Z","steps":["trace[520414956] 'range keys from in-memory index tree'  (duration: 362.39537ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-06T07:08:30.32554Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-06T07:08:29.962924Z","time spent":"362.61104ms","remote":"127.0.0.1:46178","response type":"/etcdserverpb.KV/Range","request count":0,"request size":62,"response count":3,"response size":14087,"request content":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" "}
	{"level":"info","ts":"2024-08-06T07:08:51.177387Z","caller":"traceutil/trace.go:171","msg":"trace[468014521] transaction","detail":"{read_only:false; response_revision:1069; number_of_response:1; }","duration":"573.06147ms","start":"2024-08-06T07:08:50.604256Z","end":"2024-08-06T07:08:51.177318Z","steps":["trace[468014521] 'process raft request'  (duration: 572.94634ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-06T07:08:51.177605Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-06T07:08:50.604238Z","time spent":"573.296944ms","remote":"127.0.0.1:46232","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4349,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/jobs/ingress-nginx/ingress-nginx-admission-create\" mod_revision:739 > success:<request_put:<key:\"/registry/jobs/ingress-nginx/ingress-nginx-admission-create\" value_size:4282 >> failure:<request_range:<key:\"/registry/jobs/ingress-nginx/ingress-nginx-admission-create\" > >"}
	{"level":"warn","ts":"2024-08-06T07:08:51.178362Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"380.854709ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:85465"}
	{"level":"info","ts":"2024-08-06T07:08:51.178496Z","caller":"traceutil/trace.go:171","msg":"trace[1138503758] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1069; }","duration":"381.013693ms","start":"2024-08-06T07:08:50.797473Z","end":"2024-08-06T07:08:51.178487Z","steps":["trace[1138503758] 'agreement among raft nodes before linearized reading'  (duration: 380.707204ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-06T07:08:51.178578Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-06T07:08:50.797459Z","time spent":"381.075348ms","remote":"127.0.0.1:46178","response type":"/etcdserverpb.KV/Range","request count":0,"request size":58,"response count":18,"response size":85487,"request content":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" "}
	{"level":"info","ts":"2024-08-06T07:08:51.178123Z","caller":"traceutil/trace.go:171","msg":"trace[511861839] linearizableReadLoop","detail":"{readStateIndex:1103; appliedIndex:1103; }","duration":"380.613012ms","start":"2024-08-06T07:08:50.797498Z","end":"2024-08-06T07:08:51.178111Z","steps":["trace[511861839] 'read index received'  (duration: 380.58321ms)","trace[511861839] 'applied index is now lower than readState.Index'  (duration: 28.906µs)"],"step_count":2}
	{"level":"warn","ts":"2024-08-06T07:08:51.179306Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"218.447366ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:14552"}
	{"level":"info","ts":"2024-08-06T07:08:51.188903Z","caller":"traceutil/trace.go:171","msg":"trace[1328550162] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1069; }","duration":"228.038199ms","start":"2024-08-06T07:08:50.960851Z","end":"2024-08-06T07:08:51.188889Z","steps":["trace[1328550162] 'agreement among raft nodes before linearized reading'  (duration: 218.3629ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-06T07:08:51.189199Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"236.613628ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:11155"}
	{"level":"info","ts":"2024-08-06T07:08:51.18923Z","caller":"traceutil/trace.go:171","msg":"trace[837689716] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1069; }","duration":"246.789901ms","start":"2024-08-06T07:08:50.942431Z","end":"2024-08-06T07:08:51.189221Z","steps":["trace[837689716] 'agreement among raft nodes before linearized reading'  (duration: 236.613673ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-06T07:08:58.32804Z","caller":"traceutil/trace.go:171","msg":"trace[401382471] transaction","detail":"{read_only:false; response_revision:1139; number_of_response:1; }","duration":"301.976614ms","start":"2024-08-06T07:08:58.026047Z","end":"2024-08-06T07:08:58.328024Z","steps":["trace[401382471] 'process raft request'  (duration: 301.872041ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-06T07:08:58.328144Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-06T07:08:58.026027Z","time spent":"302.056122ms","remote":"127.0.0.1:46160","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1122 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2024-08-06T07:09:00.790383Z","caller":"traceutil/trace.go:171","msg":"trace[1821302382] transaction","detail":"{read_only:false; response_revision:1145; number_of_response:1; }","duration":"271.24571ms","start":"2024-08-06T07:09:00.519109Z","end":"2024-08-06T07:09:00.790354Z","steps":["trace[1821302382] 'process raft request'  (duration: 271.151944ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-06T07:09:08.404606Z","caller":"traceutil/trace.go:171","msg":"trace[755152330] linearizableReadLoop","detail":"{readStateIndex:1212; appliedIndex:1211; }","duration":"107.344571ms","start":"2024-08-06T07:09:08.297203Z","end":"2024-08-06T07:09:08.404547Z","steps":["trace[755152330] 'read index received'  (duration: 107.185396ms)","trace[755152330] 'applied index is now lower than readState.Index'  (duration: 158.764µs)"],"step_count":2}
	{"level":"info","ts":"2024-08-06T07:09:08.404816Z","caller":"traceutil/trace.go:171","msg":"trace[1294783742] transaction","detail":"{read_only:false; response_revision:1174; number_of_response:1; }","duration":"277.002001ms","start":"2024-08-06T07:09:08.127804Z","end":"2024-08-06T07:09:08.404806Z","steps":["trace[1294783742] 'process raft request'  (duration: 276.625015ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-06T07:09:08.405203Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"107.95536ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:85516"}
	{"level":"info","ts":"2024-08-06T07:09:08.406363Z","caller":"traceutil/trace.go:171","msg":"trace[2103059547] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1174; }","duration":"109.171725ms","start":"2024-08-06T07:09:08.297177Z","end":"2024-08-06T07:09:08.406348Z","steps":["trace[2103059547] 'agreement among raft nodes before linearized reading'  (duration: 107.817011ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-06T07:09:24.182231Z","caller":"traceutil/trace.go:171","msg":"trace[821864348] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1262; }","duration":"208.278475ms","start":"2024-08-06T07:09:23.973934Z","end":"2024-08-06T07:09:24.182212Z","steps":["trace[821864348] 'process raft request'  (duration: 208.123671ms)"],"step_count":1}
	
	
	==> kernel <==
	 07:14:30 up 7 min,  0 users,  load average: 0.15, 0.66, 0.47
	Linux addons-505457 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [04bc771142517172d3d6312f52f17040d2235a82f36f5e627dea6620331173a9] <==
	E0806 07:09:18.531704       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.97.57.22:443/apis/metrics.k8s.io/v1beta1: Get "https://10.97.57.22:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.97.57.22:443: connect: connection refused
	I0806 07:09:18.602719       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0806 07:09:23.328501       1 conn.go:339] Error on socket receive: read tcp 192.168.39.6:8443->192.168.39.1:59332: use of closed network connection
	E0806 07:09:23.529077       1 conn.go:339] Error on socket receive: read tcp 192.168.39.6:8443->192.168.39.1:59356: use of closed network connection
	I0806 07:09:48.917153       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0806 07:09:49.117619       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.105.159.40"}
	I0806 07:09:50.301277       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0806 07:09:51.340281       1 cacher.go:168] Terminating all watchers from cacher traces.gadget.kinvolk.io
	E0806 07:10:25.257108       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0806 07:10:33.241010       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0806 07:10:54.000176       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0806 07:10:54.000495       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0806 07:10:54.017557       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0806 07:10:54.018072       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0806 07:10:54.038520       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0806 07:10:54.038576       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0806 07:10:54.051181       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0806 07:10:54.051243       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0806 07:10:54.070226       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0806 07:10:54.070289       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0806 07:10:55.038992       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0806 07:10:55.070631       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0806 07:10:55.105427       1 cacher.go:168] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0806 07:11:00.751837       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.105.159.121"}
	I0806 07:12:10.880148       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.107.227.173"}
	
	
	==> kube-controller-manager [fe12a18afe3dccc7a77e9b91dea2873a822a39628e809273739532bd630c0459] <==
	E0806 07:12:21.417969       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0806 07:12:21.585956       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0806 07:12:21.586003       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0806 07:12:22.912166       1 namespace_controller.go:182] "Namespace has been deleted" logger="namespace-controller" namespace="ingress-nginx"
	W0806 07:12:52.396880       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0806 07:12:52.397095       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0806 07:12:55.241385       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0806 07:12:55.241447       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0806 07:13:04.109268       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0806 07:13:04.109380       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0806 07:13:10.203847       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0806 07:13:10.203904       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0806 07:13:36.394866       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0806 07:13:36.394942       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0806 07:13:41.715662       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0806 07:13:41.715713       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0806 07:13:42.254945       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0806 07:13:42.255032       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0806 07:13:44.265569       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0806 07:13:44.265678       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0806 07:14:10.852036       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0806 07:14:10.852087       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0806 07:14:24.521098       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0806 07:14:24.521256       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0806 07:14:29.389903       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-c59844bb4" duration="18.553µs"
	
	
	==> kube-proxy [a5080e5fce2c6cf845092e92aebf13550fc5412567f7c21694410eaebc73d84d] <==
	I0806 07:07:44.052551       1 server_linux.go:69] "Using iptables proxy"
	I0806 07:07:44.072152       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.6"]
	I0806 07:07:44.163100       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0806 07:07:44.163152       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0806 07:07:44.163169       1 server_linux.go:165] "Using iptables Proxier"
	I0806 07:07:44.165614       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0806 07:07:44.165912       1 server.go:872] "Version info" version="v1.30.3"
	I0806 07:07:44.165940       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0806 07:07:44.167042       1 config.go:192] "Starting service config controller"
	I0806 07:07:44.167074       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0806 07:07:44.167098       1 config.go:101] "Starting endpoint slice config controller"
	I0806 07:07:44.167101       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0806 07:07:44.167515       1 config.go:319] "Starting node config controller"
	I0806 07:07:44.167545       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0806 07:07:44.267923       1 shared_informer.go:320] Caches are synced for node config
	I0806 07:07:44.267967       1 shared_informer.go:320] Caches are synced for service config
	I0806 07:07:44.267987       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [9e84eb866dc0b3dd79f91399a5714b5b3bbb8eb0ac746133af657c88755e4b88] <==
	W0806 07:07:25.506413       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0806 07:07:25.506450       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0806 07:07:25.507256       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0806 07:07:25.507295       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0806 07:07:26.502861       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0806 07:07:26.502909       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0806 07:07:26.507117       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0806 07:07:26.507193       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0806 07:07:26.584806       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0806 07:07:26.584958       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0806 07:07:26.633651       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0806 07:07:26.633725       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0806 07:07:26.640480       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0806 07:07:26.640560       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0806 07:07:26.656068       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0806 07:07:26.656155       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0806 07:07:26.664937       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0806 07:07:26.664981       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0806 07:07:26.666702       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0806 07:07:26.666799       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0806 07:07:26.692636       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0806 07:07:26.692855       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0806 07:07:26.737250       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0806 07:07:26.737436       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0806 07:07:29.801936       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 06 07:12:16 addons-505457 kubelet[1282]: I0806 07:12:16.617042    1282 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="251be54e-83c5-40a7-8cbe-d9111a8a8db4" path="/var/lib/kubelet/pods/251be54e-83c5-40a7-8cbe-d9111a8a8db4/volumes"
	Aug 06 07:12:28 addons-505457 kubelet[1282]: E0806 07:12:28.657511    1282 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 06 07:12:28 addons-505457 kubelet[1282]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 06 07:12:28 addons-505457 kubelet[1282]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 06 07:12:28 addons-505457 kubelet[1282]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 06 07:12:28 addons-505457 kubelet[1282]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 06 07:12:31 addons-505457 kubelet[1282]: I0806 07:12:31.401270    1282 scope.go:117] "RemoveContainer" containerID="dc63ed54019f6622542a4f78554ac1d61e333d9425e6933f71750e404ae4c498"
	Aug 06 07:12:31 addons-505457 kubelet[1282]: I0806 07:12:31.425428    1282 scope.go:117] "RemoveContainer" containerID="a60360e8ba07430103531158c7a4c8b2a18bee0bffe23935876e12eeea3daf54"
	Aug 06 07:13:11 addons-505457 kubelet[1282]: I0806 07:13:11.614290    1282 kubelet_pods.go:988] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Aug 06 07:13:28 addons-505457 kubelet[1282]: E0806 07:13:28.658099    1282 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 06 07:13:28 addons-505457 kubelet[1282]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 06 07:13:28 addons-505457 kubelet[1282]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 06 07:13:28 addons-505457 kubelet[1282]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 06 07:13:28 addons-505457 kubelet[1282]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 06 07:14:28 addons-505457 kubelet[1282]: E0806 07:14:28.656879    1282 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 06 07:14:28 addons-505457 kubelet[1282]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 06 07:14:28 addons-505457 kubelet[1282]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 06 07:14:28 addons-505457 kubelet[1282]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 06 07:14:28 addons-505457 kubelet[1282]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 06 07:14:30 addons-505457 kubelet[1282]: I0806 07:14:30.890533    1282 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/a73b51d1-3f19-4608-b37f-2d92f56fec02-tmp-dir\") pod \"a73b51d1-3f19-4608-b37f-2d92f56fec02\" (UID: \"a73b51d1-3f19-4608-b37f-2d92f56fec02\") "
	Aug 06 07:14:30 addons-505457 kubelet[1282]: I0806 07:14:30.890577    1282 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xp7w\" (UniqueName: \"kubernetes.io/projected/a73b51d1-3f19-4608-b37f-2d92f56fec02-kube-api-access-9xp7w\") pod \"a73b51d1-3f19-4608-b37f-2d92f56fec02\" (UID: \"a73b51d1-3f19-4608-b37f-2d92f56fec02\") "
	Aug 06 07:14:30 addons-505457 kubelet[1282]: I0806 07:14:30.891067    1282 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a73b51d1-3f19-4608-b37f-2d92f56fec02-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "a73b51d1-3f19-4608-b37f-2d92f56fec02" (UID: "a73b51d1-3f19-4608-b37f-2d92f56fec02"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
	Aug 06 07:14:30 addons-505457 kubelet[1282]: I0806 07:14:30.893122    1282 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a73b51d1-3f19-4608-b37f-2d92f56fec02-kube-api-access-9xp7w" (OuterVolumeSpecName: "kube-api-access-9xp7w") pod "a73b51d1-3f19-4608-b37f-2d92f56fec02" (UID: "a73b51d1-3f19-4608-b37f-2d92f56fec02"). InnerVolumeSpecName "kube-api-access-9xp7w". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 06 07:14:30 addons-505457 kubelet[1282]: I0806 07:14:30.990978    1282 reconciler_common.go:289] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/a73b51d1-3f19-4608-b37f-2d92f56fec02-tmp-dir\") on node \"addons-505457\" DevicePath \"\""
	Aug 06 07:14:30 addons-505457 kubelet[1282]: I0806 07:14:30.991042    1282 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-9xp7w\" (UniqueName: \"kubernetes.io/projected/a73b51d1-3f19-4608-b37f-2d92f56fec02-kube-api-access-9xp7w\") on node \"addons-505457\" DevicePath \"\""
	
	
	==> storage-provisioner [e7a623480af4afbb181b874f946e3af94aca0c310fbb8b0a43223db5c3f102b1] <==
	I0806 07:07:50.577913       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0806 07:07:50.669439       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0806 07:07:50.669619       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0806 07:07:50.812822       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0806 07:07:50.830620       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6a5fb6f7-352f-4ca7-a941-e98024868ec4", APIVersion:"v1", ResourceVersion:"776", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-505457_9eebe1de-93a4-4fff-8399-d241320c1a27 became leader
	I0806 07:07:50.831054       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-505457_9eebe1de-93a4-4fff-8399-d241320c1a27!
	I0806 07:07:51.041058       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-505457_9eebe1de-93a4-4fff-8399-d241320c1a27!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-505457 -n addons-505457
helpers_test.go:261: (dbg) Run:  kubectl --context addons-505457 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-c59844bb4-dghj8
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/MetricsServer]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-505457 describe pod metrics-server-c59844bb4-dghj8
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-505457 describe pod metrics-server-c59844bb4-dghj8: exit status 1 (63.392773ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-c59844bb4-dghj8" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-505457 describe pod metrics-server-c59844bb4-dghj8: exit status 1
--- FAIL: TestAddons/parallel/MetricsServer (299.68s)

x
+
TestAddons/StoppedEnableDisable (154.41s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-505457
addons_test.go:174: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p addons-505457: exit status 82 (2m0.487811695s)

-- stdout --
	* Stopping node "addons-505457"  ...
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
addons_test.go:176: failed to stop minikube. args "out/minikube-linux-amd64 stop -p addons-505457" : exit status 82
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-505457
addons_test.go:178: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-505457: exit status 11 (21.632559325s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.6:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
addons_test.go:180: failed to enable dashboard addon: args "out/minikube-linux-amd64 addons enable dashboard -p addons-505457" : exit status 11
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-505457
addons_test.go:182: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-505457: exit status 11 (6.143395366s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.6:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_7b2045b3edf32de99b3c34afdc43bfaabe8aa3c2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
addons_test.go:184: failed to disable dashboard addon: args "out/minikube-linux-amd64 addons disable dashboard -p addons-505457" : exit status 11
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-505457
addons_test.go:187: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable gvisor -p addons-505457: exit status 11 (6.143147s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.6:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_8dd43b2cee45a94e37dbac1dd983966d1c97e7d4_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
addons_test.go:189: failed to disable non-enabled addon: args "out/minikube-linux-amd64 addons disable gvisor -p addons-505457" : exit status 11
--- FAIL: TestAddons/StoppedEnableDisable (154.41s)

x
+
TestMultiControlPlane/serial/StopSecondaryNode (141.86s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-118173 node stop m02 -v=7 --alsologtostderr
E0806 07:29:05.668753   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/functional-811691/client.crt: no such file or directory
E0806 07:29:13.203789   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/addons-505457/client.crt: no such file or directory
ha_test.go:363: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-118173 node stop m02 -v=7 --alsologtostderr: exit status 30 (2m0.465533858s)

-- stdout --
	* Stopping node "ha-118173-m02"  ...

-- /stdout --
** stderr ** 
	I0806 07:28:17.886497   36096 out.go:291] Setting OutFile to fd 1 ...
	I0806 07:28:17.886758   36096 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 07:28:17.886768   36096 out.go:304] Setting ErrFile to fd 2...
	I0806 07:28:17.886772   36096 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 07:28:17.886960   36096 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19370-13057/.minikube/bin
	I0806 07:28:17.887208   36096 mustload.go:65] Loading cluster: ha-118173
	I0806 07:28:17.887540   36096 config.go:182] Loaded profile config "ha-118173": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0806 07:28:17.887557   36096 stop.go:39] StopHost: ha-118173-m02
	I0806 07:28:17.887912   36096 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:28:17.887957   36096 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:28:17.903712   36096 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34223
	I0806 07:28:17.904158   36096 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:28:17.904727   36096 main.go:141] libmachine: Using API Version  1
	I0806 07:28:17.904752   36096 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:28:17.905123   36096 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:28:17.907556   36096 out.go:177] * Stopping node "ha-118173-m02"  ...
	I0806 07:28:17.909001   36096 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0806 07:28:17.909038   36096 main.go:141] libmachine: (ha-118173-m02) Calling .DriverName
	I0806 07:28:17.909249   36096 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0806 07:28:17.909275   36096 main.go:141] libmachine: (ha-118173-m02) Calling .GetSSHHostname
	I0806 07:28:17.911978   36096 main.go:141] libmachine: (ha-118173-m02) DBG | domain ha-118173-m02 has defined MAC address 52:54:00:b4:e9:38 in network mk-ha-118173
	I0806 07:28:17.912431   36096 main.go:141] libmachine: (ha-118173-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:e9:38", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:23:29 +0000 UTC Type:0 Mac:52:54:00:b4:e9:38 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-118173-m02 Clientid:01:52:54:00:b4:e9:38}
	I0806 07:28:17.912459   36096 main.go:141] libmachine: (ha-118173-m02) DBG | domain ha-118173-m02 has defined IP address 192.168.39.203 and MAC address 52:54:00:b4:e9:38 in network mk-ha-118173
	I0806 07:28:17.912674   36096 main.go:141] libmachine: (ha-118173-m02) Calling .GetSSHPort
	I0806 07:28:17.912864   36096 main.go:141] libmachine: (ha-118173-m02) Calling .GetSSHKeyPath
	I0806 07:28:17.913024   36096 main.go:141] libmachine: (ha-118173-m02) Calling .GetSSHUsername
	I0806 07:28:17.913163   36096 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/ha-118173-m02/id_rsa Username:docker}
	I0806 07:28:18.000914   36096 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0806 07:28:18.056485   36096 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0806 07:28:18.112388   36096 main.go:141] libmachine: Stopping "ha-118173-m02"...
	I0806 07:28:18.112438   36096 main.go:141] libmachine: (ha-118173-m02) Calling .GetState
	I0806 07:28:18.114150   36096 main.go:141] libmachine: (ha-118173-m02) Calling .Stop
	I0806 07:28:18.117804   36096 main.go:141] libmachine: (ha-118173-m02) Waiting for machine to stop 0/120
	I0806 07:28:19.120128   36096 main.go:141] libmachine: (ha-118173-m02) Waiting for machine to stop 1/120
	I0806 07:28:20.121500   36096 main.go:141] libmachine: (ha-118173-m02) Waiting for machine to stop 2/120
	I0806 07:28:21.122920   36096 main.go:141] libmachine: (ha-118173-m02) Waiting for machine to stop 3/120
	I0806 07:28:22.124021   36096 main.go:141] libmachine: (ha-118173-m02) Waiting for machine to stop 4/120
	I0806 07:28:23.125918   36096 main.go:141] libmachine: (ha-118173-m02) Waiting for machine to stop 5/120
	I0806 07:28:24.127436   36096 main.go:141] libmachine: (ha-118173-m02) Waiting for machine to stop 6/120
	I0806 07:28:25.129801   36096 main.go:141] libmachine: (ha-118173-m02) Waiting for machine to stop 7/120
	I0806 07:28:26.131232   36096 main.go:141] libmachine: (ha-118173-m02) Waiting for machine to stop 8/120
	I0806 07:28:27.132742   36096 main.go:141] libmachine: (ha-118173-m02) Waiting for machine to stop 9/120
	I0806 07:28:28.134769   36096 main.go:141] libmachine: (ha-118173-m02) Waiting for machine to stop 10/120
	I0806 07:28:29.136146   36096 main.go:141] libmachine: (ha-118173-m02) Waiting for machine to stop 11/120
	I0806 07:28:30.137734   36096 main.go:141] libmachine: (ha-118173-m02) Waiting for machine to stop 12/120
	I0806 07:28:31.139147   36096 main.go:141] libmachine: (ha-118173-m02) Waiting for machine to stop 13/120
	I0806 07:28:32.140501   36096 main.go:141] libmachine: (ha-118173-m02) Waiting for machine to stop 14/120
	I0806 07:28:33.142382   36096 main.go:141] libmachine: (ha-118173-m02) Waiting for machine to stop 15/120
	I0806 07:28:34.144066   36096 main.go:141] libmachine: (ha-118173-m02) Waiting for machine to stop 16/120
	I0806 07:28:35.145573   36096 main.go:141] libmachine: (ha-118173-m02) Waiting for machine to stop 17/120
	I0806 07:28:36.146983   36096 main.go:141] libmachine: (ha-118173-m02) Waiting for machine to stop 18/120
	I0806 07:28:37.148447   36096 main.go:141] libmachine: (ha-118173-m02) Waiting for machine to stop 19/120
	I0806 07:28:38.150416   36096 main.go:141] libmachine: (ha-118173-m02) Waiting for machine to stop 20/120
	I0806 07:28:39.151627   36096 main.go:141] libmachine: (ha-118173-m02) Waiting for machine to stop 21/120
	I0806 07:28:40.153028   36096 main.go:141] libmachine: (ha-118173-m02) Waiting for machine to stop 22/120
	I0806 07:28:41.154817   36096 main.go:141] libmachine: (ha-118173-m02) Waiting for machine to stop 23/120
	I0806 07:28:42.156202   36096 main.go:141] libmachine: (ha-118173-m02) Waiting for machine to stop 24/120
	I0806 07:28:43.158204   36096 main.go:141] libmachine: (ha-118173-m02) Waiting for machine to stop 25/120
	I0806 07:28:44.159615   36096 main.go:141] libmachine: (ha-118173-m02) Waiting for machine to stop 26/120
	I0806 07:28:45.160868   36096 main.go:141] libmachine: (ha-118173-m02) Waiting for machine to stop 27/120
	I0806 07:28:46.163053   36096 main.go:141] libmachine: (ha-118173-m02) Waiting for machine to stop 28/120
	I0806 07:28:47.164260   36096 main.go:141] libmachine: (ha-118173-m02) Waiting for machine to stop 29/120
	I0806 07:28:48.165576   36096 main.go:141] libmachine: (ha-118173-m02) Waiting for machine to stop 30/120
	I0806 07:28:49.167153   36096 main.go:141] libmachine: (ha-118173-m02) Waiting for machine to stop 31/120
	I0806 07:28:50.168701   36096 main.go:141] libmachine: (ha-118173-m02) Waiting for machine to stop 32/120
	I0806 07:28:51.170796   36096 main.go:141] libmachine: (ha-118173-m02) Waiting for machine to stop 33/120
	I0806 07:28:52.172068   36096 main.go:141] libmachine: (ha-118173-m02) Waiting for machine to stop 34/120
	I0806 07:28:53.173919   36096 main.go:141] libmachine: (ha-118173-m02) Waiting for machine to stop 35/120
	I0806 07:28:54.175825   36096 main.go:141] libmachine: (ha-118173-m02) Waiting for machine to stop 36/120
	I0806 07:28:55.177188   36096 main.go:141] libmachine: (ha-118173-m02) Waiting for machine to stop 37/120
	I0806 07:28:56.179011   36096 main.go:141] libmachine: (ha-118173-m02) Waiting for machine to stop 38/120
	I0806 07:28:57.180159   36096 main.go:141] libmachine: (ha-118173-m02) Waiting for machine to stop 39/120
	I0806 07:28:58.181842   36096 main.go:141] libmachine: (ha-118173-m02) Waiting for machine to stop 40/120
	I0806 07:28:59.183098   36096 main.go:141] libmachine: (ha-118173-m02) Waiting for machine to stop 41/120
	I0806 07:29:00.184613   36096 main.go:141] libmachine: (ha-118173-m02) Waiting for machine to stop 42/120
	I0806 07:29:01.186770   36096 main.go:141] libmachine: (ha-118173-m02) Waiting for machine to stop 43/120
	I0806 07:29:02.188544   36096 main.go:141] libmachine: (ha-118173-m02) Waiting for machine to stop 44/120
	I0806 07:29:03.190391   36096 main.go:141] libmachine: (ha-118173-m02) Waiting for machine to stop 45/120
	I0806 07:29:04.191721   36096 main.go:141] libmachine: (ha-118173-m02) Waiting for machine to stop 46/120
	I0806 07:29:05.193370   36096 main.go:141] libmachine: (ha-118173-m02) Waiting for machine to stop 47/120
	I0806 07:29:06.194811   36096 main.go:141] libmachine: (ha-118173-m02) Waiting for machine to stop 48/120
	I0806 07:29:07.196149   36096 main.go:141] libmachine: (ha-118173-m02) Waiting for machine to stop 49/120
	I0806 07:29:08.197813   36096 main.go:141] libmachine: (ha-118173-m02) Waiting for machine to stop 50/120
	I0806 07:29:09.199148   36096 main.go:141] libmachine: (ha-118173-m02) Waiting for machine to stop 51/120
	I0806 07:29:10.200532   36096 main.go:141] libmachine: (ha-118173-m02) Waiting for machine to stop 52/120
	I0806 07:29:11.201926   36096 main.go:141] libmachine: (ha-118173-m02) Waiting for machine to stop 53/120
	I0806 07:29:12.203366   36096 main.go:141] libmachine: (ha-118173-m02) Waiting for machine to stop 54/120
	I0806 07:29:13.205518   36096 main.go:141] libmachine: (ha-118173-m02) Waiting for machine to stop 55/120
	I0806 07:29:14.206932   36096 main.go:141] libmachine: (ha-118173-m02) Waiting for machine to stop 56/120
	I0806 07:29:15.208121   36096 main.go:141] libmachine: (ha-118173-m02) Waiting for machine to stop 57/120
	I0806 07:29:16.209321   36096 main.go:141] libmachine: (ha-118173-m02) Waiting for machine to stop 58/120
	I0806 07:29:17.210771   36096 main.go:141] libmachine: (ha-118173-m02) Waiting for machine to stop 59/120
	I0806 07:29:18.212507   36096 main.go:141] libmachine: (ha-118173-m02) Waiting for machine to stop 60/120
	I0806 07:29:19.214832   36096 main.go:141] libmachine: (ha-118173-m02) Waiting for machine to stop 61/120
	I0806 07:29:20.216227   36096 main.go:141] libmachine: (ha-118173-m02) Waiting for machine to stop 62/120
	I0806 07:29:21.217666   36096 main.go:141] libmachine: (ha-118173-m02) Waiting for machine to stop 63/120
	I0806 07:29:22.218948   36096 main.go:141] libmachine: (ha-118173-m02) Waiting for machine to stop 64/120
	I0806 07:29:23.220675   36096 main.go:141] libmachine: (ha-118173-m02) Waiting for machine to stop 65/120
	I0806 07:29:24.221863   36096 main.go:141] libmachine: (ha-118173-m02) Waiting for machine to stop 66/120
	I0806 07:29:25.223114   36096 main.go:141] libmachine: (ha-118173-m02) Waiting for machine to stop 67/120
	I0806 07:29:26.224428   36096 main.go:141] libmachine: (ha-118173-m02) Waiting for machine to stop 68/120
	I0806 07:29:27.225833   36096 main.go:141] libmachine: (ha-118173-m02) Waiting for machine to stop 69/120
	I0806 07:29:28.227720   36096 main.go:141] libmachine: (ha-118173-m02) Waiting for machine to stop 70/120
	I0806 07:29:29.229093   36096 main.go:141] libmachine: (ha-118173-m02) Waiting for machine to stop 71/120
	I0806 07:29:30.231126   36096 main.go:141] libmachine: (ha-118173-m02) Waiting for machine to stop 72/120
	I0806 07:29:31.232710   36096 main.go:141] libmachine: (ha-118173-m02) Waiting for machine to stop 73/120
	I0806 07:29:32.234913   36096 main.go:141] libmachine: (ha-118173-m02) Waiting for machine to stop 74/120
	I0806 07:29:33.237022   36096 main.go:141] libmachine: (ha-118173-m02) Waiting for machine to stop 75/120
	I0806 07:29:34.239220   36096 main.go:141] libmachine: (ha-118173-m02) Waiting for machine to stop 76/120
	I0806 07:29:35.240518   36096 main.go:141] libmachine: (ha-118173-m02) Waiting for machine to stop 77/120
	I0806 07:29:36.243028   36096 main.go:141] libmachine: (ha-118173-m02) Waiting for machine to stop 78/120
	I0806 07:29:37.244607   36096 main.go:141] libmachine: (ha-118173-m02) Waiting for machine to stop 79/120
	I0806 07:29:38.246357   36096 main.go:141] libmachine: (ha-118173-m02) Waiting for machine to stop 80/120
	I0806 07:29:39.247694   36096 main.go:141] libmachine: (ha-118173-m02) Waiting for machine to stop 81/120
	I0806 07:29:40.249201   36096 main.go:141] libmachine: (ha-118173-m02) Waiting for machine to stop 82/120
	I0806 07:29:41.251158   36096 main.go:141] libmachine: (ha-118173-m02) Waiting for machine to stop 83/120
	I0806 07:29:42.252615   36096 main.go:141] libmachine: (ha-118173-m02) Waiting for machine to stop 84/120
	I0806 07:29:43.254810   36096 main.go:141] libmachine: (ha-118173-m02) Waiting for machine to stop 85/120
	I0806 07:29:44.256223   36096 main.go:141] libmachine: (ha-118173-m02) Waiting for machine to stop 86/120
	I0806 07:29:45.257605   36096 main.go:141] libmachine: (ha-118173-m02) Waiting for machine to stop 87/120
	I0806 07:29:46.258898   36096 main.go:141] libmachine: (ha-118173-m02) Waiting for machine to stop 88/120
	I0806 07:29:47.260100   36096 main.go:141] libmachine: (ha-118173-m02) Waiting for machine to stop 89/120
	I0806 07:29:48.261423   36096 main.go:141] libmachine: (ha-118173-m02) Waiting for machine to stop 90/120
	I0806 07:29:49.262597   36096 main.go:141] libmachine: (ha-118173-m02) Waiting for machine to stop 91/120
	I0806 07:29:50.264310   36096 main.go:141] libmachine: (ha-118173-m02) Waiting for machine to stop 92/120
	I0806 07:29:51.265965   36096 main.go:141] libmachine: (ha-118173-m02) Waiting for machine to stop 93/120
	I0806 07:29:52.268051   36096 main.go:141] libmachine: (ha-118173-m02) Waiting for machine to stop 94/120
	I0806 07:29:53.270141   36096 main.go:141] libmachine: (ha-118173-m02) Waiting for machine to stop 95/120
	I0806 07:29:54.272284   36096 main.go:141] libmachine: (ha-118173-m02) Waiting for machine to stop 96/120
	I0806 07:29:55.273564   36096 main.go:141] libmachine: (ha-118173-m02) Waiting for machine to stop 97/120
	I0806 07:29:56.274865   36096 main.go:141] libmachine: (ha-118173-m02) Waiting for machine to stop 98/120
	I0806 07:29:57.276275   36096 main.go:141] libmachine: (ha-118173-m02) Waiting for machine to stop 99/120
	I0806 07:29:58.277841   36096 main.go:141] libmachine: (ha-118173-m02) Waiting for machine to stop 100/120
	I0806 07:29:59.279082   36096 main.go:141] libmachine: (ha-118173-m02) Waiting for machine to stop 101/120
	I0806 07:30:00.280181   36096 main.go:141] libmachine: (ha-118173-m02) Waiting for machine to stop 102/120
	I0806 07:30:01.281681   36096 main.go:141] libmachine: (ha-118173-m02) Waiting for machine to stop 103/120
	I0806 07:30:02.283051   36096 main.go:141] libmachine: (ha-118173-m02) Waiting for machine to stop 104/120
	I0806 07:30:03.285279   36096 main.go:141] libmachine: (ha-118173-m02) Waiting for machine to stop 105/120
	I0806 07:30:04.286672   36096 main.go:141] libmachine: (ha-118173-m02) Waiting for machine to stop 106/120
	I0806 07:30:05.287988   36096 main.go:141] libmachine: (ha-118173-m02) Waiting for machine to stop 107/120
	I0806 07:30:06.289273   36096 main.go:141] libmachine: (ha-118173-m02) Waiting for machine to stop 108/120
	I0806 07:30:07.291104   36096 main.go:141] libmachine: (ha-118173-m02) Waiting for machine to stop 109/120
	I0806 07:30:08.293347   36096 main.go:141] libmachine: (ha-118173-m02) Waiting for machine to stop 110/120
	I0806 07:30:09.294688   36096 main.go:141] libmachine: (ha-118173-m02) Waiting for machine to stop 111/120
	I0806 07:30:10.296738   36096 main.go:141] libmachine: (ha-118173-m02) Waiting for machine to stop 112/120
	I0806 07:30:11.298812   36096 main.go:141] libmachine: (ha-118173-m02) Waiting for machine to stop 113/120
	I0806 07:30:12.300581   36096 main.go:141] libmachine: (ha-118173-m02) Waiting for machine to stop 114/120
	I0806 07:30:13.302474   36096 main.go:141] libmachine: (ha-118173-m02) Waiting for machine to stop 115/120
	I0806 07:30:14.304543   36096 main.go:141] libmachine: (ha-118173-m02) Waiting for machine to stop 116/120
	I0806 07:30:15.306792   36096 main.go:141] libmachine: (ha-118173-m02) Waiting for machine to stop 117/120
	I0806 07:30:16.308192   36096 main.go:141] libmachine: (ha-118173-m02) Waiting for machine to stop 118/120
	I0806 07:30:17.309587   36096 main.go:141] libmachine: (ha-118173-m02) Waiting for machine to stop 119/120
	I0806 07:30:18.310023   36096 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0806 07:30:18.310149   36096 out.go:239] X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"
	X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"

                                                
                                                
** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-linux-amd64 -p ha-118173 node stop m02 -v=7 --alsologtostderr": exit status 30
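The stderr above shows the stop path polling the VM state once a second for 120 attempts ("Waiting for machine to stop N/120") and then giving up, which is what surfaces as exit status 30. A minimal sketch of that wait-for-stop pattern, with hypothetical names rather than minikube's actual API:

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// waitForStop polls getState once per second, up to maxRetries times,
	// mirroring the "Waiting for machine to stop N/120" lines in the log above.
	// Hypothetical sketch; the names do not correspond to minikube's real code.
	func waitForStop(getState func() (string, error), maxRetries int) error {
		for i := 0; i < maxRetries; i++ {
			state, err := getState()
			if err != nil {
				return err
			}
			if state == "Stopped" {
				return nil
			}
			fmt.Printf("Waiting for machine to stop %d/%d\n", i, maxRetries)
			time.Sleep(time.Second)
		}
		// The caller reports this as: unable to stop vm, current state "Running"
		return errors.New(`unable to stop vm, current state "Running"`)
	}

	func main() {
		// Simulate a VM that never leaves "Running", as in the failed stop above.
		err := waitForStop(func() (string, error) { return "Running", nil }, 5)
		fmt.Println("stop err:", err)
	}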
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-118173 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-118173 status -v=7 --alsologtostderr: exit status 3 (19.065323353s)

                                                
                                                
-- stdout --
	ha-118173
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-118173-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-118173-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-118173-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0806 07:30:18.352971   36521 out.go:291] Setting OutFile to fd 1 ...
	I0806 07:30:18.353102   36521 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 07:30:18.353110   36521 out.go:304] Setting ErrFile to fd 2...
	I0806 07:30:18.353114   36521 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 07:30:18.353306   36521 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19370-13057/.minikube/bin
	I0806 07:30:18.353484   36521 out.go:298] Setting JSON to false
	I0806 07:30:18.353505   36521 mustload.go:65] Loading cluster: ha-118173
	I0806 07:30:18.353559   36521 notify.go:220] Checking for updates...
	I0806 07:30:18.353912   36521 config.go:182] Loaded profile config "ha-118173": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0806 07:30:18.353930   36521 status.go:255] checking status of ha-118173 ...
	I0806 07:30:18.354395   36521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:30:18.354531   36521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:30:18.374767   36521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38463
	I0806 07:30:18.375275   36521 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:30:18.375919   36521 main.go:141] libmachine: Using API Version  1
	I0806 07:30:18.375950   36521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:30:18.376458   36521 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:30:18.376736   36521 main.go:141] libmachine: (ha-118173) Calling .GetState
	I0806 07:30:18.378507   36521 status.go:330] ha-118173 host status = "Running" (err=<nil>)
	I0806 07:30:18.378520   36521 host.go:66] Checking if "ha-118173" exists ...
	I0806 07:30:18.378821   36521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:30:18.378907   36521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:30:18.393088   36521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43807
	I0806 07:30:18.393507   36521 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:30:18.393991   36521 main.go:141] libmachine: Using API Version  1
	I0806 07:30:18.394026   36521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:30:18.394339   36521 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:30:18.394499   36521 main.go:141] libmachine: (ha-118173) Calling .GetIP
	I0806 07:30:18.397107   36521 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:30:18.397504   36521 main.go:141] libmachine: (ha-118173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:df:f0", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:22:33 +0000 UTC Type:0 Mac:52:54:00:ef:df:f0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:ha-118173 Clientid:01:52:54:00:ef:df:f0}
	I0806 07:30:18.397533   36521 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined IP address 192.168.39.164 and MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:30:18.397682   36521 host.go:66] Checking if "ha-118173" exists ...
	I0806 07:30:18.397967   36521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:30:18.398026   36521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:30:18.414166   36521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34383
	I0806 07:30:18.414570   36521 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:30:18.415022   36521 main.go:141] libmachine: Using API Version  1
	I0806 07:30:18.415056   36521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:30:18.415362   36521 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:30:18.415534   36521 main.go:141] libmachine: (ha-118173) Calling .DriverName
	I0806 07:30:18.415720   36521 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0806 07:30:18.415748   36521 main.go:141] libmachine: (ha-118173) Calling .GetSSHHostname
	I0806 07:30:18.418258   36521 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:30:18.418683   36521 main.go:141] libmachine: (ha-118173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:df:f0", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:22:33 +0000 UTC Type:0 Mac:52:54:00:ef:df:f0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:ha-118173 Clientid:01:52:54:00:ef:df:f0}
	I0806 07:30:18.418710   36521 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined IP address 192.168.39.164 and MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:30:18.418911   36521 main.go:141] libmachine: (ha-118173) Calling .GetSSHPort
	I0806 07:30:18.419061   36521 main.go:141] libmachine: (ha-118173) Calling .GetSSHKeyPath
	I0806 07:30:18.419178   36521 main.go:141] libmachine: (ha-118173) Calling .GetSSHUsername
	I0806 07:30:18.419328   36521 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/ha-118173/id_rsa Username:docker}
	I0806 07:30:18.501980   36521 ssh_runner.go:195] Run: systemctl --version
	I0806 07:30:18.508980   36521 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0806 07:30:18.527596   36521 kubeconfig.go:125] found "ha-118173" server: "https://192.168.39.254:8443"
	I0806 07:30:18.527621   36521 api_server.go:166] Checking apiserver status ...
	I0806 07:30:18.527657   36521 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 07:30:18.545496   36521 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1178/cgroup
	W0806 07:30:18.556340   36521 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1178/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0806 07:30:18.556416   36521 ssh_runner.go:195] Run: ls
	I0806 07:30:18.561068   36521 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0806 07:30:18.567815   36521 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0806 07:30:18.567838   36521 status.go:422] ha-118173 apiserver status = Running (err=<nil>)
	I0806 07:30:18.567847   36521 status.go:257] ha-118173 status: &{Name:ha-118173 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0806 07:30:18.567873   36521 status.go:255] checking status of ha-118173-m02 ...
	I0806 07:30:18.568196   36521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:30:18.568243   36521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:30:18.584698   36521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33207
	I0806 07:30:18.585212   36521 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:30:18.585666   36521 main.go:141] libmachine: Using API Version  1
	I0806 07:30:18.585683   36521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:30:18.586047   36521 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:30:18.586273   36521 main.go:141] libmachine: (ha-118173-m02) Calling .GetState
	I0806 07:30:18.588328   36521 status.go:330] ha-118173-m02 host status = "Running" (err=<nil>)
	I0806 07:30:18.588347   36521 host.go:66] Checking if "ha-118173-m02" exists ...
	I0806 07:30:18.588816   36521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:30:18.588855   36521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:30:18.603965   36521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42877
	I0806 07:30:18.604424   36521 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:30:18.604886   36521 main.go:141] libmachine: Using API Version  1
	I0806 07:30:18.604903   36521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:30:18.605294   36521 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:30:18.605492   36521 main.go:141] libmachine: (ha-118173-m02) Calling .GetIP
	I0806 07:30:18.608424   36521 main.go:141] libmachine: (ha-118173-m02) DBG | domain ha-118173-m02 has defined MAC address 52:54:00:b4:e9:38 in network mk-ha-118173
	I0806 07:30:18.608875   36521 main.go:141] libmachine: (ha-118173-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:e9:38", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:23:29 +0000 UTC Type:0 Mac:52:54:00:b4:e9:38 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-118173-m02 Clientid:01:52:54:00:b4:e9:38}
	I0806 07:30:18.608893   36521 main.go:141] libmachine: (ha-118173-m02) DBG | domain ha-118173-m02 has defined IP address 192.168.39.203 and MAC address 52:54:00:b4:e9:38 in network mk-ha-118173
	I0806 07:30:18.609096   36521 host.go:66] Checking if "ha-118173-m02" exists ...
	I0806 07:30:18.609494   36521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:30:18.609538   36521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:30:18.624426   36521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39913
	I0806 07:30:18.624817   36521 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:30:18.625296   36521 main.go:141] libmachine: Using API Version  1
	I0806 07:30:18.625314   36521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:30:18.625633   36521 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:30:18.625897   36521 main.go:141] libmachine: (ha-118173-m02) Calling .DriverName
	I0806 07:30:18.626119   36521 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0806 07:30:18.626143   36521 main.go:141] libmachine: (ha-118173-m02) Calling .GetSSHHostname
	I0806 07:30:18.629023   36521 main.go:141] libmachine: (ha-118173-m02) DBG | domain ha-118173-m02 has defined MAC address 52:54:00:b4:e9:38 in network mk-ha-118173
	I0806 07:30:18.629488   36521 main.go:141] libmachine: (ha-118173-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:e9:38", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:23:29 +0000 UTC Type:0 Mac:52:54:00:b4:e9:38 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-118173-m02 Clientid:01:52:54:00:b4:e9:38}
	I0806 07:30:18.629522   36521 main.go:141] libmachine: (ha-118173-m02) DBG | domain ha-118173-m02 has defined IP address 192.168.39.203 and MAC address 52:54:00:b4:e9:38 in network mk-ha-118173
	I0806 07:30:18.629609   36521 main.go:141] libmachine: (ha-118173-m02) Calling .GetSSHPort
	I0806 07:30:18.629784   36521 main.go:141] libmachine: (ha-118173-m02) Calling .GetSSHKeyPath
	I0806 07:30:18.629947   36521 main.go:141] libmachine: (ha-118173-m02) Calling .GetSSHUsername
	I0806 07:30:18.630104   36521 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/ha-118173-m02/id_rsa Username:docker}
	W0806 07:30:36.996559   36521 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.203:22: connect: no route to host
	W0806 07:30:36.996656   36521 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.203:22: connect: no route to host
	E0806 07:30:36.996671   36521 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.203:22: connect: no route to host
	I0806 07:30:36.996678   36521 status.go:257] ha-118173-m02 status: &{Name:ha-118173-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0806 07:30:36.996704   36521 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.203:22: connect: no route to host
	I0806 07:30:36.996711   36521 status.go:255] checking status of ha-118173-m03 ...
	I0806 07:30:36.996992   36521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:30:36.997035   36521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:30:37.012459   36521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32901
	I0806 07:30:37.013106   36521 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:30:37.013718   36521 main.go:141] libmachine: Using API Version  1
	I0806 07:30:37.013744   36521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:30:37.014039   36521 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:30:37.014275   36521 main.go:141] libmachine: (ha-118173-m03) Calling .GetState
	I0806 07:30:37.015911   36521 status.go:330] ha-118173-m03 host status = "Running" (err=<nil>)
	I0806 07:30:37.015928   36521 host.go:66] Checking if "ha-118173-m03" exists ...
	I0806 07:30:37.016221   36521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:30:37.016255   36521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:30:37.030698   36521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43869
	I0806 07:30:37.031142   36521 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:30:37.031618   36521 main.go:141] libmachine: Using API Version  1
	I0806 07:30:37.031641   36521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:30:37.032011   36521 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:30:37.032216   36521 main.go:141] libmachine: (ha-118173-m03) Calling .GetIP
	I0806 07:30:37.035043   36521 main.go:141] libmachine: (ha-118173-m03) DBG | domain ha-118173-m03 has defined MAC address 52:54:00:d8:90:55 in network mk-ha-118173
	I0806 07:30:37.035483   36521 main.go:141] libmachine: (ha-118173-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:90:55", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:25:53 +0000 UTC Type:0 Mac:52:54:00:d8:90:55 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-118173-m03 Clientid:01:52:54:00:d8:90:55}
	I0806 07:30:37.035510   36521 main.go:141] libmachine: (ha-118173-m03) DBG | domain ha-118173-m03 has defined IP address 192.168.39.5 and MAC address 52:54:00:d8:90:55 in network mk-ha-118173
	I0806 07:30:37.035642   36521 host.go:66] Checking if "ha-118173-m03" exists ...
	I0806 07:30:37.035936   36521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:30:37.035966   36521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:30:37.051052   36521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36999
	I0806 07:30:37.051502   36521 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:30:37.051966   36521 main.go:141] libmachine: Using API Version  1
	I0806 07:30:37.051986   36521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:30:37.052311   36521 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:30:37.052497   36521 main.go:141] libmachine: (ha-118173-m03) Calling .DriverName
	I0806 07:30:37.052704   36521 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0806 07:30:37.052723   36521 main.go:141] libmachine: (ha-118173-m03) Calling .GetSSHHostname
	I0806 07:30:37.055448   36521 main.go:141] libmachine: (ha-118173-m03) DBG | domain ha-118173-m03 has defined MAC address 52:54:00:d8:90:55 in network mk-ha-118173
	I0806 07:30:37.055923   36521 main.go:141] libmachine: (ha-118173-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:90:55", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:25:53 +0000 UTC Type:0 Mac:52:54:00:d8:90:55 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-118173-m03 Clientid:01:52:54:00:d8:90:55}
	I0806 07:30:37.055942   36521 main.go:141] libmachine: (ha-118173-m03) DBG | domain ha-118173-m03 has defined IP address 192.168.39.5 and MAC address 52:54:00:d8:90:55 in network mk-ha-118173
	I0806 07:30:37.056152   36521 main.go:141] libmachine: (ha-118173-m03) Calling .GetSSHPort
	I0806 07:30:37.056318   36521 main.go:141] libmachine: (ha-118173-m03) Calling .GetSSHKeyPath
	I0806 07:30:37.056502   36521 main.go:141] libmachine: (ha-118173-m03) Calling .GetSSHUsername
	I0806 07:30:37.056625   36521 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/ha-118173-m03/id_rsa Username:docker}
	I0806 07:30:37.148677   36521 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0806 07:30:37.171943   36521 kubeconfig.go:125] found "ha-118173" server: "https://192.168.39.254:8443"
	I0806 07:30:37.171979   36521 api_server.go:166] Checking apiserver status ...
	I0806 07:30:37.172022   36521 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 07:30:37.189639   36521 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1553/cgroup
	W0806 07:30:37.201118   36521 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1553/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0806 07:30:37.201179   36521 ssh_runner.go:195] Run: ls
	I0806 07:30:37.205661   36521 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0806 07:30:37.210181   36521 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0806 07:30:37.210203   36521 status.go:422] ha-118173-m03 apiserver status = Running (err=<nil>)
	I0806 07:30:37.210211   36521 status.go:257] ha-118173-m03 status: &{Name:ha-118173-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0806 07:30:37.210227   36521 status.go:255] checking status of ha-118173-m04 ...
	I0806 07:30:37.210577   36521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:30:37.210612   36521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:30:37.226620   36521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46453
	I0806 07:30:37.227039   36521 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:30:37.227469   36521 main.go:141] libmachine: Using API Version  1
	I0806 07:30:37.227493   36521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:30:37.227864   36521 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:30:37.228104   36521 main.go:141] libmachine: (ha-118173-m04) Calling .GetState
	I0806 07:30:37.229672   36521 status.go:330] ha-118173-m04 host status = "Running" (err=<nil>)
	I0806 07:30:37.229686   36521 host.go:66] Checking if "ha-118173-m04" exists ...
	I0806 07:30:37.229943   36521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:30:37.229990   36521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:30:37.244740   36521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37013
	I0806 07:30:37.245162   36521 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:30:37.245619   36521 main.go:141] libmachine: Using API Version  1
	I0806 07:30:37.245641   36521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:30:37.245928   36521 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:30:37.246144   36521 main.go:141] libmachine: (ha-118173-m04) Calling .GetIP
	I0806 07:30:37.248801   36521 main.go:141] libmachine: (ha-118173-m04) DBG | domain ha-118173-m04 has defined MAC address 52:54:00:8f:be:12 in network mk-ha-118173
	I0806 07:30:37.249241   36521 main.go:141] libmachine: (ha-118173-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:be:12", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:27:21 +0000 UTC Type:0 Mac:52:54:00:8f:be:12 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:ha-118173-m04 Clientid:01:52:54:00:8f:be:12}
	I0806 07:30:37.249263   36521 main.go:141] libmachine: (ha-118173-m04) DBG | domain ha-118173-m04 has defined IP address 192.168.39.107 and MAC address 52:54:00:8f:be:12 in network mk-ha-118173
	I0806 07:30:37.249394   36521 host.go:66] Checking if "ha-118173-m04" exists ...
	I0806 07:30:37.249776   36521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:30:37.249815   36521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:30:37.264184   36521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43523
	I0806 07:30:37.264572   36521 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:30:37.265156   36521 main.go:141] libmachine: Using API Version  1
	I0806 07:30:37.265187   36521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:30:37.265494   36521 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:30:37.265710   36521 main.go:141] libmachine: (ha-118173-m04) Calling .DriverName
	I0806 07:30:37.265910   36521 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0806 07:30:37.265935   36521 main.go:141] libmachine: (ha-118173-m04) Calling .GetSSHHostname
	I0806 07:30:37.269013   36521 main.go:141] libmachine: (ha-118173-m04) DBG | domain ha-118173-m04 has defined MAC address 52:54:00:8f:be:12 in network mk-ha-118173
	I0806 07:30:37.269497   36521 main.go:141] libmachine: (ha-118173-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:be:12", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:27:21 +0000 UTC Type:0 Mac:52:54:00:8f:be:12 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:ha-118173-m04 Clientid:01:52:54:00:8f:be:12}
	I0806 07:30:37.269520   36521 main.go:141] libmachine: (ha-118173-m04) DBG | domain ha-118173-m04 has defined IP address 192.168.39.107 and MAC address 52:54:00:8f:be:12 in network mk-ha-118173
	I0806 07:30:37.269712   36521 main.go:141] libmachine: (ha-118173-m04) Calling .GetSSHPort
	I0806 07:30:37.269902   36521 main.go:141] libmachine: (ha-118173-m04) Calling .GetSSHKeyPath
	I0806 07:30:37.270067   36521 main.go:141] libmachine: (ha-118173-m04) Calling .GetSSHUsername
	I0806 07:30:37.270236   36521 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/ha-118173-m04/id_rsa Username:docker}
	I0806 07:30:37.358148   36521 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0806 07:30:37.374759   36521 status.go:257] ha-118173-m04 status: &{Name:ha-118173-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:372: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-118173 status -v=7 --alsologtostderr" : exit status 3
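The status stderr above probes each node in the same order: dial the node's SSH port, run df -h /var and systemctl is-active --quiet service kubelet over SSH, and, for control-plane nodes, GET the apiserver /healthz endpoint. When the dial to m02 fails with "no route to host", that node is reported as host: Error / kubelet: Nonexistent, which is why the command exits 3. A rough sketch of that probe order, with hypothetical helpers (sshRun is a stub here, not minikube's code):

	package main

	import (
		"crypto/tls"
		"fmt"
		"net"
		"net/http"
		"time"
	)

	// sshRun stands in for "run this command on the node over SSH"; it is a stub
	// so the sketch compiles on its own.
	func sshRun(ip, cmd string) error { return fmt.Errorf("ssh to %s not wired up in this sketch", ip) }

	// nodeStatus mirrors the probe order in the log: SSH reachability, kubelet
	// service state, then apiserver /healthz for control-plane nodes.
	func nodeStatus(ip, apiEndpoint string, controlPlane bool) (host, kubelet, apiserver string) {
		conn, err := net.DialTimeout("tcp", ip+":22", 10*time.Second)
		if err != nil {
			// e.g. "dial tcp 192.168.39.203:22: connect: no route to host"
			return "Error", "Nonexistent", "Nonexistent"
		}
		conn.Close()
		host = "Running"

		kubelet = "Stopped"
		if sshRun(ip, "sudo systemctl is-active --quiet service kubelet") == nil {
			kubelet = "Running"
		}

		apiserver = "Irrelevant" // worker nodes skip the apiserver check
		if controlPlane {
			apiserver = "Stopped"
			client := &http.Client{Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}}
			if resp, err := client.Get(apiEndpoint + "/healthz"); err == nil && resp.StatusCode == 200 {
				resp.Body.Close()
				apiserver = "Running"
			}
		}
		return host, kubelet, apiserver
	}

	func main() {
		fmt.Println(nodeStatus("192.168.39.203", "https://192.168.39.254:8443", true))
	}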
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-118173 -n ha-118173
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-118173 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-118173 logs -n 25: (1.443200699s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-118173 cp ha-118173-m03:/home/docker/cp-test.txt                              | ha-118173 | jenkins | v1.33.1 | 06 Aug 24 07:28 UTC | 06 Aug 24 07:28 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile4204706009/001/cp-test_ha-118173-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-118173 ssh -n                                                                 | ha-118173 | jenkins | v1.33.1 | 06 Aug 24 07:28 UTC | 06 Aug 24 07:28 UTC |
	|         | ha-118173-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-118173 cp ha-118173-m03:/home/docker/cp-test.txt                              | ha-118173 | jenkins | v1.33.1 | 06 Aug 24 07:28 UTC | 06 Aug 24 07:28 UTC |
	|         | ha-118173:/home/docker/cp-test_ha-118173-m03_ha-118173.txt                       |           |         |         |                     |                     |
	| ssh     | ha-118173 ssh -n                                                                 | ha-118173 | jenkins | v1.33.1 | 06 Aug 24 07:28 UTC | 06 Aug 24 07:28 UTC |
	|         | ha-118173-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-118173 ssh -n ha-118173 sudo cat                                              | ha-118173 | jenkins | v1.33.1 | 06 Aug 24 07:28 UTC | 06 Aug 24 07:28 UTC |
	|         | /home/docker/cp-test_ha-118173-m03_ha-118173.txt                                 |           |         |         |                     |                     |
	| cp      | ha-118173 cp ha-118173-m03:/home/docker/cp-test.txt                              | ha-118173 | jenkins | v1.33.1 | 06 Aug 24 07:28 UTC | 06 Aug 24 07:28 UTC |
	|         | ha-118173-m02:/home/docker/cp-test_ha-118173-m03_ha-118173-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-118173 ssh -n                                                                 | ha-118173 | jenkins | v1.33.1 | 06 Aug 24 07:28 UTC | 06 Aug 24 07:28 UTC |
	|         | ha-118173-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-118173 ssh -n ha-118173-m02 sudo cat                                          | ha-118173 | jenkins | v1.33.1 | 06 Aug 24 07:28 UTC | 06 Aug 24 07:28 UTC |
	|         | /home/docker/cp-test_ha-118173-m03_ha-118173-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-118173 cp ha-118173-m03:/home/docker/cp-test.txt                              | ha-118173 | jenkins | v1.33.1 | 06 Aug 24 07:28 UTC | 06 Aug 24 07:28 UTC |
	|         | ha-118173-m04:/home/docker/cp-test_ha-118173-m03_ha-118173-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-118173 ssh -n                                                                 | ha-118173 | jenkins | v1.33.1 | 06 Aug 24 07:28 UTC | 06 Aug 24 07:28 UTC |
	|         | ha-118173-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-118173 ssh -n ha-118173-m04 sudo cat                                          | ha-118173 | jenkins | v1.33.1 | 06 Aug 24 07:28 UTC | 06 Aug 24 07:28 UTC |
	|         | /home/docker/cp-test_ha-118173-m03_ha-118173-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-118173 cp testdata/cp-test.txt                                                | ha-118173 | jenkins | v1.33.1 | 06 Aug 24 07:28 UTC | 06 Aug 24 07:28 UTC |
	|         | ha-118173-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-118173 ssh -n                                                                 | ha-118173 | jenkins | v1.33.1 | 06 Aug 24 07:28 UTC | 06 Aug 24 07:28 UTC |
	|         | ha-118173-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-118173 cp ha-118173-m04:/home/docker/cp-test.txt                              | ha-118173 | jenkins | v1.33.1 | 06 Aug 24 07:28 UTC | 06 Aug 24 07:28 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile4204706009/001/cp-test_ha-118173-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-118173 ssh -n                                                                 | ha-118173 | jenkins | v1.33.1 | 06 Aug 24 07:28 UTC | 06 Aug 24 07:28 UTC |
	|         | ha-118173-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-118173 cp ha-118173-m04:/home/docker/cp-test.txt                              | ha-118173 | jenkins | v1.33.1 | 06 Aug 24 07:28 UTC | 06 Aug 24 07:28 UTC |
	|         | ha-118173:/home/docker/cp-test_ha-118173-m04_ha-118173.txt                       |           |         |         |                     |                     |
	| ssh     | ha-118173 ssh -n                                                                 | ha-118173 | jenkins | v1.33.1 | 06 Aug 24 07:28 UTC | 06 Aug 24 07:28 UTC |
	|         | ha-118173-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-118173 ssh -n ha-118173 sudo cat                                              | ha-118173 | jenkins | v1.33.1 | 06 Aug 24 07:28 UTC | 06 Aug 24 07:28 UTC |
	|         | /home/docker/cp-test_ha-118173-m04_ha-118173.txt                                 |           |         |         |                     |                     |
	| cp      | ha-118173 cp ha-118173-m04:/home/docker/cp-test.txt                              | ha-118173 | jenkins | v1.33.1 | 06 Aug 24 07:28 UTC | 06 Aug 24 07:28 UTC |
	|         | ha-118173-m02:/home/docker/cp-test_ha-118173-m04_ha-118173-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-118173 ssh -n                                                                 | ha-118173 | jenkins | v1.33.1 | 06 Aug 24 07:28 UTC | 06 Aug 24 07:28 UTC |
	|         | ha-118173-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-118173 ssh -n ha-118173-m02 sudo cat                                          | ha-118173 | jenkins | v1.33.1 | 06 Aug 24 07:28 UTC | 06 Aug 24 07:28 UTC |
	|         | /home/docker/cp-test_ha-118173-m04_ha-118173-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-118173 cp ha-118173-m04:/home/docker/cp-test.txt                              | ha-118173 | jenkins | v1.33.1 | 06 Aug 24 07:28 UTC | 06 Aug 24 07:28 UTC |
	|         | ha-118173-m03:/home/docker/cp-test_ha-118173-m04_ha-118173-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-118173 ssh -n                                                                 | ha-118173 | jenkins | v1.33.1 | 06 Aug 24 07:28 UTC | 06 Aug 24 07:28 UTC |
	|         | ha-118173-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-118173 ssh -n ha-118173-m03 sudo cat                                          | ha-118173 | jenkins | v1.33.1 | 06 Aug 24 07:28 UTC | 06 Aug 24 07:28 UTC |
	|         | /home/docker/cp-test_ha-118173-m04_ha-118173-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-118173 node stop m02 -v=7                                                     | ha-118173 | jenkins | v1.33.1 | 06 Aug 24 07:28 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/06 07:22:19
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0806 07:22:19.165619   31793 out.go:291] Setting OutFile to fd 1 ...
	I0806 07:22:19.165868   31793 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 07:22:19.165876   31793 out.go:304] Setting ErrFile to fd 2...
	I0806 07:22:19.165880   31793 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 07:22:19.166038   31793 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19370-13057/.minikube/bin
	I0806 07:22:19.166569   31793 out.go:298] Setting JSON to false
	I0806 07:22:19.167442   31793 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3885,"bootTime":1722925054,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0806 07:22:19.167497   31793 start.go:139] virtualization: kvm guest
	I0806 07:22:19.169559   31793 out.go:177] * [ha-118173] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0806 07:22:19.171087   31793 out.go:177]   - MINIKUBE_LOCATION=19370
	I0806 07:22:19.171085   31793 notify.go:220] Checking for updates...
	I0806 07:22:19.173961   31793 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0806 07:22:19.175568   31793 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19370-13057/kubeconfig
	I0806 07:22:19.177238   31793 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19370-13057/.minikube
	I0806 07:22:19.178649   31793 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0806 07:22:19.180538   31793 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0806 07:22:19.182046   31793 driver.go:392] Setting default libvirt URI to qemu:///system
	I0806 07:22:19.217814   31793 out.go:177] * Using the kvm2 driver based on user configuration
	I0806 07:22:19.219269   31793 start.go:297] selected driver: kvm2
	I0806 07:22:19.219281   31793 start.go:901] validating driver "kvm2" against <nil>
	I0806 07:22:19.219291   31793 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0806 07:22:19.219959   31793 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 07:22:19.220038   31793 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19370-13057/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0806 07:22:19.235588   31793 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0806 07:22:19.235636   31793 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0806 07:22:19.235826   31793 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0806 07:22:19.235882   31793 cni.go:84] Creating CNI manager for ""
	I0806 07:22:19.235900   31793 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0806 07:22:19.235907   31793 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0806 07:22:19.235952   31793 start.go:340] cluster config:
	{Name:ha-118173 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-118173 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 07:22:19.236037   31793 iso.go:125] acquiring lock: {Name:mke58d71de28babe305c5eb0c31c274e9713bb3f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 07:22:19.239086   31793 out.go:177] * Starting "ha-118173" primary control-plane node in "ha-118173" cluster
	I0806 07:22:19.240422   31793 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0806 07:22:19.240456   31793 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19370-13057/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0806 07:22:19.240464   31793 cache.go:56] Caching tarball of preloaded images
	I0806 07:22:19.240536   31793 preload.go:172] Found /home/jenkins/minikube-integration/19370-13057/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0806 07:22:19.240546   31793 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
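The preload step above just checks that the v1.30.3 cri-o tarball is already present under the local cache and skips the download. A small sketch of that exists-or-download check (the paths and the download helper are hypothetical):

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
	)

	// download is a stand-in for fetching the tarball from the release bucket.
	func download(path string) error { return fmt.Errorf("download of %s not implemented in this sketch", path) }

	// ensurePreload returns the cached preload tarball path, downloading only when
	// it is missing, as in the "Found ... in cache, skipping download" line above.
	func ensurePreload(cacheDir, k8sVersion, runtime string) (string, error) {
		name := fmt.Sprintf("preloaded-images-k8s-v18-%s-%s-overlay-amd64.tar.lz4", k8sVersion, runtime)
		path := filepath.Join(cacheDir, "preloaded-tarball", name)
		if _, err := os.Stat(path); err == nil {
			return path, nil // already cached, skip download
		}
		return path, download(path)
	}

	func main() {
		p, err := ensurePreload(os.ExpandEnv("$HOME/.minikube/cache"), "v1.30.3", "cri-o")
		fmt.Println(p, err)
	}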
	I0806 07:22:19.240830   31793 profile.go:143] Saving config to /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/config.json ...
	I0806 07:22:19.240847   31793 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/config.json: {Name:mk9d17cd34cb906319ddb11b145cd3570462329e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 07:22:19.240990   31793 start.go:360] acquireMachinesLock for ha-118173: {Name:mkc602ac9c8cc3360dcc0fcb8104303837aaab61 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0806 07:22:19.241018   31793 start.go:364] duration metric: took 15.051µs to acquireMachinesLock for "ha-118173"
	I0806 07:22:19.241033   31793 start.go:93] Provisioning new machine with config: &{Name:ha-118173 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-118173 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0806 07:22:19.241086   31793 start.go:125] createHost starting for "" (driver="kvm2")
	I0806 07:22:19.243213   31793 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0806 07:22:19.243369   31793 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:22:19.243414   31793 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:22:19.257859   31793 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46593
	I0806 07:22:19.258258   31793 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:22:19.258748   31793 main.go:141] libmachine: Using API Version  1
	I0806 07:22:19.258769   31793 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:22:19.259082   31793 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:22:19.259259   31793 main.go:141] libmachine: (ha-118173) Calling .GetMachineName
	I0806 07:22:19.259386   31793 main.go:141] libmachine: (ha-118173) Calling .DriverName
	I0806 07:22:19.259535   31793 start.go:159] libmachine.API.Create for "ha-118173" (driver="kvm2")
	I0806 07:22:19.259560   31793 client.go:168] LocalClient.Create starting
	I0806 07:22:19.259587   31793 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem
	I0806 07:22:19.259623   31793 main.go:141] libmachine: Decoding PEM data...
	I0806 07:22:19.259639   31793 main.go:141] libmachine: Parsing certificate...
	I0806 07:22:19.259687   31793 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19370-13057/.minikube/certs/cert.pem
	I0806 07:22:19.259705   31793 main.go:141] libmachine: Decoding PEM data...
	I0806 07:22:19.259726   31793 main.go:141] libmachine: Parsing certificate...
	I0806 07:22:19.259741   31793 main.go:141] libmachine: Running pre-create checks...
	I0806 07:22:19.259754   31793 main.go:141] libmachine: (ha-118173) Calling .PreCreateCheck
	I0806 07:22:19.260262   31793 main.go:141] libmachine: (ha-118173) Calling .GetConfigRaw
	I0806 07:22:19.260642   31793 main.go:141] libmachine: Creating machine...
	I0806 07:22:19.260655   31793 main.go:141] libmachine: (ha-118173) Calling .Create
	I0806 07:22:19.260774   31793 main.go:141] libmachine: (ha-118173) Creating KVM machine...
	I0806 07:22:19.261978   31793 main.go:141] libmachine: (ha-118173) DBG | found existing default KVM network
	I0806 07:22:19.262624   31793 main.go:141] libmachine: (ha-118173) DBG | I0806 07:22:19.262492   31816 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ac0}
	I0806 07:22:19.262707   31793 main.go:141] libmachine: (ha-118173) DBG | created network xml: 
	I0806 07:22:19.262736   31793 main.go:141] libmachine: (ha-118173) DBG | <network>
	I0806 07:22:19.262748   31793 main.go:141] libmachine: (ha-118173) DBG |   <name>mk-ha-118173</name>
	I0806 07:22:19.262761   31793 main.go:141] libmachine: (ha-118173) DBG |   <dns enable='no'/>
	I0806 07:22:19.262772   31793 main.go:141] libmachine: (ha-118173) DBG |   
	I0806 07:22:19.262783   31793 main.go:141] libmachine: (ha-118173) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0806 07:22:19.262791   31793 main.go:141] libmachine: (ha-118173) DBG |     <dhcp>
	I0806 07:22:19.262801   31793 main.go:141] libmachine: (ha-118173) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0806 07:22:19.262817   31793 main.go:141] libmachine: (ha-118173) DBG |     </dhcp>
	I0806 07:22:19.262831   31793 main.go:141] libmachine: (ha-118173) DBG |   </ip>
	I0806 07:22:19.262841   31793 main.go:141] libmachine: (ha-118173) DBG |   
	I0806 07:22:19.262853   31793 main.go:141] libmachine: (ha-118173) DBG | </network>
	I0806 07:22:19.262867   31793 main.go:141] libmachine: (ha-118173) DBG | 
	I0806 07:22:19.267917   31793 main.go:141] libmachine: (ha-118173) DBG | trying to create private KVM network mk-ha-118173 192.168.39.0/24...
	I0806 07:22:19.335037   31793 main.go:141] libmachine: (ha-118173) DBG | private KVM network mk-ha-118173 192.168.39.0/24 created
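Above, the driver picks a free private /24 (here 192.168.39.0/24) and defines a dedicated libvirt NAT network, mk-ha-118173, whose DHCP range covers .2-.253. A sketch of rendering that network XML from the values in the log (the real driver builds it internally and hands it to libvirt; this is only an illustration):

	package main

	import (
		"os"
		"text/template"
	)

	// netConfig holds the fields that appear in the network XML printed above.
	type netConfig struct {
		Name, Gateway, Netmask, DHCPStart, DHCPEnd string
	}

	const netXML = `<network>
	  <name>{{.Name}}</name>
	  <dns enable='no'/>
	  <ip address='{{.Gateway}}' netmask='{{.Netmask}}'>
	    <dhcp>
	      <range start='{{.DHCPStart}}' end='{{.DHCPEnd}}'/>
	    </dhcp>
	  </ip>
	</network>
	`

	func main() {
		// Values taken from the log: subnet 192.168.39.0/24, DHCP range .2-.253.
		c := netConfig{
			Name:      "mk-ha-118173",
			Gateway:   "192.168.39.1",
			Netmask:   "255.255.255.0",
			DHCPStart: "192.168.39.2",
			DHCPEnd:   "192.168.39.253",
		}
		// The rendered XML is what a driver would pass to libvirt to define the network.
		template.Must(template.New("net").Parse(netXML)).Execute(os.Stdout, c)
	}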
	I0806 07:22:19.335062   31793 main.go:141] libmachine: (ha-118173) Setting up store path in /home/jenkins/minikube-integration/19370-13057/.minikube/machines/ha-118173 ...
	I0806 07:22:19.335076   31793 main.go:141] libmachine: (ha-118173) DBG | I0806 07:22:19.335015   31816 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19370-13057/.minikube
	I0806 07:22:19.335086   31793 main.go:141] libmachine: (ha-118173) Building disk image from file:///home/jenkins/minikube-integration/19370-13057/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
	I0806 07:22:19.335264   31793 main.go:141] libmachine: (ha-118173) Downloading /home/jenkins/minikube-integration/19370-13057/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19370-13057/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0806 07:22:19.580519   31793 main.go:141] libmachine: (ha-118173) DBG | I0806 07:22:19.580396   31816 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19370-13057/.minikube/machines/ha-118173/id_rsa...
	I0806 07:22:19.759220   31793 main.go:141] libmachine: (ha-118173) DBG | I0806 07:22:19.759065   31816 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19370-13057/.minikube/machines/ha-118173/ha-118173.rawdisk...
	I0806 07:22:19.759248   31793 main.go:141] libmachine: (ha-118173) DBG | Writing magic tar header
	I0806 07:22:19.759262   31793 main.go:141] libmachine: (ha-118173) DBG | Writing SSH key tar header
	I0806 07:22:19.759274   31793 main.go:141] libmachine: (ha-118173) DBG | I0806 07:22:19.759202   31816 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19370-13057/.minikube/machines/ha-118173 ...
	I0806 07:22:19.759350   31793 main.go:141] libmachine: (ha-118173) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19370-13057/.minikube/machines/ha-118173
	I0806 07:22:19.759386   31793 main.go:141] libmachine: (ha-118173) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19370-13057/.minikube/machines
	I0806 07:22:19.759397   31793 main.go:141] libmachine: (ha-118173) Setting executable bit set on /home/jenkins/minikube-integration/19370-13057/.minikube/machines/ha-118173 (perms=drwx------)
	I0806 07:22:19.759406   31793 main.go:141] libmachine: (ha-118173) Setting executable bit set on /home/jenkins/minikube-integration/19370-13057/.minikube/machines (perms=drwxr-xr-x)
	I0806 07:22:19.759428   31793 main.go:141] libmachine: (ha-118173) Setting executable bit set on /home/jenkins/minikube-integration/19370-13057/.minikube (perms=drwxr-xr-x)
	I0806 07:22:19.759445   31793 main.go:141] libmachine: (ha-118173) Setting executable bit set on /home/jenkins/minikube-integration/19370-13057 (perms=drwxrwxr-x)
	I0806 07:22:19.759462   31793 main.go:141] libmachine: (ha-118173) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19370-13057/.minikube
	I0806 07:22:19.759475   31793 main.go:141] libmachine: (ha-118173) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0806 07:22:19.759491   31793 main.go:141] libmachine: (ha-118173) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0806 07:22:19.759499   31793 main.go:141] libmachine: (ha-118173) Creating domain...
	I0806 07:22:19.759507   31793 main.go:141] libmachine: (ha-118173) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19370-13057
	I0806 07:22:19.759515   31793 main.go:141] libmachine: (ha-118173) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0806 07:22:19.759526   31793 main.go:141] libmachine: (ha-118173) DBG | Checking permissions on dir: /home/jenkins
	I0806 07:22:19.759539   31793 main.go:141] libmachine: (ha-118173) DBG | Checking permissions on dir: /home
	I0806 07:22:19.759553   31793 main.go:141] libmachine: (ha-118173) DBG | Skipping /home - not owner
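The "Fixing permissions" lines walk from the machine directory up toward /home, setting the owner's execute (search) bit on each directory the current user owns and skipping the rest. A compact, Linux-only sketch of that walk (hypothetical helper, not the actual common.go code):

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
		"syscall"
	)

	// fixPermissions walks from dir upward until stop, adding the owner-execute
	// bit on directories owned by the current user and skipping the others,
	// mirroring the "Setting executable bit" / "Skipping /home - not owner" lines.
	func fixPermissions(dir, stop string) error {
		for {
			fi, err := os.Stat(dir)
			if err != nil {
				return err
			}
			if st, ok := fi.Sys().(*syscall.Stat_t); !ok || int(st.Uid) != os.Getuid() {
				fmt.Println("Skipping", dir, "- not owner")
			} else if err := os.Chmod(dir, fi.Mode().Perm()|0o100); err != nil {
				return err
			} else {
				fmt.Println("Setting executable bit on", dir)
			}
			if dir == stop || dir == filepath.Dir(dir) {
				return nil
			}
			dir = filepath.Dir(dir)
		}
	}

	func main() {
		_ = fixPermissions("/home/jenkins/minikube-integration/19370-13057/.minikube/machines/ha-118173", "/home")
	}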
	I0806 07:22:19.760506   31793 main.go:141] libmachine: (ha-118173) define libvirt domain using xml: 
	I0806 07:22:19.760527   31793 main.go:141] libmachine: (ha-118173) <domain type='kvm'>
	I0806 07:22:19.760533   31793 main.go:141] libmachine: (ha-118173)   <name>ha-118173</name>
	I0806 07:22:19.760539   31793 main.go:141] libmachine: (ha-118173)   <memory unit='MiB'>2200</memory>
	I0806 07:22:19.760562   31793 main.go:141] libmachine: (ha-118173)   <vcpu>2</vcpu>
	I0806 07:22:19.760581   31793 main.go:141] libmachine: (ha-118173)   <features>
	I0806 07:22:19.760594   31793 main.go:141] libmachine: (ha-118173)     <acpi/>
	I0806 07:22:19.760603   31793 main.go:141] libmachine: (ha-118173)     <apic/>
	I0806 07:22:19.760612   31793 main.go:141] libmachine: (ha-118173)     <pae/>
	I0806 07:22:19.760622   31793 main.go:141] libmachine: (ha-118173)     
	I0806 07:22:19.760634   31793 main.go:141] libmachine: (ha-118173)   </features>
	I0806 07:22:19.760644   31793 main.go:141] libmachine: (ha-118173)   <cpu mode='host-passthrough'>
	I0806 07:22:19.760661   31793 main.go:141] libmachine: (ha-118173)   
	I0806 07:22:19.760677   31793 main.go:141] libmachine: (ha-118173)   </cpu>
	I0806 07:22:19.760686   31793 main.go:141] libmachine: (ha-118173)   <os>
	I0806 07:22:19.760691   31793 main.go:141] libmachine: (ha-118173)     <type>hvm</type>
	I0806 07:22:19.760696   31793 main.go:141] libmachine: (ha-118173)     <boot dev='cdrom'/>
	I0806 07:22:19.760707   31793 main.go:141] libmachine: (ha-118173)     <boot dev='hd'/>
	I0806 07:22:19.760715   31793 main.go:141] libmachine: (ha-118173)     <bootmenu enable='no'/>
	I0806 07:22:19.760719   31793 main.go:141] libmachine: (ha-118173)   </os>
	I0806 07:22:19.760726   31793 main.go:141] libmachine: (ha-118173)   <devices>
	I0806 07:22:19.760731   31793 main.go:141] libmachine: (ha-118173)     <disk type='file' device='cdrom'>
	I0806 07:22:19.760741   31793 main.go:141] libmachine: (ha-118173)       <source file='/home/jenkins/minikube-integration/19370-13057/.minikube/machines/ha-118173/boot2docker.iso'/>
	I0806 07:22:19.760750   31793 main.go:141] libmachine: (ha-118173)       <target dev='hdc' bus='scsi'/>
	I0806 07:22:19.760757   31793 main.go:141] libmachine: (ha-118173)       <readonly/>
	I0806 07:22:19.760761   31793 main.go:141] libmachine: (ha-118173)     </disk>
	I0806 07:22:19.760770   31793 main.go:141] libmachine: (ha-118173)     <disk type='file' device='disk'>
	I0806 07:22:19.760776   31793 main.go:141] libmachine: (ha-118173)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0806 07:22:19.760785   31793 main.go:141] libmachine: (ha-118173)       <source file='/home/jenkins/minikube-integration/19370-13057/.minikube/machines/ha-118173/ha-118173.rawdisk'/>
	I0806 07:22:19.760790   31793 main.go:141] libmachine: (ha-118173)       <target dev='hda' bus='virtio'/>
	I0806 07:22:19.760797   31793 main.go:141] libmachine: (ha-118173)     </disk>
	I0806 07:22:19.760802   31793 main.go:141] libmachine: (ha-118173)     <interface type='network'>
	I0806 07:22:19.760810   31793 main.go:141] libmachine: (ha-118173)       <source network='mk-ha-118173'/>
	I0806 07:22:19.760823   31793 main.go:141] libmachine: (ha-118173)       <model type='virtio'/>
	I0806 07:22:19.760831   31793 main.go:141] libmachine: (ha-118173)     </interface>
	I0806 07:22:19.760835   31793 main.go:141] libmachine: (ha-118173)     <interface type='network'>
	I0806 07:22:19.760841   31793 main.go:141] libmachine: (ha-118173)       <source network='default'/>
	I0806 07:22:19.760847   31793 main.go:141] libmachine: (ha-118173)       <model type='virtio'/>
	I0806 07:22:19.760853   31793 main.go:141] libmachine: (ha-118173)     </interface>
	I0806 07:22:19.760859   31793 main.go:141] libmachine: (ha-118173)     <serial type='pty'>
	I0806 07:22:19.760864   31793 main.go:141] libmachine: (ha-118173)       <target port='0'/>
	I0806 07:22:19.760869   31793 main.go:141] libmachine: (ha-118173)     </serial>
	I0806 07:22:19.760875   31793 main.go:141] libmachine: (ha-118173)     <console type='pty'>
	I0806 07:22:19.760884   31793 main.go:141] libmachine: (ha-118173)       <target type='serial' port='0'/>
	I0806 07:22:19.760892   31793 main.go:141] libmachine: (ha-118173)     </console>
	I0806 07:22:19.760896   31793 main.go:141] libmachine: (ha-118173)     <rng model='virtio'>
	I0806 07:22:19.760902   31793 main.go:141] libmachine: (ha-118173)       <backend model='random'>/dev/random</backend>
	I0806 07:22:19.760908   31793 main.go:141] libmachine: (ha-118173)     </rng>
	I0806 07:22:19.760912   31793 main.go:141] libmachine: (ha-118173)     
	I0806 07:22:19.760919   31793 main.go:141] libmachine: (ha-118173)     
	I0806 07:22:19.760923   31793 main.go:141] libmachine: (ha-118173)   </devices>
	I0806 07:22:19.760929   31793 main.go:141] libmachine: (ha-118173) </domain>
	I0806 07:22:19.760936   31793 main.go:141] libmachine: (ha-118173) 
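The domain definition above is handed to libvirt through its API; the same XML can be exercised by hand with the standard virsh CLI. A minimal sketch, assuming the XML is saved locally as ha-118173.xml (the file name is illustrative):

    # define the domain from the XML, start it, and read back what libvirt stored
    virsh define ha-118173.xml
    virsh start ha-118173
    virsh dumpxml ha-118173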
	I0806 07:22:19.765478   31793 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined MAC address 52:54:00:c3:65:2c in network default
	I0806 07:22:19.766041   31793 main.go:141] libmachine: (ha-118173) Ensuring networks are active...
	I0806 07:22:19.766056   31793 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:22:19.766643   31793 main.go:141] libmachine: (ha-118173) Ensuring network default is active
	I0806 07:22:19.766933   31793 main.go:141] libmachine: (ha-118173) Ensuring network mk-ha-118173 is active
	I0806 07:22:19.767403   31793 main.go:141] libmachine: (ha-118173) Getting domain xml...
	I0806 07:22:19.767993   31793 main.go:141] libmachine: (ha-118173) Creating domain...
	I0806 07:22:20.969734   31793 main.go:141] libmachine: (ha-118173) Waiting to get IP...
	I0806 07:22:20.970482   31793 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:22:20.970863   31793 main.go:141] libmachine: (ha-118173) DBG | unable to find current IP address of domain ha-118173 in network mk-ha-118173
	I0806 07:22:20.970891   31793 main.go:141] libmachine: (ha-118173) DBG | I0806 07:22:20.970816   31816 retry.go:31] will retry after 237.153154ms: waiting for machine to come up
	I0806 07:22:21.209342   31793 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:22:21.209803   31793 main.go:141] libmachine: (ha-118173) DBG | unable to find current IP address of domain ha-118173 in network mk-ha-118173
	I0806 07:22:21.209831   31793 main.go:141] libmachine: (ha-118173) DBG | I0806 07:22:21.209764   31816 retry.go:31] will retry after 321.567483ms: waiting for machine to come up
	I0806 07:22:21.533243   31793 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:22:21.533703   31793 main.go:141] libmachine: (ha-118173) DBG | unable to find current IP address of domain ha-118173 in network mk-ha-118173
	I0806 07:22:21.533726   31793 main.go:141] libmachine: (ha-118173) DBG | I0806 07:22:21.533632   31816 retry.go:31] will retry after 468.04222ms: waiting for machine to come up
	I0806 07:22:22.003258   31793 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:22:22.003621   31793 main.go:141] libmachine: (ha-118173) DBG | unable to find current IP address of domain ha-118173 in network mk-ha-118173
	I0806 07:22:22.003648   31793 main.go:141] libmachine: (ha-118173) DBG | I0806 07:22:22.003599   31816 retry.go:31] will retry after 376.04295ms: waiting for machine to come up
	I0806 07:22:22.381111   31793 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:22:22.381516   31793 main.go:141] libmachine: (ha-118173) DBG | unable to find current IP address of domain ha-118173 in network mk-ha-118173
	I0806 07:22:22.381538   31793 main.go:141] libmachine: (ha-118173) DBG | I0806 07:22:22.381491   31816 retry.go:31] will retry after 578.39666ms: waiting for machine to come up
	I0806 07:22:22.961296   31793 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:22:22.961662   31793 main.go:141] libmachine: (ha-118173) DBG | unable to find current IP address of domain ha-118173 in network mk-ha-118173
	I0806 07:22:22.961687   31793 main.go:141] libmachine: (ha-118173) DBG | I0806 07:22:22.961615   31816 retry.go:31] will retry after 574.068902ms: waiting for machine to come up
	I0806 07:22:23.537289   31793 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:22:23.537732   31793 main.go:141] libmachine: (ha-118173) DBG | unable to find current IP address of domain ha-118173 in network mk-ha-118173
	I0806 07:22:23.537767   31793 main.go:141] libmachine: (ha-118173) DBG | I0806 07:22:23.537696   31816 retry.go:31] will retry after 1.126514021s: waiting for machine to come up
	I0806 07:22:24.665477   31793 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:22:24.665839   31793 main.go:141] libmachine: (ha-118173) DBG | unable to find current IP address of domain ha-118173 in network mk-ha-118173
	I0806 07:22:24.665873   31793 main.go:141] libmachine: (ha-118173) DBG | I0806 07:22:24.665801   31816 retry.go:31] will retry after 1.097616443s: waiting for machine to come up
	I0806 07:22:25.765277   31793 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:22:25.765676   31793 main.go:141] libmachine: (ha-118173) DBG | unable to find current IP address of domain ha-118173 in network mk-ha-118173
	I0806 07:22:25.765699   31793 main.go:141] libmachine: (ha-118173) DBG | I0806 07:22:25.765646   31816 retry.go:31] will retry after 1.51043032s: waiting for machine to come up
	I0806 07:22:27.277172   31793 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:22:27.277519   31793 main.go:141] libmachine: (ha-118173) DBG | unable to find current IP address of domain ha-118173 in network mk-ha-118173
	I0806 07:22:27.277546   31793 main.go:141] libmachine: (ha-118173) DBG | I0806 07:22:27.277476   31816 retry.go:31] will retry after 1.467155257s: waiting for machine to come up
	I0806 07:22:28.745913   31793 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:22:28.746506   31793 main.go:141] libmachine: (ha-118173) DBG | unable to find current IP address of domain ha-118173 in network mk-ha-118173
	I0806 07:22:28.746614   31793 main.go:141] libmachine: (ha-118173) DBG | I0806 07:22:28.746481   31816 retry.go:31] will retry after 2.73537464s: waiting for machine to come up
	I0806 07:22:31.484404   31793 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:22:31.484928   31793 main.go:141] libmachine: (ha-118173) DBG | unable to find current IP address of domain ha-118173 in network mk-ha-118173
	I0806 07:22:31.484961   31793 main.go:141] libmachine: (ha-118173) DBG | I0806 07:22:31.484896   31816 retry.go:31] will retry after 2.580809537s: waiting for machine to come up
	I0806 07:22:34.068424   31793 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:22:34.068889   31793 main.go:141] libmachine: (ha-118173) DBG | unable to find current IP address of domain ha-118173 in network mk-ha-118173
	I0806 07:22:34.068913   31793 main.go:141] libmachine: (ha-118173) DBG | I0806 07:22:34.068861   31816 retry.go:31] will retry after 2.864411811s: waiting for machine to come up
	I0806 07:22:36.936160   31793 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:22:36.936439   31793 main.go:141] libmachine: (ha-118173) DBG | unable to find current IP address of domain ha-118173 in network mk-ha-118173
	I0806 07:22:36.936459   31793 main.go:141] libmachine: (ha-118173) DBG | I0806 07:22:36.936426   31816 retry.go:31] will retry after 5.39122106s: waiting for machine to come up
	I0806 07:22:42.331757   31793 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:22:42.332181   31793 main.go:141] libmachine: (ha-118173) Found IP for machine: 192.168.39.164
	I0806 07:22:42.332206   31793 main.go:141] libmachine: (ha-118173) Reserving static IP address...
	I0806 07:22:42.332217   31793 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has current primary IP address 192.168.39.164 and MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:22:42.332542   31793 main.go:141] libmachine: (ha-118173) DBG | unable to find host DHCP lease matching {name: "ha-118173", mac: "52:54:00:ef:df:f0", ip: "192.168.39.164"} in network mk-ha-118173
	I0806 07:22:42.404958   31793 main.go:141] libmachine: (ha-118173) DBG | Getting to WaitForSSH function...
	I0806 07:22:42.404976   31793 main.go:141] libmachine: (ha-118173) Reserved static IP address: 192.168.39.164
	I0806 07:22:42.404985   31793 main.go:141] libmachine: (ha-118173) Waiting for SSH to be available...
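The retry loop above polls libvirt until the guest obtains a DHCP lease on the mk-ha-118173 network. A sketch of checking the same information manually, using the network and domain names from this log:

    # list leases handed out on the minikube-created network
    virsh net-dhcp-leases mk-ha-118173
    # or query the domain's interface addresses directly
    virsh domifaddr ha-118173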
	I0806 07:22:42.407425   31793 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:22:42.407848   31793 main.go:141] libmachine: (ha-118173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:df:f0", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:22:33 +0000 UTC Type:0 Mac:52:54:00:ef:df:f0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:minikube Clientid:01:52:54:00:ef:df:f0}
	I0806 07:22:42.407870   31793 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined IP address 192.168.39.164 and MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:22:42.408023   31793 main.go:141] libmachine: (ha-118173) DBG | Using SSH client type: external
	I0806 07:22:42.408042   31793 main.go:141] libmachine: (ha-118173) DBG | Using SSH private key: /home/jenkins/minikube-integration/19370-13057/.minikube/machines/ha-118173/id_rsa (-rw-------)
	I0806 07:22:42.408084   31793 main.go:141] libmachine: (ha-118173) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.164 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19370-13057/.minikube/machines/ha-118173/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0806 07:22:42.408110   31793 main.go:141] libmachine: (ha-118173) DBG | About to run SSH command:
	I0806 07:22:42.408144   31793 main.go:141] libmachine: (ha-118173) DBG | exit 0
	I0806 07:22:42.532516   31793 main.go:141] libmachine: (ha-118173) DBG | SSH cmd err, output: <nil>: 
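The probe above runs through the external /usr/bin/ssh client with the options logged at 07:22:42.408084; reassembled as a single runnable command (key path, user and address are taken verbatim from the log):

    /usr/bin/ssh -F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no \
      -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 \
      -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes \
      -i /home/jenkins/minikube-integration/19370-13057/.minikube/machines/ha-118173/id_rsa \
      -p 22 docker@192.168.39.164 'exit 0'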
	I0806 07:22:42.532772   31793 main.go:141] libmachine: (ha-118173) KVM machine creation complete!
	I0806 07:22:42.533108   31793 main.go:141] libmachine: (ha-118173) Calling .GetConfigRaw
	I0806 07:22:42.533610   31793 main.go:141] libmachine: (ha-118173) Calling .DriverName
	I0806 07:22:42.533842   31793 main.go:141] libmachine: (ha-118173) Calling .DriverName
	I0806 07:22:42.534064   31793 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0806 07:22:42.534081   31793 main.go:141] libmachine: (ha-118173) Calling .GetState
	I0806 07:22:42.535521   31793 main.go:141] libmachine: Detecting operating system of created instance...
	I0806 07:22:42.535535   31793 main.go:141] libmachine: Waiting for SSH to be available...
	I0806 07:22:42.535541   31793 main.go:141] libmachine: Getting to WaitForSSH function...
	I0806 07:22:42.535547   31793 main.go:141] libmachine: (ha-118173) Calling .GetSSHHostname
	I0806 07:22:42.538167   31793 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:22:42.538529   31793 main.go:141] libmachine: (ha-118173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:df:f0", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:22:33 +0000 UTC Type:0 Mac:52:54:00:ef:df:f0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:ha-118173 Clientid:01:52:54:00:ef:df:f0}
	I0806 07:22:42.538551   31793 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined IP address 192.168.39.164 and MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:22:42.538692   31793 main.go:141] libmachine: (ha-118173) Calling .GetSSHPort
	I0806 07:22:42.538896   31793 main.go:141] libmachine: (ha-118173) Calling .GetSSHKeyPath
	I0806 07:22:42.539040   31793 main.go:141] libmachine: (ha-118173) Calling .GetSSHKeyPath
	I0806 07:22:42.539196   31793 main.go:141] libmachine: (ha-118173) Calling .GetSSHUsername
	I0806 07:22:42.539328   31793 main.go:141] libmachine: Using SSH client type: native
	I0806 07:22:42.539504   31793 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.164 22 <nil> <nil>}
	I0806 07:22:42.539513   31793 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0806 07:22:42.639729   31793 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0806 07:22:42.639751   31793 main.go:141] libmachine: Detecting the provisioner...
	I0806 07:22:42.639759   31793 main.go:141] libmachine: (ha-118173) Calling .GetSSHHostname
	I0806 07:22:42.642762   31793 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:22:42.643126   31793 main.go:141] libmachine: (ha-118173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:df:f0", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:22:33 +0000 UTC Type:0 Mac:52:54:00:ef:df:f0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:ha-118173 Clientid:01:52:54:00:ef:df:f0}
	I0806 07:22:42.643149   31793 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined IP address 192.168.39.164 and MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:22:42.643304   31793 main.go:141] libmachine: (ha-118173) Calling .GetSSHPort
	I0806 07:22:42.643520   31793 main.go:141] libmachine: (ha-118173) Calling .GetSSHKeyPath
	I0806 07:22:42.643700   31793 main.go:141] libmachine: (ha-118173) Calling .GetSSHKeyPath
	I0806 07:22:42.643875   31793 main.go:141] libmachine: (ha-118173) Calling .GetSSHUsername
	I0806 07:22:42.644062   31793 main.go:141] libmachine: Using SSH client type: native
	I0806 07:22:42.644248   31793 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.164 22 <nil> <nil>}
	I0806 07:22:42.644262   31793 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0806 07:22:42.745341   31793 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0806 07:22:42.745414   31793 main.go:141] libmachine: found compatible host: buildroot
	I0806 07:22:42.745428   31793 main.go:141] libmachine: Provisioning with buildroot...
	I0806 07:22:42.745439   31793 main.go:141] libmachine: (ha-118173) Calling .GetMachineName
	I0806 07:22:42.745695   31793 buildroot.go:166] provisioning hostname "ha-118173"
	I0806 07:22:42.745719   31793 main.go:141] libmachine: (ha-118173) Calling .GetMachineName
	I0806 07:22:42.745907   31793 main.go:141] libmachine: (ha-118173) Calling .GetSSHHostname
	I0806 07:22:42.748408   31793 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:22:42.748670   31793 main.go:141] libmachine: (ha-118173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:df:f0", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:22:33 +0000 UTC Type:0 Mac:52:54:00:ef:df:f0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:ha-118173 Clientid:01:52:54:00:ef:df:f0}
	I0806 07:22:42.748706   31793 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined IP address 192.168.39.164 and MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:22:42.748904   31793 main.go:141] libmachine: (ha-118173) Calling .GetSSHPort
	I0806 07:22:42.749035   31793 main.go:141] libmachine: (ha-118173) Calling .GetSSHKeyPath
	I0806 07:22:42.749160   31793 main.go:141] libmachine: (ha-118173) Calling .GetSSHKeyPath
	I0806 07:22:42.749310   31793 main.go:141] libmachine: (ha-118173) Calling .GetSSHUsername
	I0806 07:22:42.749487   31793 main.go:141] libmachine: Using SSH client type: native
	I0806 07:22:42.749647   31793 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.164 22 <nil> <nil>}
	I0806 07:22:42.749659   31793 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-118173 && echo "ha-118173" | sudo tee /etc/hostname
	I0806 07:22:42.863245   31793 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-118173
	
	I0806 07:22:42.863266   31793 main.go:141] libmachine: (ha-118173) Calling .GetSSHHostname
	I0806 07:22:42.865903   31793 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:22:42.866299   31793 main.go:141] libmachine: (ha-118173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:df:f0", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:22:33 +0000 UTC Type:0 Mac:52:54:00:ef:df:f0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:ha-118173 Clientid:01:52:54:00:ef:df:f0}
	I0806 07:22:42.866327   31793 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined IP address 192.168.39.164 and MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:22:42.866479   31793 main.go:141] libmachine: (ha-118173) Calling .GetSSHPort
	I0806 07:22:42.866683   31793 main.go:141] libmachine: (ha-118173) Calling .GetSSHKeyPath
	I0806 07:22:42.866862   31793 main.go:141] libmachine: (ha-118173) Calling .GetSSHKeyPath
	I0806 07:22:42.867019   31793 main.go:141] libmachine: (ha-118173) Calling .GetSSHUsername
	I0806 07:22:42.867200   31793 main.go:141] libmachine: Using SSH client type: native
	I0806 07:22:42.867410   31793 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.164 22 <nil> <nil>}
	I0806 07:22:42.867434   31793 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-118173' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-118173/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-118173' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0806 07:22:42.977503   31793 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0806 07:22:42.977535   31793 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19370-13057/.minikube CaCertPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19370-13057/.minikube}
	I0806 07:22:42.977579   31793 buildroot.go:174] setting up certificates
	I0806 07:22:42.977595   31793 provision.go:84] configureAuth start
	I0806 07:22:42.977605   31793 main.go:141] libmachine: (ha-118173) Calling .GetMachineName
	I0806 07:22:42.977928   31793 main.go:141] libmachine: (ha-118173) Calling .GetIP
	I0806 07:22:42.980709   31793 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:22:42.981057   31793 main.go:141] libmachine: (ha-118173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:df:f0", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:22:33 +0000 UTC Type:0 Mac:52:54:00:ef:df:f0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:ha-118173 Clientid:01:52:54:00:ef:df:f0}
	I0806 07:22:42.981082   31793 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined IP address 192.168.39.164 and MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:22:42.981227   31793 main.go:141] libmachine: (ha-118173) Calling .GetSSHHostname
	I0806 07:22:42.983204   31793 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:22:42.983618   31793 main.go:141] libmachine: (ha-118173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:df:f0", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:22:33 +0000 UTC Type:0 Mac:52:54:00:ef:df:f0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:ha-118173 Clientid:01:52:54:00:ef:df:f0}
	I0806 07:22:42.983633   31793 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined IP address 192.168.39.164 and MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:22:42.983775   31793 provision.go:143] copyHostCerts
	I0806 07:22:42.983805   31793 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19370-13057/.minikube/ca.pem
	I0806 07:22:42.983847   31793 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-13057/.minikube/ca.pem, removing ...
	I0806 07:22:42.983856   31793 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-13057/.minikube/ca.pem
	I0806 07:22:42.983933   31793 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19370-13057/.minikube/ca.pem (1078 bytes)
	I0806 07:22:42.984041   31793 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19370-13057/.minikube/cert.pem
	I0806 07:22:42.984068   31793 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-13057/.minikube/cert.pem, removing ...
	I0806 07:22:42.984078   31793 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-13057/.minikube/cert.pem
	I0806 07:22:42.984117   31793 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19370-13057/.minikube/cert.pem (1123 bytes)
	I0806 07:22:42.984192   31793 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19370-13057/.minikube/key.pem
	I0806 07:22:42.984216   31793 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-13057/.minikube/key.pem, removing ...
	I0806 07:22:42.984225   31793 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-13057/.minikube/key.pem
	I0806 07:22:42.984259   31793 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19370-13057/.minikube/key.pem (1675 bytes)
	I0806 07:22:42.984338   31793 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19370-13057/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca-key.pem org=jenkins.ha-118173 san=[127.0.0.1 192.168.39.164 ha-118173 localhost minikube]
	I0806 07:22:43.227825   31793 provision.go:177] copyRemoteCerts
	I0806 07:22:43.227894   31793 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0806 07:22:43.227920   31793 main.go:141] libmachine: (ha-118173) Calling .GetSSHHostname
	I0806 07:22:43.230720   31793 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:22:43.231085   31793 main.go:141] libmachine: (ha-118173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:df:f0", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:22:33 +0000 UTC Type:0 Mac:52:54:00:ef:df:f0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:ha-118173 Clientid:01:52:54:00:ef:df:f0}
	I0806 07:22:43.231115   31793 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined IP address 192.168.39.164 and MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:22:43.231313   31793 main.go:141] libmachine: (ha-118173) Calling .GetSSHPort
	I0806 07:22:43.231483   31793 main.go:141] libmachine: (ha-118173) Calling .GetSSHKeyPath
	I0806 07:22:43.231667   31793 main.go:141] libmachine: (ha-118173) Calling .GetSSHUsername
	I0806 07:22:43.231803   31793 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/ha-118173/id_rsa Username:docker}
	I0806 07:22:43.310581   31793 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0806 07:22:43.310659   31793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0806 07:22:43.334641   31793 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0806 07:22:43.334719   31793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0806 07:22:43.359238   31793 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0806 07:22:43.359305   31793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0806 07:22:43.383248   31793 provision.go:87] duration metric: took 405.641533ms to configureAuth
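configureAuth generated a server certificate with the SANs listed at 07:22:42.984338 and copied it to /etc/docker on the guest. A sketch of double-checking it in place once the node is up (standard openssl flags; minikube ssh runs a command on the node):

    minikube -p ha-118173 ssh "sudo openssl x509 -in /etc/docker/server.pem -noout -subject -dates"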
	I0806 07:22:43.383272   31793 buildroot.go:189] setting minikube options for container-runtime
	I0806 07:22:43.383432   31793 config.go:182] Loaded profile config "ha-118173": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0806 07:22:43.383512   31793 main.go:141] libmachine: (ha-118173) Calling .GetSSHHostname
	I0806 07:22:43.385959   31793 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:22:43.386318   31793 main.go:141] libmachine: (ha-118173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:df:f0", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:22:33 +0000 UTC Type:0 Mac:52:54:00:ef:df:f0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:ha-118173 Clientid:01:52:54:00:ef:df:f0}
	I0806 07:22:43.386342   31793 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined IP address 192.168.39.164 and MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:22:43.386514   31793 main.go:141] libmachine: (ha-118173) Calling .GetSSHPort
	I0806 07:22:43.386708   31793 main.go:141] libmachine: (ha-118173) Calling .GetSSHKeyPath
	I0806 07:22:43.386858   31793 main.go:141] libmachine: (ha-118173) Calling .GetSSHKeyPath
	I0806 07:22:43.386993   31793 main.go:141] libmachine: (ha-118173) Calling .GetSSHUsername
	I0806 07:22:43.387147   31793 main.go:141] libmachine: Using SSH client type: native
	I0806 07:22:43.387364   31793 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.164 22 <nil> <nil>}
	I0806 07:22:43.387382   31793 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0806 07:22:43.649612   31793 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0806 07:22:43.649641   31793 main.go:141] libmachine: Checking connection to Docker...
	I0806 07:22:43.649651   31793 main.go:141] libmachine: (ha-118173) Calling .GetURL
	I0806 07:22:43.651060   31793 main.go:141] libmachine: (ha-118173) DBG | Using libvirt version 6000000
	I0806 07:22:43.653159   31793 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:22:43.653535   31793 main.go:141] libmachine: (ha-118173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:df:f0", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:22:33 +0000 UTC Type:0 Mac:52:54:00:ef:df:f0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:ha-118173 Clientid:01:52:54:00:ef:df:f0}
	I0806 07:22:43.653555   31793 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined IP address 192.168.39.164 and MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:22:43.653733   31793 main.go:141] libmachine: Docker is up and running!
	I0806 07:22:43.653753   31793 main.go:141] libmachine: Reticulating splines...
	I0806 07:22:43.653761   31793 client.go:171] duration metric: took 24.394192731s to LocalClient.Create
	I0806 07:22:43.653784   31793 start.go:167] duration metric: took 24.394250648s to libmachine.API.Create "ha-118173"
	I0806 07:22:43.653790   31793 start.go:293] postStartSetup for "ha-118173" (driver="kvm2")
	I0806 07:22:43.653799   31793 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0806 07:22:43.653811   31793 main.go:141] libmachine: (ha-118173) Calling .DriverName
	I0806 07:22:43.654028   31793 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0806 07:22:43.654050   31793 main.go:141] libmachine: (ha-118173) Calling .GetSSHHostname
	I0806 07:22:43.656729   31793 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:22:43.657098   31793 main.go:141] libmachine: (ha-118173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:df:f0", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:22:33 +0000 UTC Type:0 Mac:52:54:00:ef:df:f0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:ha-118173 Clientid:01:52:54:00:ef:df:f0}
	I0806 07:22:43.657122   31793 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined IP address 192.168.39.164 and MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:22:43.657296   31793 main.go:141] libmachine: (ha-118173) Calling .GetSSHPort
	I0806 07:22:43.657499   31793 main.go:141] libmachine: (ha-118173) Calling .GetSSHKeyPath
	I0806 07:22:43.657675   31793 main.go:141] libmachine: (ha-118173) Calling .GetSSHUsername
	I0806 07:22:43.657840   31793 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/ha-118173/id_rsa Username:docker}
	I0806 07:22:43.738928   31793 ssh_runner.go:195] Run: cat /etc/os-release
	I0806 07:22:43.743272   31793 info.go:137] Remote host: Buildroot 2023.02.9
	I0806 07:22:43.743305   31793 filesync.go:126] Scanning /home/jenkins/minikube-integration/19370-13057/.minikube/addons for local assets ...
	I0806 07:22:43.743380   31793 filesync.go:126] Scanning /home/jenkins/minikube-integration/19370-13057/.minikube/files for local assets ...
	I0806 07:22:43.743485   31793 filesync.go:149] local asset: /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem -> 202462.pem in /etc/ssl/certs
	I0806 07:22:43.743497   31793 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem -> /etc/ssl/certs/202462.pem
	I0806 07:22:43.743590   31793 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0806 07:22:43.752929   31793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem --> /etc/ssl/certs/202462.pem (1708 bytes)
	I0806 07:22:43.777273   31793 start.go:296] duration metric: took 123.469717ms for postStartSetup
	I0806 07:22:43.777320   31793 main.go:141] libmachine: (ha-118173) Calling .GetConfigRaw
	I0806 07:22:43.777825   31793 main.go:141] libmachine: (ha-118173) Calling .GetIP
	I0806 07:22:43.780421   31793 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:22:43.780747   31793 main.go:141] libmachine: (ha-118173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:df:f0", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:22:33 +0000 UTC Type:0 Mac:52:54:00:ef:df:f0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:ha-118173 Clientid:01:52:54:00:ef:df:f0}
	I0806 07:22:43.780778   31793 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined IP address 192.168.39.164 and MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:22:43.781069   31793 profile.go:143] Saving config to /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/config.json ...
	I0806 07:22:43.781248   31793 start.go:128] duration metric: took 24.540153885s to createHost
	I0806 07:22:43.781269   31793 main.go:141] libmachine: (ha-118173) Calling .GetSSHHostname
	I0806 07:22:43.783423   31793 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:22:43.783719   31793 main.go:141] libmachine: (ha-118173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:df:f0", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:22:33 +0000 UTC Type:0 Mac:52:54:00:ef:df:f0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:ha-118173 Clientid:01:52:54:00:ef:df:f0}
	I0806 07:22:43.783741   31793 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined IP address 192.168.39.164 and MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:22:43.783856   31793 main.go:141] libmachine: (ha-118173) Calling .GetSSHPort
	I0806 07:22:43.784027   31793 main.go:141] libmachine: (ha-118173) Calling .GetSSHKeyPath
	I0806 07:22:43.784196   31793 main.go:141] libmachine: (ha-118173) Calling .GetSSHKeyPath
	I0806 07:22:43.784349   31793 main.go:141] libmachine: (ha-118173) Calling .GetSSHUsername
	I0806 07:22:43.784536   31793 main.go:141] libmachine: Using SSH client type: native
	I0806 07:22:43.784740   31793 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.164 22 <nil> <nil>}
	I0806 07:22:43.784758   31793 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0806 07:22:43.885380   31793 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722928963.865301045
	
	I0806 07:22:43.885401   31793 fix.go:216] guest clock: 1722928963.865301045
	I0806 07:22:43.885409   31793 fix.go:229] Guest: 2024-08-06 07:22:43.865301045 +0000 UTC Remote: 2024-08-06 07:22:43.781259462 +0000 UTC m=+24.648998203 (delta=84.041583ms)
	I0806 07:22:43.885445   31793 fix.go:200] guest clock delta is within tolerance: 84.041583ms
	I0806 07:22:43.885453   31793 start.go:83] releasing machines lock for "ha-118173", held for 24.644426654s
	I0806 07:22:43.885472   31793 main.go:141] libmachine: (ha-118173) Calling .DriverName
	I0806 07:22:43.885748   31793 main.go:141] libmachine: (ha-118173) Calling .GetIP
	I0806 07:22:43.888641   31793 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:22:43.889013   31793 main.go:141] libmachine: (ha-118173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:df:f0", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:22:33 +0000 UTC Type:0 Mac:52:54:00:ef:df:f0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:ha-118173 Clientid:01:52:54:00:ef:df:f0}
	I0806 07:22:43.889034   31793 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined IP address 192.168.39.164 and MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:22:43.889178   31793 main.go:141] libmachine: (ha-118173) Calling .DriverName
	I0806 07:22:43.889589   31793 main.go:141] libmachine: (ha-118173) Calling .DriverName
	I0806 07:22:43.889778   31793 main.go:141] libmachine: (ha-118173) Calling .DriverName
	I0806 07:22:43.889891   31793 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0806 07:22:43.889930   31793 main.go:141] libmachine: (ha-118173) Calling .GetSSHHostname
	I0806 07:22:43.890025   31793 ssh_runner.go:195] Run: cat /version.json
	I0806 07:22:43.890052   31793 main.go:141] libmachine: (ha-118173) Calling .GetSSHHostname
	I0806 07:22:43.892656   31793 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:22:43.892876   31793 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:22:43.893089   31793 main.go:141] libmachine: (ha-118173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:df:f0", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:22:33 +0000 UTC Type:0 Mac:52:54:00:ef:df:f0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:ha-118173 Clientid:01:52:54:00:ef:df:f0}
	I0806 07:22:43.893120   31793 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined IP address 192.168.39.164 and MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:22:43.893217   31793 main.go:141] libmachine: (ha-118173) Calling .GetSSHPort
	I0806 07:22:43.893359   31793 main.go:141] libmachine: (ha-118173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:df:f0", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:22:33 +0000 UTC Type:0 Mac:52:54:00:ef:df:f0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:ha-118173 Clientid:01:52:54:00:ef:df:f0}
	I0806 07:22:43.893384   31793 main.go:141] libmachine: (ha-118173) Calling .GetSSHKeyPath
	I0806 07:22:43.893388   31793 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined IP address 192.168.39.164 and MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:22:43.893569   31793 main.go:141] libmachine: (ha-118173) Calling .GetSSHUsername
	I0806 07:22:43.893572   31793 main.go:141] libmachine: (ha-118173) Calling .GetSSHPort
	I0806 07:22:43.893758   31793 main.go:141] libmachine: (ha-118173) Calling .GetSSHKeyPath
	I0806 07:22:43.893756   31793 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/ha-118173/id_rsa Username:docker}
	I0806 07:22:43.893965   31793 main.go:141] libmachine: (ha-118173) Calling .GetSSHUsername
	I0806 07:22:43.894118   31793 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/ha-118173/id_rsa Username:docker}
	I0806 07:22:43.993907   31793 ssh_runner.go:195] Run: systemctl --version
	I0806 07:22:43.999975   31793 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0806 07:22:44.161213   31793 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0806 07:22:44.167486   31793 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0806 07:22:44.167558   31793 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0806 07:22:44.183937   31793 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0806 07:22:44.183961   31793 start.go:495] detecting cgroup driver to use...
	I0806 07:22:44.184015   31793 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0806 07:22:44.200171   31793 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0806 07:22:44.213909   31793 docker.go:217] disabling cri-docker service (if available) ...
	I0806 07:22:44.213960   31793 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0806 07:22:44.227305   31793 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0806 07:22:44.241338   31793 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0806 07:22:44.360983   31793 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0806 07:22:44.532055   31793 docker.go:233] disabling docker service ...
	I0806 07:22:44.532156   31793 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0806 07:22:44.546746   31793 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0806 07:22:44.559760   31793 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0806 07:22:44.679096   31793 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0806 07:22:44.796850   31793 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
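With cri-docker and docker stopped and masked, CRI-O is left as the only runtime owning the CRI socket. A sketch of confirming that state on the guest (unit names as used in the commands above):

    sudo systemctl is-enabled docker.socket docker.service cri-docker.socket cri-docker.service || true
    sudo systemctl is-active docker.service cri-docker.service || true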
	I0806 07:22:44.810975   31793 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0806 07:22:44.829608   31793 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0806 07:22:44.829674   31793 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 07:22:44.840501   31793 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0806 07:22:44.840567   31793 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 07:22:44.851272   31793 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 07:22:44.861943   31793 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 07:22:44.872594   31793 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0806 07:22:44.883104   31793 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 07:22:44.894221   31793 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 07:22:44.912514   31793 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
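The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroupfs cgroup manager, conmon cgroup, and the unprivileged-port sysctl). A quick way to confirm the resulting keys on the guest:

    sudo grep -nE 'pause_image|cgroup_manager|conmon_cgroup|default_sysctls|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf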
	I0806 07:22:44.922924   31793 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0806 07:22:44.933095   31793 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0806 07:22:44.933159   31793 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0806 07:22:44.946489   31793 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0806 07:22:44.956242   31793 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 07:22:45.071281   31793 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0806 07:22:45.217139   31793 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0806 07:22:45.217236   31793 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0806 07:22:45.222664   31793 start.go:563] Will wait 60s for crictl version
	I0806 07:22:45.222725   31793 ssh_runner.go:195] Run: which crictl
	I0806 07:22:45.227164   31793 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0806 07:22:45.268682   31793 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0806 07:22:45.268751   31793 ssh_runner.go:195] Run: crio --version
	I0806 07:22:45.297199   31793 ssh_runner.go:195] Run: crio --version
	I0806 07:22:45.327614   31793 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0806 07:22:45.328747   31793 main.go:141] libmachine: (ha-118173) Calling .GetIP
	I0806 07:22:45.331387   31793 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:22:45.331766   31793 main.go:141] libmachine: (ha-118173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:df:f0", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:22:33 +0000 UTC Type:0 Mac:52:54:00:ef:df:f0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:ha-118173 Clientid:01:52:54:00:ef:df:f0}
	I0806 07:22:45.331789   31793 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined IP address 192.168.39.164 and MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:22:45.332026   31793 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0806 07:22:45.336081   31793 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0806 07:22:45.349240   31793 kubeadm.go:883] updating cluster {Name:ha-118173 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-118173 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.164 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0806 07:22:45.349334   31793 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0806 07:22:45.349381   31793 ssh_runner.go:195] Run: sudo crictl images --output json
	I0806 07:22:45.382326   31793 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0806 07:22:45.382385   31793 ssh_runner.go:195] Run: which lz4
	I0806 07:22:45.386376   31793 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0806 07:22:45.386482   31793 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0806 07:22:45.390588   31793 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0806 07:22:45.390624   31793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0806 07:22:46.801233   31793 crio.go:462] duration metric: took 1.414781252s to copy over tarball
	I0806 07:22:46.801305   31793 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0806 07:22:48.974176   31793 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.172841008s)
	I0806 07:22:48.974201   31793 crio.go:469] duration metric: took 2.172941498s to extract the tarball
	I0806 07:22:48.974210   31793 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0806 07:22:49.012818   31793 ssh_runner.go:195] Run: sudo crictl images --output json
	I0806 07:22:49.059997   31793 crio.go:514] all images are preloaded for cri-o runtime.
	I0806 07:22:49.060065   31793 cache_images.go:84] Images are preloaded, skipping loading
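After the preload tarball is extracted, CRI-O's image store already holds the v1.30.3 control-plane images, so nothing needs to be pulled. A sketch of spot-checking that on the guest:

    sudo crictl images | grep -E 'kube-(apiserver|controller-manager|scheduler|proxy)'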
	I0806 07:22:49.060081   31793 kubeadm.go:934] updating node { 192.168.39.164 8443 v1.30.3 crio true true} ...
	I0806 07:22:49.060247   31793 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-118173 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.164
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-118173 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
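The fragment above is written as a systemd drop-in overriding the kubelet ExecStart. After changing such a drop-in the usual sequence is the one below (minikube performs this itself; the drop-in path shown is the conventional kubeadm location and is an assumption, not taken from this log):

    sudo systemctl daemon-reload     # re-read unit files and drop-ins
    sudo systemctl restart kubelet   # start kubelet with the new ExecStart
    # conventional drop-in location (assumption): /etc/systemd/system/kubelet.service.d/10-kubeadm.conf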
	I0806 07:22:49.060340   31793 ssh_runner.go:195] Run: crio config
	I0806 07:22:49.116968   31793 cni.go:84] Creating CNI manager for ""
	I0806 07:22:49.116985   31793 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0806 07:22:49.116994   31793 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0806 07:22:49.117012   31793 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.164 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-118173 NodeName:ha-118173 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.164"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.164 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0806 07:22:49.117137   31793 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.164
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-118173"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.164
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.164"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
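The rendered kubeadm/kubelet/kube-proxy config above is written to /var/tmp/minikube/kubeadm.yaml on the node (see the scp at 07:22:49.190947). The evictionHard thresholds of "0%" implement the "disable disk resource management by default" comment by switching off kubelet disk-pressure eviction inside the small test VM. To double-check what kubeadm actually consumed, one option is the following sketch, assuming the profile is still running:

    # Print the rendered config from inside the VM:
    minikube ssh -p ha-118173 "sudo cat /var/tmp/minikube/kubeadm.yaml"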
	
	I0806 07:22:49.117159   31793 kube-vip.go:115] generating kube-vip config ...
	I0806 07:22:49.117199   31793 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0806 07:22:49.133977   31793 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0806 07:22:49.134071   31793 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
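kube-vip runs as a static pod on each control-plane node and advertises the HA virtual IP 192.168.39.254 over ARP on eth0 (the vip_arp, vip_interface, and address settings above), with lb_enable/lb_port load-balancing the API server on 8443. Once the control plane is up, the VIP can be sanity-checked roughly like this (a sketch; an unauthenticated request may get 401, which still shows the VIP is answering):

    # The static pod should be visible in kube-system:
    kubectl --context ha-118173 -n kube-system get pods -o wide | grep kube-vip
    # And the virtual IP should respond on the API server port:
    curl -k https://192.168.39.254:8443/healthz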
	I0806 07:22:49.134129   31793 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0806 07:22:49.145001   31793 binaries.go:44] Found k8s binaries, skipping transfer
	I0806 07:22:49.145059   31793 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0806 07:22:49.155390   31793 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0806 07:22:49.173597   31793 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0806 07:22:49.190947   31793 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0806 07:22:49.208738   31793 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0806 07:22:49.225473   31793 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0806 07:22:49.229532   31793 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
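The one-liner above is minikube's idempotent /etc/hosts update: filter out any stale control-plane.minikube.internal entry, append the current VIP mapping, and copy the temp file over /etc/hosts in one step. An annotated equivalent, as a sketch (printf is used here only to make the tab separator explicit):

    {
      grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts          # drop the old entry, if any
      printf '192.168.39.254\tcontrol-plane.minikube.internal\n'        # append the VIP mapping
    } > /tmp/h.$$                                                       # temp file named after the shell PID
    sudo cp /tmp/h.$$ /etc/hosts                                        # replace /etc/hosts in one step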
	I0806 07:22:49.242073   31793 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 07:22:49.356832   31793 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0806 07:22:49.374197   31793 certs.go:68] Setting up /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173 for IP: 192.168.39.164
	I0806 07:22:49.374220   31793 certs.go:194] generating shared ca certs ...
	I0806 07:22:49.374235   31793 certs.go:226] acquiring lock for ca certs: {Name:mkf5c5aed91f4108054d9f2c742cad66c86afbed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 07:22:49.374367   31793 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19370-13057/.minikube/ca.key
	I0806 07:22:49.374415   31793 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.key
	I0806 07:22:49.374424   31793 certs.go:256] generating profile certs ...
	I0806 07:22:49.374469   31793 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/client.key
	I0806 07:22:49.374482   31793 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/client.crt with IP's: []
	I0806 07:22:49.468767   31793 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/client.crt ...
	I0806 07:22:49.468796   31793 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/client.crt: {Name:mkdb6504129ad3f0008cbaa25ad6e428ceef3f50 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 07:22:49.469000   31793 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/client.key ...
	I0806 07:22:49.469017   31793 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/client.key: {Name:mk6c635edafde6192639fe4ec7e973a177087c23 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 07:22:49.469109   31793 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/apiserver.key.2ac65cee
	I0806 07:22:49.469126   31793 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/apiserver.crt.2ac65cee with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.164 192.168.39.254]
	I0806 07:22:49.578237   31793 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/apiserver.crt.2ac65cee ...
	I0806 07:22:49.578265   31793 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/apiserver.crt.2ac65cee: {Name:mke131565c18c2601dc5eab7c24faae128de16f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 07:22:49.578415   31793 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/apiserver.key.2ac65cee ...
	I0806 07:22:49.578428   31793 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/apiserver.key.2ac65cee: {Name:mk7bc1c46e2a06fbc7907c0f77b4010f21a4b3bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 07:22:49.578503   31793 certs.go:381] copying /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/apiserver.crt.2ac65cee -> /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/apiserver.crt
	I0806 07:22:49.578579   31793 certs.go:385] copying /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/apiserver.key.2ac65cee -> /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/apiserver.key
	I0806 07:22:49.578630   31793 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/proxy-client.key
	I0806 07:22:49.578643   31793 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/proxy-client.crt with IP's: []
	I0806 07:22:49.824928   31793 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/proxy-client.crt ...
	I0806 07:22:49.824956   31793 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/proxy-client.crt: {Name:mk1d3336f6bea6ba58f669d05a304e02d240dab1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 07:22:49.825113   31793 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/proxy-client.key ...
	I0806 07:22:49.825123   31793 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/proxy-client.key: {Name:mk5400a9c46c36ea7c2478043ec2ddcc14a42c9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 07:22:49.825191   31793 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0806 07:22:49.825208   31793 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0806 07:22:49.825218   31793 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0806 07:22:49.825231   31793 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0806 07:22:49.825243   31793 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0806 07:22:49.825259   31793 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0806 07:22:49.825273   31793 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0806 07:22:49.825286   31793 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0806 07:22:49.825333   31793 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/20246.pem (1338 bytes)
	W0806 07:22:49.825364   31793 certs.go:480] ignoring /home/jenkins/minikube-integration/19370-13057/.minikube/certs/20246_empty.pem, impossibly tiny 0 bytes
	I0806 07:22:49.825374   31793 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca-key.pem (1675 bytes)
	I0806 07:22:49.825399   31793 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem (1078 bytes)
	I0806 07:22:49.825442   31793 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/cert.pem (1123 bytes)
	I0806 07:22:49.825467   31793 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/key.pem (1675 bytes)
	I0806 07:22:49.825505   31793 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem (1708 bytes)
	I0806 07:22:49.825531   31793 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem -> /usr/share/ca-certificates/202462.pem
	I0806 07:22:49.825544   31793 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0806 07:22:49.825556   31793 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/20246.pem -> /usr/share/ca-certificates/20246.pem
	I0806 07:22:49.826059   31793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0806 07:22:49.852228   31793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0806 07:22:49.876745   31793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0806 07:22:49.900976   31793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0806 07:22:49.924627   31793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0806 07:22:49.949332   31793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0806 07:22:49.972966   31793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0806 07:22:49.997847   31793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0806 07:22:50.021034   31793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem --> /usr/share/ca-certificates/202462.pem (1708 bytes)
	I0806 07:22:50.044994   31793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0806 07:22:50.068735   31793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/certs/20246.pem --> /usr/share/ca-certificates/20246.pem (1338 bytes)
	I0806 07:22:50.093218   31793 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0806 07:22:50.110297   31793 ssh_runner.go:195] Run: openssl version
	I0806 07:22:50.116593   31793 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202462.pem && ln -fs /usr/share/ca-certificates/202462.pem /etc/ssl/certs/202462.pem"
	I0806 07:22:50.127451   31793 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202462.pem
	I0806 07:22:50.132026   31793 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  6 07:17 /usr/share/ca-certificates/202462.pem
	I0806 07:22:50.132084   31793 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202462.pem
	I0806 07:22:50.137994   31793 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202462.pem /etc/ssl/certs/3ec20f2e.0"
	I0806 07:22:50.148960   31793 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0806 07:22:50.163116   31793 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0806 07:22:50.171691   31793 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  6 07:07 /usr/share/ca-certificates/minikubeCA.pem
	I0806 07:22:50.171755   31793 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0806 07:22:50.177861   31793 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0806 07:22:50.189174   31793 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20246.pem && ln -fs /usr/share/ca-certificates/20246.pem /etc/ssl/certs/20246.pem"
	I0806 07:22:50.206985   31793 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20246.pem
	I0806 07:22:50.215266   31793 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  6 07:17 /usr/share/ca-certificates/20246.pem
	I0806 07:22:50.215333   31793 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20246.pem
	I0806 07:22:50.222067   31793 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20246.pem /etc/ssl/certs/51391683.0"
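The block from 07:22:50.116593 through 07:22:50.222067 installs each CA the way OpenSSL expects: copy the PEM under /usr/share/ca-certificates, then add a <subject-hash>.0 symlink in /etc/ssl/certs computed with openssl x509 -hash. The same idea for a single file, sketched with a hypothetical my-ca.pem path:

    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/my-ca.pem)
    sudo ln -fs /usr/share/ca-certificates/my-ca.pem "/etc/ssl/certs/${HASH}.0"
    # OpenSSL looks up trust anchors by the <hash>.0 name, so the bundled
    # ca-certificates.crt does not need to be rebuilt.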
	I0806 07:22:50.234620   31793 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0806 07:22:50.238723   31793 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0806 07:22:50.238786   31793 kubeadm.go:392] StartCluster: {Name:ha-118173 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Clust
erName:ha-118173 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.164 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Moun
tType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 07:22:50.238882   31793 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0806 07:22:50.238946   31793 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0806 07:22:50.283743   31793 cri.go:89] found id: ""
	I0806 07:22:50.283808   31793 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0806 07:22:50.294119   31793 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0806 07:22:50.306527   31793 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0806 07:22:50.318875   31793 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0806 07:22:50.318890   31793 kubeadm.go:157] found existing configuration files:
	
	I0806 07:22:50.318941   31793 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0806 07:22:50.328339   31793 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0806 07:22:50.328407   31793 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0806 07:22:50.338391   31793 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0806 07:22:50.347873   31793 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0806 07:22:50.347931   31793 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0806 07:22:50.358017   31793 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0806 07:22:50.367780   31793 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0806 07:22:50.367827   31793 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0806 07:22:50.377570   31793 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0806 07:22:50.386717   31793 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0806 07:22:50.386784   31793 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0806 07:22:50.396426   31793 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0806 07:22:50.503109   31793 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0806 07:22:50.503212   31793 kubeadm.go:310] [preflight] Running pre-flight checks
	I0806 07:22:50.632413   31793 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0806 07:22:50.632554   31793 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0806 07:22:50.632684   31793 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0806 07:22:50.837479   31793 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0806 07:22:50.940130   31793 out.go:204]   - Generating certificates and keys ...
	I0806 07:22:50.940235   31793 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0806 07:22:50.940304   31793 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0806 07:22:51.015566   31793 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0806 07:22:51.180117   31793 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0806 07:22:51.247206   31793 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0806 07:22:51.403988   31793 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0806 07:22:51.605886   31793 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0806 07:22:51.606072   31793 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-118173 localhost] and IPs [192.168.39.164 127.0.0.1 ::1]
	I0806 07:22:51.691308   31793 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0806 07:22:51.691444   31793 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-118173 localhost] and IPs [192.168.39.164 127.0.0.1 ::1]
	I0806 07:22:51.911058   31793 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0806 07:22:52.155713   31793 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0806 07:22:52.333276   31793 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0806 07:22:52.333340   31793 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0806 07:22:52.699252   31793 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0806 07:22:52.889643   31793 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0806 07:22:53.005971   31793 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0806 07:22:53.103664   31793 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0806 07:22:53.292265   31793 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0806 07:22:53.293138   31793 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0806 07:22:53.297138   31793 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0806 07:22:53.354587   31793 out.go:204]   - Booting up control plane ...
	I0806 07:22:53.354730   31793 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0806 07:22:53.354836   31793 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0806 07:22:53.354936   31793 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0806 07:22:53.355109   31793 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0806 07:22:53.355273   31793 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0806 07:22:53.355339   31793 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0806 07:22:53.441301   31793 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0806 07:22:53.441393   31793 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0806 07:22:53.942653   31793 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.740948ms
	I0806 07:22:53.942756   31793 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0806 07:22:59.881669   31793 kubeadm.go:310] [api-check] The API server is healthy after 5.942191318s
	I0806 07:22:59.901087   31793 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0806 07:22:59.920346   31793 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0806 07:22:59.954894   31793 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0806 07:22:59.955114   31793 kubeadm.go:310] [mark-control-plane] Marking the node ha-118173 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0806 07:22:59.977487   31793 kubeadm.go:310] [bootstrap-token] Using token: fqcri1.u6s5b3pjujut08vh
	I0806 07:22:59.978720   31793 out.go:204]   - Configuring RBAC rules ...
	I0806 07:22:59.978854   31793 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0806 07:22:59.989786   31793 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0806 07:23:00.003618   31793 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0806 07:23:00.013102   31793 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0806 07:23:00.018335   31793 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0806 07:23:00.022863   31793 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0806 07:23:00.289905   31793 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0806 07:23:00.746683   31793 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0806 07:23:01.289532   31793 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0806 07:23:01.290604   31793 kubeadm.go:310] 
	I0806 07:23:01.290685   31793 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0806 07:23:01.290693   31793 kubeadm.go:310] 
	I0806 07:23:01.290771   31793 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0806 07:23:01.290779   31793 kubeadm.go:310] 
	I0806 07:23:01.290827   31793 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0806 07:23:01.290905   31793 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0806 07:23:01.290979   31793 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0806 07:23:01.290999   31793 kubeadm.go:310] 
	I0806 07:23:01.291065   31793 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0806 07:23:01.291075   31793 kubeadm.go:310] 
	I0806 07:23:01.291131   31793 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0806 07:23:01.291141   31793 kubeadm.go:310] 
	I0806 07:23:01.291205   31793 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0806 07:23:01.291310   31793 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0806 07:23:01.291412   31793 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0806 07:23:01.291432   31793 kubeadm.go:310] 
	I0806 07:23:01.291580   31793 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0806 07:23:01.291705   31793 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0806 07:23:01.291716   31793 kubeadm.go:310] 
	I0806 07:23:01.291850   31793 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token fqcri1.u6s5b3pjujut08vh \
	I0806 07:23:01.292001   31793 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d36e5c1dfe989c7d561c809d7c06e0fc13926ac8f6d664cc5d6865ea5f79931b \
	I0806 07:23:01.292024   31793 kubeadm.go:310] 	--control-plane 
	I0806 07:23:01.292032   31793 kubeadm.go:310] 
	I0806 07:23:01.292132   31793 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0806 07:23:01.292142   31793 kubeadm.go:310] 
	I0806 07:23:01.292255   31793 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token fqcri1.u6s5b3pjujut08vh \
	I0806 07:23:01.292411   31793 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d36e5c1dfe989c7d561c809d7c06e0fc13926ac8f6d664cc5d6865ea5f79931b 
	I0806 07:23:01.294053   31793 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
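The join commands printed above carry a one-time bootstrap token with a 24h TTL (set in the InitConfiguration earlier), and the preflight warning can be addressed exactly as it suggests with `sudo systemctl enable kubelet.service`. If the token has expired by the time another node needs to join, a fresh join command can be minted on the primary control plane (sketch):

    # Run on ha-118173, e.g. via `minikube ssh -p ha-118173`:
    sudo kubeadm token create --print-join-command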
	I0806 07:23:01.294077   31793 cni.go:84] Creating CNI manager for ""
	I0806 07:23:01.294085   31793 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0806 07:23:01.295774   31793 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0806 07:23:01.297227   31793 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0806 07:23:01.302891   31793 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
	I0806 07:23:01.302908   31793 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0806 07:23:01.323321   31793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0806 07:23:01.649030   31793 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0806 07:23:01.649123   31793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 07:23:01.649128   31793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-118173 minikube.k8s.io/updated_at=2024_08_06T07_23_01_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=e92cb06692f5ea1ba801d10d148e5e92e807f9c8 minikube.k8s.io/name=ha-118173 minikube.k8s.io/primary=true
	I0806 07:23:01.662300   31793 ops.go:34] apiserver oom_adj: -16
	I0806 07:23:01.837688   31793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 07:23:02.337917   31793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 07:23:02.838694   31793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 07:23:03.337851   31793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 07:23:03.837800   31793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 07:23:04.338252   31793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 07:23:04.838153   31793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 07:23:05.338045   31793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 07:23:05.838541   31793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 07:23:06.338506   31793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 07:23:06.838409   31793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 07:23:07.338469   31793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 07:23:07.837864   31793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 07:23:08.337724   31793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 07:23:08.838540   31793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 07:23:09.338677   31793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 07:23:09.838346   31793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 07:23:10.337750   31793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 07:23:10.838657   31793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 07:23:11.337722   31793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 07:23:11.838695   31793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 07:23:12.337840   31793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 07:23:12.837740   31793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 07:23:13.338482   31793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 07:23:13.415858   31793 kubeadm.go:1113] duration metric: took 11.766796251s to wait for elevateKubeSystemPrivileges
	I0806 07:23:13.415909   31793 kubeadm.go:394] duration metric: took 23.177126815s to StartCluster
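The burst of `kubectl get sa default` calls between 07:23:01 and 07:23:13 is a simple poll: minikube waits for the default ServiceAccount to appear, which is the readiness signal it uses here (logged as elevateKubeSystemPrivileges). An equivalent wait from outside the VM, as a sketch:

    until kubectl --context ha-118173 -n default get serviceaccount default >/dev/null 2>&1; do
      sleep 0.5
    done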
	I0806 07:23:13.415933   31793 settings.go:142] acquiring lock: {Name:mkb0bfbcae3b4205ef43487a9de84cfd8afd2e5e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 07:23:13.416026   31793 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19370-13057/kubeconfig
	I0806 07:23:13.417013   31793 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-13057/kubeconfig: {Name:mk5c066914acf8e36556b29792f75b98076ca34a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 07:23:13.417279   31793 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0806 07:23:13.417288   31793 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.164 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0806 07:23:13.417312   31793 start.go:241] waiting for startup goroutines ...
	I0806 07:23:13.417325   31793 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0806 07:23:13.417389   31793 addons.go:69] Setting storage-provisioner=true in profile "ha-118173"
	I0806 07:23:13.417413   31793 addons.go:69] Setting default-storageclass=true in profile "ha-118173"
	I0806 07:23:13.417420   31793 addons.go:234] Setting addon storage-provisioner=true in "ha-118173"
	I0806 07:23:13.417444   31793 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-118173"
	I0806 07:23:13.417448   31793 host.go:66] Checking if "ha-118173" exists ...
	I0806 07:23:13.417527   31793 config.go:182] Loaded profile config "ha-118173": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0806 07:23:13.417841   31793 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:23:13.417864   31793 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:23:13.417874   31793 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:23:13.417895   31793 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:23:13.432576   31793 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46107
	I0806 07:23:13.432584   31793 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33159
	I0806 07:23:13.432988   31793 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:23:13.433080   31793 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:23:13.433451   31793 main.go:141] libmachine: Using API Version  1
	I0806 07:23:13.433466   31793 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:23:13.433624   31793 main.go:141] libmachine: Using API Version  1
	I0806 07:23:13.433645   31793 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:23:13.433808   31793 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:23:13.434018   31793 main.go:141] libmachine: (ha-118173) Calling .GetState
	I0806 07:23:13.434035   31793 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:23:13.434522   31793 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:23:13.434563   31793 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:23:13.436293   31793 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19370-13057/kubeconfig
	I0806 07:23:13.436651   31793 kapi.go:59] client config for ha-118173: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/client.crt", KeyFile:"/home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/client.key", CAFile:"/home/jenkins/minikube-integration/19370-13057/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)
}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02a80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0806 07:23:13.437101   31793 cert_rotation.go:137] Starting client certificate rotation controller
	I0806 07:23:13.437355   31793 addons.go:234] Setting addon default-storageclass=true in "ha-118173"
	I0806 07:23:13.437401   31793 host.go:66] Checking if "ha-118173" exists ...
	I0806 07:23:13.437767   31793 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:23:13.437814   31793 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:23:13.450209   31793 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45857
	I0806 07:23:13.450656   31793 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:23:13.451208   31793 main.go:141] libmachine: Using API Version  1
	I0806 07:23:13.451232   31793 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:23:13.451610   31793 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:23:13.451803   31793 main.go:141] libmachine: (ha-118173) Calling .GetState
	I0806 07:23:13.452631   31793 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44311
	I0806 07:23:13.453234   31793 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:23:13.453700   31793 main.go:141] libmachine: (ha-118173) Calling .DriverName
	I0806 07:23:13.453727   31793 main.go:141] libmachine: Using API Version  1
	I0806 07:23:13.453747   31793 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:23:13.454060   31793 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:23:13.454512   31793 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:23:13.454539   31793 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:23:13.455504   31793 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0806 07:23:13.457284   31793 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0806 07:23:13.457309   31793 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0806 07:23:13.457331   31793 main.go:141] libmachine: (ha-118173) Calling .GetSSHHostname
	I0806 07:23:13.460218   31793 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:23:13.460629   31793 main.go:141] libmachine: (ha-118173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:df:f0", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:22:33 +0000 UTC Type:0 Mac:52:54:00:ef:df:f0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:ha-118173 Clientid:01:52:54:00:ef:df:f0}
	I0806 07:23:13.460651   31793 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined IP address 192.168.39.164 and MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:23:13.460855   31793 main.go:141] libmachine: (ha-118173) Calling .GetSSHPort
	I0806 07:23:13.461047   31793 main.go:141] libmachine: (ha-118173) Calling .GetSSHKeyPath
	I0806 07:23:13.461214   31793 main.go:141] libmachine: (ha-118173) Calling .GetSSHUsername
	I0806 07:23:13.461338   31793 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/ha-118173/id_rsa Username:docker}
	I0806 07:23:13.473573   31793 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42685
	I0806 07:23:13.474048   31793 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:23:13.474556   31793 main.go:141] libmachine: Using API Version  1
	I0806 07:23:13.474589   31793 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:23:13.474961   31793 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:23:13.475167   31793 main.go:141] libmachine: (ha-118173) Calling .GetState
	I0806 07:23:13.477062   31793 main.go:141] libmachine: (ha-118173) Calling .DriverName
	I0806 07:23:13.477280   31793 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0806 07:23:13.477294   31793 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0806 07:23:13.477308   31793 main.go:141] libmachine: (ha-118173) Calling .GetSSHHostname
	I0806 07:23:13.480393   31793 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:23:13.480895   31793 main.go:141] libmachine: (ha-118173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:df:f0", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:22:33 +0000 UTC Type:0 Mac:52:54:00:ef:df:f0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:ha-118173 Clientid:01:52:54:00:ef:df:f0}
	I0806 07:23:13.480929   31793 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined IP address 192.168.39.164 and MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:23:13.481068   31793 main.go:141] libmachine: (ha-118173) Calling .GetSSHPort
	I0806 07:23:13.481292   31793 main.go:141] libmachine: (ha-118173) Calling .GetSSHKeyPath
	I0806 07:23:13.481439   31793 main.go:141] libmachine: (ha-118173) Calling .GetSSHUsername
	I0806 07:23:13.481582   31793 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/ha-118173/id_rsa Username:docker}
	I0806 07:23:13.551371   31793 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0806 07:23:13.591964   31793 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0806 07:23:13.695834   31793 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0806 07:23:14.162916   31793 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
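The long pipeline at 07:23:13.551371 patches the coredns ConfigMap in place: it inserts a hosts block mapping host.minikube.internal to the host-side gateway 192.168.39.1 ahead of the forward-to-resolv.conf stanza, plus a log directive, then replaces the ConfigMap, which is what the "host record injected" line above confirms. To inspect or exercise the result (sketch; the dns-test pod name and busybox tag are arbitrary choices):

    kubectl --context ha-118173 -n kube-system get configmap coredns -o yaml
    kubectl --context ha-118173 run dns-test --rm -it --restart=Never \
      --image=busybox:1.36 -- nslookup host.minikube.internal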
	I0806 07:23:14.515978   31793 main.go:141] libmachine: Making call to close driver server
	I0806 07:23:14.516007   31793 main.go:141] libmachine: (ha-118173) Calling .Close
	I0806 07:23:14.516016   31793 main.go:141] libmachine: Making call to close driver server
	I0806 07:23:14.516035   31793 main.go:141] libmachine: (ha-118173) Calling .Close
	I0806 07:23:14.516341   31793 main.go:141] libmachine: Successfully made call to close driver server
	I0806 07:23:14.516371   31793 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 07:23:14.516380   31793 main.go:141] libmachine: Making call to close driver server
	I0806 07:23:14.516388   31793 main.go:141] libmachine: (ha-118173) Calling .Close
	I0806 07:23:14.516388   31793 main.go:141] libmachine: Successfully made call to close driver server
	I0806 07:23:14.516401   31793 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 07:23:14.516412   31793 main.go:141] libmachine: Making call to close driver server
	I0806 07:23:14.516420   31793 main.go:141] libmachine: (ha-118173) Calling .Close
	I0806 07:23:14.516462   31793 main.go:141] libmachine: (ha-118173) DBG | Closing plugin on server side
	I0806 07:23:14.516544   31793 main.go:141] libmachine: (ha-118173) DBG | Closing plugin on server side
	I0806 07:23:14.516596   31793 main.go:141] libmachine: (ha-118173) DBG | Closing plugin on server side
	I0806 07:23:14.516620   31793 main.go:141] libmachine: Successfully made call to close driver server
	I0806 07:23:14.516628   31793 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 07:23:14.517941   31793 main.go:141] libmachine: (ha-118173) DBG | Closing plugin on server side
	I0806 07:23:14.517979   31793 main.go:141] libmachine: Successfully made call to close driver server
	I0806 07:23:14.517997   31793 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 07:23:14.518151   31793 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0806 07:23:14.518165   31793 round_trippers.go:469] Request Headers:
	I0806 07:23:14.518177   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:23:14.518191   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:23:14.528720   31793 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0806 07:23:14.530107   31793 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0806 07:23:14.530131   31793 round_trippers.go:469] Request Headers:
	I0806 07:23:14.530138   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:23:14.530141   31793 round_trippers.go:473]     Content-Type: application/json
	I0806 07:23:14.530144   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:23:14.532918   31793 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 07:23:14.533101   31793 main.go:141] libmachine: Making call to close driver server
	I0806 07:23:14.533126   31793 main.go:141] libmachine: (ha-118173) Calling .Close
	I0806 07:23:14.533390   31793 main.go:141] libmachine: Successfully made call to close driver server
	I0806 07:23:14.533405   31793 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 07:23:14.535118   31793 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0806 07:23:14.536337   31793 addons.go:510] duration metric: took 1.119007516s for enable addons: enabled=[storage-provisioner default-storageclass]
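Only storage-provisioner and default-storageclass are enabled for this HA profile (the addons map earlier in StartCluster shows everything else off). The per-profile view is available with this sketch:

    minikube -p ha-118173 addons list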
	I0806 07:23:14.536388   31793 start.go:246] waiting for cluster config update ...
	I0806 07:23:14.536406   31793 start.go:255] writing updated cluster config ...
	I0806 07:23:14.538220   31793 out.go:177] 
	I0806 07:23:14.539561   31793 config.go:182] Loaded profile config "ha-118173": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0806 07:23:14.539664   31793 profile.go:143] Saving config to /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/config.json ...
	I0806 07:23:14.541328   31793 out.go:177] * Starting "ha-118173-m02" control-plane node in "ha-118173" cluster
	I0806 07:23:14.542630   31793 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0806 07:23:14.542665   31793 cache.go:56] Caching tarball of preloaded images
	I0806 07:23:14.542777   31793 preload.go:172] Found /home/jenkins/minikube-integration/19370-13057/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0806 07:23:14.542798   31793 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0806 07:23:14.542923   31793 profile.go:143] Saving config to /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/config.json ...
	I0806 07:23:14.543152   31793 start.go:360] acquireMachinesLock for ha-118173-m02: {Name:mkc602ac9c8cc3360dcc0fcb8104303837aaab61 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0806 07:23:14.543214   31793 start.go:364] duration metric: took 36.596µs to acquireMachinesLock for "ha-118173-m02"
	I0806 07:23:14.543237   31793 start.go:93] Provisioning new machine with config: &{Name:ha-118173 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-118173 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.164 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0806 07:23:14.543331   31793 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0806 07:23:14.544956   31793 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0806 07:23:14.545089   31793 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:23:14.545119   31793 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:23:14.560354   31793 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35915
	I0806 07:23:14.560785   31793 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:23:14.561300   31793 main.go:141] libmachine: Using API Version  1
	I0806 07:23:14.561318   31793 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:23:14.561639   31793 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:23:14.561889   31793 main.go:141] libmachine: (ha-118173-m02) Calling .GetMachineName
	I0806 07:23:14.562052   31793 main.go:141] libmachine: (ha-118173-m02) Calling .DriverName
	I0806 07:23:14.562234   31793 start.go:159] libmachine.API.Create for "ha-118173" (driver="kvm2")
	I0806 07:23:14.562255   31793 client.go:168] LocalClient.Create starting
	I0806 07:23:14.562291   31793 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem
	I0806 07:23:14.562331   31793 main.go:141] libmachine: Decoding PEM data...
	I0806 07:23:14.562344   31793 main.go:141] libmachine: Parsing certificate...
	I0806 07:23:14.562394   31793 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19370-13057/.minikube/certs/cert.pem
	I0806 07:23:14.562412   31793 main.go:141] libmachine: Decoding PEM data...
	I0806 07:23:14.562422   31793 main.go:141] libmachine: Parsing certificate...
	I0806 07:23:14.562437   31793 main.go:141] libmachine: Running pre-create checks...
	I0806 07:23:14.562445   31793 main.go:141] libmachine: (ha-118173-m02) Calling .PreCreateCheck
	I0806 07:23:14.562616   31793 main.go:141] libmachine: (ha-118173-m02) Calling .GetConfigRaw
	I0806 07:23:14.563050   31793 main.go:141] libmachine: Creating machine...
	I0806 07:23:14.563069   31793 main.go:141] libmachine: (ha-118173-m02) Calling .Create
	I0806 07:23:14.563233   31793 main.go:141] libmachine: (ha-118173-m02) Creating KVM machine...
	I0806 07:23:14.564414   31793 main.go:141] libmachine: (ha-118173-m02) DBG | found existing default KVM network
	I0806 07:23:14.564527   31793 main.go:141] libmachine: (ha-118173-m02) DBG | found existing private KVM network mk-ha-118173
	I0806 07:23:14.564689   31793 main.go:141] libmachine: (ha-118173-m02) Setting up store path in /home/jenkins/minikube-integration/19370-13057/.minikube/machines/ha-118173-m02 ...
	I0806 07:23:14.564713   31793 main.go:141] libmachine: (ha-118173-m02) Building disk image from file:///home/jenkins/minikube-integration/19370-13057/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
	I0806 07:23:14.564790   31793 main.go:141] libmachine: (ha-118173-m02) DBG | I0806 07:23:14.564685   32199 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19370-13057/.minikube
	I0806 07:23:14.564891   31793 main.go:141] libmachine: (ha-118173-m02) Downloading /home/jenkins/minikube-integration/19370-13057/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19370-13057/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0806 07:23:14.794040   31793 main.go:141] libmachine: (ha-118173-m02) DBG | I0806 07:23:14.793908   32199 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19370-13057/.minikube/machines/ha-118173-m02/id_rsa...
	I0806 07:23:14.896464   31793 main.go:141] libmachine: (ha-118173-m02) DBG | I0806 07:23:14.896307   32199 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19370-13057/.minikube/machines/ha-118173-m02/ha-118173-m02.rawdisk...
	I0806 07:23:14.896495   31793 main.go:141] libmachine: (ha-118173-m02) DBG | Writing magic tar header
	I0806 07:23:14.896506   31793 main.go:141] libmachine: (ha-118173-m02) DBG | Writing SSH key tar header
	I0806 07:23:14.896520   31793 main.go:141] libmachine: (ha-118173-m02) DBG | I0806 07:23:14.896451   32199 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19370-13057/.minikube/machines/ha-118173-m02 ...
	I0806 07:23:14.896616   31793 main.go:141] libmachine: (ha-118173-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19370-13057/.minikube/machines/ha-118173-m02
	I0806 07:23:14.896649   31793 main.go:141] libmachine: (ha-118173-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19370-13057/.minikube/machines
	I0806 07:23:14.896661   31793 main.go:141] libmachine: (ha-118173-m02) Setting executable bit set on /home/jenkins/minikube-integration/19370-13057/.minikube/machines/ha-118173-m02 (perms=drwx------)
	I0806 07:23:14.896678   31793 main.go:141] libmachine: (ha-118173-m02) Setting executable bit set on /home/jenkins/minikube-integration/19370-13057/.minikube/machines (perms=drwxr-xr-x)
	I0806 07:23:14.896690   31793 main.go:141] libmachine: (ha-118173-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19370-13057/.minikube
	I0806 07:23:14.896704   31793 main.go:141] libmachine: (ha-118173-m02) Setting executable bit set on /home/jenkins/minikube-integration/19370-13057/.minikube (perms=drwxr-xr-x)
	I0806 07:23:14.896719   31793 main.go:141] libmachine: (ha-118173-m02) Setting executable bit set on /home/jenkins/minikube-integration/19370-13057 (perms=drwxrwxr-x)
	I0806 07:23:14.896732   31793 main.go:141] libmachine: (ha-118173-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0806 07:23:14.896745   31793 main.go:141] libmachine: (ha-118173-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19370-13057
	I0806 07:23:14.896755   31793 main.go:141] libmachine: (ha-118173-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0806 07:23:14.896770   31793 main.go:141] libmachine: (ha-118173-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0806 07:23:14.896782   31793 main.go:141] libmachine: (ha-118173-m02) Creating domain...
	I0806 07:23:14.896803   31793 main.go:141] libmachine: (ha-118173-m02) DBG | Checking permissions on dir: /home/jenkins
	I0806 07:23:14.896816   31793 main.go:141] libmachine: (ha-118173-m02) DBG | Checking permissions on dir: /home
	I0806 07:23:14.896862   31793 main.go:141] libmachine: (ha-118173-m02) DBG | Skipping /home - not owner
	I0806 07:23:14.897812   31793 main.go:141] libmachine: (ha-118173-m02) define libvirt domain using xml: 
	I0806 07:23:14.897835   31793 main.go:141] libmachine: (ha-118173-m02) <domain type='kvm'>
	I0806 07:23:14.897845   31793 main.go:141] libmachine: (ha-118173-m02)   <name>ha-118173-m02</name>
	I0806 07:23:14.897857   31793 main.go:141] libmachine: (ha-118173-m02)   <memory unit='MiB'>2200</memory>
	I0806 07:23:14.897868   31793 main.go:141] libmachine: (ha-118173-m02)   <vcpu>2</vcpu>
	I0806 07:23:14.897876   31793 main.go:141] libmachine: (ha-118173-m02)   <features>
	I0806 07:23:14.897884   31793 main.go:141] libmachine: (ha-118173-m02)     <acpi/>
	I0806 07:23:14.897893   31793 main.go:141] libmachine: (ha-118173-m02)     <apic/>
	I0806 07:23:14.897930   31793 main.go:141] libmachine: (ha-118173-m02)     <pae/>
	I0806 07:23:14.897956   31793 main.go:141] libmachine: (ha-118173-m02)     
	I0806 07:23:14.897979   31793 main.go:141] libmachine: (ha-118173-m02)   </features>
	I0806 07:23:14.897997   31793 main.go:141] libmachine: (ha-118173-m02)   <cpu mode='host-passthrough'>
	I0806 07:23:14.898009   31793 main.go:141] libmachine: (ha-118173-m02)   
	I0806 07:23:14.898019   31793 main.go:141] libmachine: (ha-118173-m02)   </cpu>
	I0806 07:23:14.898029   31793 main.go:141] libmachine: (ha-118173-m02)   <os>
	I0806 07:23:14.898034   31793 main.go:141] libmachine: (ha-118173-m02)     <type>hvm</type>
	I0806 07:23:14.898040   31793 main.go:141] libmachine: (ha-118173-m02)     <boot dev='cdrom'/>
	I0806 07:23:14.898047   31793 main.go:141] libmachine: (ha-118173-m02)     <boot dev='hd'/>
	I0806 07:23:14.898057   31793 main.go:141] libmachine: (ha-118173-m02)     <bootmenu enable='no'/>
	I0806 07:23:14.898066   31793 main.go:141] libmachine: (ha-118173-m02)   </os>
	I0806 07:23:14.898076   31793 main.go:141] libmachine: (ha-118173-m02)   <devices>
	I0806 07:23:14.898087   31793 main.go:141] libmachine: (ha-118173-m02)     <disk type='file' device='cdrom'>
	I0806 07:23:14.898100   31793 main.go:141] libmachine: (ha-118173-m02)       <source file='/home/jenkins/minikube-integration/19370-13057/.minikube/machines/ha-118173-m02/boot2docker.iso'/>
	I0806 07:23:14.898111   31793 main.go:141] libmachine: (ha-118173-m02)       <target dev='hdc' bus='scsi'/>
	I0806 07:23:14.898125   31793 main.go:141] libmachine: (ha-118173-m02)       <readonly/>
	I0806 07:23:14.898138   31793 main.go:141] libmachine: (ha-118173-m02)     </disk>
	I0806 07:23:14.898145   31793 main.go:141] libmachine: (ha-118173-m02)     <disk type='file' device='disk'>
	I0806 07:23:14.898157   31793 main.go:141] libmachine: (ha-118173-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0806 07:23:14.898166   31793 main.go:141] libmachine: (ha-118173-m02)       <source file='/home/jenkins/minikube-integration/19370-13057/.minikube/machines/ha-118173-m02/ha-118173-m02.rawdisk'/>
	I0806 07:23:14.898174   31793 main.go:141] libmachine: (ha-118173-m02)       <target dev='hda' bus='virtio'/>
	I0806 07:23:14.898182   31793 main.go:141] libmachine: (ha-118173-m02)     </disk>
	I0806 07:23:14.898192   31793 main.go:141] libmachine: (ha-118173-m02)     <interface type='network'>
	I0806 07:23:14.898201   31793 main.go:141] libmachine: (ha-118173-m02)       <source network='mk-ha-118173'/>
	I0806 07:23:14.898211   31793 main.go:141] libmachine: (ha-118173-m02)       <model type='virtio'/>
	I0806 07:23:14.898219   31793 main.go:141] libmachine: (ha-118173-m02)     </interface>
	I0806 07:23:14.898229   31793 main.go:141] libmachine: (ha-118173-m02)     <interface type='network'>
	I0806 07:23:14.898237   31793 main.go:141] libmachine: (ha-118173-m02)       <source network='default'/>
	I0806 07:23:14.898248   31793 main.go:141] libmachine: (ha-118173-m02)       <model type='virtio'/>
	I0806 07:23:14.898262   31793 main.go:141] libmachine: (ha-118173-m02)     </interface>
	I0806 07:23:14.898275   31793 main.go:141] libmachine: (ha-118173-m02)     <serial type='pty'>
	I0806 07:23:14.898287   31793 main.go:141] libmachine: (ha-118173-m02)       <target port='0'/>
	I0806 07:23:14.898295   31793 main.go:141] libmachine: (ha-118173-m02)     </serial>
	I0806 07:23:14.898302   31793 main.go:141] libmachine: (ha-118173-m02)     <console type='pty'>
	I0806 07:23:14.898313   31793 main.go:141] libmachine: (ha-118173-m02)       <target type='serial' port='0'/>
	I0806 07:23:14.898326   31793 main.go:141] libmachine: (ha-118173-m02)     </console>
	I0806 07:23:14.898337   31793 main.go:141] libmachine: (ha-118173-m02)     <rng model='virtio'>
	I0806 07:23:14.898361   31793 main.go:141] libmachine: (ha-118173-m02)       <backend model='random'>/dev/random</backend>
	I0806 07:23:14.898383   31793 main.go:141] libmachine: (ha-118173-m02)     </rng>
	I0806 07:23:14.898396   31793 main.go:141] libmachine: (ha-118173-m02)     
	I0806 07:23:14.898405   31793 main.go:141] libmachine: (ha-118173-m02)     
	I0806 07:23:14.898414   31793 main.go:141] libmachine: (ha-118173-m02)   </devices>
	I0806 07:23:14.898423   31793 main.go:141] libmachine: (ha-118173-m02) </domain>
	I0806 07:23:14.898433   31793 main.go:141] libmachine: (ha-118173-m02) 
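
The domain XML logged line-by-line above is assembled before being handed to libvirt. Below is a minimal sketch of producing an equivalent definition with Go's text/template; the DomainSpec type and the trimmed-down XML are illustrative, not the kvm2 driver's actual code.

// Sketch only: render a cut-down libvirt domain definition like the one logged above.
package main

import (
	"os"
	"text/template"
)

type DomainSpec struct {
	Name      string
	MemoryMiB int
	VCPUs     int
	ISOPath   string
	DiskPath  string
	Network   string
}

const domainTmpl = `<domain type='kvm'>
  <name>{{.Name}}</name>
  <memory unit='MiB'>{{.MemoryMiB}}</memory>
  <vcpu>{{.VCPUs}}</vcpu>
  <os><type>hvm</type><boot dev='cdrom'/><boot dev='hd'/></os>
  <devices>
    <disk type='file' device='cdrom'><source file='{{.ISOPath}}'/><target dev='hdc' bus='scsi'/><readonly/></disk>
    <disk type='file' device='disk'><driver name='qemu' type='raw'/><source file='{{.DiskPath}}'/><target dev='hda' bus='virtio'/></disk>
    <interface type='network'><source network='{{.Network}}'/><model type='virtio'/></interface>
  </devices>
</domain>
`

func main() {
	spec := DomainSpec{
		Name:      "ha-118173-m02",
		MemoryMiB: 2200,
		VCPUs:     2,
		ISOPath:   "/path/to/boot2docker.iso",     // placeholder path
		DiskPath:  "/path/to/ha-118173-m02.rawdisk", // placeholder path
		Network:   "mk-ha-118173",
	}
	// The rendered XML is what a driver would hand to libvirt's define-domain call.
	tmpl := template.Must(template.New("domain").Parse(domainTmpl))
	_ = tmpl.Execute(os.Stdout, spec)
}
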
	I0806 07:23:14.904881   31793 main.go:141] libmachine: (ha-118173-m02) DBG | domain ha-118173-m02 has defined MAC address 52:54:00:9e:4c:42 in network default
	I0806 07:23:14.905379   31793 main.go:141] libmachine: (ha-118173-m02) Ensuring networks are active...
	I0806 07:23:14.905400   31793 main.go:141] libmachine: (ha-118173-m02) DBG | domain ha-118173-m02 has defined MAC address 52:54:00:b4:e9:38 in network mk-ha-118173
	I0806 07:23:14.906112   31793 main.go:141] libmachine: (ha-118173-m02) Ensuring network default is active
	I0806 07:23:14.906564   31793 main.go:141] libmachine: (ha-118173-m02) Ensuring network mk-ha-118173 is active
	I0806 07:23:14.906920   31793 main.go:141] libmachine: (ha-118173-m02) Getting domain xml...
	I0806 07:23:14.907661   31793 main.go:141] libmachine: (ha-118173-m02) Creating domain...
	I0806 07:23:16.128703   31793 main.go:141] libmachine: (ha-118173-m02) Waiting to get IP...
	I0806 07:23:16.129605   31793 main.go:141] libmachine: (ha-118173-m02) DBG | domain ha-118173-m02 has defined MAC address 52:54:00:b4:e9:38 in network mk-ha-118173
	I0806 07:23:16.129962   31793 main.go:141] libmachine: (ha-118173-m02) DBG | unable to find current IP address of domain ha-118173-m02 in network mk-ha-118173
	I0806 07:23:16.130007   31793 main.go:141] libmachine: (ha-118173-m02) DBG | I0806 07:23:16.129959   32199 retry.go:31] will retry after 241.520623ms: waiting for machine to come up
	I0806 07:23:16.373661   31793 main.go:141] libmachine: (ha-118173-m02) DBG | domain ha-118173-m02 has defined MAC address 52:54:00:b4:e9:38 in network mk-ha-118173
	I0806 07:23:16.374151   31793 main.go:141] libmachine: (ha-118173-m02) DBG | unable to find current IP address of domain ha-118173-m02 in network mk-ha-118173
	I0806 07:23:16.374181   31793 main.go:141] libmachine: (ha-118173-m02) DBG | I0806 07:23:16.374107   32199 retry.go:31] will retry after 260.223274ms: waiting for machine to come up
	I0806 07:23:16.635735   31793 main.go:141] libmachine: (ha-118173-m02) DBG | domain ha-118173-m02 has defined MAC address 52:54:00:b4:e9:38 in network mk-ha-118173
	I0806 07:23:16.636249   31793 main.go:141] libmachine: (ha-118173-m02) DBG | unable to find current IP address of domain ha-118173-m02 in network mk-ha-118173
	I0806 07:23:16.636271   31793 main.go:141] libmachine: (ha-118173-m02) DBG | I0806 07:23:16.636212   32199 retry.go:31] will retry after 435.280892ms: waiting for machine to come up
	I0806 07:23:17.072887   31793 main.go:141] libmachine: (ha-118173-m02) DBG | domain ha-118173-m02 has defined MAC address 52:54:00:b4:e9:38 in network mk-ha-118173
	I0806 07:23:17.073260   31793 main.go:141] libmachine: (ha-118173-m02) DBG | unable to find current IP address of domain ha-118173-m02 in network mk-ha-118173
	I0806 07:23:17.073280   31793 main.go:141] libmachine: (ha-118173-m02) DBG | I0806 07:23:17.073239   32199 retry.go:31] will retry after 455.559843ms: waiting for machine to come up
	I0806 07:23:17.530537   31793 main.go:141] libmachine: (ha-118173-m02) DBG | domain ha-118173-m02 has defined MAC address 52:54:00:b4:e9:38 in network mk-ha-118173
	I0806 07:23:17.530979   31793 main.go:141] libmachine: (ha-118173-m02) DBG | unable to find current IP address of domain ha-118173-m02 in network mk-ha-118173
	I0806 07:23:17.531007   31793 main.go:141] libmachine: (ha-118173-m02) DBG | I0806 07:23:17.530930   32199 retry.go:31] will retry after 660.172837ms: waiting for machine to come up
	I0806 07:23:18.192659   31793 main.go:141] libmachine: (ha-118173-m02) DBG | domain ha-118173-m02 has defined MAC address 52:54:00:b4:e9:38 in network mk-ha-118173
	I0806 07:23:18.193143   31793 main.go:141] libmachine: (ha-118173-m02) DBG | unable to find current IP address of domain ha-118173-m02 in network mk-ha-118173
	I0806 07:23:18.193177   31793 main.go:141] libmachine: (ha-118173-m02) DBG | I0806 07:23:18.193087   32199 retry.go:31] will retry after 876.754505ms: waiting for machine to come up
	I0806 07:23:19.071078   31793 main.go:141] libmachine: (ha-118173-m02) DBG | domain ha-118173-m02 has defined MAC address 52:54:00:b4:e9:38 in network mk-ha-118173
	I0806 07:23:19.071460   31793 main.go:141] libmachine: (ha-118173-m02) DBG | unable to find current IP address of domain ha-118173-m02 in network mk-ha-118173
	I0806 07:23:19.071514   31793 main.go:141] libmachine: (ha-118173-m02) DBG | I0806 07:23:19.071409   32199 retry.go:31] will retry after 1.111120578s: waiting for machine to come up
	I0806 07:23:20.184316   31793 main.go:141] libmachine: (ha-118173-m02) DBG | domain ha-118173-m02 has defined MAC address 52:54:00:b4:e9:38 in network mk-ha-118173
	I0806 07:23:20.184805   31793 main.go:141] libmachine: (ha-118173-m02) DBG | unable to find current IP address of domain ha-118173-m02 in network mk-ha-118173
	I0806 07:23:20.184881   31793 main.go:141] libmachine: (ha-118173-m02) DBG | I0806 07:23:20.184764   32199 retry.go:31] will retry after 1.325458338s: waiting for machine to come up
	I0806 07:23:21.512349   31793 main.go:141] libmachine: (ha-118173-m02) DBG | domain ha-118173-m02 has defined MAC address 52:54:00:b4:e9:38 in network mk-ha-118173
	I0806 07:23:21.512851   31793 main.go:141] libmachine: (ha-118173-m02) DBG | unable to find current IP address of domain ha-118173-m02 in network mk-ha-118173
	I0806 07:23:21.512878   31793 main.go:141] libmachine: (ha-118173-m02) DBG | I0806 07:23:21.512812   32199 retry.go:31] will retry after 1.231406403s: waiting for machine to come up
	I0806 07:23:22.746319   31793 main.go:141] libmachine: (ha-118173-m02) DBG | domain ha-118173-m02 has defined MAC address 52:54:00:b4:e9:38 in network mk-ha-118173
	I0806 07:23:22.746760   31793 main.go:141] libmachine: (ha-118173-m02) DBG | unable to find current IP address of domain ha-118173-m02 in network mk-ha-118173
	I0806 07:23:22.746788   31793 main.go:141] libmachine: (ha-118173-m02) DBG | I0806 07:23:22.746710   32199 retry.go:31] will retry after 1.568319665s: waiting for machine to come up
	I0806 07:23:24.317506   31793 main.go:141] libmachine: (ha-118173-m02) DBG | domain ha-118173-m02 has defined MAC address 52:54:00:b4:e9:38 in network mk-ha-118173
	I0806 07:23:24.318026   31793 main.go:141] libmachine: (ha-118173-m02) DBG | unable to find current IP address of domain ha-118173-m02 in network mk-ha-118173
	I0806 07:23:24.318059   31793 main.go:141] libmachine: (ha-118173-m02) DBG | I0806 07:23:24.317972   32199 retry.go:31] will retry after 2.818119412s: waiting for machine to come up
	I0806 07:23:27.138815   31793 main.go:141] libmachine: (ha-118173-m02) DBG | domain ha-118173-m02 has defined MAC address 52:54:00:b4:e9:38 in network mk-ha-118173
	I0806 07:23:27.139209   31793 main.go:141] libmachine: (ha-118173-m02) DBG | unable to find current IP address of domain ha-118173-m02 in network mk-ha-118173
	I0806 07:23:27.139236   31793 main.go:141] libmachine: (ha-118173-m02) DBG | I0806 07:23:27.139166   32199 retry.go:31] will retry after 3.589573898s: waiting for machine to come up
	I0806 07:23:30.730336   31793 main.go:141] libmachine: (ha-118173-m02) DBG | domain ha-118173-m02 has defined MAC address 52:54:00:b4:e9:38 in network mk-ha-118173
	I0806 07:23:30.730764   31793 main.go:141] libmachine: (ha-118173-m02) DBG | unable to find current IP address of domain ha-118173-m02 in network mk-ha-118173
	I0806 07:23:30.730797   31793 main.go:141] libmachine: (ha-118173-m02) DBG | I0806 07:23:30.730715   32199 retry.go:31] will retry after 3.029021985s: waiting for machine to come up
	I0806 07:23:33.762866   31793 main.go:141] libmachine: (ha-118173-m02) DBG | domain ha-118173-m02 has defined MAC address 52:54:00:b4:e9:38 in network mk-ha-118173
	I0806 07:23:33.763260   31793 main.go:141] libmachine: (ha-118173-m02) DBG | unable to find current IP address of domain ha-118173-m02 in network mk-ha-118173
	I0806 07:23:33.763286   31793 main.go:141] libmachine: (ha-118173-m02) DBG | I0806 07:23:33.763228   32199 retry.go:31] will retry after 4.735805318s: waiting for machine to come up
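
The retry.go:31 lines above show the wait-for-IP loop backing off with growing, jittered delays until the DHCP lease appears. A minimal sketch of that pattern, assuming a hypothetical check callback rather than the driver's actual lease lookup:

// Sketch of a poll-with-backoff loop like the "will retry after ..." sequence above.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func pollUntil(timeout time.Duration, check func() (string, bool)) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, ok := check(); ok {
			return ip, nil
		}
		// Jittered, roughly growing delay between attempts.
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		delay = delay * 3 / 2
	}
	return "", errors.New("timed out waiting for machine IP")
}

func main() {
	attempts := 0
	ip, err := pollUntil(2*time.Minute, func() (string, bool) {
		attempts++
		if attempts < 4 {
			return "", false // lease not visible yet
		}
		return "192.168.39.203", true
	})
	fmt.Println(ip, err)
}
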
	I0806 07:23:38.501919   31793 main.go:141] libmachine: (ha-118173-m02) DBG | domain ha-118173-m02 has defined MAC address 52:54:00:b4:e9:38 in network mk-ha-118173
	I0806 07:23:38.502361   31793 main.go:141] libmachine: (ha-118173-m02) Found IP for machine: 192.168.39.203
	I0806 07:23:38.502388   31793 main.go:141] libmachine: (ha-118173-m02) DBG | domain ha-118173-m02 has current primary IP address 192.168.39.203 and MAC address 52:54:00:b4:e9:38 in network mk-ha-118173
	I0806 07:23:38.502398   31793 main.go:141] libmachine: (ha-118173-m02) Reserving static IP address...
	I0806 07:23:38.502761   31793 main.go:141] libmachine: (ha-118173-m02) DBG | unable to find host DHCP lease matching {name: "ha-118173-m02", mac: "52:54:00:b4:e9:38", ip: "192.168.39.203"} in network mk-ha-118173
	I0806 07:23:38.573170   31793 main.go:141] libmachine: (ha-118173-m02) DBG | Getting to WaitForSSH function...
	I0806 07:23:38.573210   31793 main.go:141] libmachine: (ha-118173-m02) Reserved static IP address: 192.168.39.203
	I0806 07:23:38.573280   31793 main.go:141] libmachine: (ha-118173-m02) Waiting for SSH to be available...
	I0806 07:23:38.576055   31793 main.go:141] libmachine: (ha-118173-m02) DBG | domain ha-118173-m02 has defined MAC address 52:54:00:b4:e9:38 in network mk-ha-118173
	I0806 07:23:38.576497   31793 main.go:141] libmachine: (ha-118173-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:e9:38", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:23:29 +0000 UTC Type:0 Mac:52:54:00:b4:e9:38 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:minikube Clientid:01:52:54:00:b4:e9:38}
	I0806 07:23:38.576525   31793 main.go:141] libmachine: (ha-118173-m02) DBG | domain ha-118173-m02 has defined IP address 192.168.39.203 and MAC address 52:54:00:b4:e9:38 in network mk-ha-118173
	I0806 07:23:38.576690   31793 main.go:141] libmachine: (ha-118173-m02) DBG | Using SSH client type: external
	I0806 07:23:38.576717   31793 main.go:141] libmachine: (ha-118173-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19370-13057/.minikube/machines/ha-118173-m02/id_rsa (-rw-------)
	I0806 07:23:38.576743   31793 main.go:141] libmachine: (ha-118173-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.203 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19370-13057/.minikube/machines/ha-118173-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0806 07:23:38.576758   31793 main.go:141] libmachine: (ha-118173-m02) DBG | About to run SSH command:
	I0806 07:23:38.576775   31793 main.go:141] libmachine: (ha-118173-m02) DBG | exit 0
	I0806 07:23:38.704670   31793 main.go:141] libmachine: (ha-118173-m02) DBG | SSH cmd err, output: <nil>: 
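
WaitForSSH above shells out to the system ssh client with non-interactive options and runs `exit 0` until it succeeds. A sketch of that probe using os/exec; the address and key path are copied from the log, and the sshReady helper name is hypothetical:

// Sketch: treat a zero exit status of `ssh ... "exit 0"` as "SSH is up".
package main

import (
	"fmt"
	"os/exec"
)

func sshReady(addr, keyPath string) bool {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		"docker@" + addr,
		"exit 0",
	}
	return exec.Command("ssh", args...).Run() == nil
}

func main() {
	ok := sshReady("192.168.39.203",
		"/home/jenkins/minikube-integration/19370-13057/.minikube/machines/ha-118173-m02/id_rsa")
	fmt.Println("ssh ready:", ok)
}
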
	I0806 07:23:38.704958   31793 main.go:141] libmachine: (ha-118173-m02) KVM machine creation complete!
	I0806 07:23:38.705236   31793 main.go:141] libmachine: (ha-118173-m02) Calling .GetConfigRaw
	I0806 07:23:38.705780   31793 main.go:141] libmachine: (ha-118173-m02) Calling .DriverName
	I0806 07:23:38.705989   31793 main.go:141] libmachine: (ha-118173-m02) Calling .DriverName
	I0806 07:23:38.706130   31793 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0806 07:23:38.706144   31793 main.go:141] libmachine: (ha-118173-m02) Calling .GetState
	I0806 07:23:38.707347   31793 main.go:141] libmachine: Detecting operating system of created instance...
	I0806 07:23:38.707360   31793 main.go:141] libmachine: Waiting for SSH to be available...
	I0806 07:23:38.707370   31793 main.go:141] libmachine: Getting to WaitForSSH function...
	I0806 07:23:38.707376   31793 main.go:141] libmachine: (ha-118173-m02) Calling .GetSSHHostname
	I0806 07:23:38.709658   31793 main.go:141] libmachine: (ha-118173-m02) DBG | domain ha-118173-m02 has defined MAC address 52:54:00:b4:e9:38 in network mk-ha-118173
	I0806 07:23:38.710063   31793 main.go:141] libmachine: (ha-118173-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:e9:38", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:23:29 +0000 UTC Type:0 Mac:52:54:00:b4:e9:38 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-118173-m02 Clientid:01:52:54:00:b4:e9:38}
	I0806 07:23:38.710088   31793 main.go:141] libmachine: (ha-118173-m02) DBG | domain ha-118173-m02 has defined IP address 192.168.39.203 and MAC address 52:54:00:b4:e9:38 in network mk-ha-118173
	I0806 07:23:38.710237   31793 main.go:141] libmachine: (ha-118173-m02) Calling .GetSSHPort
	I0806 07:23:38.710410   31793 main.go:141] libmachine: (ha-118173-m02) Calling .GetSSHKeyPath
	I0806 07:23:38.710579   31793 main.go:141] libmachine: (ha-118173-m02) Calling .GetSSHKeyPath
	I0806 07:23:38.710702   31793 main.go:141] libmachine: (ha-118173-m02) Calling .GetSSHUsername
	I0806 07:23:38.710891   31793 main.go:141] libmachine: Using SSH client type: native
	I0806 07:23:38.711083   31793 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.203 22 <nil> <nil>}
	I0806 07:23:38.711096   31793 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0806 07:23:38.811830   31793 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0806 07:23:38.811851   31793 main.go:141] libmachine: Detecting the provisioner...
	I0806 07:23:38.811858   31793 main.go:141] libmachine: (ha-118173-m02) Calling .GetSSHHostname
	I0806 07:23:38.814481   31793 main.go:141] libmachine: (ha-118173-m02) DBG | domain ha-118173-m02 has defined MAC address 52:54:00:b4:e9:38 in network mk-ha-118173
	I0806 07:23:38.814894   31793 main.go:141] libmachine: (ha-118173-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:e9:38", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:23:29 +0000 UTC Type:0 Mac:52:54:00:b4:e9:38 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-118173-m02 Clientid:01:52:54:00:b4:e9:38}
	I0806 07:23:38.814919   31793 main.go:141] libmachine: (ha-118173-m02) DBG | domain ha-118173-m02 has defined IP address 192.168.39.203 and MAC address 52:54:00:b4:e9:38 in network mk-ha-118173
	I0806 07:23:38.815118   31793 main.go:141] libmachine: (ha-118173-m02) Calling .GetSSHPort
	I0806 07:23:38.815319   31793 main.go:141] libmachine: (ha-118173-m02) Calling .GetSSHKeyPath
	I0806 07:23:38.815502   31793 main.go:141] libmachine: (ha-118173-m02) Calling .GetSSHKeyPath
	I0806 07:23:38.815683   31793 main.go:141] libmachine: (ha-118173-m02) Calling .GetSSHUsername
	I0806 07:23:38.815922   31793 main.go:141] libmachine: Using SSH client type: native
	I0806 07:23:38.816122   31793 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.203 22 <nil> <nil>}
	I0806 07:23:38.816138   31793 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0806 07:23:38.921318   31793 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0806 07:23:38.921369   31793 main.go:141] libmachine: found compatible host: buildroot
	I0806 07:23:38.921375   31793 main.go:141] libmachine: Provisioning with buildroot...
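
Provisioner detection boils down to parsing the `cat /etc/os-release` output shown above. A small sketch, assuming the same key=value format:

// Sketch: parse os-release text into a map and read the ID/VERSION_ID fields.
package main

import (
	"bufio"
	"fmt"
	"strings"
)

func parseOSRelease(s string) map[string]string {
	out := map[string]string{}
	sc := bufio.NewScanner(strings.NewReader(s))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || !strings.Contains(line, "=") {
			continue
		}
		kv := strings.SplitN(line, "=", 2)
		out[kv[0]] = strings.Trim(kv[1], `"`)
	}
	return out
}

func main() {
	osRelease := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\nPRETTY_NAME=\"Buildroot 2023.02.9\"\n"
	info := parseOSRelease(osRelease)
	fmt.Println(info["ID"], info["VERSION_ID"]) // buildroot 2023.02.9
}
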
	I0806 07:23:38.921384   31793 main.go:141] libmachine: (ha-118173-m02) Calling .GetMachineName
	I0806 07:23:38.921682   31793 buildroot.go:166] provisioning hostname "ha-118173-m02"
	I0806 07:23:38.921709   31793 main.go:141] libmachine: (ha-118173-m02) Calling .GetMachineName
	I0806 07:23:38.921900   31793 main.go:141] libmachine: (ha-118173-m02) Calling .GetSSHHostname
	I0806 07:23:38.924353   31793 main.go:141] libmachine: (ha-118173-m02) DBG | domain ha-118173-m02 has defined MAC address 52:54:00:b4:e9:38 in network mk-ha-118173
	I0806 07:23:38.924774   31793 main.go:141] libmachine: (ha-118173-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:e9:38", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:23:29 +0000 UTC Type:0 Mac:52:54:00:b4:e9:38 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-118173-m02 Clientid:01:52:54:00:b4:e9:38}
	I0806 07:23:38.924797   31793 main.go:141] libmachine: (ha-118173-m02) DBG | domain ha-118173-m02 has defined IP address 192.168.39.203 and MAC address 52:54:00:b4:e9:38 in network mk-ha-118173
	I0806 07:23:38.924944   31793 main.go:141] libmachine: (ha-118173-m02) Calling .GetSSHPort
	I0806 07:23:38.925131   31793 main.go:141] libmachine: (ha-118173-m02) Calling .GetSSHKeyPath
	I0806 07:23:38.925276   31793 main.go:141] libmachine: (ha-118173-m02) Calling .GetSSHKeyPath
	I0806 07:23:38.925394   31793 main.go:141] libmachine: (ha-118173-m02) Calling .GetSSHUsername
	I0806 07:23:38.925567   31793 main.go:141] libmachine: Using SSH client type: native
	I0806 07:23:38.925723   31793 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.203 22 <nil> <nil>}
	I0806 07:23:38.925734   31793 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-118173-m02 && echo "ha-118173-m02" | sudo tee /etc/hostname
	I0806 07:23:39.049943   31793 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-118173-m02
	
	I0806 07:23:39.049968   31793 main.go:141] libmachine: (ha-118173-m02) Calling .GetSSHHostname
	I0806 07:23:39.052707   31793 main.go:141] libmachine: (ha-118173-m02) DBG | domain ha-118173-m02 has defined MAC address 52:54:00:b4:e9:38 in network mk-ha-118173
	I0806 07:23:39.053074   31793 main.go:141] libmachine: (ha-118173-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:e9:38", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:23:29 +0000 UTC Type:0 Mac:52:54:00:b4:e9:38 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-118173-m02 Clientid:01:52:54:00:b4:e9:38}
	I0806 07:23:39.053116   31793 main.go:141] libmachine: (ha-118173-m02) DBG | domain ha-118173-m02 has defined IP address 192.168.39.203 and MAC address 52:54:00:b4:e9:38 in network mk-ha-118173
	I0806 07:23:39.053251   31793 main.go:141] libmachine: (ha-118173-m02) Calling .GetSSHPort
	I0806 07:23:39.053452   31793 main.go:141] libmachine: (ha-118173-m02) Calling .GetSSHKeyPath
	I0806 07:23:39.053676   31793 main.go:141] libmachine: (ha-118173-m02) Calling .GetSSHKeyPath
	I0806 07:23:39.053849   31793 main.go:141] libmachine: (ha-118173-m02) Calling .GetSSHUsername
	I0806 07:23:39.053979   31793 main.go:141] libmachine: Using SSH client type: native
	I0806 07:23:39.054134   31793 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.203 22 <nil> <nil>}
	I0806 07:23:39.054150   31793 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-118173-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-118173-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-118173-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0806 07:23:39.165468   31793 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0806 07:23:39.165500   31793 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19370-13057/.minikube CaCertPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19370-13057/.minikube}
	I0806 07:23:39.165536   31793 buildroot.go:174] setting up certificates
	I0806 07:23:39.165552   31793 provision.go:84] configureAuth start
	I0806 07:23:39.165567   31793 main.go:141] libmachine: (ha-118173-m02) Calling .GetMachineName
	I0806 07:23:39.165824   31793 main.go:141] libmachine: (ha-118173-m02) Calling .GetIP
	I0806 07:23:39.168581   31793 main.go:141] libmachine: (ha-118173-m02) DBG | domain ha-118173-m02 has defined MAC address 52:54:00:b4:e9:38 in network mk-ha-118173
	I0806 07:23:39.168901   31793 main.go:141] libmachine: (ha-118173-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:e9:38", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:23:29 +0000 UTC Type:0 Mac:52:54:00:b4:e9:38 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-118173-m02 Clientid:01:52:54:00:b4:e9:38}
	I0806 07:23:39.168922   31793 main.go:141] libmachine: (ha-118173-m02) DBG | domain ha-118173-m02 has defined IP address 192.168.39.203 and MAC address 52:54:00:b4:e9:38 in network mk-ha-118173
	I0806 07:23:39.169112   31793 main.go:141] libmachine: (ha-118173-m02) Calling .GetSSHHostname
	I0806 07:23:39.171068   31793 main.go:141] libmachine: (ha-118173-m02) DBG | domain ha-118173-m02 has defined MAC address 52:54:00:b4:e9:38 in network mk-ha-118173
	I0806 07:23:39.171373   31793 main.go:141] libmachine: (ha-118173-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:e9:38", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:23:29 +0000 UTC Type:0 Mac:52:54:00:b4:e9:38 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-118173-m02 Clientid:01:52:54:00:b4:e9:38}
	I0806 07:23:39.171395   31793 main.go:141] libmachine: (ha-118173-m02) DBG | domain ha-118173-m02 has defined IP address 192.168.39.203 and MAC address 52:54:00:b4:e9:38 in network mk-ha-118173
	I0806 07:23:39.171569   31793 provision.go:143] copyHostCerts
	I0806 07:23:39.171619   31793 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19370-13057/.minikube/ca.pem
	I0806 07:23:39.171659   31793 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-13057/.minikube/ca.pem, removing ...
	I0806 07:23:39.171670   31793 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-13057/.minikube/ca.pem
	I0806 07:23:39.171744   31793 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19370-13057/.minikube/ca.pem (1078 bytes)
	I0806 07:23:39.171869   31793 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19370-13057/.minikube/cert.pem
	I0806 07:23:39.171902   31793 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-13057/.minikube/cert.pem, removing ...
	I0806 07:23:39.171912   31793 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-13057/.minikube/cert.pem
	I0806 07:23:39.171957   31793 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19370-13057/.minikube/cert.pem (1123 bytes)
	I0806 07:23:39.172026   31793 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19370-13057/.minikube/key.pem
	I0806 07:23:39.172050   31793 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-13057/.minikube/key.pem, removing ...
	I0806 07:23:39.172058   31793 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-13057/.minikube/key.pem
	I0806 07:23:39.172093   31793 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19370-13057/.minikube/key.pem (1675 bytes)
	I0806 07:23:39.172160   31793 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19370-13057/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca-key.pem org=jenkins.ha-118173-m02 san=[127.0.0.1 192.168.39.203 ha-118173-m02 localhost minikube]
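
The "generating server cert" line lists the SANs baked into the machine's server certificate. A sketch with crypto/x509 issuing a server cert with those SANs under a throwaway CA; minikube instead loads ca.pem/ca-key.pem from the paths shown earlier, and the key sizes and lifetimes here are illustrative:

// Sketch: issue a CA-signed server certificate carrying IP and DNS SANs.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Stand-in CA generated on the fly for the sketch.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-118173-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs from the log line above.
		DNSNames:    []string{"ha-118173-m02", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.203")},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, srvKey)
	fmt.Println("server cert bytes:", len(srvDER))
}
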
	I0806 07:23:39.428272   31793 provision.go:177] copyRemoteCerts
	I0806 07:23:39.428336   31793 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0806 07:23:39.428381   31793 main.go:141] libmachine: (ha-118173-m02) Calling .GetSSHHostname
	I0806 07:23:39.431425   31793 main.go:141] libmachine: (ha-118173-m02) DBG | domain ha-118173-m02 has defined MAC address 52:54:00:b4:e9:38 in network mk-ha-118173
	I0806 07:23:39.431759   31793 main.go:141] libmachine: (ha-118173-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:e9:38", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:23:29 +0000 UTC Type:0 Mac:52:54:00:b4:e9:38 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-118173-m02 Clientid:01:52:54:00:b4:e9:38}
	I0806 07:23:39.431790   31793 main.go:141] libmachine: (ha-118173-m02) DBG | domain ha-118173-m02 has defined IP address 192.168.39.203 and MAC address 52:54:00:b4:e9:38 in network mk-ha-118173
	I0806 07:23:39.431946   31793 main.go:141] libmachine: (ha-118173-m02) Calling .GetSSHPort
	I0806 07:23:39.432186   31793 main.go:141] libmachine: (ha-118173-m02) Calling .GetSSHKeyPath
	I0806 07:23:39.432408   31793 main.go:141] libmachine: (ha-118173-m02) Calling .GetSSHUsername
	I0806 07:23:39.432547   31793 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/ha-118173-m02/id_rsa Username:docker}
	I0806 07:23:39.515304   31793 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0806 07:23:39.515394   31793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0806 07:23:39.540013   31793 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0806 07:23:39.540074   31793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0806 07:23:39.564148   31793 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0806 07:23:39.564220   31793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0806 07:23:39.589345   31793 provision.go:87] duration metric: took 423.777348ms to configureAuth
	I0806 07:23:39.589376   31793 buildroot.go:189] setting minikube options for container-runtime
	I0806 07:23:39.589625   31793 config.go:182] Loaded profile config "ha-118173": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0806 07:23:39.589711   31793 main.go:141] libmachine: (ha-118173-m02) Calling .GetSSHHostname
	I0806 07:23:39.592583   31793 main.go:141] libmachine: (ha-118173-m02) DBG | domain ha-118173-m02 has defined MAC address 52:54:00:b4:e9:38 in network mk-ha-118173
	I0806 07:23:39.593013   31793 main.go:141] libmachine: (ha-118173-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:e9:38", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:23:29 +0000 UTC Type:0 Mac:52:54:00:b4:e9:38 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-118173-m02 Clientid:01:52:54:00:b4:e9:38}
	I0806 07:23:39.593043   31793 main.go:141] libmachine: (ha-118173-m02) DBG | domain ha-118173-m02 has defined IP address 192.168.39.203 and MAC address 52:54:00:b4:e9:38 in network mk-ha-118173
	I0806 07:23:39.593244   31793 main.go:141] libmachine: (ha-118173-m02) Calling .GetSSHPort
	I0806 07:23:39.593439   31793 main.go:141] libmachine: (ha-118173-m02) Calling .GetSSHKeyPath
	I0806 07:23:39.593605   31793 main.go:141] libmachine: (ha-118173-m02) Calling .GetSSHKeyPath
	I0806 07:23:39.593757   31793 main.go:141] libmachine: (ha-118173-m02) Calling .GetSSHUsername
	I0806 07:23:39.593968   31793 main.go:141] libmachine: Using SSH client type: native
	I0806 07:23:39.594126   31793 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.203 22 <nil> <nil>}
	I0806 07:23:39.594141   31793 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0806 07:23:39.856759   31793 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
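
The command above writes CRIO_MINIKUBE_OPTIONS into /etc/sysconfig/crio.minikube and restarts cri-o over SSH. A rough sketch of composing and running such a remote command (via the system ssh client for brevity); this is not the libmachine provisioner code, and the insecure-registry CIDR is taken from the ServiceCIDR in the config dump above:

// Sketch: build the sysconfig-write command string and run it on the remote host.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	opts := "--insecure-registry 10.96.0.0/12 "
	remote := fmt.Sprintf(
		`sudo mkdir -p /etc/sysconfig && printf %%s "
CRIO_MINIKUBE_OPTIONS='%s'
" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio`, opts)
	out, err := exec.Command("ssh", "docker@192.168.39.203", remote).CombinedOutput()
	fmt.Println(string(out), err)
}
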
	I0806 07:23:39.856787   31793 main.go:141] libmachine: Checking connection to Docker...
	I0806 07:23:39.856798   31793 main.go:141] libmachine: (ha-118173-m02) Calling .GetURL
	I0806 07:23:39.858208   31793 main.go:141] libmachine: (ha-118173-m02) DBG | Using libvirt version 6000000
	I0806 07:23:39.860425   31793 main.go:141] libmachine: (ha-118173-m02) DBG | domain ha-118173-m02 has defined MAC address 52:54:00:b4:e9:38 in network mk-ha-118173
	I0806 07:23:39.860808   31793 main.go:141] libmachine: (ha-118173-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:e9:38", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:23:29 +0000 UTC Type:0 Mac:52:54:00:b4:e9:38 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-118173-m02 Clientid:01:52:54:00:b4:e9:38}
	I0806 07:23:39.860838   31793 main.go:141] libmachine: (ha-118173-m02) DBG | domain ha-118173-m02 has defined IP address 192.168.39.203 and MAC address 52:54:00:b4:e9:38 in network mk-ha-118173
	I0806 07:23:39.861014   31793 main.go:141] libmachine: Docker is up and running!
	I0806 07:23:39.861025   31793 main.go:141] libmachine: Reticulating splines...
	I0806 07:23:39.861032   31793 client.go:171] duration metric: took 25.298767274s to LocalClient.Create
	I0806 07:23:39.861066   31793 start.go:167] duration metric: took 25.298821987s to libmachine.API.Create "ha-118173"
	I0806 07:23:39.861080   31793 start.go:293] postStartSetup for "ha-118173-m02" (driver="kvm2")
	I0806 07:23:39.861093   31793 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0806 07:23:39.861126   31793 main.go:141] libmachine: (ha-118173-m02) Calling .DriverName
	I0806 07:23:39.861394   31793 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0806 07:23:39.861422   31793 main.go:141] libmachine: (ha-118173-m02) Calling .GetSSHHostname
	I0806 07:23:39.863275   31793 main.go:141] libmachine: (ha-118173-m02) DBG | domain ha-118173-m02 has defined MAC address 52:54:00:b4:e9:38 in network mk-ha-118173
	I0806 07:23:39.863555   31793 main.go:141] libmachine: (ha-118173-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:e9:38", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:23:29 +0000 UTC Type:0 Mac:52:54:00:b4:e9:38 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-118173-m02 Clientid:01:52:54:00:b4:e9:38}
	I0806 07:23:39.863578   31793 main.go:141] libmachine: (ha-118173-m02) DBG | domain ha-118173-m02 has defined IP address 192.168.39.203 and MAC address 52:54:00:b4:e9:38 in network mk-ha-118173
	I0806 07:23:39.863740   31793 main.go:141] libmachine: (ha-118173-m02) Calling .GetSSHPort
	I0806 07:23:39.863897   31793 main.go:141] libmachine: (ha-118173-m02) Calling .GetSSHKeyPath
	I0806 07:23:39.864034   31793 main.go:141] libmachine: (ha-118173-m02) Calling .GetSSHUsername
	I0806 07:23:39.864174   31793 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/ha-118173-m02/id_rsa Username:docker}
	I0806 07:23:39.943366   31793 ssh_runner.go:195] Run: cat /etc/os-release
	I0806 07:23:39.947556   31793 info.go:137] Remote host: Buildroot 2023.02.9
	I0806 07:23:39.947581   31793 filesync.go:126] Scanning /home/jenkins/minikube-integration/19370-13057/.minikube/addons for local assets ...
	I0806 07:23:39.947645   31793 filesync.go:126] Scanning /home/jenkins/minikube-integration/19370-13057/.minikube/files for local assets ...
	I0806 07:23:39.947715   31793 filesync.go:149] local asset: /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem -> 202462.pem in /etc/ssl/certs
	I0806 07:23:39.947725   31793 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem -> /etc/ssl/certs/202462.pem
	I0806 07:23:39.947804   31793 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0806 07:23:39.957875   31793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem --> /etc/ssl/certs/202462.pem (1708 bytes)
	I0806 07:23:39.980916   31793 start.go:296] duration metric: took 119.823767ms for postStartSetup
	I0806 07:23:39.980972   31793 main.go:141] libmachine: (ha-118173-m02) Calling .GetConfigRaw
	I0806 07:23:39.981487   31793 main.go:141] libmachine: (ha-118173-m02) Calling .GetIP
	I0806 07:23:39.983974   31793 main.go:141] libmachine: (ha-118173-m02) DBG | domain ha-118173-m02 has defined MAC address 52:54:00:b4:e9:38 in network mk-ha-118173
	I0806 07:23:39.984290   31793 main.go:141] libmachine: (ha-118173-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:e9:38", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:23:29 +0000 UTC Type:0 Mac:52:54:00:b4:e9:38 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-118173-m02 Clientid:01:52:54:00:b4:e9:38}
	I0806 07:23:39.984310   31793 main.go:141] libmachine: (ha-118173-m02) DBG | domain ha-118173-m02 has defined IP address 192.168.39.203 and MAC address 52:54:00:b4:e9:38 in network mk-ha-118173
	I0806 07:23:39.984545   31793 profile.go:143] Saving config to /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/config.json ...
	I0806 07:23:39.984788   31793 start.go:128] duration metric: took 25.441444463s to createHost
	I0806 07:23:39.984816   31793 main.go:141] libmachine: (ha-118173-m02) Calling .GetSSHHostname
	I0806 07:23:39.986891   31793 main.go:141] libmachine: (ha-118173-m02) DBG | domain ha-118173-m02 has defined MAC address 52:54:00:b4:e9:38 in network mk-ha-118173
	I0806 07:23:39.987261   31793 main.go:141] libmachine: (ha-118173-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:e9:38", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:23:29 +0000 UTC Type:0 Mac:52:54:00:b4:e9:38 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-118173-m02 Clientid:01:52:54:00:b4:e9:38}
	I0806 07:23:39.987285   31793 main.go:141] libmachine: (ha-118173-m02) DBG | domain ha-118173-m02 has defined IP address 192.168.39.203 and MAC address 52:54:00:b4:e9:38 in network mk-ha-118173
	I0806 07:23:39.987437   31793 main.go:141] libmachine: (ha-118173-m02) Calling .GetSSHPort
	I0806 07:23:39.987624   31793 main.go:141] libmachine: (ha-118173-m02) Calling .GetSSHKeyPath
	I0806 07:23:39.987778   31793 main.go:141] libmachine: (ha-118173-m02) Calling .GetSSHKeyPath
	I0806 07:23:39.987916   31793 main.go:141] libmachine: (ha-118173-m02) Calling .GetSSHUsername
	I0806 07:23:39.988070   31793 main.go:141] libmachine: Using SSH client type: native
	I0806 07:23:39.988260   31793 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.203 22 <nil> <nil>}
	I0806 07:23:39.988271   31793 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0806 07:23:40.089389   31793 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722929020.047280056
	
	I0806 07:23:40.089406   31793 fix.go:216] guest clock: 1722929020.047280056
	I0806 07:23:40.089414   31793 fix.go:229] Guest: 2024-08-06 07:23:40.047280056 +0000 UTC Remote: 2024-08-06 07:23:39.984804217 +0000 UTC m=+80.852542965 (delta=62.475839ms)
	I0806 07:23:40.089429   31793 fix.go:200] guest clock delta is within tolerance: 62.475839ms
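
The fix.go lines compare the guest's `date +%s.%N` output against the host-side timestamp and skip a resync when the delta is small. A sketch of that check using the values from the log; the 2-second tolerance is an assumption, the log only shows that a ~62ms delta is accepted:

// Sketch: compute guest/host clock delta and compare against a tolerance.
package main

import (
	"fmt"
	"time"
)

func main() {
	guest := time.Unix(1722929020, 47280056) // from `date +%s.%N` on the guest
	host := time.Date(2024, time.August, 6, 7, 23, 39, 984804217, time.UTC)
	delta := guest.Sub(host)
	const tolerance = 2 * time.Second // assumed threshold for illustration
	if delta < -tolerance || delta > tolerance {
		fmt.Printf("guest clock delta %v outside tolerance, would resync\n", delta)
	} else {
		fmt.Printf("guest clock delta %v within tolerance\n", delta)
	}
}
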
	I0806 07:23:40.089434   31793 start.go:83] releasing machines lock for "ha-118173-m02", held for 25.546207878s
	I0806 07:23:40.089454   31793 main.go:141] libmachine: (ha-118173-m02) Calling .DriverName
	I0806 07:23:40.089709   31793 main.go:141] libmachine: (ha-118173-m02) Calling .GetIP
	I0806 07:23:40.092489   31793 main.go:141] libmachine: (ha-118173-m02) DBG | domain ha-118173-m02 has defined MAC address 52:54:00:b4:e9:38 in network mk-ha-118173
	I0806 07:23:40.092829   31793 main.go:141] libmachine: (ha-118173-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:e9:38", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:23:29 +0000 UTC Type:0 Mac:52:54:00:b4:e9:38 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-118173-m02 Clientid:01:52:54:00:b4:e9:38}
	I0806 07:23:40.092862   31793 main.go:141] libmachine: (ha-118173-m02) DBG | domain ha-118173-m02 has defined IP address 192.168.39.203 and MAC address 52:54:00:b4:e9:38 in network mk-ha-118173
	I0806 07:23:40.095214   31793 out.go:177] * Found network options:
	I0806 07:23:40.096417   31793 out.go:177]   - NO_PROXY=192.168.39.164
	W0806 07:23:40.097790   31793 proxy.go:119] fail to check proxy env: Error ip not in block
	I0806 07:23:40.097820   31793 main.go:141] libmachine: (ha-118173-m02) Calling .DriverName
	I0806 07:23:40.098370   31793 main.go:141] libmachine: (ha-118173-m02) Calling .DriverName
	I0806 07:23:40.098548   31793 main.go:141] libmachine: (ha-118173-m02) Calling .DriverName
	I0806 07:23:40.098624   31793 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0806 07:23:40.098665   31793 main.go:141] libmachine: (ha-118173-m02) Calling .GetSSHHostname
	W0806 07:23:40.098709   31793 proxy.go:119] fail to check proxy env: Error ip not in block
	I0806 07:23:40.098767   31793 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0806 07:23:40.098786   31793 main.go:141] libmachine: (ha-118173-m02) Calling .GetSSHHostname
	I0806 07:23:40.101347   31793 main.go:141] libmachine: (ha-118173-m02) DBG | domain ha-118173-m02 has defined MAC address 52:54:00:b4:e9:38 in network mk-ha-118173
	I0806 07:23:40.101606   31793 main.go:141] libmachine: (ha-118173-m02) DBG | domain ha-118173-m02 has defined MAC address 52:54:00:b4:e9:38 in network mk-ha-118173
	I0806 07:23:40.101764   31793 main.go:141] libmachine: (ha-118173-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:e9:38", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:23:29 +0000 UTC Type:0 Mac:52:54:00:b4:e9:38 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-118173-m02 Clientid:01:52:54:00:b4:e9:38}
	I0806 07:23:40.101799   31793 main.go:141] libmachine: (ha-118173-m02) DBG | domain ha-118173-m02 has defined IP address 192.168.39.203 and MAC address 52:54:00:b4:e9:38 in network mk-ha-118173
	I0806 07:23:40.101913   31793 main.go:141] libmachine: (ha-118173-m02) Calling .GetSSHPort
	I0806 07:23:40.102020   31793 main.go:141] libmachine: (ha-118173-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:e9:38", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:23:29 +0000 UTC Type:0 Mac:52:54:00:b4:e9:38 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-118173-m02 Clientid:01:52:54:00:b4:e9:38}
	I0806 07:23:40.102044   31793 main.go:141] libmachine: (ha-118173-m02) DBG | domain ha-118173-m02 has defined IP address 192.168.39.203 and MAC address 52:54:00:b4:e9:38 in network mk-ha-118173
	I0806 07:23:40.102125   31793 main.go:141] libmachine: (ha-118173-m02) Calling .GetSSHKeyPath
	I0806 07:23:40.102264   31793 main.go:141] libmachine: (ha-118173-m02) Calling .GetSSHPort
	I0806 07:23:40.102339   31793 main.go:141] libmachine: (ha-118173-m02) Calling .GetSSHUsername
	I0806 07:23:40.102401   31793 main.go:141] libmachine: (ha-118173-m02) Calling .GetSSHKeyPath
	I0806 07:23:40.102467   31793 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/ha-118173-m02/id_rsa Username:docker}
	I0806 07:23:40.102543   31793 main.go:141] libmachine: (ha-118173-m02) Calling .GetSSHUsername
	I0806 07:23:40.102688   31793 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/ha-118173-m02/id_rsa Username:docker}
	I0806 07:23:40.342985   31793 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0806 07:23:40.349096   31793 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0806 07:23:40.349155   31793 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0806 07:23:40.365655   31793 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0806 07:23:40.365676   31793 start.go:495] detecting cgroup driver to use...
	I0806 07:23:40.365733   31793 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0806 07:23:40.381889   31793 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0806 07:23:40.396850   31793 docker.go:217] disabling cri-docker service (if available) ...
	I0806 07:23:40.396913   31793 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0806 07:23:40.410497   31793 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0806 07:23:40.424845   31793 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0806 07:23:40.539692   31793 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0806 07:23:40.685184   31793 docker.go:233] disabling docker service ...
	I0806 07:23:40.685273   31793 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0806 07:23:40.700727   31793 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0806 07:23:40.714382   31793 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0806 07:23:40.850462   31793 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0806 07:23:40.964754   31793 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0806 07:23:40.980912   31793 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0806 07:23:40.999347   31793 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0806 07:23:40.999409   31793 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 07:23:41.010272   31793 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0806 07:23:41.010334   31793 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 07:23:41.021257   31793 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 07:23:41.032088   31793 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 07:23:41.042973   31793 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0806 07:23:41.053877   31793 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 07:23:41.065099   31793 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 07:23:41.082457   31793 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 07:23:41.093443   31793 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0806 07:23:41.103313   31793 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0806 07:23:41.103365   31793 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0806 07:23:41.117155   31793 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0806 07:23:41.126922   31793 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 07:23:41.247215   31793 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0806 07:23:41.388168   31793 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0806 07:23:41.388244   31793 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0806 07:23:41.393399   31793 start.go:563] Will wait 60s for crictl version
	I0806 07:23:41.393463   31793 ssh_runner.go:195] Run: which crictl
	I0806 07:23:41.397232   31793 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0806 07:23:41.438841   31793 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0806 07:23:41.438927   31793 ssh_runner.go:195] Run: crio --version
	I0806 07:23:41.468576   31793 ssh_runner.go:195] Run: crio --version
	I0806 07:23:41.499325   31793 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0806 07:23:41.500991   31793 out.go:177]   - env NO_PROXY=192.168.39.164
	I0806 07:23:41.502673   31793 main.go:141] libmachine: (ha-118173-m02) Calling .GetIP
	I0806 07:23:41.505533   31793 main.go:141] libmachine: (ha-118173-m02) DBG | domain ha-118173-m02 has defined MAC address 52:54:00:b4:e9:38 in network mk-ha-118173
	I0806 07:23:41.505877   31793 main.go:141] libmachine: (ha-118173-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:e9:38", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:23:29 +0000 UTC Type:0 Mac:52:54:00:b4:e9:38 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-118173-m02 Clientid:01:52:54:00:b4:e9:38}
	I0806 07:23:41.505906   31793 main.go:141] libmachine: (ha-118173-m02) DBG | domain ha-118173-m02 has defined IP address 192.168.39.203 and MAC address 52:54:00:b4:e9:38 in network mk-ha-118173
	I0806 07:23:41.506119   31793 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0806 07:23:41.510695   31793 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0806 07:23:41.524289   31793 mustload.go:65] Loading cluster: ha-118173
	I0806 07:23:41.524528   31793 config.go:182] Loaded profile config "ha-118173": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0806 07:23:41.524834   31793 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:23:41.524862   31793 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:23:41.539057   31793 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35173
	I0806 07:23:41.539472   31793 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:23:41.539888   31793 main.go:141] libmachine: Using API Version  1
	I0806 07:23:41.539905   31793 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:23:41.540191   31793 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:23:41.540399   31793 main.go:141] libmachine: (ha-118173) Calling .GetState
	I0806 07:23:41.541915   31793 host.go:66] Checking if "ha-118173" exists ...
	I0806 07:23:41.542205   31793 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:23:41.542230   31793 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:23:41.556556   31793 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43703
	I0806 07:23:41.556964   31793 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:23:41.557543   31793 main.go:141] libmachine: Using API Version  1
	I0806 07:23:41.557566   31793 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:23:41.557904   31793 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:23:41.558089   31793 main.go:141] libmachine: (ha-118173) Calling .DriverName
	I0806 07:23:41.558293   31793 certs.go:68] Setting up /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173 for IP: 192.168.39.203
	I0806 07:23:41.558303   31793 certs.go:194] generating shared ca certs ...
	I0806 07:23:41.558319   31793 certs.go:226] acquiring lock for ca certs: {Name:mkf5c5aed91f4108054d9f2c742cad66c86afbed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 07:23:41.558436   31793 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19370-13057/.minikube/ca.key
	I0806 07:23:41.558473   31793 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.key
	I0806 07:23:41.558481   31793 certs.go:256] generating profile certs ...
	I0806 07:23:41.558546   31793 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/client.key
	I0806 07:23:41.558571   31793 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/apiserver.key.d5d45db8
	I0806 07:23:41.558585   31793 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/apiserver.crt.d5d45db8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.164 192.168.39.203 192.168.39.254]
	I0806 07:23:41.718862   31793 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/apiserver.crt.d5d45db8 ...
	I0806 07:23:41.718888   31793 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/apiserver.crt.d5d45db8: {Name:mkd112cb4f72e74564806ce8d17c2ab9a5a6a539 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 07:23:41.719039   31793 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/apiserver.key.d5d45db8 ...
	I0806 07:23:41.719051   31793 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/apiserver.key.d5d45db8: {Name:mk1db63b82844c3bee083fbe16f3f90a3f7a181d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 07:23:41.719114   31793 certs.go:381] copying /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/apiserver.crt.d5d45db8 -> /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/apiserver.crt
	I0806 07:23:41.719243   31793 certs.go:385] copying /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/apiserver.key.d5d45db8 -> /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/apiserver.key
	I0806 07:23:41.719363   31793 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/proxy-client.key
	I0806 07:23:41.719378   31793 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0806 07:23:41.719390   31793 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0806 07:23:41.719401   31793 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0806 07:23:41.719414   31793 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0806 07:23:41.719424   31793 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0806 07:23:41.719436   31793 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0806 07:23:41.719446   31793 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0806 07:23:41.719458   31793 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0806 07:23:41.719501   31793 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/20246.pem (1338 bytes)
	W0806 07:23:41.719527   31793 certs.go:480] ignoring /home/jenkins/minikube-integration/19370-13057/.minikube/certs/20246_empty.pem, impossibly tiny 0 bytes
	I0806 07:23:41.719535   31793 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca-key.pem (1675 bytes)
	I0806 07:23:41.719555   31793 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem (1078 bytes)
	I0806 07:23:41.719574   31793 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/cert.pem (1123 bytes)
	I0806 07:23:41.719595   31793 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/key.pem (1675 bytes)
	I0806 07:23:41.719633   31793 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem (1708 bytes)
	I0806 07:23:41.719657   31793 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0806 07:23:41.719670   31793 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/20246.pem -> /usr/share/ca-certificates/20246.pem
	I0806 07:23:41.719682   31793 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem -> /usr/share/ca-certificates/202462.pem
	I0806 07:23:41.719712   31793 main.go:141] libmachine: (ha-118173) Calling .GetSSHHostname
	I0806 07:23:41.722946   31793 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:23:41.723406   31793 main.go:141] libmachine: (ha-118173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:df:f0", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:22:33 +0000 UTC Type:0 Mac:52:54:00:ef:df:f0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:ha-118173 Clientid:01:52:54:00:ef:df:f0}
	I0806 07:23:41.723426   31793 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined IP address 192.168.39.164 and MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:23:41.723686   31793 main.go:141] libmachine: (ha-118173) Calling .GetSSHPort
	I0806 07:23:41.723916   31793 main.go:141] libmachine: (ha-118173) Calling .GetSSHKeyPath
	I0806 07:23:41.724091   31793 main.go:141] libmachine: (ha-118173) Calling .GetSSHUsername
	I0806 07:23:41.724219   31793 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/ha-118173/id_rsa Username:docker}
	I0806 07:23:41.792714   31793 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0806 07:23:41.797774   31793 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0806 07:23:41.815063   31793 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0806 07:23:41.820824   31793 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0806 07:23:41.837161   31793 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0806 07:23:41.842160   31793 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0806 07:23:41.852850   31793 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0806 07:23:41.856991   31793 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0806 07:23:41.867945   31793 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0806 07:23:41.872003   31793 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0806 07:23:41.882560   31793 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0806 07:23:41.886785   31793 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0806 07:23:41.896850   31793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0806 07:23:41.922379   31793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0806 07:23:41.946524   31793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0806 07:23:41.970191   31793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0806 07:23:41.992950   31793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0806 07:23:42.017118   31793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0806 07:23:42.041296   31793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0806 07:23:42.065962   31793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0806 07:23:42.090847   31793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0806 07:23:42.115845   31793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/certs/20246.pem --> /usr/share/ca-certificates/20246.pem (1338 bytes)
	I0806 07:23:42.145312   31793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem --> /usr/share/ca-certificates/202462.pem (1708 bytes)
	I0806 07:23:42.182159   31793 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0806 07:23:42.199461   31793 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0806 07:23:42.217013   31793 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0806 07:23:42.234444   31793 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0806 07:23:42.251274   31793 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0806 07:23:42.269243   31793 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0806 07:23:42.286419   31793 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0806 07:23:42.303920   31793 ssh_runner.go:195] Run: openssl version
	I0806 07:23:42.310241   31793 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0806 07:23:42.322691   31793 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0806 07:23:42.328134   31793 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  6 07:07 /usr/share/ca-certificates/minikubeCA.pem
	I0806 07:23:42.328189   31793 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0806 07:23:42.334083   31793 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0806 07:23:42.345785   31793 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20246.pem && ln -fs /usr/share/ca-certificates/20246.pem /etc/ssl/certs/20246.pem"
	I0806 07:23:42.357553   31793 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20246.pem
	I0806 07:23:42.362666   31793 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  6 07:17 /usr/share/ca-certificates/20246.pem
	I0806 07:23:42.362723   31793 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20246.pem
	I0806 07:23:42.368742   31793 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20246.pem /etc/ssl/certs/51391683.0"
	I0806 07:23:42.380083   31793 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202462.pem && ln -fs /usr/share/ca-certificates/202462.pem /etc/ssl/certs/202462.pem"
	I0806 07:23:42.391907   31793 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202462.pem
	I0806 07:23:42.396595   31793 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  6 07:17 /usr/share/ca-certificates/202462.pem
	I0806 07:23:42.396640   31793 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202462.pem
	I0806 07:23:42.402341   31793 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202462.pem /etc/ssl/certs/3ec20f2e.0"
	I0806 07:23:42.413138   31793 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0806 07:23:42.417256   31793 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0806 07:23:42.417312   31793 kubeadm.go:934] updating node {m02 192.168.39.203 8443 v1.30.3 crio true true} ...
	I0806 07:23:42.417405   31793 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-118173-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.203
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-118173 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0806 07:23:42.417427   31793 kube-vip.go:115] generating kube-vip config ...
	I0806 07:23:42.417459   31793 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0806 07:23:42.434308   31793 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0806 07:23:42.434367   31793 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0806 07:23:42.434427   31793 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0806 07:23:42.444954   31793 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0806 07:23:42.445004   31793 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0806 07:23:42.455394   31793 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256
	I0806 07:23:42.455421   31793 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/cache/linux/amd64/v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0806 07:23:42.455484   31793 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19370-13057/.minikube/cache/linux/amd64/v1.30.3/kubeadm
	I0806 07:23:42.455484   31793 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19370-13057/.minikube/cache/linux/amd64/v1.30.3/kubelet
	I0806 07:23:42.455491   31793 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0806 07:23:42.460303   31793 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0806 07:23:42.460331   31793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/cache/linux/amd64/v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (51454104 bytes)
	I0806 07:24:21.437846   31793 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/cache/linux/amd64/v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0806 07:24:21.437925   31793 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0806 07:24:21.443805   31793 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0806 07:24:21.443838   31793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/cache/linux/amd64/v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (50249880 bytes)
	I0806 07:24:52.474908   31793 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0806 07:24:52.492777   31793 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/cache/linux/amd64/v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0806 07:24:52.492881   31793 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0806 07:24:52.497512   31793 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0806 07:24:52.497544   31793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/cache/linux/amd64/v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (100125080 bytes)
	I0806 07:24:52.908416   31793 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0806 07:24:52.919832   31793 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0806 07:24:52.938153   31793 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0806 07:24:52.956563   31793 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0806 07:24:52.976542   31793 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0806 07:24:52.981271   31793 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0806 07:24:52.995889   31793 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 07:24:53.127774   31793 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0806 07:24:53.145529   31793 host.go:66] Checking if "ha-118173" exists ...
	I0806 07:24:53.145838   31793 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:24:53.145882   31793 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:24:53.160853   31793 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36913
	I0806 07:24:53.161309   31793 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:24:53.161774   31793 main.go:141] libmachine: Using API Version  1
	I0806 07:24:53.161794   31793 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:24:53.162254   31793 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:24:53.162449   31793 main.go:141] libmachine: (ha-118173) Calling .DriverName
	I0806 07:24:53.162622   31793 start.go:317] joinCluster: &{Name:ha-118173 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-118173 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.164 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.203 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 07:24:53.162744   31793 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0806 07:24:53.162761   31793 main.go:141] libmachine: (ha-118173) Calling .GetSSHHostname
	I0806 07:24:53.165603   31793 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:24:53.166017   31793 main.go:141] libmachine: (ha-118173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:df:f0", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:22:33 +0000 UTC Type:0 Mac:52:54:00:ef:df:f0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:ha-118173 Clientid:01:52:54:00:ef:df:f0}
	I0806 07:24:53.166042   31793 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined IP address 192.168.39.164 and MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:24:53.166240   31793 main.go:141] libmachine: (ha-118173) Calling .GetSSHPort
	I0806 07:24:53.166416   31793 main.go:141] libmachine: (ha-118173) Calling .GetSSHKeyPath
	I0806 07:24:53.166555   31793 main.go:141] libmachine: (ha-118173) Calling .GetSSHUsername
	I0806 07:24:53.166686   31793 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/ha-118173/id_rsa Username:docker}
	I0806 07:24:53.333140   31793 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.203 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0806 07:24:53.333189   31793 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token avyxt8.s35nwnnva82f8kjh --discovery-token-ca-cert-hash sha256:d36e5c1dfe989c7d561c809d7c06e0fc13926ac8f6d664cc5d6865ea5f79931b --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-118173-m02 --control-plane --apiserver-advertise-address=192.168.39.203 --apiserver-bind-port=8443"
	I0806 07:25:16.031076   31793 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token avyxt8.s35nwnnva82f8kjh --discovery-token-ca-cert-hash sha256:d36e5c1dfe989c7d561c809d7c06e0fc13926ac8f6d664cc5d6865ea5f79931b --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-118173-m02 --control-plane --apiserver-advertise-address=192.168.39.203 --apiserver-bind-port=8443": (22.697849276s)
	I0806 07:25:16.031117   31793 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0806 07:25:16.664310   31793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-118173-m02 minikube.k8s.io/updated_at=2024_08_06T07_25_16_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=e92cb06692f5ea1ba801d10d148e5e92e807f9c8 minikube.k8s.io/name=ha-118173 minikube.k8s.io/primary=false
	I0806 07:25:16.770689   31793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-118173-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0806 07:25:16.893611   31793 start.go:319] duration metric: took 23.730984068s to joinCluster
	I0806 07:25:16.893746   31793 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.203 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0806 07:25:16.894005   31793 config.go:182] Loaded profile config "ha-118173": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0806 07:25:16.895507   31793 out.go:177] * Verifying Kubernetes components...
	I0806 07:25:16.896985   31793 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 07:25:17.162479   31793 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0806 07:25:17.230606   31793 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19370-13057/kubeconfig
	I0806 07:25:17.230961   31793 kapi.go:59] client config for ha-118173: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/client.crt", KeyFile:"/home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/client.key", CAFile:"/home/jenkins/minikube-integration/19370-13057/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02a80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0806 07:25:17.231049   31793 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.164:8443
	I0806 07:25:17.231317   31793 node_ready.go:35] waiting up to 6m0s for node "ha-118173-m02" to be "Ready" ...
	I0806 07:25:17.231431   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m02
	I0806 07:25:17.231441   31793 round_trippers.go:469] Request Headers:
	I0806 07:25:17.231453   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:25:17.231465   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:25:17.240891   31793 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0806 07:25:17.731957   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m02
	I0806 07:25:17.731979   31793 round_trippers.go:469] Request Headers:
	I0806 07:25:17.731987   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:25:17.731991   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:25:17.735860   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:25:18.232222   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m02
	I0806 07:25:18.232240   31793 round_trippers.go:469] Request Headers:
	I0806 07:25:18.232248   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:25:18.232253   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:25:18.235036   31793 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 07:25:18.731997   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m02
	I0806 07:25:18.732018   31793 round_trippers.go:469] Request Headers:
	I0806 07:25:18.732027   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:25:18.732031   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:25:18.735954   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:25:19.232194   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m02
	I0806 07:25:19.232220   31793 round_trippers.go:469] Request Headers:
	I0806 07:25:19.232231   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:25:19.232236   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:25:19.235505   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:25:19.236223   31793 node_ready.go:53] node "ha-118173-m02" has status "Ready":"False"
	I0806 07:25:19.732527   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m02
	I0806 07:25:19.732549   31793 round_trippers.go:469] Request Headers:
	I0806 07:25:19.732557   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:25:19.732561   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:25:19.736607   31793 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0806 07:25:20.231627   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m02
	I0806 07:25:20.231654   31793 round_trippers.go:469] Request Headers:
	I0806 07:25:20.231666   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:25:20.231674   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:25:20.234999   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:25:20.731732   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m02
	I0806 07:25:20.731754   31793 round_trippers.go:469] Request Headers:
	I0806 07:25:20.731762   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:25:20.731768   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:25:20.735635   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:25:21.232487   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m02
	I0806 07:25:21.232510   31793 round_trippers.go:469] Request Headers:
	I0806 07:25:21.232522   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:25:21.232528   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:25:21.235987   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:25:21.236526   31793 node_ready.go:53] node "ha-118173-m02" has status "Ready":"False"
	I0806 07:25:21.731825   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m02
	I0806 07:25:21.731848   31793 round_trippers.go:469] Request Headers:
	I0806 07:25:21.731856   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:25:21.731861   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:25:21.735273   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:25:22.232516   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m02
	I0806 07:25:22.232544   31793 round_trippers.go:469] Request Headers:
	I0806 07:25:22.232556   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:25:22.232565   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:25:22.237203   31793 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0806 07:25:22.732292   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m02
	I0806 07:25:22.732311   31793 round_trippers.go:469] Request Headers:
	I0806 07:25:22.732319   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:25:22.732324   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:25:22.736115   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:25:23.232186   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m02
	I0806 07:25:23.232212   31793 round_trippers.go:469] Request Headers:
	I0806 07:25:23.232223   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:25:23.232228   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:25:23.235284   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:25:23.731507   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m02
	I0806 07:25:23.731528   31793 round_trippers.go:469] Request Headers:
	I0806 07:25:23.731536   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:25:23.731539   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:25:23.734683   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:25:23.735299   31793 node_ready.go:53] node "ha-118173-m02" has status "Ready":"False"
	I0806 07:25:24.232015   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m02
	I0806 07:25:24.232037   31793 round_trippers.go:469] Request Headers:
	I0806 07:25:24.232045   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:25:24.232050   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:25:24.235359   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:25:24.731509   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m02
	I0806 07:25:24.731531   31793 round_trippers.go:469] Request Headers:
	I0806 07:25:24.731539   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:25:24.731544   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:25:24.735451   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:25:25.231659   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m02
	I0806 07:25:25.231684   31793 round_trippers.go:469] Request Headers:
	I0806 07:25:25.231692   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:25:25.231695   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:25:25.235553   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:25:25.731997   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m02
	I0806 07:25:25.732024   31793 round_trippers.go:469] Request Headers:
	I0806 07:25:25.732035   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:25:25.732040   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:25:25.735579   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:25:25.736264   31793 node_ready.go:53] node "ha-118173-m02" has status "Ready":"False"
	I0806 07:25:26.231536   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m02
	I0806 07:25:26.231577   31793 round_trippers.go:469] Request Headers:
	I0806 07:25:26.231585   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:25:26.231589   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:25:26.237718   31793 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0806 07:25:26.731570   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m02
	I0806 07:25:26.731591   31793 round_trippers.go:469] Request Headers:
	I0806 07:25:26.731599   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:25:26.731604   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:25:26.734799   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:25:27.231635   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m02
	I0806 07:25:27.231657   31793 round_trippers.go:469] Request Headers:
	I0806 07:25:27.231665   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:25:27.231670   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:25:27.235147   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:25:27.731608   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m02
	I0806 07:25:27.731632   31793 round_trippers.go:469] Request Headers:
	I0806 07:25:27.731644   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:25:27.731651   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:25:27.734845   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:25:28.232292   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m02
	I0806 07:25:28.232316   31793 round_trippers.go:469] Request Headers:
	I0806 07:25:28.232324   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:25:28.232329   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:25:28.235996   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:25:28.236527   31793 node_ready.go:53] node "ha-118173-m02" has status "Ready":"False"
	I0806 07:25:28.731928   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m02
	I0806 07:25:28.731951   31793 round_trippers.go:469] Request Headers:
	I0806 07:25:28.731959   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:25:28.731963   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:25:28.735401   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:25:29.232520   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m02
	I0806 07:25:29.232542   31793 round_trippers.go:469] Request Headers:
	I0806 07:25:29.232550   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:25:29.232555   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:25:29.237426   31793 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0806 07:25:29.732192   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m02
	I0806 07:25:29.732213   31793 round_trippers.go:469] Request Headers:
	I0806 07:25:29.732221   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:25:29.732225   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:25:29.736147   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:25:30.232194   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m02
	I0806 07:25:30.232218   31793 round_trippers.go:469] Request Headers:
	I0806 07:25:30.232226   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:25:30.232229   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:25:30.235717   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:25:30.731754   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m02
	I0806 07:25:30.731775   31793 round_trippers.go:469] Request Headers:
	I0806 07:25:30.731783   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:25:30.731786   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:25:30.735011   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:25:30.735571   31793 node_ready.go:53] node "ha-118173-m02" has status "Ready":"False"
	I0806 07:25:31.231922   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m02
	I0806 07:25:31.231947   31793 round_trippers.go:469] Request Headers:
	I0806 07:25:31.231959   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:25:31.231967   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:25:31.235232   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:25:31.731880   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m02
	I0806 07:25:31.731905   31793 round_trippers.go:469] Request Headers:
	I0806 07:25:31.731912   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:25:31.731916   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:25:31.735005   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:25:32.232000   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m02
	I0806 07:25:32.232023   31793 round_trippers.go:469] Request Headers:
	I0806 07:25:32.232030   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:25:32.232035   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:25:32.235450   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:25:32.732344   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m02
	I0806 07:25:32.732377   31793 round_trippers.go:469] Request Headers:
	I0806 07:25:32.732388   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:25:32.732393   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:25:32.736800   31793 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0806 07:25:32.737518   31793 node_ready.go:53] node "ha-118173-m02" has status "Ready":"False"
	I0806 07:25:33.232391   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m02
	I0806 07:25:33.232411   31793 round_trippers.go:469] Request Headers:
	I0806 07:25:33.232417   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:25:33.232421   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:25:33.235952   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:25:33.732111   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m02
	I0806 07:25:33.732219   31793 round_trippers.go:469] Request Headers:
	I0806 07:25:33.732245   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:25:33.732256   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:25:33.736463   31793 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0806 07:25:34.231988   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m02
	I0806 07:25:34.232011   31793 round_trippers.go:469] Request Headers:
	I0806 07:25:34.232020   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:25:34.232025   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:25:34.237949   31793 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0806 07:25:34.238523   31793 node_ready.go:49] node "ha-118173-m02" has status "Ready":"True"
	I0806 07:25:34.238550   31793 node_ready.go:38] duration metric: took 17.007202474s for node "ha-118173-m02" to be "Ready" ...
	I0806 07:25:34.238559   31793 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0806 07:25:34.238632   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/namespaces/kube-system/pods
	I0806 07:25:34.238640   31793 round_trippers.go:469] Request Headers:
	I0806 07:25:34.238647   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:25:34.238651   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:25:34.242925   31793 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0806 07:25:34.249754   31793 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-t24ww" in "kube-system" namespace to be "Ready" ...
	I0806 07:25:34.249849   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-t24ww
	I0806 07:25:34.249862   31793 round_trippers.go:469] Request Headers:
	I0806 07:25:34.249872   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:25:34.249881   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:25:34.252542   31793 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 07:25:34.253126   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173
	I0806 07:25:34.253147   31793 round_trippers.go:469] Request Headers:
	I0806 07:25:34.253156   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:25:34.253162   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:25:34.255680   31793 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 07:25:34.256221   31793 pod_ready.go:92] pod "coredns-7db6d8ff4d-t24ww" in "kube-system" namespace has status "Ready":"True"
	I0806 07:25:34.256245   31793 pod_ready.go:81] duration metric: took 6.464038ms for pod "coredns-7db6d8ff4d-t24ww" in "kube-system" namespace to be "Ready" ...
	I0806 07:25:34.256262   31793 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-zscz4" in "kube-system" namespace to be "Ready" ...
	I0806 07:25:34.256333   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-zscz4
	I0806 07:25:34.256344   31793 round_trippers.go:469] Request Headers:
	I0806 07:25:34.256355   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:25:34.256376   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:25:34.258788   31793 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 07:25:34.259613   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173
	I0806 07:25:34.259625   31793 round_trippers.go:469] Request Headers:
	I0806 07:25:34.259632   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:25:34.259636   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:25:34.261936   31793 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 07:25:34.262707   31793 pod_ready.go:92] pod "coredns-7db6d8ff4d-zscz4" in "kube-system" namespace has status "Ready":"True"
	I0806 07:25:34.262723   31793 pod_ready.go:81] duration metric: took 6.449074ms for pod "coredns-7db6d8ff4d-zscz4" in "kube-system" namespace to be "Ready" ...
	I0806 07:25:34.262734   31793 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-118173" in "kube-system" namespace to be "Ready" ...
	I0806 07:25:34.262797   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/namespaces/kube-system/pods/etcd-ha-118173
	I0806 07:25:34.262806   31793 round_trippers.go:469] Request Headers:
	I0806 07:25:34.262817   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:25:34.262826   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:25:34.268435   31793 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0806 07:25:34.268983   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173
	I0806 07:25:34.268996   31793 round_trippers.go:469] Request Headers:
	I0806 07:25:34.269004   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:25:34.269008   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:25:34.271298   31793 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 07:25:34.271793   31793 pod_ready.go:92] pod "etcd-ha-118173" in "kube-system" namespace has status "Ready":"True"
	I0806 07:25:34.271806   31793 pod_ready.go:81] duration metric: took 9.061638ms for pod "etcd-ha-118173" in "kube-system" namespace to be "Ready" ...
	I0806 07:25:34.271816   31793 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-118173-m02" in "kube-system" namespace to be "Ready" ...
	I0806 07:25:34.271864   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/namespaces/kube-system/pods/etcd-ha-118173-m02
	I0806 07:25:34.271873   31793 round_trippers.go:469] Request Headers:
	I0806 07:25:34.271881   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:25:34.271884   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:25:34.274574   31793 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 07:25:34.275005   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m02
	I0806 07:25:34.275017   31793 round_trippers.go:469] Request Headers:
	I0806 07:25:34.275023   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:25:34.275027   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:25:34.277285   31793 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 07:25:34.277675   31793 pod_ready.go:92] pod "etcd-ha-118173-m02" in "kube-system" namespace has status "Ready":"True"
	I0806 07:25:34.277690   31793 pod_ready.go:81] duration metric: took 5.863353ms for pod "etcd-ha-118173-m02" in "kube-system" namespace to be "Ready" ...
	I0806 07:25:34.277702   31793 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-118173" in "kube-system" namespace to be "Ready" ...
	I0806 07:25:34.432109   31793 request.go:629] Waited for 154.339738ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.164:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-118173
	I0806 07:25:34.432176   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-118173
	I0806 07:25:34.432184   31793 round_trippers.go:469] Request Headers:
	I0806 07:25:34.432191   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:25:34.432194   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:25:34.435540   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:25:34.632520   31793 request.go:629] Waited for 196.269991ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.164:8443/api/v1/nodes/ha-118173
	I0806 07:25:34.632578   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173
	I0806 07:25:34.632591   31793 round_trippers.go:469] Request Headers:
	I0806 07:25:34.632603   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:25:34.632611   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:25:34.635963   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:25:34.636695   31793 pod_ready.go:92] pod "kube-apiserver-ha-118173" in "kube-system" namespace has status "Ready":"True"
	I0806 07:25:34.636719   31793 pod_ready.go:81] duration metric: took 359.009332ms for pod "kube-apiserver-ha-118173" in "kube-system" namespace to be "Ready" ...
	I0806 07:25:34.636731   31793 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-118173-m02" in "kube-system" namespace to be "Ready" ...
	I0806 07:25:34.832805   31793 request.go:629] Waited for 196.001383ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.164:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-118173-m02
	I0806 07:25:34.832882   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-118173-m02
	I0806 07:25:34.832889   31793 round_trippers.go:469] Request Headers:
	I0806 07:25:34.832899   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:25:34.832909   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:25:34.836332   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:25:35.032727   31793 request.go:629] Waited for 195.35856ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.164:8443/api/v1/nodes/ha-118173-m02
	I0806 07:25:35.032799   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m02
	I0806 07:25:35.032806   31793 round_trippers.go:469] Request Headers:
	I0806 07:25:35.032814   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:25:35.032823   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:25:35.035990   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:25:35.036627   31793 pod_ready.go:92] pod "kube-apiserver-ha-118173-m02" in "kube-system" namespace has status "Ready":"True"
	I0806 07:25:35.036645   31793 pod_ready.go:81] duration metric: took 399.907011ms for pod "kube-apiserver-ha-118173-m02" in "kube-system" namespace to be "Ready" ...
	I0806 07:25:35.036656   31793 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-118173" in "kube-system" namespace to be "Ready" ...
	I0806 07:25:35.232774   31793 request.go:629] Waited for 196.044112ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.164:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-118173
	I0806 07:25:35.232829   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-118173
	I0806 07:25:35.232834   31793 round_trippers.go:469] Request Headers:
	I0806 07:25:35.232840   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:25:35.232843   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:25:35.236478   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:25:35.432555   31793 request.go:629] Waited for 195.362892ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.164:8443/api/v1/nodes/ha-118173
	I0806 07:25:35.432628   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173
	I0806 07:25:35.432633   31793 round_trippers.go:469] Request Headers:
	I0806 07:25:35.432640   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:25:35.432646   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:25:35.436864   31793 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0806 07:25:35.437312   31793 pod_ready.go:92] pod "kube-controller-manager-ha-118173" in "kube-system" namespace has status "Ready":"True"
	I0806 07:25:35.437331   31793 pod_ready.go:81] duration metric: took 400.668512ms for pod "kube-controller-manager-ha-118173" in "kube-system" namespace to be "Ready" ...
	I0806 07:25:35.437341   31793 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-118173-m02" in "kube-system" namespace to be "Ready" ...
	I0806 07:25:35.632448   31793 request.go:629] Waited for 195.038471ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.164:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-118173-m02
	I0806 07:25:35.632511   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-118173-m02
	I0806 07:25:35.632516   31793 round_trippers.go:469] Request Headers:
	I0806 07:25:35.632523   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:25:35.632528   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:25:35.635789   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:25:35.832759   31793 request.go:629] Waited for 196.377115ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.164:8443/api/v1/nodes/ha-118173-m02
	I0806 07:25:35.832816   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m02
	I0806 07:25:35.832821   31793 round_trippers.go:469] Request Headers:
	I0806 07:25:35.832829   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:25:35.832832   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:25:35.837090   31793 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0806 07:25:35.838172   31793 pod_ready.go:92] pod "kube-controller-manager-ha-118173-m02" in "kube-system" namespace has status "Ready":"True"
	I0806 07:25:35.838191   31793 pod_ready.go:81] duration metric: took 400.844363ms for pod "kube-controller-manager-ha-118173-m02" in "kube-system" namespace to be "Ready" ...
	I0806 07:25:35.838200   31793 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-k8sq6" in "kube-system" namespace to be "Ready" ...
	I0806 07:25:36.032421   31793 request.go:629] Waited for 194.163572ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.164:8443/api/v1/namespaces/kube-system/pods/kube-proxy-k8sq6
	I0806 07:25:36.032479   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/namespaces/kube-system/pods/kube-proxy-k8sq6
	I0806 07:25:36.032485   31793 round_trippers.go:469] Request Headers:
	I0806 07:25:36.032492   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:25:36.032496   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:25:36.036194   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:25:36.232451   31793 request.go:629] Waited for 195.377758ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.164:8443/api/v1/nodes/ha-118173-m02
	I0806 07:25:36.232514   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m02
	I0806 07:25:36.232520   31793 round_trippers.go:469] Request Headers:
	I0806 07:25:36.232527   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:25:36.232531   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:25:36.235770   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:25:36.236203   31793 pod_ready.go:92] pod "kube-proxy-k8sq6" in "kube-system" namespace has status "Ready":"True"
	I0806 07:25:36.236222   31793 pod_ready.go:81] duration metric: took 398.015496ms for pod "kube-proxy-k8sq6" in "kube-system" namespace to be "Ready" ...
	I0806 07:25:36.236230   31793 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-ksm6v" in "kube-system" namespace to be "Ready" ...
	I0806 07:25:36.432475   31793 request.go:629] Waited for 196.164598ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.164:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ksm6v
	I0806 07:25:36.432539   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ksm6v
	I0806 07:25:36.432544   31793 round_trippers.go:469] Request Headers:
	I0806 07:25:36.432552   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:25:36.432555   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:25:36.436514   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:25:36.632167   31793 request.go:629] Waited for 194.780345ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.164:8443/api/v1/nodes/ha-118173
	I0806 07:25:36.632250   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173
	I0806 07:25:36.632257   31793 round_trippers.go:469] Request Headers:
	I0806 07:25:36.632267   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:25:36.632271   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:25:36.635472   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:25:36.636150   31793 pod_ready.go:92] pod "kube-proxy-ksm6v" in "kube-system" namespace has status "Ready":"True"
	I0806 07:25:36.636167   31793 pod_ready.go:81] duration metric: took 399.931961ms for pod "kube-proxy-ksm6v" in "kube-system" namespace to be "Ready" ...
	I0806 07:25:36.636176   31793 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-118173" in "kube-system" namespace to be "Ready" ...
	I0806 07:25:36.832171   31793 request.go:629] Waited for 195.934613ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.164:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-118173
	I0806 07:25:36.832232   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-118173
	I0806 07:25:36.832237   31793 round_trippers.go:469] Request Headers:
	I0806 07:25:36.832244   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:25:36.832249   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:25:36.835668   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:25:37.032717   31793 request.go:629] Waited for 196.372822ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.164:8443/api/v1/nodes/ha-118173
	I0806 07:25:37.032802   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173
	I0806 07:25:37.032812   31793 round_trippers.go:469] Request Headers:
	I0806 07:25:37.032821   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:25:37.032830   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:25:37.036271   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:25:37.037079   31793 pod_ready.go:92] pod "kube-scheduler-ha-118173" in "kube-system" namespace has status "Ready":"True"
	I0806 07:25:37.037107   31793 pod_ready.go:81] duration metric: took 400.922437ms for pod "kube-scheduler-ha-118173" in "kube-system" namespace to be "Ready" ...
	I0806 07:25:37.037136   31793 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-118173-m02" in "kube-system" namespace to be "Ready" ...
	I0806 07:25:37.232073   31793 request.go:629] Waited for 194.871904ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.164:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-118173-m02
	I0806 07:25:37.232163   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-118173-m02
	I0806 07:25:37.232169   31793 round_trippers.go:469] Request Headers:
	I0806 07:25:37.232177   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:25:37.232182   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:25:37.235509   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:25:37.432425   31793 request.go:629] Waited for 196.3954ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.164:8443/api/v1/nodes/ha-118173-m02
	I0806 07:25:37.432508   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m02
	I0806 07:25:37.432519   31793 round_trippers.go:469] Request Headers:
	I0806 07:25:37.432533   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:25:37.432541   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:25:37.435728   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:25:37.436289   31793 pod_ready.go:92] pod "kube-scheduler-ha-118173-m02" in "kube-system" namespace has status "Ready":"True"
	I0806 07:25:37.436307   31793 pod_ready.go:81] duration metric: took 399.162402ms for pod "kube-scheduler-ha-118173-m02" in "kube-system" namespace to be "Ready" ...
	I0806 07:25:37.436317   31793 pod_ready.go:38] duration metric: took 3.197729224s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0806 07:25:37.436329   31793 api_server.go:52] waiting for apiserver process to appear ...
	I0806 07:25:37.436400   31793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 07:25:37.452668   31793 api_server.go:72] duration metric: took 20.558880334s to wait for apiserver process to appear ...
	I0806 07:25:37.452691   31793 api_server.go:88] waiting for apiserver healthz status ...
	I0806 07:25:37.452713   31793 api_server.go:253] Checking apiserver healthz at https://192.168.39.164:8443/healthz ...
	I0806 07:25:37.458850   31793 api_server.go:279] https://192.168.39.164:8443/healthz returned 200:
	ok
	I0806 07:25:37.458921   31793 round_trippers.go:463] GET https://192.168.39.164:8443/version
	I0806 07:25:37.458930   31793 round_trippers.go:469] Request Headers:
	I0806 07:25:37.458937   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:25:37.458941   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:25:37.459754   31793 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0806 07:25:37.459851   31793 api_server.go:141] control plane version: v1.30.3
	I0806 07:25:37.459867   31793 api_server.go:131] duration metric: took 7.16968ms to wait for apiserver health ...
	I0806 07:25:37.459875   31793 system_pods.go:43] waiting for kube-system pods to appear ...
	I0806 07:25:37.632308   31793 request.go:629] Waited for 172.341978ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.164:8443/api/v1/namespaces/kube-system/pods
	I0806 07:25:37.632373   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/namespaces/kube-system/pods
	I0806 07:25:37.632380   31793 round_trippers.go:469] Request Headers:
	I0806 07:25:37.632390   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:25:37.632396   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:25:37.638640   31793 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0806 07:25:37.642715   31793 system_pods.go:59] 17 kube-system pods found
	I0806 07:25:37.642746   31793 system_pods.go:61] "coredns-7db6d8ff4d-t24ww" [8effbe6c-3107-409e-9c94-dee4103d12b0] Running
	I0806 07:25:37.642752   31793 system_pods.go:61] "coredns-7db6d8ff4d-zscz4" [ef588a32-ac32-4e38-926e-6ee7a662a399] Running
	I0806 07:25:37.642758   31793 system_pods.go:61] "etcd-ha-118173" [a72e8b83-1e33-429a-8d98-98936a0e3ab8] Running
	I0806 07:25:37.642763   31793 system_pods.go:61] "etcd-ha-118173-m02" [cb090a5d-5760-48c6-abea-735b88daf0be] Running
	I0806 07:25:37.642767   31793 system_pods.go:61] "kindnet-n7jmb" [fa496bfb-7bf0-4ebc-adbd-36d992d8e279] Running
	I0806 07:25:37.642771   31793 system_pods.go:61] "kindnet-z9w6l" [c02fe720-31c6-4146-af1a-0ada3aa7e530] Running
	I0806 07:25:37.642776   31793 system_pods.go:61] "kube-apiserver-ha-118173" [902898f9-8ba6-4d78-a648-49d412dce202] Running
	I0806 07:25:37.642780   31793 system_pods.go:61] "kube-apiserver-ha-118173-m02" [1b9cea4a-7755-4cd8-a240-b4911dc45edf] Running
	I0806 07:25:37.642785   31793 system_pods.go:61] "kube-controller-manager-ha-118173" [1af832f2-bceb-4c89-afa8-09a78b7782e1] Running
	I0806 07:25:37.642790   31793 system_pods.go:61] "kube-controller-manager-ha-118173-m02" [7be7c963-d9af-4cda-95dd-9a953b40f284] Running
	I0806 07:25:37.642795   31793 system_pods.go:61] "kube-proxy-k8sq6" [b1ec850b-6ce0-4710-aa99-419bda3b3e93] Running
	I0806 07:25:37.642802   31793 system_pods.go:61] "kube-proxy-ksm6v" [594bc178-8319-4922-9132-69a9c6890b95] Running
	I0806 07:25:37.642809   31793 system_pods.go:61] "kube-scheduler-ha-118173" [6fda2f13-0df0-42ed-9d67-8dd30b33fa3d] Running
	I0806 07:25:37.642814   31793 system_pods.go:61] "kube-scheduler-ha-118173-m02" [f097af54-fb79-40f2-aaf4-6cfbb0d8c449] Running
	I0806 07:25:37.642821   31793 system_pods.go:61] "kube-vip-ha-118173" [594d77d2-83e6-420d-8f0a-f54e9fd704f3] Running
	I0806 07:25:37.642826   31793 system_pods.go:61] "kube-vip-ha-118173-m02" [66b58218-4fc2-4a96-81b9-31ade7dda807] Running
	I0806 07:25:37.642834   31793 system_pods.go:61] "storage-provisioner" [8379466d-0241-40e4-a262-d762b3cda2eb] Running
	I0806 07:25:37.642847   31793 system_pods.go:74] duration metric: took 182.962704ms to wait for pod list to return data ...
	I0806 07:25:37.642861   31793 default_sa.go:34] waiting for default service account to be created ...
	I0806 07:25:37.832084   31793 request.go:629] Waited for 189.137863ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.164:8443/api/v1/namespaces/default/serviceaccounts
	I0806 07:25:37.832155   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/namespaces/default/serviceaccounts
	I0806 07:25:37.832164   31793 round_trippers.go:469] Request Headers:
	I0806 07:25:37.832178   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:25:37.832187   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:25:37.835212   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:25:37.835448   31793 default_sa.go:45] found service account: "default"
	I0806 07:25:37.835464   31793 default_sa.go:55] duration metric: took 192.597186ms for default service account to be created ...
	I0806 07:25:37.835473   31793 system_pods.go:116] waiting for k8s-apps to be running ...
	I0806 07:25:38.032946   31793 request.go:629] Waited for 197.387462ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.164:8443/api/v1/namespaces/kube-system/pods
	I0806 07:25:38.033012   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/namespaces/kube-system/pods
	I0806 07:25:38.033018   31793 round_trippers.go:469] Request Headers:
	I0806 07:25:38.033026   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:25:38.033031   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:25:38.038210   31793 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0806 07:25:38.042288   31793 system_pods.go:86] 17 kube-system pods found
	I0806 07:25:38.042313   31793 system_pods.go:89] "coredns-7db6d8ff4d-t24ww" [8effbe6c-3107-409e-9c94-dee4103d12b0] Running
	I0806 07:25:38.042318   31793 system_pods.go:89] "coredns-7db6d8ff4d-zscz4" [ef588a32-ac32-4e38-926e-6ee7a662a399] Running
	I0806 07:25:38.042323   31793 system_pods.go:89] "etcd-ha-118173" [a72e8b83-1e33-429a-8d98-98936a0e3ab8] Running
	I0806 07:25:38.042327   31793 system_pods.go:89] "etcd-ha-118173-m02" [cb090a5d-5760-48c6-abea-735b88daf0be] Running
	I0806 07:25:38.042331   31793 system_pods.go:89] "kindnet-n7jmb" [fa496bfb-7bf0-4ebc-adbd-36d992d8e279] Running
	I0806 07:25:38.042335   31793 system_pods.go:89] "kindnet-z9w6l" [c02fe720-31c6-4146-af1a-0ada3aa7e530] Running
	I0806 07:25:38.042339   31793 system_pods.go:89] "kube-apiserver-ha-118173" [902898f9-8ba6-4d78-a648-49d412dce202] Running
	I0806 07:25:38.042343   31793 system_pods.go:89] "kube-apiserver-ha-118173-m02" [1b9cea4a-7755-4cd8-a240-b4911dc45edf] Running
	I0806 07:25:38.042347   31793 system_pods.go:89] "kube-controller-manager-ha-118173" [1af832f2-bceb-4c89-afa8-09a78b7782e1] Running
	I0806 07:25:38.042354   31793 system_pods.go:89] "kube-controller-manager-ha-118173-m02" [7be7c963-d9af-4cda-95dd-9a953b40f284] Running
	I0806 07:25:38.042357   31793 system_pods.go:89] "kube-proxy-k8sq6" [b1ec850b-6ce0-4710-aa99-419bda3b3e93] Running
	I0806 07:25:38.042364   31793 system_pods.go:89] "kube-proxy-ksm6v" [594bc178-8319-4922-9132-69a9c6890b95] Running
	I0806 07:25:38.042368   31793 system_pods.go:89] "kube-scheduler-ha-118173" [6fda2f13-0df0-42ed-9d67-8dd30b33fa3d] Running
	I0806 07:25:38.042375   31793 system_pods.go:89] "kube-scheduler-ha-118173-m02" [f097af54-fb79-40f2-aaf4-6cfbb0d8c449] Running
	I0806 07:25:38.042379   31793 system_pods.go:89] "kube-vip-ha-118173" [594d77d2-83e6-420d-8f0a-f54e9fd704f3] Running
	I0806 07:25:38.042382   31793 system_pods.go:89] "kube-vip-ha-118173-m02" [66b58218-4fc2-4a96-81b9-31ade7dda807] Running
	I0806 07:25:38.042385   31793 system_pods.go:89] "storage-provisioner" [8379466d-0241-40e4-a262-d762b3cda2eb] Running
	I0806 07:25:38.042394   31793 system_pods.go:126] duration metric: took 206.915419ms to wait for k8s-apps to be running ...
	I0806 07:25:38.042415   31793 system_svc.go:44] waiting for kubelet service to be running ....
	I0806 07:25:38.042458   31793 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0806 07:25:38.059099   31793 system_svc.go:56] duration metric: took 16.685529ms WaitForService to wait for kubelet
	I0806 07:25:38.059130   31793 kubeadm.go:582] duration metric: took 21.165344066s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0806 07:25:38.059158   31793 node_conditions.go:102] verifying NodePressure condition ...
	I0806 07:25:38.232061   31793 request.go:629] Waited for 172.812266ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.164:8443/api/v1/nodes
	I0806 07:25:38.232132   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes
	I0806 07:25:38.232139   31793 round_trippers.go:469] Request Headers:
	I0806 07:25:38.232150   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:25:38.232157   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:25:38.235719   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:25:38.236625   31793 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0806 07:25:38.236652   31793 node_conditions.go:123] node cpu capacity is 2
	I0806 07:25:38.236665   31793 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0806 07:25:38.236670   31793 node_conditions.go:123] node cpu capacity is 2
	I0806 07:25:38.236679   31793 node_conditions.go:105] duration metric: took 177.515342ms to run NodePressure ...
	I0806 07:25:38.236696   31793 start.go:241] waiting for startup goroutines ...
	I0806 07:25:38.236725   31793 start.go:255] writing updated cluster config ...
	I0806 07:25:38.238847   31793 out.go:177] 
	I0806 07:25:38.240286   31793 config.go:182] Loaded profile config "ha-118173": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0806 07:25:38.240397   31793 profile.go:143] Saving config to /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/config.json ...
	I0806 07:25:38.242101   31793 out.go:177] * Starting "ha-118173-m03" control-plane node in "ha-118173" cluster
	I0806 07:25:38.243383   31793 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0806 07:25:38.243402   31793 cache.go:56] Caching tarball of preloaded images
	I0806 07:25:38.243486   31793 preload.go:172] Found /home/jenkins/minikube-integration/19370-13057/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0806 07:25:38.243496   31793 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0806 07:25:38.243572   31793 profile.go:143] Saving config to /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/config.json ...
	I0806 07:25:38.243722   31793 start.go:360] acquireMachinesLock for ha-118173-m03: {Name:mkc602ac9c8cc3360dcc0fcb8104303837aaab61 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0806 07:25:38.243760   31793 start.go:364] duration metric: took 21.021µs to acquireMachinesLock for "ha-118173-m03"
	I0806 07:25:38.243775   31793 start.go:93] Provisioning new machine with config: &{Name:ha-118173 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-118173 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.164 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.203 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0806 07:25:38.243863   31793 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0806 07:25:38.245301   31793 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0806 07:25:38.245373   31793 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:25:38.245413   31793 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:25:38.259751   31793 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35003
	I0806 07:25:38.260142   31793 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:25:38.260638   31793 main.go:141] libmachine: Using API Version  1
	I0806 07:25:38.260658   31793 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:25:38.260969   31793 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:25:38.261159   31793 main.go:141] libmachine: (ha-118173-m03) Calling .GetMachineName
	I0806 07:25:38.261340   31793 main.go:141] libmachine: (ha-118173-m03) Calling .DriverName
	I0806 07:25:38.261601   31793 start.go:159] libmachine.API.Create for "ha-118173" (driver="kvm2")
	I0806 07:25:38.261631   31793 client.go:168] LocalClient.Create starting
	I0806 07:25:38.261680   31793 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem
	I0806 07:25:38.261720   31793 main.go:141] libmachine: Decoding PEM data...
	I0806 07:25:38.261741   31793 main.go:141] libmachine: Parsing certificate...
	I0806 07:25:38.261804   31793 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19370-13057/.minikube/certs/cert.pem
	I0806 07:25:38.261831   31793 main.go:141] libmachine: Decoding PEM data...
	I0806 07:25:38.261845   31793 main.go:141] libmachine: Parsing certificate...
	I0806 07:25:38.261871   31793 main.go:141] libmachine: Running pre-create checks...
	I0806 07:25:38.261883   31793 main.go:141] libmachine: (ha-118173-m03) Calling .PreCreateCheck
	I0806 07:25:38.262125   31793 main.go:141] libmachine: (ha-118173-m03) Calling .GetConfigRaw
	I0806 07:25:38.262505   31793 main.go:141] libmachine: Creating machine...
	I0806 07:25:38.262521   31793 main.go:141] libmachine: (ha-118173-m03) Calling .Create
	I0806 07:25:38.262648   31793 main.go:141] libmachine: (ha-118173-m03) Creating KVM machine...
	I0806 07:25:38.263779   31793 main.go:141] libmachine: (ha-118173-m03) DBG | found existing default KVM network
	I0806 07:25:38.263871   31793 main.go:141] libmachine: (ha-118173-m03) DBG | found existing private KVM network mk-ha-118173
	I0806 07:25:38.263996   31793 main.go:141] libmachine: (ha-118173-m03) Setting up store path in /home/jenkins/minikube-integration/19370-13057/.minikube/machines/ha-118173-m03 ...
	I0806 07:25:38.264020   31793 main.go:141] libmachine: (ha-118173-m03) Building disk image from file:///home/jenkins/minikube-integration/19370-13057/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
	I0806 07:25:38.264073   31793 main.go:141] libmachine: (ha-118173-m03) DBG | I0806 07:25:38.263979   32846 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19370-13057/.minikube
	I0806 07:25:38.264175   31793 main.go:141] libmachine: (ha-118173-m03) Downloading /home/jenkins/minikube-integration/19370-13057/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19370-13057/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0806 07:25:38.519561   31793 main.go:141] libmachine: (ha-118173-m03) DBG | I0806 07:25:38.519444   32846 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19370-13057/.minikube/machines/ha-118173-m03/id_rsa...
	I0806 07:25:38.699444   31793 main.go:141] libmachine: (ha-118173-m03) DBG | I0806 07:25:38.699332   32846 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19370-13057/.minikube/machines/ha-118173-m03/ha-118173-m03.rawdisk...
	I0806 07:25:38.699476   31793 main.go:141] libmachine: (ha-118173-m03) DBG | Writing magic tar header
	I0806 07:25:38.699489   31793 main.go:141] libmachine: (ha-118173-m03) DBG | Writing SSH key tar header
	I0806 07:25:38.699644   31793 main.go:141] libmachine: (ha-118173-m03) DBG | I0806 07:25:38.699561   32846 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19370-13057/.minikube/machines/ha-118173-m03 ...
	I0806 07:25:38.699720   31793 main.go:141] libmachine: (ha-118173-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19370-13057/.minikube/machines/ha-118173-m03
	I0806 07:25:38.699753   31793 main.go:141] libmachine: (ha-118173-m03) Setting executable bit set on /home/jenkins/minikube-integration/19370-13057/.minikube/machines/ha-118173-m03 (perms=drwx------)
	I0806 07:25:38.699774   31793 main.go:141] libmachine: (ha-118173-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19370-13057/.minikube/machines
	I0806 07:25:38.699790   31793 main.go:141] libmachine: (ha-118173-m03) Setting executable bit set on /home/jenkins/minikube-integration/19370-13057/.minikube/machines (perms=drwxr-xr-x)
	I0806 07:25:38.699823   31793 main.go:141] libmachine: (ha-118173-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19370-13057/.minikube
	I0806 07:25:38.699879   31793 main.go:141] libmachine: (ha-118173-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19370-13057
	I0806 07:25:38.699895   31793 main.go:141] libmachine: (ha-118173-m03) Setting executable bit set on /home/jenkins/minikube-integration/19370-13057/.minikube (perms=drwxr-xr-x)
	I0806 07:25:38.699909   31793 main.go:141] libmachine: (ha-118173-m03) Setting executable bit set on /home/jenkins/minikube-integration/19370-13057 (perms=drwxrwxr-x)
	I0806 07:25:38.699920   31793 main.go:141] libmachine: (ha-118173-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0806 07:25:38.699937   31793 main.go:141] libmachine: (ha-118173-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0806 07:25:38.699947   31793 main.go:141] libmachine: (ha-118173-m03) Creating domain...
	I0806 07:25:38.699960   31793 main.go:141] libmachine: (ha-118173-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0806 07:25:38.699973   31793 main.go:141] libmachine: (ha-118173-m03) DBG | Checking permissions on dir: /home/jenkins
	I0806 07:25:38.699986   31793 main.go:141] libmachine: (ha-118173-m03) DBG | Checking permissions on dir: /home
	I0806 07:25:38.699996   31793 main.go:141] libmachine: (ha-118173-m03) DBG | Skipping /home - not owner
	I0806 07:25:38.701096   31793 main.go:141] libmachine: (ha-118173-m03) define libvirt domain using xml: 
	I0806 07:25:38.701113   31793 main.go:141] libmachine: (ha-118173-m03) <domain type='kvm'>
	I0806 07:25:38.701124   31793 main.go:141] libmachine: (ha-118173-m03)   <name>ha-118173-m03</name>
	I0806 07:25:38.701132   31793 main.go:141] libmachine: (ha-118173-m03)   <memory unit='MiB'>2200</memory>
	I0806 07:25:38.701139   31793 main.go:141] libmachine: (ha-118173-m03)   <vcpu>2</vcpu>
	I0806 07:25:38.701145   31793 main.go:141] libmachine: (ha-118173-m03)   <features>
	I0806 07:25:38.701153   31793 main.go:141] libmachine: (ha-118173-m03)     <acpi/>
	I0806 07:25:38.701161   31793 main.go:141] libmachine: (ha-118173-m03)     <apic/>
	I0806 07:25:38.701169   31793 main.go:141] libmachine: (ha-118173-m03)     <pae/>
	I0806 07:25:38.701183   31793 main.go:141] libmachine: (ha-118173-m03)     
	I0806 07:25:38.701190   31793 main.go:141] libmachine: (ha-118173-m03)   </features>
	I0806 07:25:38.701196   31793 main.go:141] libmachine: (ha-118173-m03)   <cpu mode='host-passthrough'>
	I0806 07:25:38.701224   31793 main.go:141] libmachine: (ha-118173-m03)   
	I0806 07:25:38.701244   31793 main.go:141] libmachine: (ha-118173-m03)   </cpu>
	I0806 07:25:38.701254   31793 main.go:141] libmachine: (ha-118173-m03)   <os>
	I0806 07:25:38.701266   31793 main.go:141] libmachine: (ha-118173-m03)     <type>hvm</type>
	I0806 07:25:38.701277   31793 main.go:141] libmachine: (ha-118173-m03)     <boot dev='cdrom'/>
	I0806 07:25:38.701288   31793 main.go:141] libmachine: (ha-118173-m03)     <boot dev='hd'/>
	I0806 07:25:38.701301   31793 main.go:141] libmachine: (ha-118173-m03)     <bootmenu enable='no'/>
	I0806 07:25:38.701312   31793 main.go:141] libmachine: (ha-118173-m03)   </os>
	I0806 07:25:38.701334   31793 main.go:141] libmachine: (ha-118173-m03)   <devices>
	I0806 07:25:38.701356   31793 main.go:141] libmachine: (ha-118173-m03)     <disk type='file' device='cdrom'>
	I0806 07:25:38.701367   31793 main.go:141] libmachine: (ha-118173-m03)       <source file='/home/jenkins/minikube-integration/19370-13057/.minikube/machines/ha-118173-m03/boot2docker.iso'/>
	I0806 07:25:38.701375   31793 main.go:141] libmachine: (ha-118173-m03)       <target dev='hdc' bus='scsi'/>
	I0806 07:25:38.701382   31793 main.go:141] libmachine: (ha-118173-m03)       <readonly/>
	I0806 07:25:38.701390   31793 main.go:141] libmachine: (ha-118173-m03)     </disk>
	I0806 07:25:38.701397   31793 main.go:141] libmachine: (ha-118173-m03)     <disk type='file' device='disk'>
	I0806 07:25:38.701407   31793 main.go:141] libmachine: (ha-118173-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0806 07:25:38.701418   31793 main.go:141] libmachine: (ha-118173-m03)       <source file='/home/jenkins/minikube-integration/19370-13057/.minikube/machines/ha-118173-m03/ha-118173-m03.rawdisk'/>
	I0806 07:25:38.701428   31793 main.go:141] libmachine: (ha-118173-m03)       <target dev='hda' bus='virtio'/>
	I0806 07:25:38.701443   31793 main.go:141] libmachine: (ha-118173-m03)     </disk>
	I0806 07:25:38.701460   31793 main.go:141] libmachine: (ha-118173-m03)     <interface type='network'>
	I0806 07:25:38.701475   31793 main.go:141] libmachine: (ha-118173-m03)       <source network='mk-ha-118173'/>
	I0806 07:25:38.701485   31793 main.go:141] libmachine: (ha-118173-m03)       <model type='virtio'/>
	I0806 07:25:38.701496   31793 main.go:141] libmachine: (ha-118173-m03)     </interface>
	I0806 07:25:38.701502   31793 main.go:141] libmachine: (ha-118173-m03)     <interface type='network'>
	I0806 07:25:38.701513   31793 main.go:141] libmachine: (ha-118173-m03)       <source network='default'/>
	I0806 07:25:38.701521   31793 main.go:141] libmachine: (ha-118173-m03)       <model type='virtio'/>
	I0806 07:25:38.701541   31793 main.go:141] libmachine: (ha-118173-m03)     </interface>
	I0806 07:25:38.701560   31793 main.go:141] libmachine: (ha-118173-m03)     <serial type='pty'>
	I0806 07:25:38.701570   31793 main.go:141] libmachine: (ha-118173-m03)       <target port='0'/>
	I0806 07:25:38.701577   31793 main.go:141] libmachine: (ha-118173-m03)     </serial>
	I0806 07:25:38.701582   31793 main.go:141] libmachine: (ha-118173-m03)     <console type='pty'>
	I0806 07:25:38.701589   31793 main.go:141] libmachine: (ha-118173-m03)       <target type='serial' port='0'/>
	I0806 07:25:38.701597   31793 main.go:141] libmachine: (ha-118173-m03)     </console>
	I0806 07:25:38.701617   31793 main.go:141] libmachine: (ha-118173-m03)     <rng model='virtio'>
	I0806 07:25:38.701646   31793 main.go:141] libmachine: (ha-118173-m03)       <backend model='random'>/dev/random</backend>
	I0806 07:25:38.701668   31793 main.go:141] libmachine: (ha-118173-m03)     </rng>
	I0806 07:25:38.701680   31793 main.go:141] libmachine: (ha-118173-m03)     
	I0806 07:25:38.701687   31793 main.go:141] libmachine: (ha-118173-m03)     
	I0806 07:25:38.701699   31793 main.go:141] libmachine: (ha-118173-m03)   </devices>
	I0806 07:25:38.701725   31793 main.go:141] libmachine: (ha-118173-m03) </domain>
	I0806 07:25:38.701735   31793 main.go:141] libmachine: (ha-118173-m03) 
	I0806 07:25:38.708186   31793 main.go:141] libmachine: (ha-118173-m03) DBG | domain ha-118173-m03 has defined MAC address 52:54:00:3d:80:df in network default
	I0806 07:25:38.708909   31793 main.go:141] libmachine: (ha-118173-m03) Ensuring networks are active...
	I0806 07:25:38.708935   31793 main.go:141] libmachine: (ha-118173-m03) DBG | domain ha-118173-m03 has defined MAC address 52:54:00:d8:90:55 in network mk-ha-118173
	I0806 07:25:38.709762   31793 main.go:141] libmachine: (ha-118173-m03) Ensuring network default is active
	I0806 07:25:38.710102   31793 main.go:141] libmachine: (ha-118173-m03) Ensuring network mk-ha-118173 is active
	I0806 07:25:38.710565   31793 main.go:141] libmachine: (ha-118173-m03) Getting domain xml...
	I0806 07:25:38.711391   31793 main.go:141] libmachine: (ha-118173-m03) Creating domain...
	I0806 07:25:39.962106   31793 main.go:141] libmachine: (ha-118173-m03) Waiting to get IP...
	I0806 07:25:39.962843   31793 main.go:141] libmachine: (ha-118173-m03) DBG | domain ha-118173-m03 has defined MAC address 52:54:00:d8:90:55 in network mk-ha-118173
	I0806 07:25:39.963231   31793 main.go:141] libmachine: (ha-118173-m03) DBG | unable to find current IP address of domain ha-118173-m03 in network mk-ha-118173
	I0806 07:25:39.963252   31793 main.go:141] libmachine: (ha-118173-m03) DBG | I0806 07:25:39.963211   32846 retry.go:31] will retry after 253.81788ms: waiting for machine to come up
	I0806 07:25:40.218554   31793 main.go:141] libmachine: (ha-118173-m03) DBG | domain ha-118173-m03 has defined MAC address 52:54:00:d8:90:55 in network mk-ha-118173
	I0806 07:25:40.219002   31793 main.go:141] libmachine: (ha-118173-m03) DBG | unable to find current IP address of domain ha-118173-m03 in network mk-ha-118173
	I0806 07:25:40.219027   31793 main.go:141] libmachine: (ha-118173-m03) DBG | I0806 07:25:40.218954   32846 retry.go:31] will retry after 329.858258ms: waiting for machine to come up
	I0806 07:25:40.550462   31793 main.go:141] libmachine: (ha-118173-m03) DBG | domain ha-118173-m03 has defined MAC address 52:54:00:d8:90:55 in network mk-ha-118173
	I0806 07:25:40.550944   31793 main.go:141] libmachine: (ha-118173-m03) DBG | unable to find current IP address of domain ha-118173-m03 in network mk-ha-118173
	I0806 07:25:40.550974   31793 main.go:141] libmachine: (ha-118173-m03) DBG | I0806 07:25:40.550896   32846 retry.go:31] will retry after 424.863606ms: waiting for machine to come up
	I0806 07:25:40.977595   31793 main.go:141] libmachine: (ha-118173-m03) DBG | domain ha-118173-m03 has defined MAC address 52:54:00:d8:90:55 in network mk-ha-118173
	I0806 07:25:40.978124   31793 main.go:141] libmachine: (ha-118173-m03) DBG | unable to find current IP address of domain ha-118173-m03 in network mk-ha-118173
	I0806 07:25:40.978153   31793 main.go:141] libmachine: (ha-118173-m03) DBG | I0806 07:25:40.978078   32846 retry.go:31] will retry after 507.514765ms: waiting for machine to come up
	I0806 07:25:41.487166   31793 main.go:141] libmachine: (ha-118173-m03) DBG | domain ha-118173-m03 has defined MAC address 52:54:00:d8:90:55 in network mk-ha-118173
	I0806 07:25:41.487571   31793 main.go:141] libmachine: (ha-118173-m03) DBG | unable to find current IP address of domain ha-118173-m03 in network mk-ha-118173
	I0806 07:25:41.487603   31793 main.go:141] libmachine: (ha-118173-m03) DBG | I0806 07:25:41.487518   32846 retry.go:31] will retry after 519.071599ms: waiting for machine to come up
	I0806 07:25:42.008220   31793 main.go:141] libmachine: (ha-118173-m03) DBG | domain ha-118173-m03 has defined MAC address 52:54:00:d8:90:55 in network mk-ha-118173
	I0806 07:25:42.008682   31793 main.go:141] libmachine: (ha-118173-m03) DBG | unable to find current IP address of domain ha-118173-m03 in network mk-ha-118173
	I0806 07:25:42.008728   31793 main.go:141] libmachine: (ha-118173-m03) DBG | I0806 07:25:42.008623   32846 retry.go:31] will retry after 742.185102ms: waiting for machine to come up
	I0806 07:25:42.751904   31793 main.go:141] libmachine: (ha-118173-m03) DBG | domain ha-118173-m03 has defined MAC address 52:54:00:d8:90:55 in network mk-ha-118173
	I0806 07:25:42.752388   31793 main.go:141] libmachine: (ha-118173-m03) DBG | unable to find current IP address of domain ha-118173-m03 in network mk-ha-118173
	I0806 07:25:42.752421   31793 main.go:141] libmachine: (ha-118173-m03) DBG | I0806 07:25:42.752323   32846 retry.go:31] will retry after 997.028177ms: waiting for machine to come up
	I0806 07:25:43.751392   31793 main.go:141] libmachine: (ha-118173-m03) DBG | domain ha-118173-m03 has defined MAC address 52:54:00:d8:90:55 in network mk-ha-118173
	I0806 07:25:43.751833   31793 main.go:141] libmachine: (ha-118173-m03) DBG | unable to find current IP address of domain ha-118173-m03 in network mk-ha-118173
	I0806 07:25:43.751863   31793 main.go:141] libmachine: (ha-118173-m03) DBG | I0806 07:25:43.751795   32846 retry.go:31] will retry after 1.349314892s: waiting for machine to come up
	I0806 07:25:45.103401   31793 main.go:141] libmachine: (ha-118173-m03) DBG | domain ha-118173-m03 has defined MAC address 52:54:00:d8:90:55 in network mk-ha-118173
	I0806 07:25:45.103893   31793 main.go:141] libmachine: (ha-118173-m03) DBG | unable to find current IP address of domain ha-118173-m03 in network mk-ha-118173
	I0806 07:25:45.103925   31793 main.go:141] libmachine: (ha-118173-m03) DBG | I0806 07:25:45.103835   32846 retry.go:31] will retry after 1.43342486s: waiting for machine to come up
	I0806 07:25:46.538389   31793 main.go:141] libmachine: (ha-118173-m03) DBG | domain ha-118173-m03 has defined MAC address 52:54:00:d8:90:55 in network mk-ha-118173
	I0806 07:25:46.538917   31793 main.go:141] libmachine: (ha-118173-m03) DBG | unable to find current IP address of domain ha-118173-m03 in network mk-ha-118173
	I0806 07:25:46.538945   31793 main.go:141] libmachine: (ha-118173-m03) DBG | I0806 07:25:46.538866   32846 retry.go:31] will retry after 1.86809666s: waiting for machine to come up
	I0806 07:25:48.408706   31793 main.go:141] libmachine: (ha-118173-m03) DBG | domain ha-118173-m03 has defined MAC address 52:54:00:d8:90:55 in network mk-ha-118173
	I0806 07:25:48.409180   31793 main.go:141] libmachine: (ha-118173-m03) DBG | unable to find current IP address of domain ha-118173-m03 in network mk-ha-118173
	I0806 07:25:48.409213   31793 main.go:141] libmachine: (ha-118173-m03) DBG | I0806 07:25:48.409118   32846 retry.go:31] will retry after 2.833688321s: waiting for machine to come up
	I0806 07:25:51.245831   31793 main.go:141] libmachine: (ha-118173-m03) DBG | domain ha-118173-m03 has defined MAC address 52:54:00:d8:90:55 in network mk-ha-118173
	I0806 07:25:51.246286   31793 main.go:141] libmachine: (ha-118173-m03) DBG | unable to find current IP address of domain ha-118173-m03 in network mk-ha-118173
	I0806 07:25:51.246310   31793 main.go:141] libmachine: (ha-118173-m03) DBG | I0806 07:25:51.246235   32846 retry.go:31] will retry after 2.559468863s: waiting for machine to come up
	I0806 07:25:53.806894   31793 main.go:141] libmachine: (ha-118173-m03) DBG | domain ha-118173-m03 has defined MAC address 52:54:00:d8:90:55 in network mk-ha-118173
	I0806 07:25:53.807177   31793 main.go:141] libmachine: (ha-118173-m03) DBG | unable to find current IP address of domain ha-118173-m03 in network mk-ha-118173
	I0806 07:25:53.807203   31793 main.go:141] libmachine: (ha-118173-m03) DBG | I0806 07:25:53.807132   32846 retry.go:31] will retry after 2.791250888s: waiting for machine to come up
	I0806 07:25:56.601795   31793 main.go:141] libmachine: (ha-118173-m03) DBG | domain ha-118173-m03 has defined MAC address 52:54:00:d8:90:55 in network mk-ha-118173
	I0806 07:25:56.602245   31793 main.go:141] libmachine: (ha-118173-m03) DBG | unable to find current IP address of domain ha-118173-m03 in network mk-ha-118173
	I0806 07:25:56.602277   31793 main.go:141] libmachine: (ha-118173-m03) DBG | I0806 07:25:56.602203   32846 retry.go:31] will retry after 3.476094859s: waiting for machine to come up
	I0806 07:26:00.080126   31793 main.go:141] libmachine: (ha-118173-m03) DBG | domain ha-118173-m03 has defined MAC address 52:54:00:d8:90:55 in network mk-ha-118173
	I0806 07:26:00.080656   31793 main.go:141] libmachine: (ha-118173-m03) Found IP for machine: 192.168.39.5
	I0806 07:26:00.080686   31793 main.go:141] libmachine: (ha-118173-m03) DBG | domain ha-118173-m03 has current primary IP address 192.168.39.5 and MAC address 52:54:00:d8:90:55 in network mk-ha-118173
	I0806 07:26:00.080695   31793 main.go:141] libmachine: (ha-118173-m03) Reserving static IP address...
	I0806 07:26:00.081120   31793 main.go:141] libmachine: (ha-118173-m03) DBG | unable to find host DHCP lease matching {name: "ha-118173-m03", mac: "52:54:00:d8:90:55", ip: "192.168.39.5"} in network mk-ha-118173
	I0806 07:26:00.156514   31793 main.go:141] libmachine: (ha-118173-m03) DBG | Getting to WaitForSSH function...
	I0806 07:26:00.156541   31793 main.go:141] libmachine: (ha-118173-m03) Reserved static IP address: 192.168.39.5
	I0806 07:26:00.156587   31793 main.go:141] libmachine: (ha-118173-m03) Waiting for SSH to be available...
	I0806 07:26:00.159715   31793 main.go:141] libmachine: (ha-118173-m03) DBG | domain ha-118173-m03 has defined MAC address 52:54:00:d8:90:55 in network mk-ha-118173
	I0806 07:26:00.160006   31793 main.go:141] libmachine: (ha-118173-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:90:55", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:25:53 +0000 UTC Type:0 Mac:52:54:00:d8:90:55 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:minikube Clientid:01:52:54:00:d8:90:55}
	I0806 07:26:00.160031   31793 main.go:141] libmachine: (ha-118173-m03) DBG | domain ha-118173-m03 has defined IP address 192.168.39.5 and MAC address 52:54:00:d8:90:55 in network mk-ha-118173
	I0806 07:26:00.160229   31793 main.go:141] libmachine: (ha-118173-m03) DBG | Using SSH client type: external
	I0806 07:26:00.160256   31793 main.go:141] libmachine: (ha-118173-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19370-13057/.minikube/machines/ha-118173-m03/id_rsa (-rw-------)
	I0806 07:26:00.160286   31793 main.go:141] libmachine: (ha-118173-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.5 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19370-13057/.minikube/machines/ha-118173-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0806 07:26:00.160302   31793 main.go:141] libmachine: (ha-118173-m03) DBG | About to run SSH command:
	I0806 07:26:00.160319   31793 main.go:141] libmachine: (ha-118173-m03) DBG | exit 0
	I0806 07:26:00.284464   31793 main.go:141] libmachine: (ha-118173-m03) DBG | SSH cmd err, output: <nil>: 
	I0806 07:26:00.284733   31793 main.go:141] libmachine: (ha-118173-m03) KVM machine creation complete!
	I0806 07:26:00.285098   31793 main.go:141] libmachine: (ha-118173-m03) Calling .GetConfigRaw
	I0806 07:26:00.285617   31793 main.go:141] libmachine: (ha-118173-m03) Calling .DriverName
	I0806 07:26:00.285808   31793 main.go:141] libmachine: (ha-118173-m03) Calling .DriverName
	I0806 07:26:00.286024   31793 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0806 07:26:00.286038   31793 main.go:141] libmachine: (ha-118173-m03) Calling .GetState
	I0806 07:26:00.287085   31793 main.go:141] libmachine: Detecting operating system of created instance...
	I0806 07:26:00.287100   31793 main.go:141] libmachine: Waiting for SSH to be available...
	I0806 07:26:00.287108   31793 main.go:141] libmachine: Getting to WaitForSSH function...
	I0806 07:26:00.287115   31793 main.go:141] libmachine: (ha-118173-m03) Calling .GetSSHHostname
	I0806 07:26:00.289837   31793 main.go:141] libmachine: (ha-118173-m03) DBG | domain ha-118173-m03 has defined MAC address 52:54:00:d8:90:55 in network mk-ha-118173
	I0806 07:26:00.290335   31793 main.go:141] libmachine: (ha-118173-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:90:55", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:25:53 +0000 UTC Type:0 Mac:52:54:00:d8:90:55 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-118173-m03 Clientid:01:52:54:00:d8:90:55}
	I0806 07:26:00.290365   31793 main.go:141] libmachine: (ha-118173-m03) DBG | domain ha-118173-m03 has defined IP address 192.168.39.5 and MAC address 52:54:00:d8:90:55 in network mk-ha-118173
	I0806 07:26:00.290528   31793 main.go:141] libmachine: (ha-118173-m03) Calling .GetSSHPort
	I0806 07:26:00.290699   31793 main.go:141] libmachine: (ha-118173-m03) Calling .GetSSHKeyPath
	I0806 07:26:00.290859   31793 main.go:141] libmachine: (ha-118173-m03) Calling .GetSSHKeyPath
	I0806 07:26:00.291008   31793 main.go:141] libmachine: (ha-118173-m03) Calling .GetSSHUsername
	I0806 07:26:00.291214   31793 main.go:141] libmachine: Using SSH client type: native
	I0806 07:26:00.291432   31793 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.5 22 <nil> <nil>}
	I0806 07:26:00.291446   31793 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0806 07:26:00.387933   31793 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0806 07:26:00.387962   31793 main.go:141] libmachine: Detecting the provisioner...
	I0806 07:26:00.387973   31793 main.go:141] libmachine: (ha-118173-m03) Calling .GetSSHHostname
	I0806 07:26:00.390810   31793 main.go:141] libmachine: (ha-118173-m03) DBG | domain ha-118173-m03 has defined MAC address 52:54:00:d8:90:55 in network mk-ha-118173
	I0806 07:26:00.391158   31793 main.go:141] libmachine: (ha-118173-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:90:55", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:25:53 +0000 UTC Type:0 Mac:52:54:00:d8:90:55 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-118173-m03 Clientid:01:52:54:00:d8:90:55}
	I0806 07:26:00.391194   31793 main.go:141] libmachine: (ha-118173-m03) DBG | domain ha-118173-m03 has defined IP address 192.168.39.5 and MAC address 52:54:00:d8:90:55 in network mk-ha-118173
	I0806 07:26:00.391294   31793 main.go:141] libmachine: (ha-118173-m03) Calling .GetSSHPort
	I0806 07:26:00.391521   31793 main.go:141] libmachine: (ha-118173-m03) Calling .GetSSHKeyPath
	I0806 07:26:00.391706   31793 main.go:141] libmachine: (ha-118173-m03) Calling .GetSSHKeyPath
	I0806 07:26:00.391834   31793 main.go:141] libmachine: (ha-118173-m03) Calling .GetSSHUsername
	I0806 07:26:00.392051   31793 main.go:141] libmachine: Using SSH client type: native
	I0806 07:26:00.392233   31793 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.5 22 <nil> <nil>}
	I0806 07:26:00.392245   31793 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0806 07:26:00.489407   31793 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0806 07:26:00.489472   31793 main.go:141] libmachine: found compatible host: buildroot
	I0806 07:26:00.489481   31793 main.go:141] libmachine: Provisioning with buildroot...
	I0806 07:26:00.489489   31793 main.go:141] libmachine: (ha-118173-m03) Calling .GetMachineName
	I0806 07:26:00.489721   31793 buildroot.go:166] provisioning hostname "ha-118173-m03"
	I0806 07:26:00.489744   31793 main.go:141] libmachine: (ha-118173-m03) Calling .GetMachineName
	I0806 07:26:00.489940   31793 main.go:141] libmachine: (ha-118173-m03) Calling .GetSSHHostname
	I0806 07:26:00.492500   31793 main.go:141] libmachine: (ha-118173-m03) DBG | domain ha-118173-m03 has defined MAC address 52:54:00:d8:90:55 in network mk-ha-118173
	I0806 07:26:00.492853   31793 main.go:141] libmachine: (ha-118173-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:90:55", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:25:53 +0000 UTC Type:0 Mac:52:54:00:d8:90:55 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-118173-m03 Clientid:01:52:54:00:d8:90:55}
	I0806 07:26:00.492872   31793 main.go:141] libmachine: (ha-118173-m03) DBG | domain ha-118173-m03 has defined IP address 192.168.39.5 and MAC address 52:54:00:d8:90:55 in network mk-ha-118173
	I0806 07:26:00.493049   31793 main.go:141] libmachine: (ha-118173-m03) Calling .GetSSHPort
	I0806 07:26:00.493228   31793 main.go:141] libmachine: (ha-118173-m03) Calling .GetSSHKeyPath
	I0806 07:26:00.493363   31793 main.go:141] libmachine: (ha-118173-m03) Calling .GetSSHKeyPath
	I0806 07:26:00.493526   31793 main.go:141] libmachine: (ha-118173-m03) Calling .GetSSHUsername
	I0806 07:26:00.493716   31793 main.go:141] libmachine: Using SSH client type: native
	I0806 07:26:00.493865   31793 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.5 22 <nil> <nil>}
	I0806 07:26:00.493877   31793 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-118173-m03 && echo "ha-118173-m03" | sudo tee /etc/hostname
	I0806 07:26:00.608866   31793 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-118173-m03
	
	I0806 07:26:00.608897   31793 main.go:141] libmachine: (ha-118173-m03) Calling .GetSSHHostname
	I0806 07:26:00.611843   31793 main.go:141] libmachine: (ha-118173-m03) DBG | domain ha-118173-m03 has defined MAC address 52:54:00:d8:90:55 in network mk-ha-118173
	I0806 07:26:00.612310   31793 main.go:141] libmachine: (ha-118173-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:90:55", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:25:53 +0000 UTC Type:0 Mac:52:54:00:d8:90:55 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-118173-m03 Clientid:01:52:54:00:d8:90:55}
	I0806 07:26:00.612336   31793 main.go:141] libmachine: (ha-118173-m03) DBG | domain ha-118173-m03 has defined IP address 192.168.39.5 and MAC address 52:54:00:d8:90:55 in network mk-ha-118173
	I0806 07:26:00.612542   31793 main.go:141] libmachine: (ha-118173-m03) Calling .GetSSHPort
	I0806 07:26:00.612712   31793 main.go:141] libmachine: (ha-118173-m03) Calling .GetSSHKeyPath
	I0806 07:26:00.612948   31793 main.go:141] libmachine: (ha-118173-m03) Calling .GetSSHKeyPath
	I0806 07:26:00.613125   31793 main.go:141] libmachine: (ha-118173-m03) Calling .GetSSHUsername
	I0806 07:26:00.613329   31793 main.go:141] libmachine: Using SSH client type: native
	I0806 07:26:00.613538   31793 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.5 22 <nil> <nil>}
	I0806 07:26:00.613563   31793 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-118173-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-118173-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-118173-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0806 07:26:00.729098   31793 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0806 07:26:00.729126   31793 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19370-13057/.minikube CaCertPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19370-13057/.minikube}
	I0806 07:26:00.729148   31793 buildroot.go:174] setting up certificates
	I0806 07:26:00.729158   31793 provision.go:84] configureAuth start
	I0806 07:26:00.729167   31793 main.go:141] libmachine: (ha-118173-m03) Calling .GetMachineName
	I0806 07:26:00.729449   31793 main.go:141] libmachine: (ha-118173-m03) Calling .GetIP
	I0806 07:26:00.731911   31793 main.go:141] libmachine: (ha-118173-m03) DBG | domain ha-118173-m03 has defined MAC address 52:54:00:d8:90:55 in network mk-ha-118173
	I0806 07:26:00.732347   31793 main.go:141] libmachine: (ha-118173-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:90:55", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:25:53 +0000 UTC Type:0 Mac:52:54:00:d8:90:55 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-118173-m03 Clientid:01:52:54:00:d8:90:55}
	I0806 07:26:00.732404   31793 main.go:141] libmachine: (ha-118173-m03) DBG | domain ha-118173-m03 has defined IP address 192.168.39.5 and MAC address 52:54:00:d8:90:55 in network mk-ha-118173
	I0806 07:26:00.732591   31793 main.go:141] libmachine: (ha-118173-m03) Calling .GetSSHHostname
	I0806 07:26:00.734978   31793 main.go:141] libmachine: (ha-118173-m03) DBG | domain ha-118173-m03 has defined MAC address 52:54:00:d8:90:55 in network mk-ha-118173
	I0806 07:26:00.735326   31793 main.go:141] libmachine: (ha-118173-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:90:55", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:25:53 +0000 UTC Type:0 Mac:52:54:00:d8:90:55 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-118173-m03 Clientid:01:52:54:00:d8:90:55}
	I0806 07:26:00.735348   31793 main.go:141] libmachine: (ha-118173-m03) DBG | domain ha-118173-m03 has defined IP address 192.168.39.5 and MAC address 52:54:00:d8:90:55 in network mk-ha-118173
	I0806 07:26:00.735575   31793 provision.go:143] copyHostCerts
	I0806 07:26:00.735613   31793 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19370-13057/.minikube/key.pem
	I0806 07:26:00.735647   31793 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-13057/.minikube/key.pem, removing ...
	I0806 07:26:00.735657   31793 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-13057/.minikube/key.pem
	I0806 07:26:00.735722   31793 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19370-13057/.minikube/key.pem (1675 bytes)
	I0806 07:26:00.735799   31793 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19370-13057/.minikube/ca.pem
	I0806 07:26:00.735817   31793 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-13057/.minikube/ca.pem, removing ...
	I0806 07:26:00.735823   31793 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-13057/.minikube/ca.pem
	I0806 07:26:00.735848   31793 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19370-13057/.minikube/ca.pem (1078 bytes)
	I0806 07:26:00.735894   31793 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19370-13057/.minikube/cert.pem
	I0806 07:26:00.735910   31793 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-13057/.minikube/cert.pem, removing ...
	I0806 07:26:00.735917   31793 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-13057/.minikube/cert.pem
	I0806 07:26:00.735937   31793 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19370-13057/.minikube/cert.pem (1123 bytes)
	I0806 07:26:00.735986   31793 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19370-13057/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca-key.pem org=jenkins.ha-118173-m03 san=[127.0.0.1 192.168.39.5 ha-118173-m03 localhost minikube]
	I0806 07:26:00.984026   31793 provision.go:177] copyRemoteCerts
	I0806 07:26:00.984092   31793 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0806 07:26:00.984121   31793 main.go:141] libmachine: (ha-118173-m03) Calling .GetSSHHostname
	I0806 07:26:00.986810   31793 main.go:141] libmachine: (ha-118173-m03) DBG | domain ha-118173-m03 has defined MAC address 52:54:00:d8:90:55 in network mk-ha-118173
	I0806 07:26:00.987181   31793 main.go:141] libmachine: (ha-118173-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:90:55", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:25:53 +0000 UTC Type:0 Mac:52:54:00:d8:90:55 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-118173-m03 Clientid:01:52:54:00:d8:90:55}
	I0806 07:26:00.987206   31793 main.go:141] libmachine: (ha-118173-m03) DBG | domain ha-118173-m03 has defined IP address 192.168.39.5 and MAC address 52:54:00:d8:90:55 in network mk-ha-118173
	I0806 07:26:00.987324   31793 main.go:141] libmachine: (ha-118173-m03) Calling .GetSSHPort
	I0806 07:26:00.987524   31793 main.go:141] libmachine: (ha-118173-m03) Calling .GetSSHKeyPath
	I0806 07:26:00.987692   31793 main.go:141] libmachine: (ha-118173-m03) Calling .GetSSHUsername
	I0806 07:26:00.987864   31793 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/ha-118173-m03/id_rsa Username:docker}
	I0806 07:26:01.067777   31793 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0806 07:26:01.067854   31793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0806 07:26:01.094064   31793 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0806 07:26:01.094141   31793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0806 07:26:01.120123   31793 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0806 07:26:01.120212   31793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0806 07:26:01.150115   31793 provision.go:87] duration metric: took 420.942527ms to configureAuth
	I0806 07:26:01.150151   31793 buildroot.go:189] setting minikube options for container-runtime
	I0806 07:26:01.150461   31793 config.go:182] Loaded profile config "ha-118173": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0806 07:26:01.150558   31793 main.go:141] libmachine: (ha-118173-m03) Calling .GetSSHHostname
	I0806 07:26:01.153106   31793 main.go:141] libmachine: (ha-118173-m03) DBG | domain ha-118173-m03 has defined MAC address 52:54:00:d8:90:55 in network mk-ha-118173
	I0806 07:26:01.153396   31793 main.go:141] libmachine: (ha-118173-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:90:55", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:25:53 +0000 UTC Type:0 Mac:52:54:00:d8:90:55 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-118173-m03 Clientid:01:52:54:00:d8:90:55}
	I0806 07:26:01.153422   31793 main.go:141] libmachine: (ha-118173-m03) DBG | domain ha-118173-m03 has defined IP address 192.168.39.5 and MAC address 52:54:00:d8:90:55 in network mk-ha-118173
	I0806 07:26:01.153643   31793 main.go:141] libmachine: (ha-118173-m03) Calling .GetSSHPort
	I0806 07:26:01.153848   31793 main.go:141] libmachine: (ha-118173-m03) Calling .GetSSHKeyPath
	I0806 07:26:01.154014   31793 main.go:141] libmachine: (ha-118173-m03) Calling .GetSSHKeyPath
	I0806 07:26:01.154160   31793 main.go:141] libmachine: (ha-118173-m03) Calling .GetSSHUsername
	I0806 07:26:01.154305   31793 main.go:141] libmachine: Using SSH client type: native
	I0806 07:26:01.154543   31793 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.5 22 <nil> <nil>}
	I0806 07:26:01.154565   31793 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0806 07:26:01.421958   31793 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0806 07:26:01.421989   31793 main.go:141] libmachine: Checking connection to Docker...
	I0806 07:26:01.421997   31793 main.go:141] libmachine: (ha-118173-m03) Calling .GetURL
	I0806 07:26:01.423367   31793 main.go:141] libmachine: (ha-118173-m03) DBG | Using libvirt version 6000000
	I0806 07:26:01.425864   31793 main.go:141] libmachine: (ha-118173-m03) DBG | domain ha-118173-m03 has defined MAC address 52:54:00:d8:90:55 in network mk-ha-118173
	I0806 07:26:01.426219   31793 main.go:141] libmachine: (ha-118173-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:90:55", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:25:53 +0000 UTC Type:0 Mac:52:54:00:d8:90:55 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-118173-m03 Clientid:01:52:54:00:d8:90:55}
	I0806 07:26:01.426236   31793 main.go:141] libmachine: (ha-118173-m03) DBG | domain ha-118173-m03 has defined IP address 192.168.39.5 and MAC address 52:54:00:d8:90:55 in network mk-ha-118173
	I0806 07:26:01.426448   31793 main.go:141] libmachine: Docker is up and running!
	I0806 07:26:01.426466   31793 main.go:141] libmachine: Reticulating splines...
	I0806 07:26:01.426475   31793 client.go:171] duration metric: took 23.164820044s to LocalClient.Create
	I0806 07:26:01.426502   31793 start.go:167] duration metric: took 23.164903305s to libmachine.API.Create "ha-118173"
	I0806 07:26:01.426513   31793 start.go:293] postStartSetup for "ha-118173-m03" (driver="kvm2")
	I0806 07:26:01.426529   31793 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0806 07:26:01.426550   31793 main.go:141] libmachine: (ha-118173-m03) Calling .DriverName
	I0806 07:26:01.426769   31793 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0806 07:26:01.426796   31793 main.go:141] libmachine: (ha-118173-m03) Calling .GetSSHHostname
	I0806 07:26:01.429026   31793 main.go:141] libmachine: (ha-118173-m03) DBG | domain ha-118173-m03 has defined MAC address 52:54:00:d8:90:55 in network mk-ha-118173
	I0806 07:26:01.429415   31793 main.go:141] libmachine: (ha-118173-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:90:55", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:25:53 +0000 UTC Type:0 Mac:52:54:00:d8:90:55 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-118173-m03 Clientid:01:52:54:00:d8:90:55}
	I0806 07:26:01.429442   31793 main.go:141] libmachine: (ha-118173-m03) DBG | domain ha-118173-m03 has defined IP address 192.168.39.5 and MAC address 52:54:00:d8:90:55 in network mk-ha-118173
	I0806 07:26:01.429570   31793 main.go:141] libmachine: (ha-118173-m03) Calling .GetSSHPort
	I0806 07:26:01.429896   31793 main.go:141] libmachine: (ha-118173-m03) Calling .GetSSHKeyPath
	I0806 07:26:01.430061   31793 main.go:141] libmachine: (ha-118173-m03) Calling .GetSSHUsername
	I0806 07:26:01.430206   31793 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/ha-118173-m03/id_rsa Username:docker}
	I0806 07:26:01.506828   31793 ssh_runner.go:195] Run: cat /etc/os-release
	I0806 07:26:01.511729   31793 info.go:137] Remote host: Buildroot 2023.02.9
	I0806 07:26:01.511756   31793 filesync.go:126] Scanning /home/jenkins/minikube-integration/19370-13057/.minikube/addons for local assets ...
	I0806 07:26:01.511837   31793 filesync.go:126] Scanning /home/jenkins/minikube-integration/19370-13057/.minikube/files for local assets ...
	I0806 07:26:01.511933   31793 filesync.go:149] local asset: /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem -> 202462.pem in /etc/ssl/certs
	I0806 07:26:01.511943   31793 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem -> /etc/ssl/certs/202462.pem
	I0806 07:26:01.512024   31793 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0806 07:26:01.522139   31793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem --> /etc/ssl/certs/202462.pem (1708 bytes)
	I0806 07:26:01.546443   31793 start.go:296] duration metric: took 119.913363ms for postStartSetup
	I0806 07:26:01.546501   31793 main.go:141] libmachine: (ha-118173-m03) Calling .GetConfigRaw
	I0806 07:26:01.547117   31793 main.go:141] libmachine: (ha-118173-m03) Calling .GetIP
	I0806 07:26:01.549806   31793 main.go:141] libmachine: (ha-118173-m03) DBG | domain ha-118173-m03 has defined MAC address 52:54:00:d8:90:55 in network mk-ha-118173
	I0806 07:26:01.550193   31793 main.go:141] libmachine: (ha-118173-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:90:55", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:25:53 +0000 UTC Type:0 Mac:52:54:00:d8:90:55 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-118173-m03 Clientid:01:52:54:00:d8:90:55}
	I0806 07:26:01.550218   31793 main.go:141] libmachine: (ha-118173-m03) DBG | domain ha-118173-m03 has defined IP address 192.168.39.5 and MAC address 52:54:00:d8:90:55 in network mk-ha-118173
	I0806 07:26:01.550469   31793 profile.go:143] Saving config to /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/config.json ...
	I0806 07:26:01.550702   31793 start.go:128] duration metric: took 23.306828362s to createHost
	I0806 07:26:01.550729   31793 main.go:141] libmachine: (ha-118173-m03) Calling .GetSSHHostname
	I0806 07:26:01.552946   31793 main.go:141] libmachine: (ha-118173-m03) DBG | domain ha-118173-m03 has defined MAC address 52:54:00:d8:90:55 in network mk-ha-118173
	I0806 07:26:01.553298   31793 main.go:141] libmachine: (ha-118173-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:90:55", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:25:53 +0000 UTC Type:0 Mac:52:54:00:d8:90:55 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-118173-m03 Clientid:01:52:54:00:d8:90:55}
	I0806 07:26:01.553322   31793 main.go:141] libmachine: (ha-118173-m03) DBG | domain ha-118173-m03 has defined IP address 192.168.39.5 and MAC address 52:54:00:d8:90:55 in network mk-ha-118173
	I0806 07:26:01.553499   31793 main.go:141] libmachine: (ha-118173-m03) Calling .GetSSHPort
	I0806 07:26:01.553674   31793 main.go:141] libmachine: (ha-118173-m03) Calling .GetSSHKeyPath
	I0806 07:26:01.553853   31793 main.go:141] libmachine: (ha-118173-m03) Calling .GetSSHKeyPath
	I0806 07:26:01.554007   31793 main.go:141] libmachine: (ha-118173-m03) Calling .GetSSHUsername
	I0806 07:26:01.554208   31793 main.go:141] libmachine: Using SSH client type: native
	I0806 07:26:01.554358   31793 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.5 22 <nil> <nil>}
	I0806 07:26:01.554368   31793 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0806 07:26:01.653350   31793 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722929161.626097717
	
	I0806 07:26:01.653375   31793 fix.go:216] guest clock: 1722929161.626097717
	I0806 07:26:01.653386   31793 fix.go:229] Guest: 2024-08-06 07:26:01.626097717 +0000 UTC Remote: 2024-08-06 07:26:01.550715976 +0000 UTC m=+222.418454735 (delta=75.381741ms)
	I0806 07:26:01.653402   31793 fix.go:200] guest clock delta is within tolerance: 75.381741ms
	I0806 07:26:01.653407   31793 start.go:83] releasing machines lock for "ha-118173-m03", held for 23.409638424s
	I0806 07:26:01.653427   31793 main.go:141] libmachine: (ha-118173-m03) Calling .DriverName
	I0806 07:26:01.653708   31793 main.go:141] libmachine: (ha-118173-m03) Calling .GetIP
	I0806 07:26:01.656171   31793 main.go:141] libmachine: (ha-118173-m03) DBG | domain ha-118173-m03 has defined MAC address 52:54:00:d8:90:55 in network mk-ha-118173
	I0806 07:26:01.656521   31793 main.go:141] libmachine: (ha-118173-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:90:55", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:25:53 +0000 UTC Type:0 Mac:52:54:00:d8:90:55 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-118173-m03 Clientid:01:52:54:00:d8:90:55}
	I0806 07:26:01.656565   31793 main.go:141] libmachine: (ha-118173-m03) DBG | domain ha-118173-m03 has defined IP address 192.168.39.5 and MAC address 52:54:00:d8:90:55 in network mk-ha-118173
	I0806 07:26:01.658586   31793 out.go:177] * Found network options:
	I0806 07:26:01.659857   31793 out.go:177]   - NO_PROXY=192.168.39.164,192.168.39.203
	W0806 07:26:01.661537   31793 proxy.go:119] fail to check proxy env: Error ip not in block
	W0806 07:26:01.661563   31793 proxy.go:119] fail to check proxy env: Error ip not in block
	I0806 07:26:01.661580   31793 main.go:141] libmachine: (ha-118173-m03) Calling .DriverName
	I0806 07:26:01.662093   31793 main.go:141] libmachine: (ha-118173-m03) Calling .DriverName
	I0806 07:26:01.662244   31793 main.go:141] libmachine: (ha-118173-m03) Calling .DriverName
	I0806 07:26:01.662317   31793 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0806 07:26:01.662350   31793 main.go:141] libmachine: (ha-118173-m03) Calling .GetSSHHostname
	W0806 07:26:01.662409   31793 proxy.go:119] fail to check proxy env: Error ip not in block
	W0806 07:26:01.662432   31793 proxy.go:119] fail to check proxy env: Error ip not in block
	I0806 07:26:01.662499   31793 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0806 07:26:01.662520   31793 main.go:141] libmachine: (ha-118173-m03) Calling .GetSSHHostname
	I0806 07:26:01.665281   31793 main.go:141] libmachine: (ha-118173-m03) DBG | domain ha-118173-m03 has defined MAC address 52:54:00:d8:90:55 in network mk-ha-118173
	I0806 07:26:01.665505   31793 main.go:141] libmachine: (ha-118173-m03) DBG | domain ha-118173-m03 has defined MAC address 52:54:00:d8:90:55 in network mk-ha-118173
	I0806 07:26:01.665754   31793 main.go:141] libmachine: (ha-118173-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:90:55", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:25:53 +0000 UTC Type:0 Mac:52:54:00:d8:90:55 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-118173-m03 Clientid:01:52:54:00:d8:90:55}
	I0806 07:26:01.665781   31793 main.go:141] libmachine: (ha-118173-m03) DBG | domain ha-118173-m03 has defined IP address 192.168.39.5 and MAC address 52:54:00:d8:90:55 in network mk-ha-118173
	I0806 07:26:01.665953   31793 main.go:141] libmachine: (ha-118173-m03) Calling .GetSSHPort
	I0806 07:26:01.666073   31793 main.go:141] libmachine: (ha-118173-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:90:55", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:25:53 +0000 UTC Type:0 Mac:52:54:00:d8:90:55 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-118173-m03 Clientid:01:52:54:00:d8:90:55}
	I0806 07:26:01.666098   31793 main.go:141] libmachine: (ha-118173-m03) DBG | domain ha-118173-m03 has defined IP address 192.168.39.5 and MAC address 52:54:00:d8:90:55 in network mk-ha-118173
	I0806 07:26:01.666114   31793 main.go:141] libmachine: (ha-118173-m03) Calling .GetSSHKeyPath
	I0806 07:26:01.666221   31793 main.go:141] libmachine: (ha-118173-m03) Calling .GetSSHPort
	I0806 07:26:01.666300   31793 main.go:141] libmachine: (ha-118173-m03) Calling .GetSSHUsername
	I0806 07:26:01.666372   31793 main.go:141] libmachine: (ha-118173-m03) Calling .GetSSHKeyPath
	I0806 07:26:01.666457   31793 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/ha-118173-m03/id_rsa Username:docker}
	I0806 07:26:01.666542   31793 main.go:141] libmachine: (ha-118173-m03) Calling .GetSSHUsername
	I0806 07:26:01.666767   31793 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/ha-118173-m03/id_rsa Username:docker}
	I0806 07:26:01.900625   31793 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0806 07:26:01.907540   31793 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0806 07:26:01.907612   31793 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0806 07:26:01.923703   31793 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0806 07:26:01.923735   31793 start.go:495] detecting cgroup driver to use...
	I0806 07:26:01.923802   31793 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0806 07:26:01.942642   31793 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0806 07:26:01.959561   31793 docker.go:217] disabling cri-docker service (if available) ...
	I0806 07:26:01.959620   31793 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0806 07:26:01.976256   31793 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0806 07:26:01.991578   31793 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0806 07:26:02.112813   31793 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0806 07:26:02.295165   31793 docker.go:233] disabling docker service ...
	I0806 07:26:02.295252   31793 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0806 07:26:02.310496   31793 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0806 07:26:02.323908   31793 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0806 07:26:02.457283   31793 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0806 07:26:02.595447   31793 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0806 07:26:02.610752   31793 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0806 07:26:02.630674   31793 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0806 07:26:02.630747   31793 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 07:26:02.642400   31793 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0806 07:26:02.642475   31793 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 07:26:02.655832   31793 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 07:26:02.667712   31793 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 07:26:02.678799   31793 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0806 07:26:02.690213   31793 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 07:26:02.701911   31793 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 07:26:02.720289   31793 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 07:26:02.731273   31793 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0806 07:26:02.741656   31793 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0806 07:26:02.741716   31793 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0806 07:26:02.755714   31793 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0806 07:26:02.766032   31793 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 07:26:02.889089   31793 ssh_runner.go:195] Run: sudo systemctl restart crio
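	The Run lines above show CRI-O being configured in place before it is restarted: the pause image is pinned, the cgroup manager is switched to cgroupfs, conmon is moved into the pod cgroup, and unprivileged low ports are opened via default_sysctls. Below is a minimal consolidated sketch of those same edits as a single script; the file path and values are taken from the log lines above, and running it outside a minikube guest image is an untested assumption.

	    #!/bin/sh
	    # Sketch of the CRI-O drop-in edits performed in the log above.
	    CONF=/etc/crio/crio.conf.d/02-crio.conf

	    # Pin the sandbox (pause) image used by CRI-O.
	    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' "$CONF"

	    # Use cgroupfs as the cgroup manager and run conmon inside the pod cgroup.
	    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
	    sudo sed -i '/conmon_cgroup = .*/d' "$CONF"
	    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"

	    # Drop any stale CNI config and any previous unprivileged-port sysctl entry.
	    sudo rm -rf /etc/cni/net.mk
	    sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' "$CONF"

	    # Ensure a default_sysctls list exists, then allow pods to bind ports from 0 up.
	    sudo grep -q '^ *default_sysctls' "$CONF" || \
	      sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' "$CONF"
	    sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' "$CONF"

	    # Apply the new configuration.
	    sudo systemctl daemon-reload
	    sudo systemctl restart crio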
	I0806 07:26:03.035858   31793 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0806 07:26:03.035945   31793 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0806 07:26:03.040995   31793 start.go:563] Will wait 60s for crictl version
	I0806 07:26:03.041065   31793 ssh_runner.go:195] Run: which crictl
	I0806 07:26:03.045396   31793 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0806 07:26:03.085950   31793 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0806 07:26:03.086031   31793 ssh_runner.go:195] Run: crio --version
	I0806 07:26:03.120483   31793 ssh_runner.go:195] Run: crio --version
	I0806 07:26:03.155295   31793 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0806 07:26:03.156976   31793 out.go:177]   - env NO_PROXY=192.168.39.164
	I0806 07:26:03.158491   31793 out.go:177]   - env NO_PROXY=192.168.39.164,192.168.39.203
	I0806 07:26:03.160002   31793 main.go:141] libmachine: (ha-118173-m03) Calling .GetIP
	I0806 07:26:03.163210   31793 main.go:141] libmachine: (ha-118173-m03) DBG | domain ha-118173-m03 has defined MAC address 52:54:00:d8:90:55 in network mk-ha-118173
	I0806 07:26:03.163624   31793 main.go:141] libmachine: (ha-118173-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:90:55", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:25:53 +0000 UTC Type:0 Mac:52:54:00:d8:90:55 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-118173-m03 Clientid:01:52:54:00:d8:90:55}
	I0806 07:26:03.163652   31793 main.go:141] libmachine: (ha-118173-m03) DBG | domain ha-118173-m03 has defined IP address 192.168.39.5 and MAC address 52:54:00:d8:90:55 in network mk-ha-118173
	I0806 07:26:03.163939   31793 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0806 07:26:03.168887   31793 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0806 07:26:03.183278   31793 mustload.go:65] Loading cluster: ha-118173
	I0806 07:26:03.183505   31793 config.go:182] Loaded profile config "ha-118173": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0806 07:26:03.183798   31793 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:26:03.183840   31793 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:26:03.199392   31793 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39879
	I0806 07:26:03.199825   31793 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:26:03.200334   31793 main.go:141] libmachine: Using API Version  1
	I0806 07:26:03.200356   31793 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:26:03.200667   31793 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:26:03.200899   31793 main.go:141] libmachine: (ha-118173) Calling .GetState
	I0806 07:26:03.202517   31793 host.go:66] Checking if "ha-118173" exists ...
	I0806 07:26:03.202859   31793 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:26:03.202901   31793 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:26:03.217607   31793 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41951
	I0806 07:26:03.218077   31793 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:26:03.218543   31793 main.go:141] libmachine: Using API Version  1
	I0806 07:26:03.218572   31793 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:26:03.218849   31793 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:26:03.219022   31793 main.go:141] libmachine: (ha-118173) Calling .DriverName
	I0806 07:26:03.219170   31793 certs.go:68] Setting up /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173 for IP: 192.168.39.5
	I0806 07:26:03.219180   31793 certs.go:194] generating shared ca certs ...
	I0806 07:26:03.219200   31793 certs.go:226] acquiring lock for ca certs: {Name:mkf5c5aed91f4108054d9f2c742cad66c86afbed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 07:26:03.219352   31793 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19370-13057/.minikube/ca.key
	I0806 07:26:03.219405   31793 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.key
	I0806 07:26:03.219418   31793 certs.go:256] generating profile certs ...
	I0806 07:26:03.219512   31793 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/client.key
	I0806 07:26:03.219545   31793 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/apiserver.key.118ed01f
	I0806 07:26:03.219565   31793 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/apiserver.crt.118ed01f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.164 192.168.39.203 192.168.39.5 192.168.39.254]
	I0806 07:26:03.790973   31793 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/apiserver.crt.118ed01f ...
	I0806 07:26:03.791005   31793 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/apiserver.crt.118ed01f: {Name:mk160d5b503440707e900fe943ebfd12722f0369 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 07:26:03.791193   31793 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/apiserver.key.118ed01f ...
	I0806 07:26:03.791209   31793 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/apiserver.key.118ed01f: {Name:mk29b8e58ad4c290b5a99b264ef4e8d778b897a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 07:26:03.791577   31793 certs.go:381] copying /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/apiserver.crt.118ed01f -> /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/apiserver.crt
	I0806 07:26:03.791742   31793 certs.go:385] copying /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/apiserver.key.118ed01f -> /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/apiserver.key
	I0806 07:26:03.791922   31793 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/proxy-client.key
	I0806 07:26:03.791938   31793 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0806 07:26:03.791956   31793 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0806 07:26:03.791972   31793 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0806 07:26:03.791987   31793 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0806 07:26:03.792002   31793 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0806 07:26:03.792016   31793 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0806 07:26:03.792035   31793 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0806 07:26:03.792051   31793 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0806 07:26:03.792140   31793 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/20246.pem (1338 bytes)
	W0806 07:26:03.792174   31793 certs.go:480] ignoring /home/jenkins/minikube-integration/19370-13057/.minikube/certs/20246_empty.pem, impossibly tiny 0 bytes
	I0806 07:26:03.792184   31793 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca-key.pem (1675 bytes)
	I0806 07:26:03.792212   31793 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem (1078 bytes)
	I0806 07:26:03.792256   31793 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/cert.pem (1123 bytes)
	I0806 07:26:03.792287   31793 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/key.pem (1675 bytes)
	I0806 07:26:03.792337   31793 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem (1708 bytes)
	I0806 07:26:03.792395   31793 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/20246.pem -> /usr/share/ca-certificates/20246.pem
	I0806 07:26:03.792419   31793 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem -> /usr/share/ca-certificates/202462.pem
	I0806 07:26:03.792434   31793 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0806 07:26:03.792472   31793 main.go:141] libmachine: (ha-118173) Calling .GetSSHHostname
	I0806 07:26:03.795656   31793 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:26:03.796025   31793 main.go:141] libmachine: (ha-118173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:df:f0", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:22:33 +0000 UTC Type:0 Mac:52:54:00:ef:df:f0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:ha-118173 Clientid:01:52:54:00:ef:df:f0}
	I0806 07:26:03.796052   31793 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined IP address 192.168.39.164 and MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:26:03.796266   31793 main.go:141] libmachine: (ha-118173) Calling .GetSSHPort
	I0806 07:26:03.796517   31793 main.go:141] libmachine: (ha-118173) Calling .GetSSHKeyPath
	I0806 07:26:03.796716   31793 main.go:141] libmachine: (ha-118173) Calling .GetSSHUsername
	I0806 07:26:03.796892   31793 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/ha-118173/id_rsa Username:docker}
	I0806 07:26:03.868736   31793 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0806 07:26:03.874200   31793 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0806 07:26:03.886613   31793 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0806 07:26:03.891395   31793 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0806 07:26:03.902189   31793 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0806 07:26:03.906694   31793 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0806 07:26:03.918181   31793 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0806 07:26:03.923501   31793 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0806 07:26:03.933783   31793 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0806 07:26:03.938009   31793 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0806 07:26:03.948897   31793 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0806 07:26:03.953325   31793 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0806 07:26:03.973911   31793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0806 07:26:04.000150   31793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0806 07:26:04.027442   31793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0806 07:26:04.054785   31793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0806 07:26:04.085177   31793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0806 07:26:04.109453   31793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0806 07:26:04.133341   31793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0806 07:26:04.159382   31793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0806 07:26:04.187603   31793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/certs/20246.pem --> /usr/share/ca-certificates/20246.pem (1338 bytes)
	I0806 07:26:04.213249   31793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem --> /usr/share/ca-certificates/202462.pem (1708 bytes)
	I0806 07:26:04.238384   31793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0806 07:26:04.263823   31793 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0806 07:26:04.283177   31793 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0806 07:26:04.301534   31793 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0806 07:26:04.318498   31793 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0806 07:26:04.336549   31793 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0806 07:26:04.355271   31793 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0806 07:26:04.375707   31793 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0806 07:26:04.394842   31793 ssh_runner.go:195] Run: openssl version
	I0806 07:26:04.401153   31793 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20246.pem && ln -fs /usr/share/ca-certificates/20246.pem /etc/ssl/certs/20246.pem"
	I0806 07:26:04.413407   31793 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20246.pem
	I0806 07:26:04.418337   31793 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  6 07:17 /usr/share/ca-certificates/20246.pem
	I0806 07:26:04.418392   31793 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20246.pem
	I0806 07:26:04.424880   31793 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20246.pem /etc/ssl/certs/51391683.0"
	I0806 07:26:04.437017   31793 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202462.pem && ln -fs /usr/share/ca-certificates/202462.pem /etc/ssl/certs/202462.pem"
	I0806 07:26:04.449104   31793 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202462.pem
	I0806 07:26:04.454119   31793 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  6 07:17 /usr/share/ca-certificates/202462.pem
	I0806 07:26:04.454176   31793 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202462.pem
	I0806 07:26:04.460183   31793 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202462.pem /etc/ssl/certs/3ec20f2e.0"
	I0806 07:26:04.474665   31793 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0806 07:26:04.486694   31793 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0806 07:26:04.491177   31793 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  6 07:07 /usr/share/ca-certificates/minikubeCA.pem
	I0806 07:26:04.491232   31793 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0806 07:26:04.497002   31793 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
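	The openssl x509 -hash / ln -fs pairs above install each CA into the guest trust store using OpenSSL's subject-hash naming convention, under which a certificate is looked up as <hash>.0 in /etc/ssl/certs. A minimal sketch of that step for one certificate, using the minikubeCA path seen in this log (the computed hash corresponds to the b5213941.0 link created above):

	    # Link one CA certificate into /etc/ssl/certs under its OpenSSL subject-hash name.
	    CERT=/usr/share/ca-certificates/minikubeCA.pem
	    HASH=$(openssl x509 -hash -noout -in "$CERT")   # prints the 8-hex-digit subject hash
	    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"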
	I0806 07:26:04.508071   31793 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0806 07:26:04.512272   31793 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0806 07:26:04.512326   31793 kubeadm.go:934] updating node {m03 192.168.39.5 8443 v1.30.3 crio true true} ...
	I0806 07:26:04.512448   31793 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-118173-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-118173 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0806 07:26:04.512481   31793 kube-vip.go:115] generating kube-vip config ...
	I0806 07:26:04.512538   31793 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0806 07:26:04.530021   31793 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0806 07:26:04.530095   31793 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
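
The config above is what gets copied to /etc/kubernetes/manifests/kube-vip.yaml a few lines further down: kube-vip runs from the static-pod manifests directory on each control-plane node, announces the virtual IP 192.168.39.254 over ARP on eth0 (vip_arp/vip_interface) and, with cp_enable/lb_enable set, load-balances apiserver traffic on port 8443. A quick, hypothetical probe that the VIP answers on the API port; TLS verification is skipped only because this ad-hoc check does not load minikubeCA:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        // Probe the kube-vip managed virtual IP from the manifest above.
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // ad-hoc probe only
            },
        }
        resp, err := client.Get("https://192.168.39.254:8443/healthz")
        if err != nil {
            fmt.Println("VIP not reachable:", err)
            return
        }
        defer resp.Body.Close()
        fmt.Println("VIP answered:", resp.Status)
    }
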
	I0806 07:26:04.530149   31793 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0806 07:26:04.540726   31793 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0806 07:26:04.540775   31793 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0806 07:26:04.551763   31793 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256
	I0806 07:26:04.551783   31793 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256
	I0806 07:26:04.551792   31793 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/cache/linux/amd64/v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0806 07:26:04.551783   31793 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256
	I0806 07:26:04.551832   31793 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0806 07:26:04.551842   31793 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/cache/linux/amd64/v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0806 07:26:04.551873   31793 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0806 07:26:04.551923   31793 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0806 07:26:04.568440   31793 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/cache/linux/amd64/v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0806 07:26:04.568472   31793 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0806 07:26:04.568502   31793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/cache/linux/amd64/v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (50249880 bytes)
	I0806 07:26:04.568524   31793 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0806 07:26:04.568549   31793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/cache/linux/amd64/v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (51454104 bytes)
	I0806 07:26:04.568559   31793 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0806 07:26:04.588276   31793 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0806 07:26:04.588309   31793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/cache/linux/amd64/v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (100125080 bytes)
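
The transfer above is a plain check-then-copy: each binary is stat'ed on the remote path and only scp'ed from the local .minikube cache when the check fails with "No such file or directory". A stripped-down local sketch of that pattern (copyIfMissing is a hypothetical helper; the real code compares size and mtime and copies over SSH):

    package main

    import (
        "fmt"
        "io"
        "os"
    )

    // copyIfMissing copies src to dst only when dst does not exist yet,
    // mirroring the existence check before each scp in the log above.
    func copyIfMissing(src, dst string) error {
        if _, err := os.Stat(dst); err == nil {
            return nil // already present, nothing to transfer
        } else if !os.IsNotExist(err) {
            return err
        }
        in, err := os.Open(src)
        if err != nil {
            return err
        }
        defer in.Close()
        out, err := os.OpenFile(dst, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, 0o755)
        if err != nil {
            return err
        }
        defer out.Close()
        _, err = io.Copy(out, in)
        return err
    }

    func main() {
        err := copyIfMissing(
            "/home/jenkins/minikube-integration/19370-13057/.minikube/cache/linux/amd64/v1.30.3/kubelet",
            "/var/lib/minikube/binaries/v1.30.3/kubelet",
        )
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
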
	I0806 07:26:05.487753   31793 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0806 07:26:05.497735   31793 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0806 07:26:05.516951   31793 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0806 07:26:05.534727   31793 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0806 07:26:05.552747   31793 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0806 07:26:05.557382   31793 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0806 07:26:05.570646   31793 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 07:26:05.695533   31793 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0806 07:26:05.713632   31793 host.go:66] Checking if "ha-118173" exists ...
	I0806 07:26:05.714151   31793 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:26:05.714203   31793 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:26:05.730480   31793 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35975
	I0806 07:26:05.730971   31793 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:26:05.731548   31793 main.go:141] libmachine: Using API Version  1
	I0806 07:26:05.731573   31793 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:26:05.731912   31793 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:26:05.732099   31793 main.go:141] libmachine: (ha-118173) Calling .DriverName
	I0806 07:26:05.732224   31793 start.go:317] joinCluster: &{Name:ha-118173 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cluster
Name:ha-118173 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.164 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.203 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false in
spektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOpt
imizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 07:26:05.732350   31793 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0806 07:26:05.732385   31793 main.go:141] libmachine: (ha-118173) Calling .GetSSHHostname
	I0806 07:26:05.735411   31793 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:26:05.735896   31793 main.go:141] libmachine: (ha-118173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:df:f0", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:22:33 +0000 UTC Type:0 Mac:52:54:00:ef:df:f0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:ha-118173 Clientid:01:52:54:00:ef:df:f0}
	I0806 07:26:05.735921   31793 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined IP address 192.168.39.164 and MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:26:05.736112   31793 main.go:141] libmachine: (ha-118173) Calling .GetSSHPort
	I0806 07:26:05.736295   31793 main.go:141] libmachine: (ha-118173) Calling .GetSSHKeyPath
	I0806 07:26:05.736453   31793 main.go:141] libmachine: (ha-118173) Calling .GetSSHUsername
	I0806 07:26:05.736590   31793 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/ha-118173/id_rsa Username:docker}
	I0806 07:26:05.903963   31793 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0806 07:26:05.904008   31793 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token g3wtqj.q988fugfbkq4crnx --discovery-token-ca-cert-hash sha256:d36e5c1dfe989c7d561c809d7c06e0fc13926ac8f6d664cc5d6865ea5f79931b --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-118173-m03 --control-plane --apiserver-advertise-address=192.168.39.5 --apiserver-bind-port=8443"
	I0806 07:26:29.635900   31793 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token g3wtqj.q988fugfbkq4crnx --discovery-token-ca-cert-hash sha256:d36e5c1dfe989c7d561c809d7c06e0fc13926ac8f6d664cc5d6865ea5f79931b --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-118173-m03 --control-plane --apiserver-advertise-address=192.168.39.5 --apiserver-bind-port=8443": (23.731845764s)
	I0806 07:26:29.635941   31793 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0806 07:26:30.248222   31793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-118173-m03 minikube.k8s.io/updated_at=2024_08_06T07_26_30_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=e92cb06692f5ea1ba801d10d148e5e92e807f9c8 minikube.k8s.io/name=ha-118173 minikube.k8s.io/primary=false
	I0806 07:26:30.395083   31793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-118173-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0806 07:26:30.517574   31793 start.go:319] duration metric: took 24.785347039s to joinCluster
	I0806 07:26:30.517656   31793 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0806 07:26:30.517993   31793 config.go:182] Loaded profile config "ha-118173": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0806 07:26:30.519229   31793 out.go:177] * Verifying Kubernetes components...
	I0806 07:26:30.520557   31793 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 07:26:30.820588   31793 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0806 07:26:30.893001   31793 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19370-13057/kubeconfig
	I0806 07:26:30.893238   31793 kapi.go:59] client config for ha-118173: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/client.crt", KeyFile:"/home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/client.key", CAFile:"/home/jenkins/minikube-integration/19370-13057/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)
}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02a80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0806 07:26:30.893308   31793 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.164:8443
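
The wait loop that follows is ordinary client-go traffic: the kubeconfig is loaded, the stale VIP host is overridden with a live apiserver endpoint (the kubeadm.go:483 line above), and the node object is fetched until its Ready condition turns True. A minimal, hypothetical equivalent using the paths and addresses from this log, with error handling reduced to panics:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19370-13057/kubeconfig")
        if err != nil {
            panic(err)
        }
        // Point the client at one live control-plane endpoint instead of the HA VIP,
        // mirroring the "Overriding stale ClientConfig host" step above.
        cfg.Host = "https://192.168.39.164:8443"

        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        node, err := cs.CoreV1().Nodes().Get(context.Background(), "ha-118173-m03", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        for _, c := range node.Status.Conditions {
            if c.Type == corev1.NodeReady {
                fmt.Println("Ready condition:", c.Status)
            }
        }
    }
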
	I0806 07:26:30.893495   31793 node_ready.go:35] waiting up to 6m0s for node "ha-118173-m03" to be "Ready" ...
	I0806 07:26:30.893561   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m03
	I0806 07:26:30.893568   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:30.893576   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:30.893581   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:30.898173   31793 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0806 07:26:31.394312   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m03
	I0806 07:26:31.394330   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:31.394339   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:31.394343   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:31.404531   31793 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0806 07:26:31.893911   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m03
	I0806 07:26:31.893932   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:31.893942   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:31.893946   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:31.897287   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:26:32.394296   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m03
	I0806 07:26:32.394317   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:32.394326   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:32.394329   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:32.397248   31793 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 07:26:32.894446   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m03
	I0806 07:26:32.894465   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:32.894473   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:32.894477   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:32.898223   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:26:32.898920   31793 node_ready.go:53] node "ha-118173-m03" has status "Ready":"False"
	I0806 07:26:33.394070   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m03
	I0806 07:26:33.394101   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:33.394113   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:33.394121   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:33.397822   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:26:33.894436   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m03
	I0806 07:26:33.894457   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:33.894467   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:33.894473   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:33.897835   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:26:34.393883   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m03
	I0806 07:26:34.393906   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:34.393913   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:34.393917   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:34.397798   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:26:34.894314   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m03
	I0806 07:26:34.894337   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:34.894346   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:34.894352   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:34.897374   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:26:35.394441   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m03
	I0806 07:26:35.394461   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:35.394470   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:35.394474   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:35.398022   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:26:35.398632   31793 node_ready.go:53] node "ha-118173-m03" has status "Ready":"False"
	I0806 07:26:35.894485   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m03
	I0806 07:26:35.894511   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:35.894522   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:35.894527   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:35.900795   31793 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0806 07:26:36.393723   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m03
	I0806 07:26:36.393749   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:36.393759   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:36.393764   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:36.397167   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:26:36.894035   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m03
	I0806 07:26:36.894056   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:36.894065   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:36.894069   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:36.897659   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:26:37.393672   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m03
	I0806 07:26:37.393693   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:37.393700   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:37.393704   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:37.397785   31793 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0806 07:26:37.399056   31793 node_ready.go:53] node "ha-118173-m03" has status "Ready":"False"
	I0806 07:26:37.894161   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m03
	I0806 07:26:37.894180   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:37.894188   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:37.894192   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:37.897420   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:26:38.394383   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m03
	I0806 07:26:38.394405   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:38.394412   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:38.394429   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:38.398194   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:26:38.894482   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m03
	I0806 07:26:38.894506   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:38.894514   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:38.894517   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:38.897948   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:26:39.393760   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m03
	I0806 07:26:39.393782   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:39.393790   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:39.393795   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:39.397172   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:26:39.894595   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m03
	I0806 07:26:39.894616   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:39.894624   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:39.894630   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:39.897283   31793 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 07:26:39.897870   31793 node_ready.go:53] node "ha-118173-m03" has status "Ready":"False"
	I0806 07:26:40.394307   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m03
	I0806 07:26:40.394327   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:40.394335   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:40.394338   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:40.397953   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:26:40.894692   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m03
	I0806 07:26:40.894714   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:40.894724   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:40.894730   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:40.898332   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:26:41.394367   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m03
	I0806 07:26:41.394387   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:41.394396   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:41.394398   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:41.398227   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:26:41.894258   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m03
	I0806 07:26:41.894278   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:41.894286   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:41.894292   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:41.897447   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:26:41.897975   31793 node_ready.go:53] node "ha-118173-m03" has status "Ready":"False"
	I0806 07:26:42.394339   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m03
	I0806 07:26:42.394362   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:42.394375   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:42.394396   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:42.397881   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:26:42.894463   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m03
	I0806 07:26:42.894485   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:42.894493   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:42.894497   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:42.897853   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:26:43.394582   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m03
	I0806 07:26:43.394602   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:43.394609   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:43.394612   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:43.398469   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:26:43.893791   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m03
	I0806 07:26:43.893817   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:43.893827   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:43.893832   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:43.896507   31793 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 07:26:44.393832   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m03
	I0806 07:26:44.393853   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:44.393859   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:44.393863   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:44.397200   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:26:44.397677   31793 node_ready.go:53] node "ha-118173-m03" has status "Ready":"False"
	I0806 07:26:44.894407   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m03
	I0806 07:26:44.894429   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:44.894437   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:44.894443   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:44.898032   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:26:45.394292   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m03
	I0806 07:26:45.394317   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:45.394328   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:45.394334   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:45.397652   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:26:45.894446   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m03
	I0806 07:26:45.894468   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:45.894477   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:45.894481   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:45.905209   31793 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0806 07:26:46.394412   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m03
	I0806 07:26:46.394435   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:46.394443   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:46.394449   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:46.398644   31793 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0806 07:26:46.399085   31793 node_ready.go:53] node "ha-118173-m03" has status "Ready":"False"
	I0806 07:26:46.894480   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m03
	I0806 07:26:46.894500   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:46.894512   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:46.894516   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:46.897733   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:26:47.393735   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m03
	I0806 07:26:47.393757   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:47.393765   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:47.393769   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:47.397905   31793 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0806 07:26:47.894101   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m03
	I0806 07:26:47.894122   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:47.894130   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:47.894136   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:47.897026   31793 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 07:26:48.394653   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m03
	I0806 07:26:48.394679   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:48.394686   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:48.394691   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:48.398489   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:26:48.399199   31793 node_ready.go:53] node "ha-118173-m03" has status "Ready":"False"
	I0806 07:26:48.894555   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m03
	I0806 07:26:48.894578   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:48.894586   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:48.894589   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:48.898291   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:26:49.394177   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m03
	I0806 07:26:49.394197   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:49.394206   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:49.394210   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:49.397263   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:26:49.893721   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m03
	I0806 07:26:49.893746   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:49.893756   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:49.893763   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:49.898053   31793 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0806 07:26:49.898806   31793 node_ready.go:49] node "ha-118173-m03" has status "Ready":"True"
	I0806 07:26:49.898832   31793 node_ready.go:38] duration metric: took 19.005324006s for node "ha-118173-m03" to be "Ready" ...
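
The 19 seconds recorded above are just that GET repeated roughly every 500ms until the node's Ready condition flips. The same loop can be expressed with client-go's wait helpers; a sketch in a hypothetical helper package, assuming a clientset built as in the previous example:

    package nodewait

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // PollNodeReady wraps the repeated GET above in client-go's wait helper:
    // fetch the node every 500ms until its Ready condition is True or the
    // timeout (6m0s in this log) expires.
    func PollNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string, timeout time.Duration) error {
        return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
            func(ctx context.Context) (bool, error) {
                node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
                if err != nil {
                    return false, nil // treat transient apiserver errors as "not ready yet"
                }
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                        return true, nil
                    }
                }
                return false, nil
            })
    }
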
	I0806 07:26:49.898841   31793 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0806 07:26:49.898909   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/namespaces/kube-system/pods
	I0806 07:26:49.898917   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:49.898925   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:49.898931   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:49.905307   31793 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0806 07:26:49.911989   31793 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-t24ww" in "kube-system" namespace to be "Ready" ...
	I0806 07:26:49.912086   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-t24ww
	I0806 07:26:49.912093   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:49.912103   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:49.912114   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:49.916830   31793 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0806 07:26:49.917808   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173
	I0806 07:26:49.917824   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:49.917839   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:49.917857   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:49.921623   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:26:49.922142   31793 pod_ready.go:92] pod "coredns-7db6d8ff4d-t24ww" in "kube-system" namespace has status "Ready":"True"
	I0806 07:26:49.922160   31793 pod_ready.go:81] duration metric: took 10.141106ms for pod "coredns-7db6d8ff4d-t24ww" in "kube-system" namespace to be "Ready" ...
	I0806 07:26:49.922169   31793 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-zscz4" in "kube-system" namespace to be "Ready" ...
	I0806 07:26:49.922240   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-zscz4
	I0806 07:26:49.922250   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:49.922260   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:49.922265   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:49.926952   31793 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0806 07:26:49.927652   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173
	I0806 07:26:49.927673   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:49.927684   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:49.927689   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:49.933246   31793 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0806 07:26:49.934440   31793 pod_ready.go:92] pod "coredns-7db6d8ff4d-zscz4" in "kube-system" namespace has status "Ready":"True"
	I0806 07:26:49.934458   31793 pod_ready.go:81] duration metric: took 12.283667ms for pod "coredns-7db6d8ff4d-zscz4" in "kube-system" namespace to be "Ready" ...
	I0806 07:26:49.934467   31793 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-118173" in "kube-system" namespace to be "Ready" ...
	I0806 07:26:49.934523   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/namespaces/kube-system/pods/etcd-ha-118173
	I0806 07:26:49.934531   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:49.934539   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:49.934546   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:49.937746   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:26:49.938336   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173
	I0806 07:26:49.938352   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:49.938360   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:49.938366   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:49.940809   31793 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 07:26:49.941273   31793 pod_ready.go:92] pod "etcd-ha-118173" in "kube-system" namespace has status "Ready":"True"
	I0806 07:26:49.941292   31793 pod_ready.go:81] duration metric: took 6.818286ms for pod "etcd-ha-118173" in "kube-system" namespace to be "Ready" ...
	I0806 07:26:49.941305   31793 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-118173-m02" in "kube-system" namespace to be "Ready" ...
	I0806 07:26:49.941368   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/namespaces/kube-system/pods/etcd-ha-118173-m02
	I0806 07:26:49.941379   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:49.941389   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:49.941398   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:49.943935   31793 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 07:26:49.944548   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m02
	I0806 07:26:49.944565   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:49.944573   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:49.944579   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:49.947018   31793 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 07:26:49.947464   31793 pod_ready.go:92] pod "etcd-ha-118173-m02" in "kube-system" namespace has status "Ready":"True"
	I0806 07:26:49.947485   31793 pod_ready.go:81] duration metric: took 6.172581ms for pod "etcd-ha-118173-m02" in "kube-system" namespace to be "Ready" ...
	I0806 07:26:49.947498   31793 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-118173-m03" in "kube-system" namespace to be "Ready" ...
	I0806 07:26:50.093774   31793 request.go:629] Waited for 146.217844ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.164:8443/api/v1/namespaces/kube-system/pods/etcd-ha-118173-m03
	I0806 07:26:50.093836   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/namespaces/kube-system/pods/etcd-ha-118173-m03
	I0806 07:26:50.093843   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:50.093875   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:50.093890   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:50.097338   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:26:50.294585   31793 request.go:629] Waited for 196.342537ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.164:8443/api/v1/nodes/ha-118173-m03
	I0806 07:26:50.294658   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m03
	I0806 07:26:50.294668   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:50.294679   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:50.294686   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:50.298098   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:26:50.298714   31793 pod_ready.go:92] pod "etcd-ha-118173-m03" in "kube-system" namespace has status "Ready":"True"
	I0806 07:26:50.298733   31793 pod_ready.go:81] duration metric: took 351.227961ms for pod "etcd-ha-118173-m03" in "kube-system" namespace to be "Ready" ...
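
The "Waited ... due to client-side throttling" lines are client-go's default rate limiter (QPS 5, burst 10) pacing the two requests issued per pod check; they are informational, not failures. If the throttling itself ever mattered, the limiter can be relaxed on the rest.Config before building the clientset; a hypothetical tweak with illustrative values, not what minikube uses:

    package nodewait

    import (
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    // NewFastClient raises the default client-side rate limit (QPS 5, burst 10)
    // that produces the throttling messages above.
    func NewFastClient(cfg *rest.Config) (*kubernetes.Clientset, error) {
        cfg.QPS = 50
        cfg.Burst = 100
        return kubernetes.NewForConfig(cfg)
    }
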
	I0806 07:26:50.298753   31793 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-118173" in "kube-system" namespace to be "Ready" ...
	I0806 07:26:50.494109   31793 request.go:629] Waited for 195.27208ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.164:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-118173
	I0806 07:26:50.494174   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-118173
	I0806 07:26:50.494180   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:50.494190   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:50.494197   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:50.497605   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:26:50.694672   31793 request.go:629] Waited for 196.360731ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.164:8443/api/v1/nodes/ha-118173
	I0806 07:26:50.694748   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173
	I0806 07:26:50.694756   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:50.694764   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:50.694769   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:50.698064   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:26:50.698765   31793 pod_ready.go:92] pod "kube-apiserver-ha-118173" in "kube-system" namespace has status "Ready":"True"
	I0806 07:26:50.698794   31793 pod_ready.go:81] duration metric: took 400.030299ms for pod "kube-apiserver-ha-118173" in "kube-system" namespace to be "Ready" ...
	I0806 07:26:50.698813   31793 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-118173-m02" in "kube-system" namespace to be "Ready" ...
	I0806 07:26:50.894310   31793 request.go:629] Waited for 195.425844ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.164:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-118173-m02
	I0806 07:26:50.894402   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-118173-m02
	I0806 07:26:50.894411   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:50.894423   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:50.894432   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:50.898426   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:26:51.094114   31793 request.go:629] Waited for 194.23602ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.164:8443/api/v1/nodes/ha-118173-m02
	I0806 07:26:51.094165   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m02
	I0806 07:26:51.094170   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:51.094177   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:51.094186   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:51.097389   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:26:51.098083   31793 pod_ready.go:92] pod "kube-apiserver-ha-118173-m02" in "kube-system" namespace has status "Ready":"True"
	I0806 07:26:51.098112   31793 pod_ready.go:81] duration metric: took 399.290638ms for pod "kube-apiserver-ha-118173-m02" in "kube-system" namespace to be "Ready" ...
	I0806 07:26:51.098124   31793 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-118173-m03" in "kube-system" namespace to be "Ready" ...
	I0806 07:26:51.294154   31793 request.go:629] Waited for 195.956609ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.164:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-118173-m03
	I0806 07:26:51.294220   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-118173-m03
	I0806 07:26:51.294225   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:51.294233   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:51.294238   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:51.297482   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:26:51.494507   31793 request.go:629] Waited for 196.312988ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.164:8443/api/v1/nodes/ha-118173-m03
	I0806 07:26:51.494579   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m03
	I0806 07:26:51.494585   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:51.494592   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:51.494599   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:51.497907   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:26:51.498505   31793 pod_ready.go:92] pod "kube-apiserver-ha-118173-m03" in "kube-system" namespace has status "Ready":"True"
	I0806 07:26:51.498525   31793 pod_ready.go:81] duration metric: took 400.392272ms for pod "kube-apiserver-ha-118173-m03" in "kube-system" namespace to be "Ready" ...
	I0806 07:26:51.498536   31793 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-118173" in "kube-system" namespace to be "Ready" ...
	I0806 07:26:51.694347   31793 request.go:629] Waited for 195.742134ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.164:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-118173
	I0806 07:26:51.694423   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-118173
	I0806 07:26:51.694433   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:51.694442   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:51.694452   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:51.697903   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:26:51.894150   31793 request.go:629] Waited for 195.450271ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.164:8443/api/v1/nodes/ha-118173
	I0806 07:26:51.894218   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173
	I0806 07:26:51.894225   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:51.894235   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:51.894243   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:51.897758   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:26:51.898472   31793 pod_ready.go:92] pod "kube-controller-manager-ha-118173" in "kube-system" namespace has status "Ready":"True"
	I0806 07:26:51.898489   31793 pod_ready.go:81] duration metric: took 399.945238ms for pod "kube-controller-manager-ha-118173" in "kube-system" namespace to be "Ready" ...
	I0806 07:26:51.898498   31793 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-118173-m02" in "kube-system" namespace to be "Ready" ...
	I0806 07:26:52.094678   31793 request.go:629] Waited for 196.122099ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.164:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-118173-m02
	I0806 07:26:52.094762   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-118173-m02
	I0806 07:26:52.094773   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:52.094784   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:52.094794   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:52.097635   31793 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 07:26:52.294550   31793 request.go:629] Waited for 196.349599ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.164:8443/api/v1/nodes/ha-118173-m02
	I0806 07:26:52.294599   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m02
	I0806 07:26:52.294604   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:52.294612   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:52.294617   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:52.297756   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:26:52.298622   31793 pod_ready.go:92] pod "kube-controller-manager-ha-118173-m02" in "kube-system" namespace has status "Ready":"True"
	I0806 07:26:52.298640   31793 pod_ready.go:81] duration metric: took 400.135914ms for pod "kube-controller-manager-ha-118173-m02" in "kube-system" namespace to be "Ready" ...
	I0806 07:26:52.298650   31793 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-118173-m03" in "kube-system" namespace to be "Ready" ...
	I0806 07:26:52.494669   31793 request.go:629] Waited for 195.954169ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.164:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-118173-m03
	I0806 07:26:52.494752   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-118173-m03
	I0806 07:26:52.494763   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:52.494772   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:52.494780   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:52.498294   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:26:52.694057   31793 request.go:629] Waited for 194.283931ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.164:8443/api/v1/nodes/ha-118173-m03
	I0806 07:26:52.694118   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m03
	I0806 07:26:52.694125   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:52.694135   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:52.694140   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:52.697249   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:26:52.697936   31793 pod_ready.go:92] pod "kube-controller-manager-ha-118173-m03" in "kube-system" namespace has status "Ready":"True"
	I0806 07:26:52.697955   31793 pod_ready.go:81] duration metric: took 399.29885ms for pod "kube-controller-manager-ha-118173-m03" in "kube-system" namespace to be "Ready" ...
	I0806 07:26:52.697964   31793 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-k8sq6" in "kube-system" namespace to be "Ready" ...
	I0806 07:26:52.894022   31793 request.go:629] Waited for 195.997855ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.164:8443/api/v1/namespaces/kube-system/pods/kube-proxy-k8sq6
	I0806 07:26:52.894085   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/namespaces/kube-system/pods/kube-proxy-k8sq6
	I0806 07:26:52.894090   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:52.894097   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:52.894102   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:52.897672   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:26:53.093725   31793 request.go:629] Waited for 195.278717ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.164:8443/api/v1/nodes/ha-118173-m02
	I0806 07:26:53.093775   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m02
	I0806 07:26:53.093780   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:53.093787   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:53.093792   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:53.097778   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:26:53.098338   31793 pod_ready.go:92] pod "kube-proxy-k8sq6" in "kube-system" namespace has status "Ready":"True"
	I0806 07:26:53.098362   31793 pod_ready.go:81] duration metric: took 400.389883ms for pod "kube-proxy-k8sq6" in "kube-system" namespace to be "Ready" ...
	I0806 07:26:53.098374   31793 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-ksm6v" in "kube-system" namespace to be "Ready" ...
	I0806 07:26:53.294181   31793 request.go:629] Waited for 195.738103ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.164:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ksm6v
	I0806 07:26:53.294260   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ksm6v
	I0806 07:26:53.294270   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:53.294278   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:53.294286   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:53.297478   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:26:53.494327   31793 request.go:629] Waited for 195.99984ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.164:8443/api/v1/nodes/ha-118173
	I0806 07:26:53.494398   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173
	I0806 07:26:53.494403   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:53.494410   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:53.494417   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:53.497588   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:26:53.498416   31793 pod_ready.go:92] pod "kube-proxy-ksm6v" in "kube-system" namespace has status "Ready":"True"
	I0806 07:26:53.498437   31793 pod_ready.go:81] duration metric: took 400.055677ms for pod "kube-proxy-ksm6v" in "kube-system" namespace to be "Ready" ...
	I0806 07:26:53.498446   31793 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-sw4nj" in "kube-system" namespace to be "Ready" ...
	I0806 07:26:53.694567   31793 request.go:629] Waited for 196.056434ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.164:8443/api/v1/namespaces/kube-system/pods/kube-proxy-sw4nj
	I0806 07:26:53.694638   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/namespaces/kube-system/pods/kube-proxy-sw4nj
	I0806 07:26:53.694643   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:53.694651   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:53.694655   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:53.698323   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:26:53.894528   31793 request.go:629] Waited for 195.375859ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.164:8443/api/v1/nodes/ha-118173-m03
	I0806 07:26:53.894586   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m03
	I0806 07:26:53.894591   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:53.894606   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:53.894614   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:53.898117   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:26:53.898684   31793 pod_ready.go:92] pod "kube-proxy-sw4nj" in "kube-system" namespace has status "Ready":"True"
	I0806 07:26:53.898707   31793 pod_ready.go:81] duration metric: took 400.253401ms for pod "kube-proxy-sw4nj" in "kube-system" namespace to be "Ready" ...
	I0806 07:26:53.898719   31793 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-118173" in "kube-system" namespace to be "Ready" ...
	I0806 07:26:54.093723   31793 request.go:629] Waited for 194.937669ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.164:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-118173
	I0806 07:26:54.093808   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-118173
	I0806 07:26:54.093820   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:54.093829   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:54.093835   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:54.097356   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:26:54.294184   31793 request.go:629] Waited for 196.178585ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.164:8443/api/v1/nodes/ha-118173
	I0806 07:26:54.294238   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173
	I0806 07:26:54.294242   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:54.294251   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:54.294255   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:54.297777   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:26:54.298275   31793 pod_ready.go:92] pod "kube-scheduler-ha-118173" in "kube-system" namespace has status "Ready":"True"
	I0806 07:26:54.298300   31793 pod_ready.go:81] duration metric: took 399.572268ms for pod "kube-scheduler-ha-118173" in "kube-system" namespace to be "Ready" ...
	I0806 07:26:54.298313   31793 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-118173-m02" in "kube-system" namespace to be "Ready" ...
	I0806 07:26:54.493783   31793 request.go:629] Waited for 195.383958ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.164:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-118173-m02
	I0806 07:26:54.493862   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-118173-m02
	I0806 07:26:54.493873   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:54.493884   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:54.493892   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:54.497242   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:26:54.694212   31793 request.go:629] Waited for 196.322219ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.164:8443/api/v1/nodes/ha-118173-m02
	I0806 07:26:54.694284   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m02
	I0806 07:26:54.694291   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:54.694302   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:54.694307   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:54.697506   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:26:54.698023   31793 pod_ready.go:92] pod "kube-scheduler-ha-118173-m02" in "kube-system" namespace has status "Ready":"True"
	I0806 07:26:54.698040   31793 pod_ready.go:81] duration metric: took 399.718807ms for pod "kube-scheduler-ha-118173-m02" in "kube-system" namespace to be "Ready" ...
	I0806 07:26:54.698052   31793 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-118173-m03" in "kube-system" namespace to be "Ready" ...
	I0806 07:26:54.894184   31793 request.go:629] Waited for 196.056976ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.164:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-118173-m03
	I0806 07:26:54.894248   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-118173-m03
	I0806 07:26:54.894255   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:54.894263   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:54.894266   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:54.897428   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:26:55.094353   31793 request.go:629] Waited for 196.221298ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.164:8443/api/v1/nodes/ha-118173-m03
	I0806 07:26:55.094421   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m03
	I0806 07:26:55.094435   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:55.094444   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:55.094455   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:55.097913   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:26:55.098569   31793 pod_ready.go:92] pod "kube-scheduler-ha-118173-m03" in "kube-system" namespace has status "Ready":"True"
	I0806 07:26:55.098587   31793 pod_ready.go:81] duration metric: took 400.528767ms for pod "kube-scheduler-ha-118173-m03" in "kube-system" namespace to be "Ready" ...
	I0806 07:26:55.098597   31793 pod_ready.go:38] duration metric: took 5.199746098s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0806 07:26:55.098611   31793 api_server.go:52] waiting for apiserver process to appear ...
	I0806 07:26:55.098669   31793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 07:26:55.115674   31793 api_server.go:72] duration metric: took 24.597982005s to wait for apiserver process to appear ...
	I0806 07:26:55.115742   31793 api_server.go:88] waiting for apiserver healthz status ...
	I0806 07:26:55.115771   31793 api_server.go:253] Checking apiserver healthz at https://192.168.39.164:8443/healthz ...
	I0806 07:26:55.121113   31793 api_server.go:279] https://192.168.39.164:8443/healthz returned 200:
	ok
	I0806 07:26:55.121190   31793 round_trippers.go:463] GET https://192.168.39.164:8443/version
	I0806 07:26:55.121200   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:55.121211   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:55.121219   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:55.122118   31793 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0806 07:26:55.122192   31793 api_server.go:141] control plane version: v1.30.3
	I0806 07:26:55.122211   31793 api_server.go:131] duration metric: took 6.456898ms to wait for apiserver health ...
	I0806 07:26:55.122224   31793 system_pods.go:43] waiting for kube-system pods to appear ...
	I0806 07:26:55.294643   31793 request.go:629] Waited for 172.315519ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.164:8443/api/v1/namespaces/kube-system/pods
	I0806 07:26:55.294709   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/namespaces/kube-system/pods
	I0806 07:26:55.294719   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:55.294727   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:55.294735   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:55.301274   31793 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0806 07:26:55.307768   31793 system_pods.go:59] 24 kube-system pods found
	I0806 07:26:55.307797   31793 system_pods.go:61] "coredns-7db6d8ff4d-t24ww" [8effbe6c-3107-409e-9c94-dee4103d12b0] Running
	I0806 07:26:55.307802   31793 system_pods.go:61] "coredns-7db6d8ff4d-zscz4" [ef588a32-ac32-4e38-926e-6ee7a662a399] Running
	I0806 07:26:55.307805   31793 system_pods.go:61] "etcd-ha-118173" [a72e8b83-1e33-429a-8d98-98936a0e3ab8] Running
	I0806 07:26:55.307814   31793 system_pods.go:61] "etcd-ha-118173-m02" [cb090a5d-5760-48c6-abea-735b88daf0be] Running
	I0806 07:26:55.307818   31793 system_pods.go:61] "etcd-ha-118173-m03" [176012b3-c835-4dcc-9adc-46ce943a3570] Running
	I0806 07:26:55.307821   31793 system_pods.go:61] "kindnet-2bmzh" [8b1ecbca-a98b-473c-949b-a8a6d3ba1354] Running
	I0806 07:26:55.307824   31793 system_pods.go:61] "kindnet-n7jmb" [fa496bfb-7bf0-4ebc-adbd-36d992d8e279] Running
	I0806 07:26:55.307827   31793 system_pods.go:61] "kindnet-z9w6l" [c02fe720-31c6-4146-af1a-0ada3aa7e530] Running
	I0806 07:26:55.307830   31793 system_pods.go:61] "kube-apiserver-ha-118173" [902898f9-8ba6-4d78-a648-49d412dce202] Running
	I0806 07:26:55.307833   31793 system_pods.go:61] "kube-apiserver-ha-118173-m02" [1b9cea4a-7755-4cd8-a240-b4911dc45edf] Running
	I0806 07:26:55.307836   31793 system_pods.go:61] "kube-apiserver-ha-118173-m03" [a1795be8-57bf-4e24-8165-f66d7b4778b4] Running
	I0806 07:26:55.307840   31793 system_pods.go:61] "kube-controller-manager-ha-118173" [1af832f2-bceb-4c89-afa8-09a78b7782e1] Running
	I0806 07:26:55.307843   31793 system_pods.go:61] "kube-controller-manager-ha-118173-m02" [7be7c963-d9af-4cda-95dd-9a953b40f284] Running
	I0806 07:26:55.307846   31793 system_pods.go:61] "kube-controller-manager-ha-118173-m03" [a777af70-7701-41aa-b149-ccbe376d6464] Running
	I0806 07:26:55.307850   31793 system_pods.go:61] "kube-proxy-k8sq6" [b1ec850b-6ce0-4710-aa99-419bda3b3e93] Running
	I0806 07:26:55.307853   31793 system_pods.go:61] "kube-proxy-ksm6v" [594bc178-8319-4922-9132-69a9c6890b95] Running
	I0806 07:26:55.307856   31793 system_pods.go:61] "kube-proxy-sw4nj" [a75ef027-13c0-49c6-8580-6adb8afa497b] Running
	I0806 07:26:55.307859   31793 system_pods.go:61] "kube-scheduler-ha-118173" [6fda2f13-0df0-42ed-9d67-8dd30b33fa3d] Running
	I0806 07:26:55.307862   31793 system_pods.go:61] "kube-scheduler-ha-118173-m02" [f097af54-fb79-40f2-aaf4-6cfbb0d8c449] Running
	I0806 07:26:55.307865   31793 system_pods.go:61] "kube-scheduler-ha-118173-m03" [2360b607-29c0-4731-afc7-c76d7499e1fb] Running
	I0806 07:26:55.307867   31793 system_pods.go:61] "kube-vip-ha-118173" [594d77d2-83e6-420d-8f0a-f54e9fd704f3] Running
	I0806 07:26:55.307870   31793 system_pods.go:61] "kube-vip-ha-118173-m02" [66b58218-4fc2-4a96-81b9-31ade7dda807] Running
	I0806 07:26:55.307873   31793 system_pods.go:61] "kube-vip-ha-118173-m03" [af675cae-7ed6-4e95-a6a9-8a6b16e4c450] Running
	I0806 07:26:55.307876   31793 system_pods.go:61] "storage-provisioner" [8379466d-0241-40e4-a262-d762b3cda2eb] Running
	I0806 07:26:55.307881   31793 system_pods.go:74] duration metric: took 185.648809ms to wait for pod list to return data ...
	I0806 07:26:55.307891   31793 default_sa.go:34] waiting for default service account to be created ...
	I0806 07:26:55.494342   31793 request.go:629] Waited for 186.376671ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.164:8443/api/v1/namespaces/default/serviceaccounts
	I0806 07:26:55.494397   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/namespaces/default/serviceaccounts
	I0806 07:26:55.494402   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:55.494409   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:55.494414   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:55.497896   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:26:55.498002   31793 default_sa.go:45] found service account: "default"
	I0806 07:26:55.498017   31793 default_sa.go:55] duration metric: took 190.118149ms for default service account to be created ...
	I0806 07:26:55.498024   31793 system_pods.go:116] waiting for k8s-apps to be running ...
	I0806 07:26:55.694421   31793 request.go:629] Waited for 196.305041ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.164:8443/api/v1/namespaces/kube-system/pods
	I0806 07:26:55.694487   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/namespaces/kube-system/pods
	I0806 07:26:55.694492   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:55.694499   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:55.694506   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:55.701186   31793 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0806 07:26:55.707131   31793 system_pods.go:86] 24 kube-system pods found
	I0806 07:26:55.707159   31793 system_pods.go:89] "coredns-7db6d8ff4d-t24ww" [8effbe6c-3107-409e-9c94-dee4103d12b0] Running
	I0806 07:26:55.707165   31793 system_pods.go:89] "coredns-7db6d8ff4d-zscz4" [ef588a32-ac32-4e38-926e-6ee7a662a399] Running
	I0806 07:26:55.707169   31793 system_pods.go:89] "etcd-ha-118173" [a72e8b83-1e33-429a-8d98-98936a0e3ab8] Running
	I0806 07:26:55.707173   31793 system_pods.go:89] "etcd-ha-118173-m02" [cb090a5d-5760-48c6-abea-735b88daf0be] Running
	I0806 07:26:55.707179   31793 system_pods.go:89] "etcd-ha-118173-m03" [176012b3-c835-4dcc-9adc-46ce943a3570] Running
	I0806 07:26:55.707184   31793 system_pods.go:89] "kindnet-2bmzh" [8b1ecbca-a98b-473c-949b-a8a6d3ba1354] Running
	I0806 07:26:55.707190   31793 system_pods.go:89] "kindnet-n7jmb" [fa496bfb-7bf0-4ebc-adbd-36d992d8e279] Running
	I0806 07:26:55.707196   31793 system_pods.go:89] "kindnet-z9w6l" [c02fe720-31c6-4146-af1a-0ada3aa7e530] Running
	I0806 07:26:55.707205   31793 system_pods.go:89] "kube-apiserver-ha-118173" [902898f9-8ba6-4d78-a648-49d412dce202] Running
	I0806 07:26:55.707213   31793 system_pods.go:89] "kube-apiserver-ha-118173-m02" [1b9cea4a-7755-4cd8-a240-b4911dc45edf] Running
	I0806 07:26:55.707223   31793 system_pods.go:89] "kube-apiserver-ha-118173-m03" [a1795be8-57bf-4e24-8165-f66d7b4778b4] Running
	I0806 07:26:55.707228   31793 system_pods.go:89] "kube-controller-manager-ha-118173" [1af832f2-bceb-4c89-afa8-09a78b7782e1] Running
	I0806 07:26:55.707233   31793 system_pods.go:89] "kube-controller-manager-ha-118173-m02" [7be7c963-d9af-4cda-95dd-9a953b40f284] Running
	I0806 07:26:55.707240   31793 system_pods.go:89] "kube-controller-manager-ha-118173-m03" [a777af70-7701-41aa-b149-ccbe376d6464] Running
	I0806 07:26:55.707244   31793 system_pods.go:89] "kube-proxy-k8sq6" [b1ec850b-6ce0-4710-aa99-419bda3b3e93] Running
	I0806 07:26:55.707250   31793 system_pods.go:89] "kube-proxy-ksm6v" [594bc178-8319-4922-9132-69a9c6890b95] Running
	I0806 07:26:55.707257   31793 system_pods.go:89] "kube-proxy-sw4nj" [a75ef027-13c0-49c6-8580-6adb8afa497b] Running
	I0806 07:26:55.707261   31793 system_pods.go:89] "kube-scheduler-ha-118173" [6fda2f13-0df0-42ed-9d67-8dd30b33fa3d] Running
	I0806 07:26:55.707266   31793 system_pods.go:89] "kube-scheduler-ha-118173-m02" [f097af54-fb79-40f2-aaf4-6cfbb0d8c449] Running
	I0806 07:26:55.707270   31793 system_pods.go:89] "kube-scheduler-ha-118173-m03" [2360b607-29c0-4731-afc7-c76d7499e1fb] Running
	I0806 07:26:55.707274   31793 system_pods.go:89] "kube-vip-ha-118173" [594d77d2-83e6-420d-8f0a-f54e9fd704f3] Running
	I0806 07:26:55.707278   31793 system_pods.go:89] "kube-vip-ha-118173-m02" [66b58218-4fc2-4a96-81b9-31ade7dda807] Running
	I0806 07:26:55.707283   31793 system_pods.go:89] "kube-vip-ha-118173-m03" [af675cae-7ed6-4e95-a6a9-8a6b16e4c450] Running
	I0806 07:26:55.707292   31793 system_pods.go:89] "storage-provisioner" [8379466d-0241-40e4-a262-d762b3cda2eb] Running
	I0806 07:26:55.707303   31793 system_pods.go:126] duration metric: took 209.272042ms to wait for k8s-apps to be running ...
	I0806 07:26:55.707315   31793 system_svc.go:44] waiting for kubelet service to be running ....
	I0806 07:26:55.707363   31793 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0806 07:26:55.724292   31793 system_svc.go:56] duration metric: took 16.970607ms WaitForService to wait for kubelet
	I0806 07:26:55.724317   31793 kubeadm.go:582] duration metric: took 25.206628498s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0806 07:26:55.724341   31793 node_conditions.go:102] verifying NodePressure condition ...
	I0806 07:26:55.894025   31793 request.go:629] Waited for 169.596181ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.164:8443/api/v1/nodes
	I0806 07:26:55.894077   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes
	I0806 07:26:55.894081   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:55.894088   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:55.894095   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:55.897684   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:26:55.898994   31793 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0806 07:26:55.899018   31793 node_conditions.go:123] node cpu capacity is 2
	I0806 07:26:55.899054   31793 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0806 07:26:55.899063   31793 node_conditions.go:123] node cpu capacity is 2
	I0806 07:26:55.899070   31793 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0806 07:26:55.899074   31793 node_conditions.go:123] node cpu capacity is 2
	I0806 07:26:55.899083   31793 node_conditions.go:105] duration metric: took 174.735498ms to run NodePressure ...
	I0806 07:26:55.899098   31793 start.go:241] waiting for startup goroutines ...
	I0806 07:26:55.899124   31793 start.go:255] writing updated cluster config ...
	I0806 07:26:55.899554   31793 ssh_runner.go:195] Run: rm -f paused
	I0806 07:26:55.950770   31793 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0806 07:26:55.953632   31793 out.go:177] * Done! kubectl is now configured to use "ha-118173" cluster and "default" namespace by default
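
	(Editor's note: the sketch below is not minikube source. It is a minimal, hypothetical Go illustration of the poll-until-ready pattern recorded in the startup log above, where the apiserver's /healthz endpoint is checked repeatedly until it returns 200 "ok" or a deadline expires. The URL, timeout, and interval values are illustrative assumptions taken from this log.)

	// Hypothetical sketch: poll an apiserver /healthz endpoint until it
	// returns HTTP 200 or a timeout expires.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func waitForHealthz(url string, timeout, interval time.Duration) error {
		// Test clusters commonly use self-signed apiserver certificates,
		// so certificate verification is skipped in this sketch only.
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Printf("healthz returned 200: %s\n", body)
					return nil
				}
			}
			time.Sleep(interval)
		}
		return fmt.Errorf("healthz at %s not ready within %s", url, timeout)
	}

	func main() {
		// Illustrative values; 192.168.39.164:8443 is the apiserver address seen in this log.
		if err := waitForHealthz("https://192.168.39.164:8443/healthz", 6*time.Minute, 200*time.Millisecond); err != nil {
			fmt.Println(err)
		}
	}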
	
	
	==> CRI-O <==
	Aug 06 07:30:38 ha-118173 crio[676]: time="2024-08-06 07:30:38.123654619Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722929438123520069,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154770,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bffea025-f56b-4c66-8dfd-8ce5f40763de name=/runtime.v1.ImageService/ImageFsInfo
	Aug 06 07:30:38 ha-118173 crio[676]: time="2024-08-06 07:30:38.124746454Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=22258b6c-0e85-47e6-8397-205d22013d05 name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 07:30:38 ha-118173 crio[676]: time="2024-08-06 07:30:38.124949181Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=22258b6c-0e85-47e6-8397-205d22013d05 name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 07:30:38 ha-118173 crio[676]: time="2024-08-06 07:30:38.125288718Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:10437bdde23713cae1b222edffa34be0e9583f43a83c01f10aae555f0dd7542e,PodSandboxId:3f725d083d59ecb8b65c8b09466e36c8b01dd5742fa8d97e108de4a49f47de04,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722929221668199667,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-rml5d,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4a82c0ca-1596-4dfe-a2f9-3a34f22480e5,},Annotations:map[string]string{io.kubernetes.container.hash: 950719af,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c60313490ee322d54c5b3908e628bff77d0ca10249f7253eb9cbd44745a805af,PodSandboxId:b221107c8052d9be54b23c0ee2e882f4adc57ade5813f27e7dfc4e4f71006663,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722929010372163680,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zscz4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef588a32-ac32-4e38-926e-6ee7a662a399,},Annotations:map[string]string{io.kubernetes.container.hash: 38e96eed,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:321504a9bbbfc57376fccab559a6ebc7388670dc0bf123b5f9fd41b1da99adaa,PodSandboxId:4a9fe2c539265caf233fd5f3778ebc3e6092ce121ddf647e3f72622e31022e53,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722929010343273638,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-t24ww,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
8effbe6c-3107-409e-9c94-dee4103d12b0,},Annotations:map[string]string{io.kubernetes.container.hash: 7604f64,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44b9ff2f36bf90e89da591e552ad006221f4c5a515009116c79504170f19368e,PodSandboxId:cabb1138777e527cb2a559ae6e3b154f442e8d3b8d44cc59ed6a4b9a49ce65ea,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING
,CreatedAt:1722929010265192394,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8379466d-0241-40e4-a262-d762b3cda2eb,},Annotations:map[string]string{io.kubernetes.container.hash: d079fdad,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8f5ae0da505fc7f79225ba7115c83b9e909128bc28df767fce1fc3ba7a01392,PodSandboxId:5bbdb233d391e1127588924ac263113c9aa0148b24fc7113df3ca0891d7e0fdc,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CON
TAINER_RUNNING,CreatedAt:1722928998315681833,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-n7jmb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa496bfb-7bf0-4ebc-adbd-36d992d8e279,},Annotations:map[string]string{io.kubernetes.container.hash: db2b7a8c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5a5bdf0b2a62163f6aff6abe864eedc47b461e233989f23ef7770c226763325,PodSandboxId:2ed57fdbf65f1846c200a67b50a1043fe8c70dd79c90d1a1c67e5e0634c8088b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722928994
187870823,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ksm6v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 594bc178-8319-4922-9132-69a9c6890b95,},Annotations:map[string]string{io.kubernetes.container.hash: 50a78a81,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ea977593bf7ff53f7419e821deaa1cf5813b4c116730030cd8d417d078be7fa,PodSandboxId:eb00561ba0cb5a9d0f8cd015251b78530e734c81712da3c4aeb67fbd1dc5a274,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172292897749
4204189,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-118173,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cabf9e912268f41f81e7fabcaf2337ce,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a94703035c76b778779a89b7fc9176c7a956ebec52628aa1720d26bf2a1c31f5,PodSandboxId:33f4dab8a223724c5140e304cb2bb6b2c32716d0818935a0a1f10a3e73c27f55,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722928974564889139,Labels:map[strin
g]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-118173,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6346700b562b8d3f4c18c66b3813bc28,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27f4fdf6fa77a51e1295c12b459cda687ed6ae68875218e24ba7959fb37814ed,PodSandboxId:5ebdb09ee65d6de5671cacfdefaaebdf7b5b00f25aa61461d39015ab5cb5578b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722928974548891631,Labels:map[string]s
tring{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-118173,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 080589493a4905d0fb10f52a1a0e25a5,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ffee4f9f096a33fc08f1f6b266ea17ad2181a58a153f7fb1703ded978b23a46,PodSandboxId:e8d8f5e72e2b25ee44de736fd829832c7731bada1325d1781559721e92200f54,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722928974494900073,Labels:map[string]string{io.kubernetes.c
ontainer.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-118173,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f74055e61dcf5f2385b1934a1a8e3b73,},Annotations:map[string]string{io.kubernetes.container.hash: 27fe2ed3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:306b97b37955570f0a18d38ff1afab7bb09e01c47a3418dba2aeed0f00d6052d,PodSandboxId:5055641759bdb7b0e57568f6d5310c72587ea715571f3fff96ff5d200be59dc2,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722928974442532867,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernet
es.pod.name: etcd-ha-118173,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65b7cf59bb7c42e61e37dcbebce4eec4,},Annotations:map[string]string{io.kubernetes.container.hash: fa683f08,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=22258b6c-0e85-47e6-8397-205d22013d05 name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 07:30:38 ha-118173 crio[676]: time="2024-08-06 07:30:38.167345660Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ef73bdb7-4076-440f-bb25-b6425880831e name=/runtime.v1.RuntimeService/Version
	Aug 06 07:30:38 ha-118173 crio[676]: time="2024-08-06 07:30:38.167724327Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ef73bdb7-4076-440f-bb25-b6425880831e name=/runtime.v1.RuntimeService/Version
	Aug 06 07:30:38 ha-118173 crio[676]: time="2024-08-06 07:30:38.168786083Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7c05949c-1328-455d-87d7-2f234bc99b33 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 06 07:30:38 ha-118173 crio[676]: time="2024-08-06 07:30:38.169217246Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722929438169195581,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154770,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7c05949c-1328-455d-87d7-2f234bc99b33 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 06 07:30:38 ha-118173 crio[676]: time="2024-08-06 07:30:38.169821713Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bce039f5-a843-42cf-b950-560a67486aa2 name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 07:30:38 ha-118173 crio[676]: time="2024-08-06 07:30:38.169879476Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bce039f5-a843-42cf-b950-560a67486aa2 name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 07:30:38 ha-118173 crio[676]: time="2024-08-06 07:30:38.170122055Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:10437bdde23713cae1b222edffa34be0e9583f43a83c01f10aae555f0dd7542e,PodSandboxId:3f725d083d59ecb8b65c8b09466e36c8b01dd5742fa8d97e108de4a49f47de04,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722929221668199667,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-rml5d,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4a82c0ca-1596-4dfe-a2f9-3a34f22480e5,},Annotations:map[string]string{io.kubernetes.container.hash: 950719af,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c60313490ee322d54c5b3908e628bff77d0ca10249f7253eb9cbd44745a805af,PodSandboxId:b221107c8052d9be54b23c0ee2e882f4adc57ade5813f27e7dfc4e4f71006663,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722929010372163680,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zscz4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef588a32-ac32-4e38-926e-6ee7a662a399,},Annotations:map[string]string{io.kubernetes.container.hash: 38e96eed,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:321504a9bbbfc57376fccab559a6ebc7388670dc0bf123b5f9fd41b1da99adaa,PodSandboxId:4a9fe2c539265caf233fd5f3778ebc3e6092ce121ddf647e3f72622e31022e53,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722929010343273638,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-t24ww,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
8effbe6c-3107-409e-9c94-dee4103d12b0,},Annotations:map[string]string{io.kubernetes.container.hash: 7604f64,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44b9ff2f36bf90e89da591e552ad006221f4c5a515009116c79504170f19368e,PodSandboxId:cabb1138777e527cb2a559ae6e3b154f442e8d3b8d44cc59ed6a4b9a49ce65ea,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING
,CreatedAt:1722929010265192394,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8379466d-0241-40e4-a262-d762b3cda2eb,},Annotations:map[string]string{io.kubernetes.container.hash: d079fdad,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8f5ae0da505fc7f79225ba7115c83b9e909128bc28df767fce1fc3ba7a01392,PodSandboxId:5bbdb233d391e1127588924ac263113c9aa0148b24fc7113df3ca0891d7e0fdc,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CON
TAINER_RUNNING,CreatedAt:1722928998315681833,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-n7jmb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa496bfb-7bf0-4ebc-adbd-36d992d8e279,},Annotations:map[string]string{io.kubernetes.container.hash: db2b7a8c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5a5bdf0b2a62163f6aff6abe864eedc47b461e233989f23ef7770c226763325,PodSandboxId:2ed57fdbf65f1846c200a67b50a1043fe8c70dd79c90d1a1c67e5e0634c8088b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722928994
187870823,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ksm6v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 594bc178-8319-4922-9132-69a9c6890b95,},Annotations:map[string]string{io.kubernetes.container.hash: 50a78a81,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ea977593bf7ff53f7419e821deaa1cf5813b4c116730030cd8d417d078be7fa,PodSandboxId:eb00561ba0cb5a9d0f8cd015251b78530e734c81712da3c4aeb67fbd1dc5a274,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172292897749
4204189,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-118173,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cabf9e912268f41f81e7fabcaf2337ce,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a94703035c76b778779a89b7fc9176c7a956ebec52628aa1720d26bf2a1c31f5,PodSandboxId:33f4dab8a223724c5140e304cb2bb6b2c32716d0818935a0a1f10a3e73c27f55,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722928974564889139,Labels:map[strin
g]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-118173,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6346700b562b8d3f4c18c66b3813bc28,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27f4fdf6fa77a51e1295c12b459cda687ed6ae68875218e24ba7959fb37814ed,PodSandboxId:5ebdb09ee65d6de5671cacfdefaaebdf7b5b00f25aa61461d39015ab5cb5578b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722928974548891631,Labels:map[string]s
tring{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-118173,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 080589493a4905d0fb10f52a1a0e25a5,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ffee4f9f096a33fc08f1f6b266ea17ad2181a58a153f7fb1703ded978b23a46,PodSandboxId:e8d8f5e72e2b25ee44de736fd829832c7731bada1325d1781559721e92200f54,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722928974494900073,Labels:map[string]string{io.kubernetes.c
ontainer.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-118173,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f74055e61dcf5f2385b1934a1a8e3b73,},Annotations:map[string]string{io.kubernetes.container.hash: 27fe2ed3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:306b97b37955570f0a18d38ff1afab7bb09e01c47a3418dba2aeed0f00d6052d,PodSandboxId:5055641759bdb7b0e57568f6d5310c72587ea715571f3fff96ff5d200be59dc2,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722928974442532867,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernet
es.pod.name: etcd-ha-118173,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65b7cf59bb7c42e61e37dcbebce4eec4,},Annotations:map[string]string{io.kubernetes.container.hash: fa683f08,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bce039f5-a843-42cf-b950-560a67486aa2 name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 07:30:38 ha-118173 crio[676]: time="2024-08-06 07:30:38.213384807Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=09108270-d7cb-4670-8bf8-470ff05142d0 name=/runtime.v1.RuntimeService/Version
	Aug 06 07:30:38 ha-118173 crio[676]: time="2024-08-06 07:30:38.213459236Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=09108270-d7cb-4670-8bf8-470ff05142d0 name=/runtime.v1.RuntimeService/Version
	Aug 06 07:30:38 ha-118173 crio[676]: time="2024-08-06 07:30:38.215099000Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6c5c6a58-d73a-4a5c-995d-1c72d6eda8df name=/runtime.v1.ImageService/ImageFsInfo
	Aug 06 07:30:38 ha-118173 crio[676]: time="2024-08-06 07:30:38.215602054Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722929438215530586,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154770,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6c5c6a58-d73a-4a5c-995d-1c72d6eda8df name=/runtime.v1.ImageService/ImageFsInfo
	Aug 06 07:30:38 ha-118173 crio[676]: time="2024-08-06 07:30:38.216075633Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3d5cae25-e151-47c0-ad50-601272d4c07e name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 07:30:38 ha-118173 crio[676]: time="2024-08-06 07:30:38.216154320Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3d5cae25-e151-47c0-ad50-601272d4c07e name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 07:30:38 ha-118173 crio[676]: time="2024-08-06 07:30:38.216631469Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:10437bdde23713cae1b222edffa34be0e9583f43a83c01f10aae555f0dd7542e,PodSandboxId:3f725d083d59ecb8b65c8b09466e36c8b01dd5742fa8d97e108de4a49f47de04,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722929221668199667,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-rml5d,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4a82c0ca-1596-4dfe-a2f9-3a34f22480e5,},Annotations:map[string]string{io.kubernetes.container.hash: 950719af,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c60313490ee322d54c5b3908e628bff77d0ca10249f7253eb9cbd44745a805af,PodSandboxId:b221107c8052d9be54b23c0ee2e882f4adc57ade5813f27e7dfc4e4f71006663,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722929010372163680,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zscz4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef588a32-ac32-4e38-926e-6ee7a662a399,},Annotations:map[string]string{io.kubernetes.container.hash: 38e96eed,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:321504a9bbbfc57376fccab559a6ebc7388670dc0bf123b5f9fd41b1da99adaa,PodSandboxId:4a9fe2c539265caf233fd5f3778ebc3e6092ce121ddf647e3f72622e31022e53,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722929010343273638,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-t24ww,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
8effbe6c-3107-409e-9c94-dee4103d12b0,},Annotations:map[string]string{io.kubernetes.container.hash: 7604f64,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44b9ff2f36bf90e89da591e552ad006221f4c5a515009116c79504170f19368e,PodSandboxId:cabb1138777e527cb2a559ae6e3b154f442e8d3b8d44cc59ed6a4b9a49ce65ea,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING
,CreatedAt:1722929010265192394,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8379466d-0241-40e4-a262-d762b3cda2eb,},Annotations:map[string]string{io.kubernetes.container.hash: d079fdad,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8f5ae0da505fc7f79225ba7115c83b9e909128bc28df767fce1fc3ba7a01392,PodSandboxId:5bbdb233d391e1127588924ac263113c9aa0148b24fc7113df3ca0891d7e0fdc,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CON
TAINER_RUNNING,CreatedAt:1722928998315681833,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-n7jmb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa496bfb-7bf0-4ebc-adbd-36d992d8e279,},Annotations:map[string]string{io.kubernetes.container.hash: db2b7a8c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5a5bdf0b2a62163f6aff6abe864eedc47b461e233989f23ef7770c226763325,PodSandboxId:2ed57fdbf65f1846c200a67b50a1043fe8c70dd79c90d1a1c67e5e0634c8088b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722928994
187870823,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ksm6v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 594bc178-8319-4922-9132-69a9c6890b95,},Annotations:map[string]string{io.kubernetes.container.hash: 50a78a81,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ea977593bf7ff53f7419e821deaa1cf5813b4c116730030cd8d417d078be7fa,PodSandboxId:eb00561ba0cb5a9d0f8cd015251b78530e734c81712da3c4aeb67fbd1dc5a274,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172292897749
4204189,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-118173,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cabf9e912268f41f81e7fabcaf2337ce,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a94703035c76b778779a89b7fc9176c7a956ebec52628aa1720d26bf2a1c31f5,PodSandboxId:33f4dab8a223724c5140e304cb2bb6b2c32716d0818935a0a1f10a3e73c27f55,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722928974564889139,Labels:map[strin
g]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-118173,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6346700b562b8d3f4c18c66b3813bc28,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27f4fdf6fa77a51e1295c12b459cda687ed6ae68875218e24ba7959fb37814ed,PodSandboxId:5ebdb09ee65d6de5671cacfdefaaebdf7b5b00f25aa61461d39015ab5cb5578b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722928974548891631,Labels:map[string]s
tring{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-118173,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 080589493a4905d0fb10f52a1a0e25a5,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ffee4f9f096a33fc08f1f6b266ea17ad2181a58a153f7fb1703ded978b23a46,PodSandboxId:e8d8f5e72e2b25ee44de736fd829832c7731bada1325d1781559721e92200f54,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722928974494900073,Labels:map[string]string{io.kubernetes.c
ontainer.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-118173,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f74055e61dcf5f2385b1934a1a8e3b73,},Annotations:map[string]string{io.kubernetes.container.hash: 27fe2ed3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:306b97b37955570f0a18d38ff1afab7bb09e01c47a3418dba2aeed0f00d6052d,PodSandboxId:5055641759bdb7b0e57568f6d5310c72587ea715571f3fff96ff5d200be59dc2,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722928974442532867,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernet
es.pod.name: etcd-ha-118173,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65b7cf59bb7c42e61e37dcbebce4eec4,},Annotations:map[string]string{io.kubernetes.container.hash: fa683f08,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3d5cae25-e151-47c0-ad50-601272d4c07e name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 07:30:38 ha-118173 crio[676]: time="2024-08-06 07:30:38.255938600Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=277446d8-e00a-4c35-9fff-c76f02927df9 name=/runtime.v1.RuntimeService/Version
	Aug 06 07:30:38 ha-118173 crio[676]: time="2024-08-06 07:30:38.256031215Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=277446d8-e00a-4c35-9fff-c76f02927df9 name=/runtime.v1.RuntimeService/Version
	Aug 06 07:30:38 ha-118173 crio[676]: time="2024-08-06 07:30:38.257265907Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=03931e52-2d3c-41c7-8068-6953d6201e27 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 06 07:30:38 ha-118173 crio[676]: time="2024-08-06 07:30:38.257757331Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722929438257734776,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154770,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=03931e52-2d3c-41c7-8068-6953d6201e27 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 06 07:30:38 ha-118173 crio[676]: time="2024-08-06 07:30:38.258303588Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=97dfaf45-ede2-413e-86ab-c4318504059b name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 07:30:38 ha-118173 crio[676]: time="2024-08-06 07:30:38.258406280Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=97dfaf45-ede2-413e-86ab-c4318504059b name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 07:30:38 ha-118173 crio[676]: time="2024-08-06 07:30:38.258857919Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:10437bdde23713cae1b222edffa34be0e9583f43a83c01f10aae555f0dd7542e,PodSandboxId:3f725d083d59ecb8b65c8b09466e36c8b01dd5742fa8d97e108de4a49f47de04,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722929221668199667,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-rml5d,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4a82c0ca-1596-4dfe-a2f9-3a34f22480e5,},Annotations:map[string]string{io.kubernetes.container.hash: 950719af,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c60313490ee322d54c5b3908e628bff77d0ca10249f7253eb9cbd44745a805af,PodSandboxId:b221107c8052d9be54b23c0ee2e882f4adc57ade5813f27e7dfc4e4f71006663,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722929010372163680,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zscz4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef588a32-ac32-4e38-926e-6ee7a662a399,},Annotations:map[string]string{io.kubernetes.container.hash: 38e96eed,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:321504a9bbbfc57376fccab559a6ebc7388670dc0bf123b5f9fd41b1da99adaa,PodSandboxId:4a9fe2c539265caf233fd5f3778ebc3e6092ce121ddf647e3f72622e31022e53,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722929010343273638,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-t24ww,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
8effbe6c-3107-409e-9c94-dee4103d12b0,},Annotations:map[string]string{io.kubernetes.container.hash: 7604f64,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44b9ff2f36bf90e89da591e552ad006221f4c5a515009116c79504170f19368e,PodSandboxId:cabb1138777e527cb2a559ae6e3b154f442e8d3b8d44cc59ed6a4b9a49ce65ea,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING
,CreatedAt:1722929010265192394,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8379466d-0241-40e4-a262-d762b3cda2eb,},Annotations:map[string]string{io.kubernetes.container.hash: d079fdad,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8f5ae0da505fc7f79225ba7115c83b9e909128bc28df767fce1fc3ba7a01392,PodSandboxId:5bbdb233d391e1127588924ac263113c9aa0148b24fc7113df3ca0891d7e0fdc,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CON
TAINER_RUNNING,CreatedAt:1722928998315681833,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-n7jmb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa496bfb-7bf0-4ebc-adbd-36d992d8e279,},Annotations:map[string]string{io.kubernetes.container.hash: db2b7a8c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5a5bdf0b2a62163f6aff6abe864eedc47b461e233989f23ef7770c226763325,PodSandboxId:2ed57fdbf65f1846c200a67b50a1043fe8c70dd79c90d1a1c67e5e0634c8088b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722928994
187870823,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ksm6v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 594bc178-8319-4922-9132-69a9c6890b95,},Annotations:map[string]string{io.kubernetes.container.hash: 50a78a81,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ea977593bf7ff53f7419e821deaa1cf5813b4c116730030cd8d417d078be7fa,PodSandboxId:eb00561ba0cb5a9d0f8cd015251b78530e734c81712da3c4aeb67fbd1dc5a274,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172292897749
4204189,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-118173,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cabf9e912268f41f81e7fabcaf2337ce,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a94703035c76b778779a89b7fc9176c7a956ebec52628aa1720d26bf2a1c31f5,PodSandboxId:33f4dab8a223724c5140e304cb2bb6b2c32716d0818935a0a1f10a3e73c27f55,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722928974564889139,Labels:map[strin
g]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-118173,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6346700b562b8d3f4c18c66b3813bc28,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27f4fdf6fa77a51e1295c12b459cda687ed6ae68875218e24ba7959fb37814ed,PodSandboxId:5ebdb09ee65d6de5671cacfdefaaebdf7b5b00f25aa61461d39015ab5cb5578b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722928974548891631,Labels:map[string]s
tring{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-118173,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 080589493a4905d0fb10f52a1a0e25a5,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ffee4f9f096a33fc08f1f6b266ea17ad2181a58a153f7fb1703ded978b23a46,PodSandboxId:e8d8f5e72e2b25ee44de736fd829832c7731bada1325d1781559721e92200f54,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722928974494900073,Labels:map[string]string{io.kubernetes.c
ontainer.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-118173,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f74055e61dcf5f2385b1934a1a8e3b73,},Annotations:map[string]string{io.kubernetes.container.hash: 27fe2ed3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:306b97b37955570f0a18d38ff1afab7bb09e01c47a3418dba2aeed0f00d6052d,PodSandboxId:5055641759bdb7b0e57568f6d5310c72587ea715571f3fff96ff5d200be59dc2,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722928974442532867,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernet
es.pod.name: etcd-ha-118173,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65b7cf59bb7c42e61e37dcbebce4eec4,},Annotations:map[string]string{io.kubernetes.container.hash: fa683f08,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=97dfaf45-ede2-413e-86ab-c4318504059b name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	10437bdde2371       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   3f725d083d59e       busybox-fc5497c4f-rml5d
	c60313490ee32       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      7 minutes ago       Running             coredns                   0                   b221107c8052d       coredns-7db6d8ff4d-zscz4
	321504a9bbbfc       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      7 minutes ago       Running             coredns                   0                   4a9fe2c539265       coredns-7db6d8ff4d-t24ww
	44b9ff2f36bf9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      7 minutes ago       Running             storage-provisioner       0                   cabb1138777e5       storage-provisioner
	b8f5ae0da505f       docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3    7 minutes ago       Running             kindnet-cni               0                   5bbdb233d391e       kindnet-n7jmb
	e5a5bdf0b2a62       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      7 minutes ago       Running             kube-proxy                0                   2ed57fdbf65f1       kube-proxy-ksm6v
	9ea977593bf7f       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     7 minutes ago       Running             kube-vip                  0                   eb00561ba0cb5       kube-vip-ha-118173
	a94703035c76b       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      7 minutes ago       Running             kube-controller-manager   0                   33f4dab8a2237       kube-controller-manager-ha-118173
	27f4fdf6fa77a       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      7 minutes ago       Running             kube-scheduler            0                   5ebdb09ee65d6       kube-scheduler-ha-118173
	1ffee4f9f096a       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      7 minutes ago       Running             kube-apiserver            0                   e8d8f5e72e2b2       kube-apiserver-ha-118173
	306b97b379555       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      7 minutes ago       Running             etcd                      0                   5055641759bdb       etcd-ha-118173
	
	
	==> coredns [321504a9bbbfc57376fccab559a6ebc7388670dc0bf123b5f9fd41b1da99adaa] <==
	[INFO] 127.0.0.1:52647 - 6927 "HINFO IN 6474732985926963853.8121263455686486253. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010761994s
	[INFO] 10.244.2.2:37657 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000745056s
	[INFO] 10.244.2.2:34304 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.01427884s
	[INFO] 10.244.1.2:50320 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.000621303s
	[INFO] 10.244.1.2:50307 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001920609s
	[INFO] 10.244.0.4:35382 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.001483476s
	[INFO] 10.244.2.2:42914 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003426884s
	[INFO] 10.244.2.2:54544 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002889491s
	[INFO] 10.244.2.2:56241 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000153019s
	[INFO] 10.244.1.2:42127 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000287649s
	[INFO] 10.244.1.2:57726 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001778651s
	[INFO] 10.244.1.2:48193 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000186214s
	[INFO] 10.244.1.2:39745 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001882s
	[INFO] 10.244.0.4:43115 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00007628s
	[INFO] 10.244.0.4:43987 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001861604s
	[INFO] 10.244.0.4:36070 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000175723s
	[INFO] 10.244.0.4:57521 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000070761s
	[INFO] 10.244.2.2:54569 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000133483s
	[INFO] 10.244.2.2:45084 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000078772s
	[INFO] 10.244.0.4:53171 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000115977s
	[INFO] 10.244.0.4:46499 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000151186s
	[INFO] 10.244.1.2:46474 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000192738s
	[INFO] 10.244.1.2:42611 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000163773s
	[INFO] 10.244.0.4:56252 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000104237s
	[INFO] 10.244.0.4:53725 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000119201s
	
	
	==> coredns [c60313490ee322d54c5b3908e628bff77d0ca10249f7253eb9cbd44745a805af] <==
	[INFO] 10.244.2.2:39039 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000306066s
	[INFO] 10.244.1.2:52358 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000125862s
	[INFO] 10.244.1.2:36231 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001558919s
	[INFO] 10.244.1.2:48070 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000161985s
	[INFO] 10.244.1.2:52908 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000230784s
	[INFO] 10.244.0.4:40510 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000107856s
	[INFO] 10.244.0.4:53202 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001964559s
	[INFO] 10.244.0.4:37382 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000120987s
	[INFO] 10.244.0.4:55432 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000068014s
	[INFO] 10.244.2.2:55791 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000121334s
	[INFO] 10.244.2.2:41409 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000170119s
	[INFO] 10.244.1.2:38348 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000159643s
	[INFO] 10.244.1.2:59261 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000120304s
	[INFO] 10.244.1.2:34766 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001388s
	[INFO] 10.244.1.2:51650 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00006659s
	[INFO] 10.244.0.4:60377 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000100376s
	[INFO] 10.244.0.4:36616 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000046708s
	[INFO] 10.244.2.2:40205 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000126264s
	[INFO] 10.244.2.2:43822 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000154718s
	[INFO] 10.244.2.2:55803 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000127422s
	[INFO] 10.244.2.2:36078 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000107543s
	[INFO] 10.244.1.2:57979 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000156818s
	[INFO] 10.244.1.2:46537 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000172879s
	[INFO] 10.244.0.4:53503 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000186753s
	[INFO] 10.244.0.4:59777 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000063489s
	
	
	==> describe nodes <==
	Name:               ha-118173
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-118173
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e92cb06692f5ea1ba801d10d148e5e92e807f9c8
	                    minikube.k8s.io/name=ha-118173
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_06T07_23_01_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 06 Aug 2024 07:22:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-118173
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 06 Aug 2024 07:30:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 06 Aug 2024 07:27:36 +0000   Tue, 06 Aug 2024 07:22:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 06 Aug 2024 07:27:36 +0000   Tue, 06 Aug 2024 07:22:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 06 Aug 2024 07:27:36 +0000   Tue, 06 Aug 2024 07:22:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 06 Aug 2024 07:27:36 +0000   Tue, 06 Aug 2024 07:23:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.164
	  Hostname:    ha-118173
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 bc4f3dc5ed054dcb94a91cb696ade310
	  System UUID:                bc4f3dc5-ed05-4dcb-94a9-1cb696ade310
	  Boot ID:                    34e88bd3-feb2-44fb-9f4d-def7e685eb93
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-rml5d              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m42s
	  kube-system                 coredns-7db6d8ff4d-t24ww             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m25s
	  kube-system                 coredns-7db6d8ff4d-zscz4             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m25s
	  kube-system                 etcd-ha-118173                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         7m39s
	  kube-system                 kindnet-n7jmb                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m25s
	  kube-system                 kube-apiserver-ha-118173             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m38s
	  kube-system                 kube-controller-manager-ha-118173    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m38s
	  kube-system                 kube-proxy-ksm6v                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m25s
	  kube-system                 kube-scheduler-ha-118173             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m40s
	  kube-system                 kube-vip-ha-118173                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m40s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 7m23s  kube-proxy       
	  Normal  NodeHasSufficientMemory  7m45s  kubelet          Node ha-118173 status is now: NodeHasSufficientMemory
	  Normal  Starting                 7m38s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m38s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m38s  kubelet          Node ha-118173 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m38s  kubelet          Node ha-118173 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m38s  kubelet          Node ha-118173 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           7m26s  node-controller  Node ha-118173 event: Registered Node ha-118173 in Controller
	  Normal  NodeReady                7m9s   kubelet          Node ha-118173 status is now: NodeReady
	  Normal  RegisteredNode           5m7s   node-controller  Node ha-118173 event: Registered Node ha-118173 in Controller
	  Normal  RegisteredNode           3m54s  node-controller  Node ha-118173 event: Registered Node ha-118173 in Controller
	
	
	Name:               ha-118173-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-118173-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e92cb06692f5ea1ba801d10d148e5e92e807f9c8
	                    minikube.k8s.io/name=ha-118173
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_06T07_25_16_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 06 Aug 2024 07:25:13 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-118173-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 06 Aug 2024 07:28:16 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Tue, 06 Aug 2024 07:27:15 +0000   Tue, 06 Aug 2024 07:28:57 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Tue, 06 Aug 2024 07:27:15 +0000   Tue, 06 Aug 2024 07:28:57 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Tue, 06 Aug 2024 07:27:15 +0000   Tue, 06 Aug 2024 07:28:57 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Tue, 06 Aug 2024 07:27:15 +0000   Tue, 06 Aug 2024 07:28:57 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.203
	  Hostname:    ha-118173-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 1d472203b4d8448095b6313de6675a72
	  System UUID:                1d472203-b4d8-4480-95b6-313de6675a72
	  Boot ID:                    6ab46e7a-6f1a-4acf-9384-ca4c93b30dfc
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-j9d92                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m42s
	  kube-system                 etcd-ha-118173-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m23s
	  kube-system                 kindnet-z9w6l                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m25s
	  kube-system                 kube-apiserver-ha-118173-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m24s
	  kube-system                 kube-controller-manager-ha-118173-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m24s
	  kube-system                 kube-proxy-k8sq6                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m25s
	  kube-system                 kube-scheduler-ha-118173-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m23s
	  kube-system                 kube-vip-ha-118173-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m20s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m21s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m25s (x8 over 5m26s)  kubelet          Node ha-118173-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m25s (x8 over 5m26s)  kubelet          Node ha-118173-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m25s (x7 over 5m26s)  kubelet          Node ha-118173-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m25s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m21s                  node-controller  Node ha-118173-m02 event: Registered Node ha-118173-m02 in Controller
	  Normal  RegisteredNode           5m7s                   node-controller  Node ha-118173-m02 event: Registered Node ha-118173-m02 in Controller
	  Normal  RegisteredNode           3m54s                  node-controller  Node ha-118173-m02 event: Registered Node ha-118173-m02 in Controller
	  Normal  NodeNotReady             101s                   node-controller  Node ha-118173-m02 status is now: NodeNotReady
	
	
	Name:               ha-118173-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-118173-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e92cb06692f5ea1ba801d10d148e5e92e807f9c8
	                    minikube.k8s.io/name=ha-118173
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_06T07_26_30_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 06 Aug 2024 07:26:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-118173-m03
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 06 Aug 2024 07:30:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 06 Aug 2024 07:27:28 +0000   Tue, 06 Aug 2024 07:26:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 06 Aug 2024 07:27:28 +0000   Tue, 06 Aug 2024 07:26:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 06 Aug 2024 07:27:28 +0000   Tue, 06 Aug 2024 07:26:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 06 Aug 2024 07:27:28 +0000   Tue, 06 Aug 2024 07:26:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.5
	  Hostname:    ha-118173-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 323fad8ca5474069b5e10e092f6ed68a
	  System UUID:                323fad8c-a547-4069-b5e1-0e092f6ed68a
	  Boot ID:                    c712c8de-a26d-41b9-8050-14da15aeb29b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-852mp                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m41s
	  kube-system                 etcd-ha-118173-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m9s
	  kube-system                 kindnet-2bmzh                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m11s
	  kube-system                 kube-apiserver-ha-118173-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m10s
	  kube-system                 kube-controller-manager-ha-118173-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m10s
	  kube-system                 kube-proxy-sw4nj                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m11s
	  kube-system                 kube-scheduler-ha-118173-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m9s
	  kube-system                 kube-vip-ha-118173-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m6s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m7s                   kube-proxy       
	  Normal  RegisteredNode           4m11s                  node-controller  Node ha-118173-m03 event: Registered Node ha-118173-m03 in Controller
	  Normal  NodeHasSufficientMemory  4m11s (x8 over 4m11s)  kubelet          Node ha-118173-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m11s (x8 over 4m11s)  kubelet          Node ha-118173-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m11s (x7 over 4m11s)  kubelet          Node ha-118173-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m11s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m7s                   node-controller  Node ha-118173-m03 event: Registered Node ha-118173-m03 in Controller
	  Normal  RegisteredNode           3m54s                  node-controller  Node ha-118173-m03 event: Registered Node ha-118173-m03 in Controller
	
	
	Name:               ha-118173-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-118173-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e92cb06692f5ea1ba801d10d148e5e92e807f9c8
	                    minikube.k8s.io/name=ha-118173
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_06T07_27_39_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 06 Aug 2024 07:27:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-118173-m04
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 06 Aug 2024 07:30:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 06 Aug 2024 07:28:09 +0000   Tue, 06 Aug 2024 07:27:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 06 Aug 2024 07:28:09 +0000   Tue, 06 Aug 2024 07:27:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 06 Aug 2024 07:28:09 +0000   Tue, 06 Aug 2024 07:27:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 06 Aug 2024 07:28:09 +0000   Tue, 06 Aug 2024 07:27:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.107
	  Hostname:    ha-118173-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a0510d6c5dda47be93ffda6ba4fd5419
	  System UUID:                a0510d6c-5dda-47be-93ff-da6ba4fd5419
	  Boot ID:                    8e2de43d-b0eb-44c3-8d61-ce7909ecbc3f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-97v8n       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m
	  kube-system                 kube-proxy-q4x6p    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age              From             Message
	  ----    ------                   ----             ----             -------
	  Normal  Starting                 2m55s            kube-proxy       
	  Normal  NodeHasSufficientMemory  3m (x2 over 3m)  kubelet          Node ha-118173-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m (x2 over 3m)  kubelet          Node ha-118173-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m (x2 over 3m)  kubelet          Node ha-118173-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m               kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m59s            node-controller  Node ha-118173-m04 event: Registered Node ha-118173-m04 in Controller
	  Normal  RegisteredNode           2m57s            node-controller  Node ha-118173-m04 event: Registered Node ha-118173-m04 in Controller
	  Normal  RegisteredNode           2m56s            node-controller  Node ha-118173-m04 event: Registered Node ha-118173-m04 in Controller
	  Normal  NodeReady                2m40s            kubelet          Node ha-118173-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Aug 6 07:22] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050778] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040080] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.786324] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.540494] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.635705] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.212519] systemd-fstab-generator[593]: Ignoring "noauto" option for root device
	[  +0.057280] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.055993] systemd-fstab-generator[605]: Ignoring "noauto" option for root device
	[  +0.205541] systemd-fstab-generator[619]: Ignoring "noauto" option for root device
	[  +0.119779] systemd-fstab-generator[631]: Ignoring "noauto" option for root device
	[  +0.271137] systemd-fstab-generator[661]: Ignoring "noauto" option for root device
	[  +4.288191] systemd-fstab-generator[759]: Ignoring "noauto" option for root device
	[  +0.057342] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.007696] systemd-fstab-generator[935]: Ignoring "noauto" option for root device
	[  +1.004774] kauditd_printk_skb: 62 callbacks suppressed
	[  +5.981141] systemd-fstab-generator[1356]: Ignoring "noauto" option for root device
	[  +0.087170] kauditd_printk_skb: 35 callbacks suppressed
	[Aug 6 07:23] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.080054] kauditd_printk_skb: 35 callbacks suppressed
	[Aug 6 07:25] kauditd_printk_skb: 24 callbacks suppressed
	
	
	==> etcd [306b97b37955570f0a18d38ff1afab7bb09e01c47a3418dba2aeed0f00d6052d] <==
	{"level":"warn","ts":"2024-08-06T07:30:38.370945Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"9fed855c7863c2a","rtt":"11.078005ms","error":"dial tcp 192.168.39.203:2380: i/o timeout"}
	{"level":"warn","ts":"2024-08-06T07:30:38.371081Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"9fed855c7863c2a","rtt":"1.131562ms","error":"dial tcp 192.168.39.203:2380: i/o timeout"}
	{"level":"warn","ts":"2024-08-06T07:30:38.547411Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6a00153c0a3e6122","from":"6a00153c0a3e6122","remote-peer-id":"9fed855c7863c2a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-06T07:30:38.551256Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6a00153c0a3e6122","from":"6a00153c0a3e6122","remote-peer-id":"9fed855c7863c2a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-06T07:30:38.555929Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6a00153c0a3e6122","from":"6a00153c0a3e6122","remote-peer-id":"9fed855c7863c2a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-06T07:30:38.566462Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6a00153c0a3e6122","from":"6a00153c0a3e6122","remote-peer-id":"9fed855c7863c2a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-06T07:30:38.582632Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6a00153c0a3e6122","from":"6a00153c0a3e6122","remote-peer-id":"9fed855c7863c2a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-06T07:30:38.591702Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6a00153c0a3e6122","from":"6a00153c0a3e6122","remote-peer-id":"9fed855c7863c2a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-06T07:30:38.595812Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6a00153c0a3e6122","from":"6a00153c0a3e6122","remote-peer-id":"9fed855c7863c2a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-06T07:30:38.59944Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6a00153c0a3e6122","from":"6a00153c0a3e6122","remote-peer-id":"9fed855c7863c2a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-06T07:30:38.609926Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6a00153c0a3e6122","from":"6a00153c0a3e6122","remote-peer-id":"9fed855c7863c2a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-06T07:30:38.618537Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6a00153c0a3e6122","from":"6a00153c0a3e6122","remote-peer-id":"9fed855c7863c2a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-06T07:30:38.635061Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6a00153c0a3e6122","from":"6a00153c0a3e6122","remote-peer-id":"9fed855c7863c2a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-06T07:30:38.638849Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6a00153c0a3e6122","from":"6a00153c0a3e6122","remote-peer-id":"9fed855c7863c2a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-06T07:30:38.642045Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6a00153c0a3e6122","from":"6a00153c0a3e6122","remote-peer-id":"9fed855c7863c2a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-06T07:30:38.653358Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6a00153c0a3e6122","from":"6a00153c0a3e6122","remote-peer-id":"9fed855c7863c2a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-06T07:30:38.655161Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6a00153c0a3e6122","from":"6a00153c0a3e6122","remote-peer-id":"9fed855c7863c2a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-06T07:30:38.660356Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6a00153c0a3e6122","from":"6a00153c0a3e6122","remote-peer-id":"9fed855c7863c2a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-06T07:30:38.666829Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6a00153c0a3e6122","from":"6a00153c0a3e6122","remote-peer-id":"9fed855c7863c2a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-06T07:30:38.672198Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6a00153c0a3e6122","from":"6a00153c0a3e6122","remote-peer-id":"9fed855c7863c2a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-06T07:30:38.675633Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6a00153c0a3e6122","from":"6a00153c0a3e6122","remote-peer-id":"9fed855c7863c2a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-06T07:30:38.685475Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6a00153c0a3e6122","from":"6a00153c0a3e6122","remote-peer-id":"9fed855c7863c2a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-06T07:30:38.695762Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6a00153c0a3e6122","from":"6a00153c0a3e6122","remote-peer-id":"9fed855c7863c2a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-06T07:30:38.703821Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6a00153c0a3e6122","from":"6a00153c0a3e6122","remote-peer-id":"9fed855c7863c2a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-06T07:30:38.756205Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6a00153c0a3e6122","from":"6a00153c0a3e6122","remote-peer-id":"9fed855c7863c2a","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 07:30:38 up 8 min,  0 users,  load average: 0.36, 0.37, 0.20
	Linux ha-118173 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [b8f5ae0da505fc7f79225ba7115c83b9e909128bc28df767fce1fc3ba7a01392] <==
	I0806 07:29:59.278740       1 main.go:322] Node ha-118173-m04 has CIDR [10.244.3.0/24] 
	I0806 07:30:09.272178       1 main.go:295] Handling node with IPs: map[192.168.39.164:{}]
	I0806 07:30:09.272231       1 main.go:299] handling current node
	I0806 07:30:09.272254       1 main.go:295] Handling node with IPs: map[192.168.39.203:{}]
	I0806 07:30:09.272260       1 main.go:322] Node ha-118173-m02 has CIDR [10.244.1.0/24] 
	I0806 07:30:09.272442       1 main.go:295] Handling node with IPs: map[192.168.39.5:{}]
	I0806 07:30:09.272479       1 main.go:322] Node ha-118173-m03 has CIDR [10.244.2.0/24] 
	I0806 07:30:09.272643       1 main.go:295] Handling node with IPs: map[192.168.39.107:{}]
	I0806 07:30:09.272673       1 main.go:322] Node ha-118173-m04 has CIDR [10.244.3.0/24] 
	I0806 07:30:19.271965       1 main.go:295] Handling node with IPs: map[192.168.39.203:{}]
	I0806 07:30:19.271998       1 main.go:322] Node ha-118173-m02 has CIDR [10.244.1.0/24] 
	I0806 07:30:19.272180       1 main.go:295] Handling node with IPs: map[192.168.39.5:{}]
	I0806 07:30:19.272205       1 main.go:322] Node ha-118173-m03 has CIDR [10.244.2.0/24] 
	I0806 07:30:19.272265       1 main.go:295] Handling node with IPs: map[192.168.39.107:{}]
	I0806 07:30:19.272270       1 main.go:322] Node ha-118173-m04 has CIDR [10.244.3.0/24] 
	I0806 07:30:19.272322       1 main.go:295] Handling node with IPs: map[192.168.39.164:{}]
	I0806 07:30:19.272385       1 main.go:299] handling current node
	I0806 07:30:29.275805       1 main.go:295] Handling node with IPs: map[192.168.39.164:{}]
	I0806 07:30:29.275879       1 main.go:299] handling current node
	I0806 07:30:29.275902       1 main.go:295] Handling node with IPs: map[192.168.39.203:{}]
	I0806 07:30:29.275909       1 main.go:322] Node ha-118173-m02 has CIDR [10.244.1.0/24] 
	I0806 07:30:29.276110       1 main.go:295] Handling node with IPs: map[192.168.39.5:{}]
	I0806 07:30:29.276138       1 main.go:322] Node ha-118173-m03 has CIDR [10.244.2.0/24] 
	I0806 07:30:29.276208       1 main.go:295] Handling node with IPs: map[192.168.39.107:{}]
	I0806 07:30:29.276213       1 main.go:322] Node ha-118173-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [1ffee4f9f096a33fc08f1f6b266ea17ad2181a58a153f7fb1703ded978b23a46] <==
	I0806 07:22:59.114813       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0806 07:22:59.149016       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.164]
	I0806 07:22:59.151002       1 controller.go:615] quota admission added evaluator for: endpoints
	I0806 07:22:59.173271       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0806 07:22:59.374819       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0806 07:23:00.696250       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0806 07:23:00.737911       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0806 07:23:00.761423       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0806 07:23:13.330094       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0806 07:23:13.583838       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0806 07:27:03.129943       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:59664: use of closed network connection
	E0806 07:27:03.330300       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:59684: use of closed network connection
	E0806 07:27:03.526011       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:59706: use of closed network connection
	E0806 07:27:03.732229       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:59720: use of closed network connection
	E0806 07:27:03.918349       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:59730: use of closed network connection
	E0806 07:27:04.104013       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:59752: use of closed network connection
	E0806 07:27:04.287914       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:59770: use of closed network connection
	E0806 07:27:04.498130       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:59790: use of closed network connection
	E0806 07:27:04.689488       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:59806: use of closed network connection
	E0806 07:27:05.194787       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:59844: use of closed network connection
	E0806 07:27:05.380854       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:59872: use of closed network connection
	E0806 07:27:05.557717       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:59896: use of closed network connection
	E0806 07:27:05.740816       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55210: use of closed network connection
	E0806 07:27:05.917075       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55230: use of closed network connection
	W0806 07:28:29.139362       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.164 192.168.39.5]
	
	
	==> kube-controller-manager [a94703035c76b778779a89b7fc9176c7a956ebec52628aa1720d26bf2a1c31f5] <==
	I0806 07:26:57.272494       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="59.028µs"
	I0806 07:26:57.318015       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="17.119403ms"
	I0806 07:26:57.318438       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="67.238µs"
	I0806 07:26:57.383441       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.446412ms"
	I0806 07:26:57.383608       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="58.39µs"
	I0806 07:26:58.313952       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="68.708µs"
	I0806 07:26:58.329219       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="113.525µs"
	I0806 07:26:58.343952       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="53.183µs"
	I0806 07:26:58.348477       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="49.841µs"
	I0806 07:26:58.352388       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="68.761µs"
	I0806 07:26:58.369449       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="103.908µs"
	I0806 07:27:00.463853       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.356244ms"
	I0806 07:27:00.463984       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="43.866µs"
	I0806 07:27:00.645958       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="72.548µs"
	I0806 07:27:02.344534       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.10763ms"
	I0806 07:27:02.344697       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="51.953µs"
	I0806 07:27:02.662283       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="16.857433ms"
	I0806 07:27:02.662405       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="63.709µs"
	I0806 07:27:38.505040       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-118173-m04\" does not exist"
	I0806 07:27:38.530464       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-118173-m04" podCIDRs=["10.244.3.0/24"]
	I0806 07:27:42.873398       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-118173-m04"
	I0806 07:27:58.034326       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-118173-m04"
	I0806 07:28:57.904050       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-118173-m04"
	I0806 07:28:58.073064       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="50.758979ms"
	I0806 07:28:58.073278       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="41.967µs"
	
	
	==> kube-proxy [e5a5bdf0b2a62163f6aff6abe864eedc47b461e233989f23ef7770c226763325] <==
	I0806 07:23:14.558114       1 server_linux.go:69] "Using iptables proxy"
	I0806 07:23:14.576147       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.164"]
	I0806 07:23:14.624208       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0806 07:23:14.624261       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0806 07:23:14.624277       1 server_linux.go:165] "Using iptables Proxier"
	I0806 07:23:14.627661       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0806 07:23:14.627905       1 server.go:872] "Version info" version="v1.30.3"
	I0806 07:23:14.627938       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0806 07:23:14.630373       1 config.go:192] "Starting service config controller"
	I0806 07:23:14.630640       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0806 07:23:14.630690       1 config.go:101] "Starting endpoint slice config controller"
	I0806 07:23:14.630711       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0806 07:23:14.632764       1 config.go:319] "Starting node config controller"
	I0806 07:23:14.632788       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0806 07:23:14.730828       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0806 07:23:14.730891       1 shared_informer.go:320] Caches are synced for service config
	I0806 07:23:14.733864       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [27f4fdf6fa77a51e1295c12b459cda687ed6ae68875218e24ba7959fb37814ed] <==
	W0806 07:22:58.668683       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0806 07:22:58.668848       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0806 07:22:58.886973       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0806 07:22:58.887021       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0806 07:23:01.830478       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0806 07:26:27.185971       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-sw4nj\": pod kube-proxy-sw4nj is already assigned to node \"ha-118173-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-sw4nj" node="ha-118173-m03"
	E0806 07:26:27.186652       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod a75ef027-13c0-49c6-8580-6adb8afa497b(kube-system/kube-proxy-sw4nj) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-sw4nj"
	E0806 07:26:27.186739       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-sw4nj\": pod kube-proxy-sw4nj is already assigned to node \"ha-118173-m03\"" pod="kube-system/kube-proxy-sw4nj"
	I0806 07:26:27.186813       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-sw4nj" node="ha-118173-m03"
	E0806 07:26:27.188161       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-dzls6\": pod kindnet-dzls6 is already assigned to node \"ha-118173-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-dzls6" node="ha-118173-m03"
	E0806 07:26:27.188282       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 3ba3729c-f95a-4c31-8930-69abe66f0aa1(kube-system/kindnet-dzls6) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-dzls6"
	E0806 07:26:27.188414       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-dzls6\": pod kindnet-dzls6 is already assigned to node \"ha-118173-m03\"" pod="kube-system/kindnet-dzls6"
	I0806 07:26:27.188520       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-dzls6" node="ha-118173-m03"
	E0806 07:26:56.860057       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-j9d92\": pod busybox-fc5497c4f-j9d92 is already assigned to node \"ha-118173-m02\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-j9d92" node="ha-118173-m02"
	E0806 07:26:56.861298       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 32b327ab-b417-4a6f-9933-de5b4366d910(default/busybox-fc5497c4f-j9d92) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-j9d92"
	E0806 07:26:56.861407       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-j9d92\": pod busybox-fc5497c4f-j9d92 is already assigned to node \"ha-118173-m02\"" pod="default/busybox-fc5497c4f-j9d92"
	I0806 07:26:56.861533       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-j9d92" node="ha-118173-m02"
	E0806 07:26:58.329378       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-852mp\": pod busybox-fc5497c4f-852mp is already assigned to node \"ha-118173-m03\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-852mp" node="ha-118173-m03"
	E0806 07:26:58.329483       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-852mp\": pod busybox-fc5497c4f-852mp is already assigned to node \"ha-118173-m03\"" pod="default/busybox-fc5497c4f-852mp"
	E0806 07:27:38.588805       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-97v8n\": pod kindnet-97v8n is already assigned to node \"ha-118173-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-97v8n" node="ha-118173-m04"
	E0806 07:27:38.590540       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 9fc002fa-f45d-473d-8dc8-f2ea9ede38f4(kube-system/kindnet-97v8n) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-97v8n"
	E0806 07:27:38.592148       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-97v8n\": pod kindnet-97v8n is already assigned to node \"ha-118173-m04\"" pod="kube-system/kindnet-97v8n"
	I0806 07:27:38.592274       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-97v8n" node="ha-118173-m04"
	E0806 07:27:38.592953       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-zldnr\": pod kube-proxy-zldnr is already assigned to node \"ha-118173-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-zldnr" node="ha-118173-m04"
	E0806 07:27:38.593060       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-zldnr\": pod kube-proxy-zldnr is already assigned to node \"ha-118173-m04\"" pod="kube-system/kube-proxy-zldnr"
	
	
	==> kubelet <==
	Aug 06 07:26:56 ha-118173 kubelet[1363]: E0806 07:26:56.934187    1363 reflector.go:150] object-"default"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ha-118173" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'ha-118173' and this object
	Aug 06 07:26:57 ha-118173 kubelet[1363]: I0806 07:26:57.005036    1363 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5tfh\" (UniqueName: \"kubernetes.io/projected/4a82c0ca-1596-4dfe-a2f9-3a34f22480e5-kube-api-access-s5tfh\") pod \"busybox-fc5497c4f-rml5d\" (UID: \"4a82c0ca-1596-4dfe-a2f9-3a34f22480e5\") " pod="default/busybox-fc5497c4f-rml5d"
	Aug 06 07:26:58 ha-118173 kubelet[1363]: E0806 07:26:58.171959    1363 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Aug 06 07:26:58 ha-118173 kubelet[1363]: E0806 07:26:58.172360    1363 projected.go:200] Error preparing data for projected volume kube-api-access-s5tfh for pod default/busybox-fc5497c4f-rml5d: failed to sync configmap cache: timed out waiting for the condition
	Aug 06 07:26:58 ha-118173 kubelet[1363]: E0806 07:26:58.172850    1363 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4a82c0ca-1596-4dfe-a2f9-3a34f22480e5-kube-api-access-s5tfh podName:4a82c0ca-1596-4dfe-a2f9-3a34f22480e5 nodeName:}" failed. No retries permitted until 2024-08-06 07:26:58.672526182 +0000 UTC m=+238.198102295 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s5tfh" (UniqueName: "kubernetes.io/projected/4a82c0ca-1596-4dfe-a2f9-3a34f22480e5-kube-api-access-s5tfh") pod "busybox-fc5497c4f-rml5d" (UID: "4a82c0ca-1596-4dfe-a2f9-3a34f22480e5") : failed to sync configmap cache: timed out waiting for the condition
	Aug 06 07:27:00 ha-118173 kubelet[1363]: E0806 07:27:00.655627    1363 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 06 07:27:00 ha-118173 kubelet[1363]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 06 07:27:00 ha-118173 kubelet[1363]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 06 07:27:00 ha-118173 kubelet[1363]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 06 07:27:00 ha-118173 kubelet[1363]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 06 07:28:00 ha-118173 kubelet[1363]: E0806 07:28:00.650896    1363 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 06 07:28:00 ha-118173 kubelet[1363]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 06 07:28:00 ha-118173 kubelet[1363]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 06 07:28:00 ha-118173 kubelet[1363]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 06 07:28:00 ha-118173 kubelet[1363]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 06 07:29:00 ha-118173 kubelet[1363]: E0806 07:29:00.651876    1363 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 06 07:29:00 ha-118173 kubelet[1363]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 06 07:29:00 ha-118173 kubelet[1363]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 06 07:29:00 ha-118173 kubelet[1363]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 06 07:29:00 ha-118173 kubelet[1363]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 06 07:30:00 ha-118173 kubelet[1363]: E0806 07:30:00.647829    1363 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 06 07:30:00 ha-118173 kubelet[1363]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 06 07:30:00 ha-118173 kubelet[1363]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 06 07:30:00 ha-118173 kubelet[1363]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 06 07:30:00 ha-118173 kubelet[1363]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-118173 -n ha-118173
helpers_test.go:261: (dbg) Run:  kubectl --context ha-118173 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (141.86s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (59.92s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-118173 node start m02 -v=7 --alsologtostderr
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-118173 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-118173 status -v=7 --alsologtostderr: exit status 3 (3.20495764s)

                                                
                                                
-- stdout --
	ha-118173
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-118173-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-118173-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-118173-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0806 07:30:43.271577   36860 out.go:291] Setting OutFile to fd 1 ...
	I0806 07:30:43.271886   36860 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 07:30:43.271899   36860 out.go:304] Setting ErrFile to fd 2...
	I0806 07:30:43.271904   36860 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 07:30:43.272190   36860 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19370-13057/.minikube/bin
	I0806 07:30:43.272441   36860 out.go:298] Setting JSON to false
	I0806 07:30:43.272468   36860 mustload.go:65] Loading cluster: ha-118173
	I0806 07:30:43.272566   36860 notify.go:220] Checking for updates...
	I0806 07:30:43.273017   36860 config.go:182] Loaded profile config "ha-118173": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0806 07:30:43.273038   36860 status.go:255] checking status of ha-118173 ...
	I0806 07:30:43.273698   36860 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:30:43.273756   36860 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:30:43.290564   36860 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34735
	I0806 07:30:43.291006   36860 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:30:43.291665   36860 main.go:141] libmachine: Using API Version  1
	I0806 07:30:43.291688   36860 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:30:43.292080   36860 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:30:43.292317   36860 main.go:141] libmachine: (ha-118173) Calling .GetState
	I0806 07:30:43.293906   36860 status.go:330] ha-118173 host status = "Running" (err=<nil>)
	I0806 07:30:43.293929   36860 host.go:66] Checking if "ha-118173" exists ...
	I0806 07:30:43.294334   36860 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:30:43.294373   36860 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:30:43.308543   36860 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39907
	I0806 07:30:43.309003   36860 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:30:43.309486   36860 main.go:141] libmachine: Using API Version  1
	I0806 07:30:43.309506   36860 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:30:43.309750   36860 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:30:43.309918   36860 main.go:141] libmachine: (ha-118173) Calling .GetIP
	I0806 07:30:43.312600   36860 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:30:43.312988   36860 main.go:141] libmachine: (ha-118173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:df:f0", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:22:33 +0000 UTC Type:0 Mac:52:54:00:ef:df:f0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:ha-118173 Clientid:01:52:54:00:ef:df:f0}
	I0806 07:30:43.313013   36860 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined IP address 192.168.39.164 and MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:30:43.313184   36860 host.go:66] Checking if "ha-118173" exists ...
	I0806 07:30:43.313601   36860 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:30:43.313661   36860 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:30:43.327884   36860 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33035
	I0806 07:30:43.328335   36860 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:30:43.328775   36860 main.go:141] libmachine: Using API Version  1
	I0806 07:30:43.328788   36860 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:30:43.329070   36860 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:30:43.329230   36860 main.go:141] libmachine: (ha-118173) Calling .DriverName
	I0806 07:30:43.329382   36860 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0806 07:30:43.329411   36860 main.go:141] libmachine: (ha-118173) Calling .GetSSHHostname
	I0806 07:30:43.332229   36860 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:30:43.332634   36860 main.go:141] libmachine: (ha-118173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:df:f0", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:22:33 +0000 UTC Type:0 Mac:52:54:00:ef:df:f0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:ha-118173 Clientid:01:52:54:00:ef:df:f0}
	I0806 07:30:43.332667   36860 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined IP address 192.168.39.164 and MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:30:43.332832   36860 main.go:141] libmachine: (ha-118173) Calling .GetSSHPort
	I0806 07:30:43.332997   36860 main.go:141] libmachine: (ha-118173) Calling .GetSSHKeyPath
	I0806 07:30:43.333137   36860 main.go:141] libmachine: (ha-118173) Calling .GetSSHUsername
	I0806 07:30:43.333272   36860 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/ha-118173/id_rsa Username:docker}
	I0806 07:30:43.416261   36860 ssh_runner.go:195] Run: systemctl --version
	I0806 07:30:43.422515   36860 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0806 07:30:43.443005   36860 kubeconfig.go:125] found "ha-118173" server: "https://192.168.39.254:8443"
	I0806 07:30:43.443035   36860 api_server.go:166] Checking apiserver status ...
	I0806 07:30:43.443079   36860 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 07:30:43.460230   36860 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1178/cgroup
	W0806 07:30:43.471807   36860 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1178/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0806 07:30:43.471854   36860 ssh_runner.go:195] Run: ls
	I0806 07:30:43.476425   36860 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0806 07:30:43.480604   36860 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0806 07:30:43.480623   36860 status.go:422] ha-118173 apiserver status = Running (err=<nil>)
	I0806 07:30:43.480633   36860 status.go:257] ha-118173 status: &{Name:ha-118173 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0806 07:30:43.480648   36860 status.go:255] checking status of ha-118173-m02 ...
	I0806 07:30:43.481008   36860 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:30:43.481041   36860 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:30:43.495375   36860 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45875
	I0806 07:30:43.495763   36860 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:30:43.496237   36860 main.go:141] libmachine: Using API Version  1
	I0806 07:30:43.496258   36860 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:30:43.496694   36860 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:30:43.496941   36860 main.go:141] libmachine: (ha-118173-m02) Calling .GetState
	I0806 07:30:43.498450   36860 status.go:330] ha-118173-m02 host status = "Running" (err=<nil>)
	I0806 07:30:43.498469   36860 host.go:66] Checking if "ha-118173-m02" exists ...
	I0806 07:30:43.498780   36860 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:30:43.498816   36860 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:30:43.513498   36860 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43707
	I0806 07:30:43.513885   36860 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:30:43.514373   36860 main.go:141] libmachine: Using API Version  1
	I0806 07:30:43.514400   36860 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:30:43.514875   36860 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:30:43.515115   36860 main.go:141] libmachine: (ha-118173-m02) Calling .GetIP
	I0806 07:30:43.517974   36860 main.go:141] libmachine: (ha-118173-m02) DBG | domain ha-118173-m02 has defined MAC address 52:54:00:b4:e9:38 in network mk-ha-118173
	I0806 07:30:43.518358   36860 main.go:141] libmachine: (ha-118173-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:e9:38", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:23:29 +0000 UTC Type:0 Mac:52:54:00:b4:e9:38 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-118173-m02 Clientid:01:52:54:00:b4:e9:38}
	I0806 07:30:43.518382   36860 main.go:141] libmachine: (ha-118173-m02) DBG | domain ha-118173-m02 has defined IP address 192.168.39.203 and MAC address 52:54:00:b4:e9:38 in network mk-ha-118173
	I0806 07:30:43.518499   36860 host.go:66] Checking if "ha-118173-m02" exists ...
	I0806 07:30:43.518887   36860 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:30:43.518934   36860 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:30:43.533085   36860 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34747
	I0806 07:30:43.533454   36860 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:30:43.533892   36860 main.go:141] libmachine: Using API Version  1
	I0806 07:30:43.533914   36860 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:30:43.534209   36860 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:30:43.534412   36860 main.go:141] libmachine: (ha-118173-m02) Calling .DriverName
	I0806 07:30:43.534590   36860 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0806 07:30:43.534610   36860 main.go:141] libmachine: (ha-118173-m02) Calling .GetSSHHostname
	I0806 07:30:43.537040   36860 main.go:141] libmachine: (ha-118173-m02) DBG | domain ha-118173-m02 has defined MAC address 52:54:00:b4:e9:38 in network mk-ha-118173
	I0806 07:30:43.537474   36860 main.go:141] libmachine: (ha-118173-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:e9:38", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:23:29 +0000 UTC Type:0 Mac:52:54:00:b4:e9:38 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-118173-m02 Clientid:01:52:54:00:b4:e9:38}
	I0806 07:30:43.537501   36860 main.go:141] libmachine: (ha-118173-m02) DBG | domain ha-118173-m02 has defined IP address 192.168.39.203 and MAC address 52:54:00:b4:e9:38 in network mk-ha-118173
	I0806 07:30:43.537620   36860 main.go:141] libmachine: (ha-118173-m02) Calling .GetSSHPort
	I0806 07:30:43.537782   36860 main.go:141] libmachine: (ha-118173-m02) Calling .GetSSHKeyPath
	I0806 07:30:43.537897   36860 main.go:141] libmachine: (ha-118173-m02) Calling .GetSSHUsername
	I0806 07:30:43.538035   36860 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/ha-118173-m02/id_rsa Username:docker}
	W0806 07:30:46.084662   36860 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.203:22: connect: no route to host
	W0806 07:30:46.084750   36860 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.203:22: connect: no route to host
	E0806 07:30:46.084773   36860 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.203:22: connect: no route to host
	I0806 07:30:46.084784   36860 status.go:257] ha-118173-m02 status: &{Name:ha-118173-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0806 07:30:46.084807   36860 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.203:22: connect: no route to host
	I0806 07:30:46.084837   36860 status.go:255] checking status of ha-118173-m03 ...
	I0806 07:30:46.085155   36860 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:30:46.085203   36860 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:30:46.100792   36860 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46577
	I0806 07:30:46.101312   36860 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:30:46.101856   36860 main.go:141] libmachine: Using API Version  1
	I0806 07:30:46.101883   36860 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:30:46.102251   36860 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:30:46.102448   36860 main.go:141] libmachine: (ha-118173-m03) Calling .GetState
	I0806 07:30:46.104377   36860 status.go:330] ha-118173-m03 host status = "Running" (err=<nil>)
	I0806 07:30:46.104395   36860 host.go:66] Checking if "ha-118173-m03" exists ...
	I0806 07:30:46.104693   36860 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:30:46.104725   36860 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:30:46.119694   36860 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33317
	I0806 07:30:46.120183   36860 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:30:46.120721   36860 main.go:141] libmachine: Using API Version  1
	I0806 07:30:46.120741   36860 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:30:46.121048   36860 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:30:46.121244   36860 main.go:141] libmachine: (ha-118173-m03) Calling .GetIP
	I0806 07:30:46.123727   36860 main.go:141] libmachine: (ha-118173-m03) DBG | domain ha-118173-m03 has defined MAC address 52:54:00:d8:90:55 in network mk-ha-118173
	I0806 07:30:46.124169   36860 main.go:141] libmachine: (ha-118173-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:90:55", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:25:53 +0000 UTC Type:0 Mac:52:54:00:d8:90:55 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-118173-m03 Clientid:01:52:54:00:d8:90:55}
	I0806 07:30:46.124195   36860 main.go:141] libmachine: (ha-118173-m03) DBG | domain ha-118173-m03 has defined IP address 192.168.39.5 and MAC address 52:54:00:d8:90:55 in network mk-ha-118173
	I0806 07:30:46.124304   36860 host.go:66] Checking if "ha-118173-m03" exists ...
	I0806 07:30:46.124644   36860 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:30:46.124711   36860 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:30:46.139035   36860 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45771
	I0806 07:30:46.139512   36860 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:30:46.140057   36860 main.go:141] libmachine: Using API Version  1
	I0806 07:30:46.140077   36860 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:30:46.140419   36860 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:30:46.140600   36860 main.go:141] libmachine: (ha-118173-m03) Calling .DriverName
	I0806 07:30:46.140760   36860 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0806 07:30:46.140781   36860 main.go:141] libmachine: (ha-118173-m03) Calling .GetSSHHostname
	I0806 07:30:46.143663   36860 main.go:141] libmachine: (ha-118173-m03) DBG | domain ha-118173-m03 has defined MAC address 52:54:00:d8:90:55 in network mk-ha-118173
	I0806 07:30:46.144097   36860 main.go:141] libmachine: (ha-118173-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:90:55", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:25:53 +0000 UTC Type:0 Mac:52:54:00:d8:90:55 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-118173-m03 Clientid:01:52:54:00:d8:90:55}
	I0806 07:30:46.144134   36860 main.go:141] libmachine: (ha-118173-m03) DBG | domain ha-118173-m03 has defined IP address 192.168.39.5 and MAC address 52:54:00:d8:90:55 in network mk-ha-118173
	I0806 07:30:46.144316   36860 main.go:141] libmachine: (ha-118173-m03) Calling .GetSSHPort
	I0806 07:30:46.144509   36860 main.go:141] libmachine: (ha-118173-m03) Calling .GetSSHKeyPath
	I0806 07:30:46.144736   36860 main.go:141] libmachine: (ha-118173-m03) Calling .GetSSHUsername
	I0806 07:30:46.144892   36860 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/ha-118173-m03/id_rsa Username:docker}
	I0806 07:30:46.228169   36860 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0806 07:30:46.242642   36860 kubeconfig.go:125] found "ha-118173" server: "https://192.168.39.254:8443"
	I0806 07:30:46.242665   36860 api_server.go:166] Checking apiserver status ...
	I0806 07:30:46.242694   36860 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 07:30:46.255550   36860 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1553/cgroup
	W0806 07:30:46.265214   36860 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1553/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0806 07:30:46.265271   36860 ssh_runner.go:195] Run: ls
	I0806 07:30:46.269824   36860 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0806 07:30:46.277322   36860 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0806 07:30:46.277344   36860 status.go:422] ha-118173-m03 apiserver status = Running (err=<nil>)
	I0806 07:30:46.277354   36860 status.go:257] ha-118173-m03 status: &{Name:ha-118173-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0806 07:30:46.277374   36860 status.go:255] checking status of ha-118173-m04 ...
	I0806 07:30:46.277763   36860 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:30:46.277804   36860 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:30:46.293470   36860 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40203
	I0806 07:30:46.293922   36860 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:30:46.294461   36860 main.go:141] libmachine: Using API Version  1
	I0806 07:30:46.294487   36860 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:30:46.294831   36860 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:30:46.295029   36860 main.go:141] libmachine: (ha-118173-m04) Calling .GetState
	I0806 07:30:46.296841   36860 status.go:330] ha-118173-m04 host status = "Running" (err=<nil>)
	I0806 07:30:46.296862   36860 host.go:66] Checking if "ha-118173-m04" exists ...
	I0806 07:30:46.297303   36860 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:30:46.297356   36860 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:30:46.312277   36860 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38505
	I0806 07:30:46.312729   36860 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:30:46.313186   36860 main.go:141] libmachine: Using API Version  1
	I0806 07:30:46.313205   36860 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:30:46.313553   36860 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:30:46.313753   36860 main.go:141] libmachine: (ha-118173-m04) Calling .GetIP
	I0806 07:30:46.316222   36860 main.go:141] libmachine: (ha-118173-m04) DBG | domain ha-118173-m04 has defined MAC address 52:54:00:8f:be:12 in network mk-ha-118173
	I0806 07:30:46.316697   36860 main.go:141] libmachine: (ha-118173-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:be:12", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:27:21 +0000 UTC Type:0 Mac:52:54:00:8f:be:12 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:ha-118173-m04 Clientid:01:52:54:00:8f:be:12}
	I0806 07:30:46.316723   36860 main.go:141] libmachine: (ha-118173-m04) DBG | domain ha-118173-m04 has defined IP address 192.168.39.107 and MAC address 52:54:00:8f:be:12 in network mk-ha-118173
	I0806 07:30:46.316926   36860 host.go:66] Checking if "ha-118173-m04" exists ...
	I0806 07:30:46.317304   36860 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:30:46.317344   36860 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:30:46.331618   36860 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35819
	I0806 07:30:46.332090   36860 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:30:46.332589   36860 main.go:141] libmachine: Using API Version  1
	I0806 07:30:46.332608   36860 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:30:46.332968   36860 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:30:46.333171   36860 main.go:141] libmachine: (ha-118173-m04) Calling .DriverName
	I0806 07:30:46.333355   36860 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0806 07:30:46.333374   36860 main.go:141] libmachine: (ha-118173-m04) Calling .GetSSHHostname
	I0806 07:30:46.336410   36860 main.go:141] libmachine: (ha-118173-m04) DBG | domain ha-118173-m04 has defined MAC address 52:54:00:8f:be:12 in network mk-ha-118173
	I0806 07:30:46.336903   36860 main.go:141] libmachine: (ha-118173-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:be:12", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:27:21 +0000 UTC Type:0 Mac:52:54:00:8f:be:12 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:ha-118173-m04 Clientid:01:52:54:00:8f:be:12}
	I0806 07:30:46.336944   36860 main.go:141] libmachine: (ha-118173-m04) DBG | domain ha-118173-m04 has defined IP address 192.168.39.107 and MAC address 52:54:00:8f:be:12 in network mk-ha-118173
	I0806 07:30:46.337125   36860 main.go:141] libmachine: (ha-118173-m04) Calling .GetSSHPort
	I0806 07:30:46.337303   36860 main.go:141] libmachine: (ha-118173-m04) Calling .GetSSHKeyPath
	I0806 07:30:46.337476   36860 main.go:141] libmachine: (ha-118173-m04) Calling .GetSSHUsername
	I0806 07:30:46.337616   36860 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/ha-118173-m04/id_rsa Username:docker}
	I0806 07:30:46.420047   36860 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0806 07:30:46.433636   36860 status.go:257] ha-118173-m04 status: &{Name:ha-118173-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
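
The log above shows each control-plane check ending with a probe of the apiserver healthz endpoint on the cluster VIP (https://192.168.39.254:8443/healthz), and a node is only reported as APIServer:Running when that probe returns 200. Below is a minimal, self-contained Go sketch of that kind of probe, not minikube's actual status code; the URL is taken from the log, and TLS verification is skipped here purely to keep the example short (a real check would trust the cluster CA).

	// healthz_probe.go - hypothetical sketch, not minikube source.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// Assumption: skip certificate verification for brevity only.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://192.168.39.254:8443/healthz")
		if err != nil {
			fmt.Println("apiserver status = Stopped:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		// The report treats a 200 with body "ok" as APIServer:Running.
		fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
	}
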
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-118173 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-118173 status -v=7 --alsologtostderr: exit status 3 (5.35581957s)

                                                
                                                
-- stdout --
	ha-118173
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-118173-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-118173-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-118173-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0806 07:30:47.274092   36962 out.go:291] Setting OutFile to fd 1 ...
	I0806 07:30:47.274368   36962 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 07:30:47.274379   36962 out.go:304] Setting ErrFile to fd 2...
	I0806 07:30:47.274383   36962 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 07:30:47.274592   36962 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19370-13057/.minikube/bin
	I0806 07:30:47.274778   36962 out.go:298] Setting JSON to false
	I0806 07:30:47.274804   36962 mustload.go:65] Loading cluster: ha-118173
	I0806 07:30:47.274841   36962 notify.go:220] Checking for updates...
	I0806 07:30:47.275207   36962 config.go:182] Loaded profile config "ha-118173": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0806 07:30:47.275225   36962 status.go:255] checking status of ha-118173 ...
	I0806 07:30:47.275720   36962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:30:47.275763   36962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:30:47.291907   36962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41617
	I0806 07:30:47.292416   36962 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:30:47.293115   36962 main.go:141] libmachine: Using API Version  1
	I0806 07:30:47.293149   36962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:30:47.293488   36962 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:30:47.293711   36962 main.go:141] libmachine: (ha-118173) Calling .GetState
	I0806 07:30:47.295524   36962 status.go:330] ha-118173 host status = "Running" (err=<nil>)
	I0806 07:30:47.295544   36962 host.go:66] Checking if "ha-118173" exists ...
	I0806 07:30:47.295882   36962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:30:47.295911   36962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:30:47.311257   36962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35895
	I0806 07:30:47.311643   36962 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:30:47.312158   36962 main.go:141] libmachine: Using API Version  1
	I0806 07:30:47.312184   36962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:30:47.312552   36962 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:30:47.312784   36962 main.go:141] libmachine: (ha-118173) Calling .GetIP
	I0806 07:30:47.315681   36962 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:30:47.316134   36962 main.go:141] libmachine: (ha-118173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:df:f0", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:22:33 +0000 UTC Type:0 Mac:52:54:00:ef:df:f0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:ha-118173 Clientid:01:52:54:00:ef:df:f0}
	I0806 07:30:47.316169   36962 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined IP address 192.168.39.164 and MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:30:47.316332   36962 host.go:66] Checking if "ha-118173" exists ...
	I0806 07:30:47.316649   36962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:30:47.316682   36962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:30:47.331568   36962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38499
	I0806 07:30:47.331961   36962 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:30:47.332406   36962 main.go:141] libmachine: Using API Version  1
	I0806 07:30:47.332429   36962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:30:47.332768   36962 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:30:47.332980   36962 main.go:141] libmachine: (ha-118173) Calling .DriverName
	I0806 07:30:47.333197   36962 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0806 07:30:47.333232   36962 main.go:141] libmachine: (ha-118173) Calling .GetSSHHostname
	I0806 07:30:47.336104   36962 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:30:47.336547   36962 main.go:141] libmachine: (ha-118173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:df:f0", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:22:33 +0000 UTC Type:0 Mac:52:54:00:ef:df:f0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:ha-118173 Clientid:01:52:54:00:ef:df:f0}
	I0806 07:30:47.336574   36962 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined IP address 192.168.39.164 and MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:30:47.336717   36962 main.go:141] libmachine: (ha-118173) Calling .GetSSHPort
	I0806 07:30:47.336894   36962 main.go:141] libmachine: (ha-118173) Calling .GetSSHKeyPath
	I0806 07:30:47.337069   36962 main.go:141] libmachine: (ha-118173) Calling .GetSSHUsername
	I0806 07:30:47.337216   36962 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/ha-118173/id_rsa Username:docker}
	I0806 07:30:47.416236   36962 ssh_runner.go:195] Run: systemctl --version
	I0806 07:30:47.424701   36962 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0806 07:30:47.441142   36962 kubeconfig.go:125] found "ha-118173" server: "https://192.168.39.254:8443"
	I0806 07:30:47.441170   36962 api_server.go:166] Checking apiserver status ...
	I0806 07:30:47.441201   36962 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 07:30:47.458016   36962 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1178/cgroup
	W0806 07:30:47.469182   36962 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1178/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0806 07:30:47.469227   36962 ssh_runner.go:195] Run: ls
	I0806 07:30:47.474029   36962 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0806 07:30:47.480327   36962 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0806 07:30:47.480353   36962 status.go:422] ha-118173 apiserver status = Running (err=<nil>)
	I0806 07:30:47.480387   36962 status.go:257] ha-118173 status: &{Name:ha-118173 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0806 07:30:47.480413   36962 status.go:255] checking status of ha-118173-m02 ...
	I0806 07:30:47.480784   36962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:30:47.480852   36962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:30:47.495779   36962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44049
	I0806 07:30:47.496200   36962 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:30:47.496630   36962 main.go:141] libmachine: Using API Version  1
	I0806 07:30:47.496650   36962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:30:47.497007   36962 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:30:47.497190   36962 main.go:141] libmachine: (ha-118173-m02) Calling .GetState
	I0806 07:30:47.498820   36962 status.go:330] ha-118173-m02 host status = "Running" (err=<nil>)
	I0806 07:30:47.498834   36962 host.go:66] Checking if "ha-118173-m02" exists ...
	I0806 07:30:47.499128   36962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:30:47.499162   36962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:30:47.514664   36962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35191
	I0806 07:30:47.515184   36962 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:30:47.515710   36962 main.go:141] libmachine: Using API Version  1
	I0806 07:30:47.515729   36962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:30:47.516017   36962 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:30:47.516206   36962 main.go:141] libmachine: (ha-118173-m02) Calling .GetIP
	I0806 07:30:47.518890   36962 main.go:141] libmachine: (ha-118173-m02) DBG | domain ha-118173-m02 has defined MAC address 52:54:00:b4:e9:38 in network mk-ha-118173
	I0806 07:30:47.519258   36962 main.go:141] libmachine: (ha-118173-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:e9:38", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:23:29 +0000 UTC Type:0 Mac:52:54:00:b4:e9:38 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-118173-m02 Clientid:01:52:54:00:b4:e9:38}
	I0806 07:30:47.519300   36962 main.go:141] libmachine: (ha-118173-m02) DBG | domain ha-118173-m02 has defined IP address 192.168.39.203 and MAC address 52:54:00:b4:e9:38 in network mk-ha-118173
	I0806 07:30:47.519426   36962 host.go:66] Checking if "ha-118173-m02" exists ...
	I0806 07:30:47.519770   36962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:30:47.519834   36962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:30:47.533941   36962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35157
	I0806 07:30:47.534401   36962 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:30:47.534812   36962 main.go:141] libmachine: Using API Version  1
	I0806 07:30:47.534831   36962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:30:47.535144   36962 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:30:47.535329   36962 main.go:141] libmachine: (ha-118173-m02) Calling .DriverName
	I0806 07:30:47.535491   36962 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0806 07:30:47.535509   36962 main.go:141] libmachine: (ha-118173-m02) Calling .GetSSHHostname
	I0806 07:30:47.538379   36962 main.go:141] libmachine: (ha-118173-m02) DBG | domain ha-118173-m02 has defined MAC address 52:54:00:b4:e9:38 in network mk-ha-118173
	I0806 07:30:47.538918   36962 main.go:141] libmachine: (ha-118173-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:e9:38", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:23:29 +0000 UTC Type:0 Mac:52:54:00:b4:e9:38 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-118173-m02 Clientid:01:52:54:00:b4:e9:38}
	I0806 07:30:47.538946   36962 main.go:141] libmachine: (ha-118173-m02) DBG | domain ha-118173-m02 has defined IP address 192.168.39.203 and MAC address 52:54:00:b4:e9:38 in network mk-ha-118173
	I0806 07:30:47.539083   36962 main.go:141] libmachine: (ha-118173-m02) Calling .GetSSHPort
	I0806 07:30:47.539239   36962 main.go:141] libmachine: (ha-118173-m02) Calling .GetSSHKeyPath
	I0806 07:30:47.539405   36962 main.go:141] libmachine: (ha-118173-m02) Calling .GetSSHUsername
	I0806 07:30:47.539557   36962 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/ha-118173-m02/id_rsa Username:docker}
	W0806 07:30:49.156630   36962 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.203:22: connect: no route to host
	I0806 07:30:49.156674   36962 retry.go:31] will retry after 363.786896ms: dial tcp 192.168.39.203:22: connect: no route to host
	W0806 07:30:52.228667   36962 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.203:22: connect: no route to host
	W0806 07:30:52.228794   36962 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.203:22: connect: no route to host
	E0806 07:30:52.228850   36962 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.203:22: connect: no route to host
	I0806 07:30:52.228860   36962 status.go:257] ha-118173-m02 status: &{Name:ha-118173-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0806 07:30:52.228881   36962 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.203:22: connect: no route to host
	I0806 07:30:52.228891   36962 status.go:255] checking status of ha-118173-m03 ...
	I0806 07:30:52.229323   36962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:30:52.229400   36962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:30:52.245746   36962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42895
	I0806 07:30:52.246226   36962 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:30:52.246895   36962 main.go:141] libmachine: Using API Version  1
	I0806 07:30:52.246924   36962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:30:52.247258   36962 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:30:52.247453   36962 main.go:141] libmachine: (ha-118173-m03) Calling .GetState
	I0806 07:30:52.249368   36962 status.go:330] ha-118173-m03 host status = "Running" (err=<nil>)
	I0806 07:30:52.249385   36962 host.go:66] Checking if "ha-118173-m03" exists ...
	I0806 07:30:52.249726   36962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:30:52.249772   36962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:30:52.264110   36962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37939
	I0806 07:30:52.264572   36962 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:30:52.265034   36962 main.go:141] libmachine: Using API Version  1
	I0806 07:30:52.265054   36962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:30:52.265386   36962 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:30:52.265591   36962 main.go:141] libmachine: (ha-118173-m03) Calling .GetIP
	I0806 07:30:52.268825   36962 main.go:141] libmachine: (ha-118173-m03) DBG | domain ha-118173-m03 has defined MAC address 52:54:00:d8:90:55 in network mk-ha-118173
	I0806 07:30:52.269251   36962 main.go:141] libmachine: (ha-118173-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:90:55", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:25:53 +0000 UTC Type:0 Mac:52:54:00:d8:90:55 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-118173-m03 Clientid:01:52:54:00:d8:90:55}
	I0806 07:30:52.269279   36962 main.go:141] libmachine: (ha-118173-m03) DBG | domain ha-118173-m03 has defined IP address 192.168.39.5 and MAC address 52:54:00:d8:90:55 in network mk-ha-118173
	I0806 07:30:52.269485   36962 host.go:66] Checking if "ha-118173-m03" exists ...
	I0806 07:30:52.269894   36962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:30:52.269939   36962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:30:52.285441   36962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35155
	I0806 07:30:52.285818   36962 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:30:52.286291   36962 main.go:141] libmachine: Using API Version  1
	I0806 07:30:52.286311   36962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:30:52.286682   36962 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:30:52.286927   36962 main.go:141] libmachine: (ha-118173-m03) Calling .DriverName
	I0806 07:30:52.287133   36962 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0806 07:30:52.287158   36962 main.go:141] libmachine: (ha-118173-m03) Calling .GetSSHHostname
	I0806 07:30:52.290374   36962 main.go:141] libmachine: (ha-118173-m03) DBG | domain ha-118173-m03 has defined MAC address 52:54:00:d8:90:55 in network mk-ha-118173
	I0806 07:30:52.290764   36962 main.go:141] libmachine: (ha-118173-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:90:55", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:25:53 +0000 UTC Type:0 Mac:52:54:00:d8:90:55 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-118173-m03 Clientid:01:52:54:00:d8:90:55}
	I0806 07:30:52.290785   36962 main.go:141] libmachine: (ha-118173-m03) DBG | domain ha-118173-m03 has defined IP address 192.168.39.5 and MAC address 52:54:00:d8:90:55 in network mk-ha-118173
	I0806 07:30:52.291018   36962 main.go:141] libmachine: (ha-118173-m03) Calling .GetSSHPort
	I0806 07:30:52.291185   36962 main.go:141] libmachine: (ha-118173-m03) Calling .GetSSHKeyPath
	I0806 07:30:52.291351   36962 main.go:141] libmachine: (ha-118173-m03) Calling .GetSSHUsername
	I0806 07:30:52.291479   36962 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/ha-118173-m03/id_rsa Username:docker}
	I0806 07:30:52.368901   36962 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0806 07:30:52.388398   36962 kubeconfig.go:125] found "ha-118173" server: "https://192.168.39.254:8443"
	I0806 07:30:52.388427   36962 api_server.go:166] Checking apiserver status ...
	I0806 07:30:52.388475   36962 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 07:30:52.401984   36962 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1553/cgroup
	W0806 07:30:52.412502   36962 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1553/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0806 07:30:52.412561   36962 ssh_runner.go:195] Run: ls
	I0806 07:30:52.417419   36962 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0806 07:30:52.421972   36962 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0806 07:30:52.422004   36962 status.go:422] ha-118173-m03 apiserver status = Running (err=<nil>)
	I0806 07:30:52.422016   36962 status.go:257] ha-118173-m03 status: &{Name:ha-118173-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0806 07:30:52.422037   36962 status.go:255] checking status of ha-118173-m04 ...
	I0806 07:30:52.422335   36962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:30:52.422376   36962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:30:52.437768   36962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35265
	I0806 07:30:52.438246   36962 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:30:52.438764   36962 main.go:141] libmachine: Using API Version  1
	I0806 07:30:52.438785   36962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:30:52.439147   36962 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:30:52.439364   36962 main.go:141] libmachine: (ha-118173-m04) Calling .GetState
	I0806 07:30:52.441106   36962 status.go:330] ha-118173-m04 host status = "Running" (err=<nil>)
	I0806 07:30:52.441125   36962 host.go:66] Checking if "ha-118173-m04" exists ...
	I0806 07:30:52.441548   36962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:30:52.441622   36962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:30:52.457151   36962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34819
	I0806 07:30:52.457556   36962 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:30:52.458051   36962 main.go:141] libmachine: Using API Version  1
	I0806 07:30:52.458102   36962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:30:52.458445   36962 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:30:52.458642   36962 main.go:141] libmachine: (ha-118173-m04) Calling .GetIP
	I0806 07:30:52.461665   36962 main.go:141] libmachine: (ha-118173-m04) DBG | domain ha-118173-m04 has defined MAC address 52:54:00:8f:be:12 in network mk-ha-118173
	I0806 07:30:52.462111   36962 main.go:141] libmachine: (ha-118173-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:be:12", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:27:21 +0000 UTC Type:0 Mac:52:54:00:8f:be:12 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:ha-118173-m04 Clientid:01:52:54:00:8f:be:12}
	I0806 07:30:52.462138   36962 main.go:141] libmachine: (ha-118173-m04) DBG | domain ha-118173-m04 has defined IP address 192.168.39.107 and MAC address 52:54:00:8f:be:12 in network mk-ha-118173
	I0806 07:30:52.462344   36962 host.go:66] Checking if "ha-118173-m04" exists ...
	I0806 07:30:52.462651   36962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:30:52.462696   36962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:30:52.478440   36962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35019
	I0806 07:30:52.478937   36962 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:30:52.479425   36962 main.go:141] libmachine: Using API Version  1
	I0806 07:30:52.479444   36962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:30:52.479741   36962 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:30:52.479905   36962 main.go:141] libmachine: (ha-118173-m04) Calling .DriverName
	I0806 07:30:52.480092   36962 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0806 07:30:52.480113   36962 main.go:141] libmachine: (ha-118173-m04) Calling .GetSSHHostname
	I0806 07:30:52.483268   36962 main.go:141] libmachine: (ha-118173-m04) DBG | domain ha-118173-m04 has defined MAC address 52:54:00:8f:be:12 in network mk-ha-118173
	I0806 07:30:52.483706   36962 main.go:141] libmachine: (ha-118173-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:be:12", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:27:21 +0000 UTC Type:0 Mac:52:54:00:8f:be:12 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:ha-118173-m04 Clientid:01:52:54:00:8f:be:12}
	I0806 07:30:52.483730   36962 main.go:141] libmachine: (ha-118173-m04) DBG | domain ha-118173-m04 has defined IP address 192.168.39.107 and MAC address 52:54:00:8f:be:12 in network mk-ha-118173
	I0806 07:30:52.483834   36962 main.go:141] libmachine: (ha-118173-m04) Calling .GetSSHPort
	I0806 07:30:52.484007   36962 main.go:141] libmachine: (ha-118173-m04) Calling .GetSSHKeyPath
	I0806 07:30:52.484213   36962 main.go:141] libmachine: (ha-118173-m04) Calling .GetSSHUsername
	I0806 07:30:52.484391   36962 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/ha-118173-m04/id_rsa Username:docker}
	I0806 07:30:52.568765   36962 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0806 07:30:52.585667   36962 status.go:257] ha-118173-m04 status: &{Name:ha-118173-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
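
The stderr above shows why ha-118173-m02 is reported as Host:Error: the SSH dial to 192.168.39.203:22 fails with "no route to host", is retried after a short delay, and the last error is surfaced once the retries are exhausted. The following is a minimal Go sketch of that dial-and-retry pattern under assumptions (it is not minikube's retry.go; the address, attempt count, and backoff are illustrative values taken loosely from the log).

	// dial_retry.go - hypothetical sketch of a bounded dial retry.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// dialWithRetry tries the address a fixed number of times and returns the
	// last error if every attempt fails.
	func dialWithRetry(addr string, attempts int, wait time.Duration) (net.Conn, error) {
		var lastErr error
		for i := 0; i < attempts; i++ {
			conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
			if err == nil {
				return conn, nil
			}
			lastErr = err
			time.Sleep(wait) // back off before the next attempt
		}
		return nil, fmt.Errorf("dial failure after %d attempts: %w", attempts, lastErr)
	}

	func main() {
		conn, err := dialWithRetry("192.168.39.203:22", 2, 400*time.Millisecond)
		if err != nil {
			// This is the point at which the status command marks the node Host:Error.
			fmt.Println("ha-118173-m02 status: Host:Error -", err)
			return
		}
		conn.Close()
	}
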
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-118173 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-118173 status -v=7 --alsologtostderr: exit status 3 (5.028089996s)

                                                
                                                
-- stdout --
	ha-118173
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-118173-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-118173-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-118173-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0806 07:30:54.091006   37062 out.go:291] Setting OutFile to fd 1 ...
	I0806 07:30:54.091108   37062 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 07:30:54.091121   37062 out.go:304] Setting ErrFile to fd 2...
	I0806 07:30:54.091127   37062 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 07:30:54.091333   37062 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19370-13057/.minikube/bin
	I0806 07:30:54.091574   37062 out.go:298] Setting JSON to false
	I0806 07:30:54.091623   37062 mustload.go:65] Loading cluster: ha-118173
	I0806 07:30:54.091717   37062 notify.go:220] Checking for updates...
	I0806 07:30:54.092112   37062 config.go:182] Loaded profile config "ha-118173": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0806 07:30:54.092130   37062 status.go:255] checking status of ha-118173 ...
	I0806 07:30:54.092623   37062 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:30:54.092675   37062 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:30:54.110573   37062 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46335
	I0806 07:30:54.111044   37062 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:30:54.111739   37062 main.go:141] libmachine: Using API Version  1
	I0806 07:30:54.111756   37062 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:30:54.112195   37062 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:30:54.112418   37062 main.go:141] libmachine: (ha-118173) Calling .GetState
	I0806 07:30:54.114166   37062 status.go:330] ha-118173 host status = "Running" (err=<nil>)
	I0806 07:30:54.114189   37062 host.go:66] Checking if "ha-118173" exists ...
	I0806 07:30:54.114611   37062 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:30:54.114658   37062 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:30:54.130782   37062 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43011
	I0806 07:30:54.131155   37062 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:30:54.131652   37062 main.go:141] libmachine: Using API Version  1
	I0806 07:30:54.131674   37062 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:30:54.131968   37062 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:30:54.132159   37062 main.go:141] libmachine: (ha-118173) Calling .GetIP
	I0806 07:30:54.134982   37062 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:30:54.135573   37062 main.go:141] libmachine: (ha-118173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:df:f0", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:22:33 +0000 UTC Type:0 Mac:52:54:00:ef:df:f0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:ha-118173 Clientid:01:52:54:00:ef:df:f0}
	I0806 07:30:54.135607   37062 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined IP address 192.168.39.164 and MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:30:54.135755   37062 host.go:66] Checking if "ha-118173" exists ...
	I0806 07:30:54.136055   37062 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:30:54.136099   37062 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:30:54.150994   37062 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39849
	I0806 07:30:54.151409   37062 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:30:54.151954   37062 main.go:141] libmachine: Using API Version  1
	I0806 07:30:54.151975   37062 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:30:54.152330   37062 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:30:54.152538   37062 main.go:141] libmachine: (ha-118173) Calling .DriverName
	I0806 07:30:54.152729   37062 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0806 07:30:54.152755   37062 main.go:141] libmachine: (ha-118173) Calling .GetSSHHostname
	I0806 07:30:54.155747   37062 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:30:54.156196   37062 main.go:141] libmachine: (ha-118173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:df:f0", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:22:33 +0000 UTC Type:0 Mac:52:54:00:ef:df:f0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:ha-118173 Clientid:01:52:54:00:ef:df:f0}
	I0806 07:30:54.156232   37062 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined IP address 192.168.39.164 and MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:30:54.156418   37062 main.go:141] libmachine: (ha-118173) Calling .GetSSHPort
	I0806 07:30:54.156648   37062 main.go:141] libmachine: (ha-118173) Calling .GetSSHKeyPath
	I0806 07:30:54.156809   37062 main.go:141] libmachine: (ha-118173) Calling .GetSSHUsername
	I0806 07:30:54.157001   37062 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/ha-118173/id_rsa Username:docker}
	I0806 07:30:54.240466   37062 ssh_runner.go:195] Run: systemctl --version
	I0806 07:30:54.246461   37062 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0806 07:30:54.263068   37062 kubeconfig.go:125] found "ha-118173" server: "https://192.168.39.254:8443"
	I0806 07:30:54.263096   37062 api_server.go:166] Checking apiserver status ...
	I0806 07:30:54.263125   37062 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 07:30:54.279441   37062 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1178/cgroup
	W0806 07:30:54.289706   37062 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1178/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0806 07:30:54.289764   37062 ssh_runner.go:195] Run: ls
	I0806 07:30:54.294539   37062 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0806 07:30:54.299077   37062 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0806 07:30:54.299098   37062 status.go:422] ha-118173 apiserver status = Running (err=<nil>)
	I0806 07:30:54.299107   37062 status.go:257] ha-118173 status: &{Name:ha-118173 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0806 07:30:54.299121   37062 status.go:255] checking status of ha-118173-m02 ...
	I0806 07:30:54.299423   37062 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:30:54.299457   37062 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:30:54.314144   37062 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43991
	I0806 07:30:54.314598   37062 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:30:54.315106   37062 main.go:141] libmachine: Using API Version  1
	I0806 07:30:54.315131   37062 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:30:54.315488   37062 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:30:54.315666   37062 main.go:141] libmachine: (ha-118173-m02) Calling .GetState
	I0806 07:30:54.317317   37062 status.go:330] ha-118173-m02 host status = "Running" (err=<nil>)
	I0806 07:30:54.317335   37062 host.go:66] Checking if "ha-118173-m02" exists ...
	I0806 07:30:54.317646   37062 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:30:54.317683   37062 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:30:54.332518   37062 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38711
	I0806 07:30:54.332960   37062 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:30:54.333387   37062 main.go:141] libmachine: Using API Version  1
	I0806 07:30:54.333422   37062 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:30:54.333737   37062 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:30:54.333983   37062 main.go:141] libmachine: (ha-118173-m02) Calling .GetIP
	I0806 07:30:54.336658   37062 main.go:141] libmachine: (ha-118173-m02) DBG | domain ha-118173-m02 has defined MAC address 52:54:00:b4:e9:38 in network mk-ha-118173
	I0806 07:30:54.337019   37062 main.go:141] libmachine: (ha-118173-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:e9:38", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:23:29 +0000 UTC Type:0 Mac:52:54:00:b4:e9:38 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-118173-m02 Clientid:01:52:54:00:b4:e9:38}
	I0806 07:30:54.337047   37062 main.go:141] libmachine: (ha-118173-m02) DBG | domain ha-118173-m02 has defined IP address 192.168.39.203 and MAC address 52:54:00:b4:e9:38 in network mk-ha-118173
	I0806 07:30:54.337157   37062 host.go:66] Checking if "ha-118173-m02" exists ...
	I0806 07:30:54.337484   37062 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:30:54.337524   37062 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:30:54.352382   37062 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34077
	I0806 07:30:54.352770   37062 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:30:54.353257   37062 main.go:141] libmachine: Using API Version  1
	I0806 07:30:54.353283   37062 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:30:54.353649   37062 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:30:54.353847   37062 main.go:141] libmachine: (ha-118173-m02) Calling .DriverName
	I0806 07:30:54.354059   37062 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0806 07:30:54.354080   37062 main.go:141] libmachine: (ha-118173-m02) Calling .GetSSHHostname
	I0806 07:30:54.356792   37062 main.go:141] libmachine: (ha-118173-m02) DBG | domain ha-118173-m02 has defined MAC address 52:54:00:b4:e9:38 in network mk-ha-118173
	I0806 07:30:54.357329   37062 main.go:141] libmachine: (ha-118173-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:e9:38", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:23:29 +0000 UTC Type:0 Mac:52:54:00:b4:e9:38 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-118173-m02 Clientid:01:52:54:00:b4:e9:38}
	I0806 07:30:54.357361   37062 main.go:141] libmachine: (ha-118173-m02) DBG | domain ha-118173-m02 has defined IP address 192.168.39.203 and MAC address 52:54:00:b4:e9:38 in network mk-ha-118173
	I0806 07:30:54.357632   37062 main.go:141] libmachine: (ha-118173-m02) Calling .GetSSHPort
	I0806 07:30:54.357894   37062 main.go:141] libmachine: (ha-118173-m02) Calling .GetSSHKeyPath
	I0806 07:30:54.358087   37062 main.go:141] libmachine: (ha-118173-m02) Calling .GetSSHUsername
	I0806 07:30:54.358293   37062 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/ha-118173-m02/id_rsa Username:docker}
	W0806 07:30:55.304603   37062 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.203:22: connect: no route to host
	I0806 07:30:55.304680   37062 retry.go:31] will retry after 367.531567ms: dial tcp 192.168.39.203:22: connect: no route to host
	W0806 07:30:58.724601   37062 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.203:22: connect: no route to host
	W0806 07:30:58.724749   37062 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.203:22: connect: no route to host
	E0806 07:30:58.724810   37062 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.203:22: connect: no route to host
	I0806 07:30:58.724826   37062 status.go:257] ha-118173-m02 status: &{Name:ha-118173-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0806 07:30:58.724855   37062 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.203:22: connect: no route to host
	I0806 07:30:58.724881   37062 status.go:255] checking status of ha-118173-m03 ...
	I0806 07:30:58.725201   37062 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:30:58.725266   37062 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:30:58.740504   37062 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44817
	I0806 07:30:58.740963   37062 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:30:58.741474   37062 main.go:141] libmachine: Using API Version  1
	I0806 07:30:58.741494   37062 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:30:58.741844   37062 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:30:58.742023   37062 main.go:141] libmachine: (ha-118173-m03) Calling .GetState
	I0806 07:30:58.743772   37062 status.go:330] ha-118173-m03 host status = "Running" (err=<nil>)
	I0806 07:30:58.743787   37062 host.go:66] Checking if "ha-118173-m03" exists ...
	I0806 07:30:58.744122   37062 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:30:58.744154   37062 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:30:58.758755   37062 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34963
	I0806 07:30:58.759286   37062 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:30:58.759744   37062 main.go:141] libmachine: Using API Version  1
	I0806 07:30:58.759769   37062 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:30:58.760080   37062 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:30:58.760281   37062 main.go:141] libmachine: (ha-118173-m03) Calling .GetIP
	I0806 07:30:58.763353   37062 main.go:141] libmachine: (ha-118173-m03) DBG | domain ha-118173-m03 has defined MAC address 52:54:00:d8:90:55 in network mk-ha-118173
	I0806 07:30:58.763791   37062 main.go:141] libmachine: (ha-118173-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:90:55", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:25:53 +0000 UTC Type:0 Mac:52:54:00:d8:90:55 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-118173-m03 Clientid:01:52:54:00:d8:90:55}
	I0806 07:30:58.763818   37062 main.go:141] libmachine: (ha-118173-m03) DBG | domain ha-118173-m03 has defined IP address 192.168.39.5 and MAC address 52:54:00:d8:90:55 in network mk-ha-118173
	I0806 07:30:58.763960   37062 host.go:66] Checking if "ha-118173-m03" exists ...
	I0806 07:30:58.764474   37062 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:30:58.764529   37062 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:30:58.779510   37062 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34139
	I0806 07:30:58.779881   37062 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:30:58.780393   37062 main.go:141] libmachine: Using API Version  1
	I0806 07:30:58.780412   37062 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:30:58.780732   37062 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:30:58.781026   37062 main.go:141] libmachine: (ha-118173-m03) Calling .DriverName
	I0806 07:30:58.781257   37062 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0806 07:30:58.781278   37062 main.go:141] libmachine: (ha-118173-m03) Calling .GetSSHHostname
	I0806 07:30:58.784140   37062 main.go:141] libmachine: (ha-118173-m03) DBG | domain ha-118173-m03 has defined MAC address 52:54:00:d8:90:55 in network mk-ha-118173
	I0806 07:30:58.784663   37062 main.go:141] libmachine: (ha-118173-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:90:55", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:25:53 +0000 UTC Type:0 Mac:52:54:00:d8:90:55 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-118173-m03 Clientid:01:52:54:00:d8:90:55}
	I0806 07:30:58.784691   37062 main.go:141] libmachine: (ha-118173-m03) DBG | domain ha-118173-m03 has defined IP address 192.168.39.5 and MAC address 52:54:00:d8:90:55 in network mk-ha-118173
	I0806 07:30:58.784841   37062 main.go:141] libmachine: (ha-118173-m03) Calling .GetSSHPort
	I0806 07:30:58.784996   37062 main.go:141] libmachine: (ha-118173-m03) Calling .GetSSHKeyPath
	I0806 07:30:58.785158   37062 main.go:141] libmachine: (ha-118173-m03) Calling .GetSSHUsername
	I0806 07:30:58.785303   37062 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/ha-118173-m03/id_rsa Username:docker}
	I0806 07:30:58.864731   37062 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0806 07:30:58.881765   37062 kubeconfig.go:125] found "ha-118173" server: "https://192.168.39.254:8443"
	I0806 07:30:58.881801   37062 api_server.go:166] Checking apiserver status ...
	I0806 07:30:58.881864   37062 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 07:30:58.898790   37062 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1553/cgroup
	W0806 07:30:58.908742   37062 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1553/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0806 07:30:58.908797   37062 ssh_runner.go:195] Run: ls
	I0806 07:30:58.913249   37062 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0806 07:30:58.917688   37062 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0806 07:30:58.917707   37062 status.go:422] ha-118173-m03 apiserver status = Running (err=<nil>)
	I0806 07:30:58.917715   37062 status.go:257] ha-118173-m03 status: &{Name:ha-118173-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0806 07:30:58.917729   37062 status.go:255] checking status of ha-118173-m04 ...
	I0806 07:30:58.918007   37062 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:30:58.918042   37062 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:30:58.932472   37062 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40577
	I0806 07:30:58.932869   37062 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:30:58.933357   37062 main.go:141] libmachine: Using API Version  1
	I0806 07:30:58.933382   37062 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:30:58.933706   37062 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:30:58.933901   37062 main.go:141] libmachine: (ha-118173-m04) Calling .GetState
	I0806 07:30:58.935465   37062 status.go:330] ha-118173-m04 host status = "Running" (err=<nil>)
	I0806 07:30:58.935481   37062 host.go:66] Checking if "ha-118173-m04" exists ...
	I0806 07:30:58.935861   37062 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:30:58.935894   37062 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:30:58.951548   37062 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44759
	I0806 07:30:58.951997   37062 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:30:58.952481   37062 main.go:141] libmachine: Using API Version  1
	I0806 07:30:58.952507   37062 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:30:58.952805   37062 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:30:58.953041   37062 main.go:141] libmachine: (ha-118173-m04) Calling .GetIP
	I0806 07:30:58.955615   37062 main.go:141] libmachine: (ha-118173-m04) DBG | domain ha-118173-m04 has defined MAC address 52:54:00:8f:be:12 in network mk-ha-118173
	I0806 07:30:58.956030   37062 main.go:141] libmachine: (ha-118173-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:be:12", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:27:21 +0000 UTC Type:0 Mac:52:54:00:8f:be:12 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:ha-118173-m04 Clientid:01:52:54:00:8f:be:12}
	I0806 07:30:58.956062   37062 main.go:141] libmachine: (ha-118173-m04) DBG | domain ha-118173-m04 has defined IP address 192.168.39.107 and MAC address 52:54:00:8f:be:12 in network mk-ha-118173
	I0806 07:30:58.956199   37062 host.go:66] Checking if "ha-118173-m04" exists ...
	I0806 07:30:58.956623   37062 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:30:58.956662   37062 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:30:58.971630   37062 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43147
	I0806 07:30:58.972130   37062 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:30:58.972689   37062 main.go:141] libmachine: Using API Version  1
	I0806 07:30:58.972710   37062 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:30:58.972985   37062 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:30:58.973142   37062 main.go:141] libmachine: (ha-118173-m04) Calling .DriverName
	I0806 07:30:58.973318   37062 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0806 07:30:58.973339   37062 main.go:141] libmachine: (ha-118173-m04) Calling .GetSSHHostname
	I0806 07:30:58.976121   37062 main.go:141] libmachine: (ha-118173-m04) DBG | domain ha-118173-m04 has defined MAC address 52:54:00:8f:be:12 in network mk-ha-118173
	I0806 07:30:58.976640   37062 main.go:141] libmachine: (ha-118173-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:be:12", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:27:21 +0000 UTC Type:0 Mac:52:54:00:8f:be:12 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:ha-118173-m04 Clientid:01:52:54:00:8f:be:12}
	I0806 07:30:58.976674   37062 main.go:141] libmachine: (ha-118173-m04) DBG | domain ha-118173-m04 has defined IP address 192.168.39.107 and MAC address 52:54:00:8f:be:12 in network mk-ha-118173
	I0806 07:30:58.976845   37062 main.go:141] libmachine: (ha-118173-m04) Calling .GetSSHPort
	I0806 07:30:58.977014   37062 main.go:141] libmachine: (ha-118173-m04) Calling .GetSSHKeyPath
	I0806 07:30:58.977210   37062 main.go:141] libmachine: (ha-118173-m04) Calling .GetSSHUsername
	I0806 07:30:58.977319   37062 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/ha-118173-m04/id_rsa Username:docker}
	I0806 07:30:59.060167   37062 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0806 07:30:59.077332   37062 status.go:257] ha-118173-m04 status: &{Name:ha-118173-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
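
For the worker node ha-118173-m04 the log only runs "sudo systemctl is-active --quiet kubelet" over SSH, which is why its status lists Kubelet:Running with APIServer and Kubeconfig marked Irrelevant. A minimal sketch of that check is below; it is hypothetical, runs the command locally rather than over SSH, and assumes a systemd host.

	// kubelet_check.go - hypothetical sketch of the worker kubelet check.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// "systemctl is-active --quiet kubelet" exits 0 when the unit is active.
		if err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run(); err != nil {
			fmt.Println("kubelet: Stopped or Nonexistent:", err)
			return
		}
		fmt.Println("kubelet: Running")
	}
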
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-118173 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-118173 status -v=7 --alsologtostderr: exit status 3 (3.981845785s)

                                                
                                                
-- stdout --
	ha-118173
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-118173-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-118173-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-118173-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0806 07:31:01.475222   37161 out.go:291] Setting OutFile to fd 1 ...
	I0806 07:31:01.475476   37161 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 07:31:01.475485   37161 out.go:304] Setting ErrFile to fd 2...
	I0806 07:31:01.475490   37161 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 07:31:01.475687   37161 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19370-13057/.minikube/bin
	I0806 07:31:01.475868   37161 out.go:298] Setting JSON to false
	I0806 07:31:01.475896   37161 mustload.go:65] Loading cluster: ha-118173
	I0806 07:31:01.475935   37161 notify.go:220] Checking for updates...
	I0806 07:31:01.476451   37161 config.go:182] Loaded profile config "ha-118173": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0806 07:31:01.476471   37161 status.go:255] checking status of ha-118173 ...
	I0806 07:31:01.477010   37161 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:31:01.477071   37161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:31:01.494431   37161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44431
	I0806 07:31:01.494880   37161 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:31:01.495460   37161 main.go:141] libmachine: Using API Version  1
	I0806 07:31:01.495490   37161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:31:01.495925   37161 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:31:01.496163   37161 main.go:141] libmachine: (ha-118173) Calling .GetState
	I0806 07:31:01.497863   37161 status.go:330] ha-118173 host status = "Running" (err=<nil>)
	I0806 07:31:01.497886   37161 host.go:66] Checking if "ha-118173" exists ...
	I0806 07:31:01.498242   37161 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:31:01.498293   37161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:31:01.514527   37161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39951
	I0806 07:31:01.514911   37161 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:31:01.515454   37161 main.go:141] libmachine: Using API Version  1
	I0806 07:31:01.515473   37161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:31:01.515806   37161 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:31:01.516022   37161 main.go:141] libmachine: (ha-118173) Calling .GetIP
	I0806 07:31:01.519453   37161 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:31:01.519925   37161 main.go:141] libmachine: (ha-118173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:df:f0", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:22:33 +0000 UTC Type:0 Mac:52:54:00:ef:df:f0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:ha-118173 Clientid:01:52:54:00:ef:df:f0}
	I0806 07:31:01.519952   37161 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined IP address 192.168.39.164 and MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:31:01.520111   37161 host.go:66] Checking if "ha-118173" exists ...
	I0806 07:31:01.520424   37161 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:31:01.520468   37161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:31:01.535025   37161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44317
	I0806 07:31:01.535418   37161 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:31:01.535855   37161 main.go:141] libmachine: Using API Version  1
	I0806 07:31:01.535875   37161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:31:01.536206   37161 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:31:01.536432   37161 main.go:141] libmachine: (ha-118173) Calling .DriverName
	I0806 07:31:01.536672   37161 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0806 07:31:01.536704   37161 main.go:141] libmachine: (ha-118173) Calling .GetSSHHostname
	I0806 07:31:01.539546   37161 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:31:01.540021   37161 main.go:141] libmachine: (ha-118173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:df:f0", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:22:33 +0000 UTC Type:0 Mac:52:54:00:ef:df:f0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:ha-118173 Clientid:01:52:54:00:ef:df:f0}
	I0806 07:31:01.540053   37161 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined IP address 192.168.39.164 and MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:31:01.540330   37161 main.go:141] libmachine: (ha-118173) Calling .GetSSHPort
	I0806 07:31:01.540513   37161 main.go:141] libmachine: (ha-118173) Calling .GetSSHKeyPath
	I0806 07:31:01.540716   37161 main.go:141] libmachine: (ha-118173) Calling .GetSSHUsername
	I0806 07:31:01.540867   37161 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/ha-118173/id_rsa Username:docker}
	I0806 07:31:01.621524   37161 ssh_runner.go:195] Run: systemctl --version
	I0806 07:31:01.631900   37161 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0806 07:31:01.649115   37161 kubeconfig.go:125] found "ha-118173" server: "https://192.168.39.254:8443"
	I0806 07:31:01.649146   37161 api_server.go:166] Checking apiserver status ...
	I0806 07:31:01.649198   37161 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 07:31:01.666918   37161 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1178/cgroup
	W0806 07:31:01.678988   37161 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1178/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0806 07:31:01.679044   37161 ssh_runner.go:195] Run: ls
	I0806 07:31:01.683625   37161 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0806 07:31:01.688302   37161 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0806 07:31:01.688329   37161 status.go:422] ha-118173 apiserver status = Running (err=<nil>)
	I0806 07:31:01.688341   37161 status.go:257] ha-118173 status: &{Name:ha-118173 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0806 07:31:01.688393   37161 status.go:255] checking status of ha-118173-m02 ...
	I0806 07:31:01.688802   37161 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:31:01.688852   37161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:31:01.706165   37161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36593
	I0806 07:31:01.706613   37161 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:31:01.707073   37161 main.go:141] libmachine: Using API Version  1
	I0806 07:31:01.707093   37161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:31:01.707466   37161 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:31:01.707667   37161 main.go:141] libmachine: (ha-118173-m02) Calling .GetState
	I0806 07:31:01.709393   37161 status.go:330] ha-118173-m02 host status = "Running" (err=<nil>)
	I0806 07:31:01.709410   37161 host.go:66] Checking if "ha-118173-m02" exists ...
	I0806 07:31:01.709726   37161 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:31:01.709761   37161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:31:01.724149   37161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36757
	I0806 07:31:01.724599   37161 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:31:01.725029   37161 main.go:141] libmachine: Using API Version  1
	I0806 07:31:01.725050   37161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:31:01.725335   37161 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:31:01.725506   37161 main.go:141] libmachine: (ha-118173-m02) Calling .GetIP
	I0806 07:31:01.728243   37161 main.go:141] libmachine: (ha-118173-m02) DBG | domain ha-118173-m02 has defined MAC address 52:54:00:b4:e9:38 in network mk-ha-118173
	I0806 07:31:01.728651   37161 main.go:141] libmachine: (ha-118173-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:e9:38", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:23:29 +0000 UTC Type:0 Mac:52:54:00:b4:e9:38 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-118173-m02 Clientid:01:52:54:00:b4:e9:38}
	I0806 07:31:01.728675   37161 main.go:141] libmachine: (ha-118173-m02) DBG | domain ha-118173-m02 has defined IP address 192.168.39.203 and MAC address 52:54:00:b4:e9:38 in network mk-ha-118173
	I0806 07:31:01.728801   37161 host.go:66] Checking if "ha-118173-m02" exists ...
	I0806 07:31:01.729175   37161 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:31:01.729211   37161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:31:01.744899   37161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41259
	I0806 07:31:01.745353   37161 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:31:01.745796   37161 main.go:141] libmachine: Using API Version  1
	I0806 07:31:01.745822   37161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:31:01.746126   37161 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:31:01.746360   37161 main.go:141] libmachine: (ha-118173-m02) Calling .DriverName
	I0806 07:31:01.746558   37161 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0806 07:31:01.746577   37161 main.go:141] libmachine: (ha-118173-m02) Calling .GetSSHHostname
	I0806 07:31:01.749311   37161 main.go:141] libmachine: (ha-118173-m02) DBG | domain ha-118173-m02 has defined MAC address 52:54:00:b4:e9:38 in network mk-ha-118173
	I0806 07:31:01.749639   37161 main.go:141] libmachine: (ha-118173-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:e9:38", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:23:29 +0000 UTC Type:0 Mac:52:54:00:b4:e9:38 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-118173-m02 Clientid:01:52:54:00:b4:e9:38}
	I0806 07:31:01.749667   37161 main.go:141] libmachine: (ha-118173-m02) DBG | domain ha-118173-m02 has defined IP address 192.168.39.203 and MAC address 52:54:00:b4:e9:38 in network mk-ha-118173
	I0806 07:31:01.749737   37161 main.go:141] libmachine: (ha-118173-m02) Calling .GetSSHPort
	I0806 07:31:01.749883   37161 main.go:141] libmachine: (ha-118173-m02) Calling .GetSSHKeyPath
	I0806 07:31:01.750027   37161 main.go:141] libmachine: (ha-118173-m02) Calling .GetSSHUsername
	I0806 07:31:01.750188   37161 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/ha-118173-m02/id_rsa Username:docker}
	W0806 07:31:01.796604   37161 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.203:22: connect: no route to host
	I0806 07:31:01.796692   37161 retry.go:31] will retry after 208.294857ms: dial tcp 192.168.39.203:22: connect: no route to host
	W0806 07:31:05.060627   37161 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.203:22: connect: no route to host
	W0806 07:31:05.060714   37161 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.203:22: connect: no route to host
	E0806 07:31:05.060734   37161 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.203:22: connect: no route to host
	I0806 07:31:05.060742   37161 status.go:257] ha-118173-m02 status: &{Name:ha-118173-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0806 07:31:05.060760   37161 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.203:22: connect: no route to host
	I0806 07:31:05.060790   37161 status.go:255] checking status of ha-118173-m03 ...
	I0806 07:31:05.061223   37161 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:31:05.061276   37161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:31:05.077146   37161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33833
	I0806 07:31:05.077627   37161 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:31:05.078129   37161 main.go:141] libmachine: Using API Version  1
	I0806 07:31:05.078154   37161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:31:05.078470   37161 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:31:05.078682   37161 main.go:141] libmachine: (ha-118173-m03) Calling .GetState
	I0806 07:31:05.080718   37161 status.go:330] ha-118173-m03 host status = "Running" (err=<nil>)
	I0806 07:31:05.080736   37161 host.go:66] Checking if "ha-118173-m03" exists ...
	I0806 07:31:05.081087   37161 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:31:05.081141   37161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:31:05.096506   37161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41417
	I0806 07:31:05.096900   37161 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:31:05.097433   37161 main.go:141] libmachine: Using API Version  1
	I0806 07:31:05.097453   37161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:31:05.097782   37161 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:31:05.098053   37161 main.go:141] libmachine: (ha-118173-m03) Calling .GetIP
	I0806 07:31:05.101093   37161 main.go:141] libmachine: (ha-118173-m03) DBG | domain ha-118173-m03 has defined MAC address 52:54:00:d8:90:55 in network mk-ha-118173
	I0806 07:31:05.101527   37161 main.go:141] libmachine: (ha-118173-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:90:55", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:25:53 +0000 UTC Type:0 Mac:52:54:00:d8:90:55 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-118173-m03 Clientid:01:52:54:00:d8:90:55}
	I0806 07:31:05.101557   37161 main.go:141] libmachine: (ha-118173-m03) DBG | domain ha-118173-m03 has defined IP address 192.168.39.5 and MAC address 52:54:00:d8:90:55 in network mk-ha-118173
	I0806 07:31:05.101614   37161 host.go:66] Checking if "ha-118173-m03" exists ...
	I0806 07:31:05.101930   37161 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:31:05.101977   37161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:31:05.117606   37161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33147
	I0806 07:31:05.118115   37161 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:31:05.118665   37161 main.go:141] libmachine: Using API Version  1
	I0806 07:31:05.118694   37161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:31:05.119060   37161 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:31:05.119264   37161 main.go:141] libmachine: (ha-118173-m03) Calling .DriverName
	I0806 07:31:05.119482   37161 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0806 07:31:05.119506   37161 main.go:141] libmachine: (ha-118173-m03) Calling .GetSSHHostname
	I0806 07:31:05.122695   37161 main.go:141] libmachine: (ha-118173-m03) DBG | domain ha-118173-m03 has defined MAC address 52:54:00:d8:90:55 in network mk-ha-118173
	I0806 07:31:05.123222   37161 main.go:141] libmachine: (ha-118173-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:90:55", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:25:53 +0000 UTC Type:0 Mac:52:54:00:d8:90:55 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-118173-m03 Clientid:01:52:54:00:d8:90:55}
	I0806 07:31:05.123236   37161 main.go:141] libmachine: (ha-118173-m03) DBG | domain ha-118173-m03 has defined IP address 192.168.39.5 and MAC address 52:54:00:d8:90:55 in network mk-ha-118173
	I0806 07:31:05.123415   37161 main.go:141] libmachine: (ha-118173-m03) Calling .GetSSHPort
	I0806 07:31:05.123613   37161 main.go:141] libmachine: (ha-118173-m03) Calling .GetSSHKeyPath
	I0806 07:31:05.123761   37161 main.go:141] libmachine: (ha-118173-m03) Calling .GetSSHUsername
	I0806 07:31:05.123920   37161 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/ha-118173-m03/id_rsa Username:docker}
	I0806 07:31:05.201300   37161 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0806 07:31:05.217201   37161 kubeconfig.go:125] found "ha-118173" server: "https://192.168.39.254:8443"
	I0806 07:31:05.217234   37161 api_server.go:166] Checking apiserver status ...
	I0806 07:31:05.217267   37161 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 07:31:05.232492   37161 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1553/cgroup
	W0806 07:31:05.244067   37161 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1553/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0806 07:31:05.244150   37161 ssh_runner.go:195] Run: ls
	I0806 07:31:05.248931   37161 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0806 07:31:05.253105   37161 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0806 07:31:05.253128   37161 status.go:422] ha-118173-m03 apiserver status = Running (err=<nil>)
	I0806 07:31:05.253136   37161 status.go:257] ha-118173-m03 status: &{Name:ha-118173-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0806 07:31:05.253151   37161 status.go:255] checking status of ha-118173-m04 ...
	I0806 07:31:05.253444   37161 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:31:05.253483   37161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:31:05.268946   37161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42807
	I0806 07:31:05.269425   37161 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:31:05.269950   37161 main.go:141] libmachine: Using API Version  1
	I0806 07:31:05.269967   37161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:31:05.270304   37161 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:31:05.270548   37161 main.go:141] libmachine: (ha-118173-m04) Calling .GetState
	I0806 07:31:05.272200   37161 status.go:330] ha-118173-m04 host status = "Running" (err=<nil>)
	I0806 07:31:05.272217   37161 host.go:66] Checking if "ha-118173-m04" exists ...
	I0806 07:31:05.272715   37161 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:31:05.272756   37161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:31:05.287246   37161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40093
	I0806 07:31:05.287646   37161 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:31:05.288083   37161 main.go:141] libmachine: Using API Version  1
	I0806 07:31:05.288102   37161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:31:05.288438   37161 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:31:05.288637   37161 main.go:141] libmachine: (ha-118173-m04) Calling .GetIP
	I0806 07:31:05.291688   37161 main.go:141] libmachine: (ha-118173-m04) DBG | domain ha-118173-m04 has defined MAC address 52:54:00:8f:be:12 in network mk-ha-118173
	I0806 07:31:05.292222   37161 main.go:141] libmachine: (ha-118173-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:be:12", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:27:21 +0000 UTC Type:0 Mac:52:54:00:8f:be:12 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:ha-118173-m04 Clientid:01:52:54:00:8f:be:12}
	I0806 07:31:05.292250   37161 main.go:141] libmachine: (ha-118173-m04) DBG | domain ha-118173-m04 has defined IP address 192.168.39.107 and MAC address 52:54:00:8f:be:12 in network mk-ha-118173
	I0806 07:31:05.292409   37161 host.go:66] Checking if "ha-118173-m04" exists ...
	I0806 07:31:05.292733   37161 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:31:05.292770   37161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:31:05.307830   37161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36169
	I0806 07:31:05.308203   37161 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:31:05.308647   37161 main.go:141] libmachine: Using API Version  1
	I0806 07:31:05.308669   37161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:31:05.308926   37161 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:31:05.309110   37161 main.go:141] libmachine: (ha-118173-m04) Calling .DriverName
	I0806 07:31:05.309331   37161 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0806 07:31:05.309348   37161 main.go:141] libmachine: (ha-118173-m04) Calling .GetSSHHostname
	I0806 07:31:05.312201   37161 main.go:141] libmachine: (ha-118173-m04) DBG | domain ha-118173-m04 has defined MAC address 52:54:00:8f:be:12 in network mk-ha-118173
	I0806 07:31:05.312661   37161 main.go:141] libmachine: (ha-118173-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:be:12", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:27:21 +0000 UTC Type:0 Mac:52:54:00:8f:be:12 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:ha-118173-m04 Clientid:01:52:54:00:8f:be:12}
	I0806 07:31:05.312700   37161 main.go:141] libmachine: (ha-118173-m04) DBG | domain ha-118173-m04 has defined IP address 192.168.39.107 and MAC address 52:54:00:8f:be:12 in network mk-ha-118173
	I0806 07:31:05.312738   37161 main.go:141] libmachine: (ha-118173-m04) Calling .GetSSHPort
	I0806 07:31:05.312916   37161 main.go:141] libmachine: (ha-118173-m04) Calling .GetSSHKeyPath
	I0806 07:31:05.313064   37161 main.go:141] libmachine: (ha-118173-m04) Calling .GetSSHUsername
	I0806 07:31:05.313183   37161 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/ha-118173-m04/id_rsa Username:docker}
	I0806 07:31:05.395751   37161 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0806 07:31:05.410475   37161 status.go:257] ha-118173-m04 status: &{Name:ha-118173-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-118173 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-118173 status -v=7 --alsologtostderr: exit status 3 (3.744277485s)

                                                
                                                
-- stdout --
	ha-118173
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-118173-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-118173-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-118173-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0806 07:31:08.852709   37278 out.go:291] Setting OutFile to fd 1 ...
	I0806 07:31:08.853087   37278 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 07:31:08.853102   37278 out.go:304] Setting ErrFile to fd 2...
	I0806 07:31:08.853108   37278 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 07:31:08.853343   37278 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19370-13057/.minikube/bin
	I0806 07:31:08.853552   37278 out.go:298] Setting JSON to false
	I0806 07:31:08.853578   37278 mustload.go:65] Loading cluster: ha-118173
	I0806 07:31:08.853633   37278 notify.go:220] Checking for updates...
	I0806 07:31:08.854011   37278 config.go:182] Loaded profile config "ha-118173": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0806 07:31:08.854027   37278 status.go:255] checking status of ha-118173 ...
	I0806 07:31:08.854429   37278 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:31:08.854501   37278 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:31:08.874609   37278 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34393
	I0806 07:31:08.875074   37278 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:31:08.875655   37278 main.go:141] libmachine: Using API Version  1
	I0806 07:31:08.875679   37278 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:31:08.876046   37278 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:31:08.876265   37278 main.go:141] libmachine: (ha-118173) Calling .GetState
	I0806 07:31:08.877751   37278 status.go:330] ha-118173 host status = "Running" (err=<nil>)
	I0806 07:31:08.877773   37278 host.go:66] Checking if "ha-118173" exists ...
	I0806 07:31:08.878074   37278 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:31:08.878104   37278 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:31:08.893112   37278 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39229
	I0806 07:31:08.893651   37278 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:31:08.894208   37278 main.go:141] libmachine: Using API Version  1
	I0806 07:31:08.894240   37278 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:31:08.894673   37278 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:31:08.894857   37278 main.go:141] libmachine: (ha-118173) Calling .GetIP
	I0806 07:31:08.897607   37278 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:31:08.898063   37278 main.go:141] libmachine: (ha-118173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:df:f0", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:22:33 +0000 UTC Type:0 Mac:52:54:00:ef:df:f0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:ha-118173 Clientid:01:52:54:00:ef:df:f0}
	I0806 07:31:08.898098   37278 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined IP address 192.168.39.164 and MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:31:08.898184   37278 host.go:66] Checking if "ha-118173" exists ...
	I0806 07:31:08.898455   37278 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:31:08.898500   37278 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:31:08.914105   37278 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33737
	I0806 07:31:08.914518   37278 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:31:08.914953   37278 main.go:141] libmachine: Using API Version  1
	I0806 07:31:08.914977   37278 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:31:08.915305   37278 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:31:08.915563   37278 main.go:141] libmachine: (ha-118173) Calling .DriverName
	I0806 07:31:08.915735   37278 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0806 07:31:08.915767   37278 main.go:141] libmachine: (ha-118173) Calling .GetSSHHostname
	I0806 07:31:08.918563   37278 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:31:08.919007   37278 main.go:141] libmachine: (ha-118173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:df:f0", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:22:33 +0000 UTC Type:0 Mac:52:54:00:ef:df:f0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:ha-118173 Clientid:01:52:54:00:ef:df:f0}
	I0806 07:31:08.919044   37278 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined IP address 192.168.39.164 and MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:31:08.919254   37278 main.go:141] libmachine: (ha-118173) Calling .GetSSHPort
	I0806 07:31:08.919415   37278 main.go:141] libmachine: (ha-118173) Calling .GetSSHKeyPath
	I0806 07:31:08.919592   37278 main.go:141] libmachine: (ha-118173) Calling .GetSSHUsername
	I0806 07:31:08.919775   37278 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/ha-118173/id_rsa Username:docker}
	I0806 07:31:09.001207   37278 ssh_runner.go:195] Run: systemctl --version
	I0806 07:31:09.008018   37278 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0806 07:31:09.027594   37278 kubeconfig.go:125] found "ha-118173" server: "https://192.168.39.254:8443"
	I0806 07:31:09.027629   37278 api_server.go:166] Checking apiserver status ...
	I0806 07:31:09.027672   37278 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 07:31:09.044112   37278 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1178/cgroup
	W0806 07:31:09.055071   37278 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1178/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0806 07:31:09.055135   37278 ssh_runner.go:195] Run: ls
	I0806 07:31:09.060392   37278 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0806 07:31:09.064786   37278 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0806 07:31:09.064809   37278 status.go:422] ha-118173 apiserver status = Running (err=<nil>)
	I0806 07:31:09.064821   37278 status.go:257] ha-118173 status: &{Name:ha-118173 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0806 07:31:09.064843   37278 status.go:255] checking status of ha-118173-m02 ...
	I0806 07:31:09.065147   37278 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:31:09.065189   37278 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:31:09.080408   37278 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33811
	I0806 07:31:09.080828   37278 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:31:09.081317   37278 main.go:141] libmachine: Using API Version  1
	I0806 07:31:09.081336   37278 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:31:09.081774   37278 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:31:09.081947   37278 main.go:141] libmachine: (ha-118173-m02) Calling .GetState
	I0806 07:31:09.083650   37278 status.go:330] ha-118173-m02 host status = "Running" (err=<nil>)
	I0806 07:31:09.083666   37278 host.go:66] Checking if "ha-118173-m02" exists ...
	I0806 07:31:09.083983   37278 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:31:09.084034   37278 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:31:09.098756   37278 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39007
	I0806 07:31:09.099143   37278 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:31:09.099563   37278 main.go:141] libmachine: Using API Version  1
	I0806 07:31:09.099584   37278 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:31:09.099887   37278 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:31:09.100074   37278 main.go:141] libmachine: (ha-118173-m02) Calling .GetIP
	I0806 07:31:09.102879   37278 main.go:141] libmachine: (ha-118173-m02) DBG | domain ha-118173-m02 has defined MAC address 52:54:00:b4:e9:38 in network mk-ha-118173
	I0806 07:31:09.103331   37278 main.go:141] libmachine: (ha-118173-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:e9:38", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:23:29 +0000 UTC Type:0 Mac:52:54:00:b4:e9:38 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-118173-m02 Clientid:01:52:54:00:b4:e9:38}
	I0806 07:31:09.103355   37278 main.go:141] libmachine: (ha-118173-m02) DBG | domain ha-118173-m02 has defined IP address 192.168.39.203 and MAC address 52:54:00:b4:e9:38 in network mk-ha-118173
	I0806 07:31:09.103570   37278 host.go:66] Checking if "ha-118173-m02" exists ...
	I0806 07:31:09.103884   37278 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:31:09.103941   37278 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:31:09.119210   37278 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37377
	I0806 07:31:09.119583   37278 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:31:09.119983   37278 main.go:141] libmachine: Using API Version  1
	I0806 07:31:09.120005   37278 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:31:09.120332   37278 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:31:09.120538   37278 main.go:141] libmachine: (ha-118173-m02) Calling .DriverName
	I0806 07:31:09.120752   37278 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0806 07:31:09.120772   37278 main.go:141] libmachine: (ha-118173-m02) Calling .GetSSHHostname
	I0806 07:31:09.123811   37278 main.go:141] libmachine: (ha-118173-m02) DBG | domain ha-118173-m02 has defined MAC address 52:54:00:b4:e9:38 in network mk-ha-118173
	I0806 07:31:09.124250   37278 main.go:141] libmachine: (ha-118173-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:e9:38", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:23:29 +0000 UTC Type:0 Mac:52:54:00:b4:e9:38 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-118173-m02 Clientid:01:52:54:00:b4:e9:38}
	I0806 07:31:09.124277   37278 main.go:141] libmachine: (ha-118173-m02) DBG | domain ha-118173-m02 has defined IP address 192.168.39.203 and MAC address 52:54:00:b4:e9:38 in network mk-ha-118173
	I0806 07:31:09.124436   37278 main.go:141] libmachine: (ha-118173-m02) Calling .GetSSHPort
	I0806 07:31:09.124590   37278 main.go:141] libmachine: (ha-118173-m02) Calling .GetSSHKeyPath
	I0806 07:31:09.124738   37278 main.go:141] libmachine: (ha-118173-m02) Calling .GetSSHUsername
	I0806 07:31:09.124872   37278 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/ha-118173-m02/id_rsa Username:docker}
	W0806 07:31:12.196622   37278 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.203:22: connect: no route to host
	W0806 07:31:12.196704   37278 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.203:22: connect: no route to host
	E0806 07:31:12.196718   37278 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.203:22: connect: no route to host
	I0806 07:31:12.196727   37278 status.go:257] ha-118173-m02 status: &{Name:ha-118173-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0806 07:31:12.196749   37278 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.203:22: connect: no route to host
	I0806 07:31:12.196757   37278 status.go:255] checking status of ha-118173-m03 ...
	I0806 07:31:12.197079   37278 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:31:12.197125   37278 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:31:12.211931   37278 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45441
	I0806 07:31:12.212402   37278 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:31:12.212848   37278 main.go:141] libmachine: Using API Version  1
	I0806 07:31:12.212861   37278 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:31:12.213193   37278 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:31:12.213412   37278 main.go:141] libmachine: (ha-118173-m03) Calling .GetState
	I0806 07:31:12.215054   37278 status.go:330] ha-118173-m03 host status = "Running" (err=<nil>)
	I0806 07:31:12.215069   37278 host.go:66] Checking if "ha-118173-m03" exists ...
	I0806 07:31:12.215391   37278 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:31:12.215444   37278 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:31:12.230148   37278 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38145
	I0806 07:31:12.230564   37278 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:31:12.231095   37278 main.go:141] libmachine: Using API Version  1
	I0806 07:31:12.231120   37278 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:31:12.231465   37278 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:31:12.231655   37278 main.go:141] libmachine: (ha-118173-m03) Calling .GetIP
	I0806 07:31:12.234692   37278 main.go:141] libmachine: (ha-118173-m03) DBG | domain ha-118173-m03 has defined MAC address 52:54:00:d8:90:55 in network mk-ha-118173
	I0806 07:31:12.235134   37278 main.go:141] libmachine: (ha-118173-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:90:55", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:25:53 +0000 UTC Type:0 Mac:52:54:00:d8:90:55 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-118173-m03 Clientid:01:52:54:00:d8:90:55}
	I0806 07:31:12.235153   37278 main.go:141] libmachine: (ha-118173-m03) DBG | domain ha-118173-m03 has defined IP address 192.168.39.5 and MAC address 52:54:00:d8:90:55 in network mk-ha-118173
	I0806 07:31:12.235317   37278 host.go:66] Checking if "ha-118173-m03" exists ...
	I0806 07:31:12.235617   37278 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:31:12.235658   37278 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:31:12.251523   37278 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38473
	I0806 07:31:12.251966   37278 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:31:12.252451   37278 main.go:141] libmachine: Using API Version  1
	I0806 07:31:12.252477   37278 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:31:12.252848   37278 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:31:12.253024   37278 main.go:141] libmachine: (ha-118173-m03) Calling .DriverName
	I0806 07:31:12.253225   37278 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0806 07:31:12.253245   37278 main.go:141] libmachine: (ha-118173-m03) Calling .GetSSHHostname
	I0806 07:31:12.256025   37278 main.go:141] libmachine: (ha-118173-m03) DBG | domain ha-118173-m03 has defined MAC address 52:54:00:d8:90:55 in network mk-ha-118173
	I0806 07:31:12.256517   37278 main.go:141] libmachine: (ha-118173-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:90:55", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:25:53 +0000 UTC Type:0 Mac:52:54:00:d8:90:55 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-118173-m03 Clientid:01:52:54:00:d8:90:55}
	I0806 07:31:12.256555   37278 main.go:141] libmachine: (ha-118173-m03) DBG | domain ha-118173-m03 has defined IP address 192.168.39.5 and MAC address 52:54:00:d8:90:55 in network mk-ha-118173
	I0806 07:31:12.256711   37278 main.go:141] libmachine: (ha-118173-m03) Calling .GetSSHPort
	I0806 07:31:12.256900   37278 main.go:141] libmachine: (ha-118173-m03) Calling .GetSSHKeyPath
	I0806 07:31:12.257105   37278 main.go:141] libmachine: (ha-118173-m03) Calling .GetSSHUsername
	I0806 07:31:12.257286   37278 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/ha-118173-m03/id_rsa Username:docker}
	I0806 07:31:12.337263   37278 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0806 07:31:12.354625   37278 kubeconfig.go:125] found "ha-118173" server: "https://192.168.39.254:8443"
	I0806 07:31:12.354651   37278 api_server.go:166] Checking apiserver status ...
	I0806 07:31:12.354687   37278 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 07:31:12.371051   37278 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1553/cgroup
	W0806 07:31:12.382043   37278 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1553/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0806 07:31:12.382101   37278 ssh_runner.go:195] Run: ls
	I0806 07:31:12.387140   37278 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0806 07:31:12.393565   37278 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0806 07:31:12.393599   37278 status.go:422] ha-118173-m03 apiserver status = Running (err=<nil>)
	I0806 07:31:12.393610   37278 status.go:257] ha-118173-m03 status: &{Name:ha-118173-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0806 07:31:12.393637   37278 status.go:255] checking status of ha-118173-m04 ...
	I0806 07:31:12.393923   37278 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:31:12.393954   37278 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:31:12.408650   37278 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36363
	I0806 07:31:12.409173   37278 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:31:12.409688   37278 main.go:141] libmachine: Using API Version  1
	I0806 07:31:12.409727   37278 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:31:12.410000   37278 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:31:12.410193   37278 main.go:141] libmachine: (ha-118173-m04) Calling .GetState
	I0806 07:31:12.411783   37278 status.go:330] ha-118173-m04 host status = "Running" (err=<nil>)
	I0806 07:31:12.411799   37278 host.go:66] Checking if "ha-118173-m04" exists ...
	I0806 07:31:12.412073   37278 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:31:12.412110   37278 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:31:12.427225   37278 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42049
	I0806 07:31:12.427691   37278 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:31:12.428182   37278 main.go:141] libmachine: Using API Version  1
	I0806 07:31:12.428198   37278 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:31:12.428617   37278 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:31:12.428811   37278 main.go:141] libmachine: (ha-118173-m04) Calling .GetIP
	I0806 07:31:12.432035   37278 main.go:141] libmachine: (ha-118173-m04) DBG | domain ha-118173-m04 has defined MAC address 52:54:00:8f:be:12 in network mk-ha-118173
	I0806 07:31:12.432501   37278 main.go:141] libmachine: (ha-118173-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:be:12", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:27:21 +0000 UTC Type:0 Mac:52:54:00:8f:be:12 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:ha-118173-m04 Clientid:01:52:54:00:8f:be:12}
	I0806 07:31:12.432539   37278 main.go:141] libmachine: (ha-118173-m04) DBG | domain ha-118173-m04 has defined IP address 192.168.39.107 and MAC address 52:54:00:8f:be:12 in network mk-ha-118173
	I0806 07:31:12.432710   37278 host.go:66] Checking if "ha-118173-m04" exists ...
	I0806 07:31:12.433019   37278 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:31:12.433073   37278 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:31:12.448432   37278 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33607
	I0806 07:31:12.448863   37278 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:31:12.449338   37278 main.go:141] libmachine: Using API Version  1
	I0806 07:31:12.449373   37278 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:31:12.449718   37278 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:31:12.449917   37278 main.go:141] libmachine: (ha-118173-m04) Calling .DriverName
	I0806 07:31:12.450135   37278 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0806 07:31:12.450155   37278 main.go:141] libmachine: (ha-118173-m04) Calling .GetSSHHostname
	I0806 07:31:12.453027   37278 main.go:141] libmachine: (ha-118173-m04) DBG | domain ha-118173-m04 has defined MAC address 52:54:00:8f:be:12 in network mk-ha-118173
	I0806 07:31:12.453531   37278 main.go:141] libmachine: (ha-118173-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:be:12", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:27:21 +0000 UTC Type:0 Mac:52:54:00:8f:be:12 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:ha-118173-m04 Clientid:01:52:54:00:8f:be:12}
	I0806 07:31:12.453559   37278 main.go:141] libmachine: (ha-118173-m04) DBG | domain ha-118173-m04 has defined IP address 192.168.39.107 and MAC address 52:54:00:8f:be:12 in network mk-ha-118173
	I0806 07:31:12.453694   37278 main.go:141] libmachine: (ha-118173-m04) Calling .GetSSHPort
	I0806 07:31:12.453884   37278 main.go:141] libmachine: (ha-118173-m04) Calling .GetSSHKeyPath
	I0806 07:31:12.454053   37278 main.go:141] libmachine: (ha-118173-m04) Calling .GetSSHUsername
	I0806 07:31:12.454205   37278 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/ha-118173-m04/id_rsa Username:docker}
	I0806 07:31:12.536631   37278 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0806 07:31:12.551953   37278 status.go:257] ha-118173-m04 status: &{Name:ha-118173-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-118173 status -v=7 --alsologtostderr
E0806 07:31:21.824528   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/functional-811691/client.crt: no such file or directory
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-118173 status -v=7 --alsologtostderr: exit status 3 (3.748348561s)

                                                
                                                
-- stdout --
	ha-118173
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-118173-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-118173-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-118173-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0806 07:31:18.198393   37395 out.go:291] Setting OutFile to fd 1 ...
	I0806 07:31:18.198636   37395 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 07:31:18.198647   37395 out.go:304] Setting ErrFile to fd 2...
	I0806 07:31:18.198651   37395 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 07:31:18.198853   37395 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19370-13057/.minikube/bin
	I0806 07:31:18.199045   37395 out.go:298] Setting JSON to false
	I0806 07:31:18.199074   37395 mustload.go:65] Loading cluster: ha-118173
	I0806 07:31:18.199192   37395 notify.go:220] Checking for updates...
	I0806 07:31:18.199584   37395 config.go:182] Loaded profile config "ha-118173": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0806 07:31:18.199607   37395 status.go:255] checking status of ha-118173 ...
	I0806 07:31:18.200107   37395 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:31:18.200153   37395 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:31:18.215546   37395 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43899
	I0806 07:31:18.215981   37395 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:31:18.216564   37395 main.go:141] libmachine: Using API Version  1
	I0806 07:31:18.216600   37395 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:31:18.216981   37395 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:31:18.217191   37395 main.go:141] libmachine: (ha-118173) Calling .GetState
	I0806 07:31:18.219010   37395 status.go:330] ha-118173 host status = "Running" (err=<nil>)
	I0806 07:31:18.219034   37395 host.go:66] Checking if "ha-118173" exists ...
	I0806 07:31:18.219361   37395 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:31:18.219396   37395 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:31:18.234045   37395 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40485
	I0806 07:31:18.234473   37395 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:31:18.234962   37395 main.go:141] libmachine: Using API Version  1
	I0806 07:31:18.235011   37395 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:31:18.235388   37395 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:31:18.235629   37395 main.go:141] libmachine: (ha-118173) Calling .GetIP
	I0806 07:31:18.238559   37395 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:31:18.239032   37395 main.go:141] libmachine: (ha-118173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:df:f0", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:22:33 +0000 UTC Type:0 Mac:52:54:00:ef:df:f0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:ha-118173 Clientid:01:52:54:00:ef:df:f0}
	I0806 07:31:18.239065   37395 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined IP address 192.168.39.164 and MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:31:18.239223   37395 host.go:66] Checking if "ha-118173" exists ...
	I0806 07:31:18.239531   37395 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:31:18.239567   37395 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:31:18.255032   37395 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39877
	I0806 07:31:18.255496   37395 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:31:18.256018   37395 main.go:141] libmachine: Using API Version  1
	I0806 07:31:18.256038   37395 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:31:18.256445   37395 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:31:18.256712   37395 main.go:141] libmachine: (ha-118173) Calling .DriverName
	I0806 07:31:18.256934   37395 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0806 07:31:18.256954   37395 main.go:141] libmachine: (ha-118173) Calling .GetSSHHostname
	I0806 07:31:18.259959   37395 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:31:18.260422   37395 main.go:141] libmachine: (ha-118173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:df:f0", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:22:33 +0000 UTC Type:0 Mac:52:54:00:ef:df:f0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:ha-118173 Clientid:01:52:54:00:ef:df:f0}
	I0806 07:31:18.260445   37395 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined IP address 192.168.39.164 and MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:31:18.260620   37395 main.go:141] libmachine: (ha-118173) Calling .GetSSHPort
	I0806 07:31:18.260788   37395 main.go:141] libmachine: (ha-118173) Calling .GetSSHKeyPath
	I0806 07:31:18.260975   37395 main.go:141] libmachine: (ha-118173) Calling .GetSSHUsername
	I0806 07:31:18.261127   37395 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/ha-118173/id_rsa Username:docker}
	I0806 07:31:18.345846   37395 ssh_runner.go:195] Run: systemctl --version
	I0806 07:31:18.353131   37395 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0806 07:31:18.369938   37395 kubeconfig.go:125] found "ha-118173" server: "https://192.168.39.254:8443"
	I0806 07:31:18.369969   37395 api_server.go:166] Checking apiserver status ...
	I0806 07:31:18.370002   37395 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 07:31:18.386563   37395 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1178/cgroup
	W0806 07:31:18.399239   37395 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1178/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0806 07:31:18.399304   37395 ssh_runner.go:195] Run: ls
	I0806 07:31:18.404702   37395 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0806 07:31:18.410125   37395 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0806 07:31:18.410150   37395 status.go:422] ha-118173 apiserver status = Running (err=<nil>)
	I0806 07:31:18.410159   37395 status.go:257] ha-118173 status: &{Name:ha-118173 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0806 07:31:18.410174   37395 status.go:255] checking status of ha-118173-m02 ...
	I0806 07:31:18.410457   37395 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:31:18.410487   37395 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:31:18.426234   37395 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42555
	I0806 07:31:18.426704   37395 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:31:18.427205   37395 main.go:141] libmachine: Using API Version  1
	I0806 07:31:18.427226   37395 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:31:18.427510   37395 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:31:18.427681   37395 main.go:141] libmachine: (ha-118173-m02) Calling .GetState
	I0806 07:31:18.429178   37395 status.go:330] ha-118173-m02 host status = "Running" (err=<nil>)
	I0806 07:31:18.429195   37395 host.go:66] Checking if "ha-118173-m02" exists ...
	I0806 07:31:18.429518   37395 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:31:18.429552   37395 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:31:18.444747   37395 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37015
	I0806 07:31:18.445237   37395 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:31:18.445705   37395 main.go:141] libmachine: Using API Version  1
	I0806 07:31:18.445724   37395 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:31:18.446072   37395 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:31:18.446271   37395 main.go:141] libmachine: (ha-118173-m02) Calling .GetIP
	I0806 07:31:18.449165   37395 main.go:141] libmachine: (ha-118173-m02) DBG | domain ha-118173-m02 has defined MAC address 52:54:00:b4:e9:38 in network mk-ha-118173
	I0806 07:31:18.449610   37395 main.go:141] libmachine: (ha-118173-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:e9:38", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:23:29 +0000 UTC Type:0 Mac:52:54:00:b4:e9:38 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-118173-m02 Clientid:01:52:54:00:b4:e9:38}
	I0806 07:31:18.449630   37395 main.go:141] libmachine: (ha-118173-m02) DBG | domain ha-118173-m02 has defined IP address 192.168.39.203 and MAC address 52:54:00:b4:e9:38 in network mk-ha-118173
	I0806 07:31:18.449779   37395 host.go:66] Checking if "ha-118173-m02" exists ...
	I0806 07:31:18.450080   37395 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:31:18.450118   37395 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:31:18.465551   37395 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45115
	I0806 07:31:18.465993   37395 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:31:18.466452   37395 main.go:141] libmachine: Using API Version  1
	I0806 07:31:18.466473   37395 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:31:18.466776   37395 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:31:18.466944   37395 main.go:141] libmachine: (ha-118173-m02) Calling .DriverName
	I0806 07:31:18.467150   37395 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0806 07:31:18.467166   37395 main.go:141] libmachine: (ha-118173-m02) Calling .GetSSHHostname
	I0806 07:31:18.469711   37395 main.go:141] libmachine: (ha-118173-m02) DBG | domain ha-118173-m02 has defined MAC address 52:54:00:b4:e9:38 in network mk-ha-118173
	I0806 07:31:18.470080   37395 main.go:141] libmachine: (ha-118173-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:e9:38", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:23:29 +0000 UTC Type:0 Mac:52:54:00:b4:e9:38 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-118173-m02 Clientid:01:52:54:00:b4:e9:38}
	I0806 07:31:18.470094   37395 main.go:141] libmachine: (ha-118173-m02) DBG | domain ha-118173-m02 has defined IP address 192.168.39.203 and MAC address 52:54:00:b4:e9:38 in network mk-ha-118173
	I0806 07:31:18.470240   37395 main.go:141] libmachine: (ha-118173-m02) Calling .GetSSHPort
	I0806 07:31:18.470400   37395 main.go:141] libmachine: (ha-118173-m02) Calling .GetSSHKeyPath
	I0806 07:31:18.470586   37395 main.go:141] libmachine: (ha-118173-m02) Calling .GetSSHUsername
	I0806 07:31:18.470751   37395 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/ha-118173-m02/id_rsa Username:docker}
	W0806 07:31:21.544678   37395 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.203:22: connect: no route to host
	W0806 07:31:21.544766   37395 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.203:22: connect: no route to host
	E0806 07:31:21.544786   37395 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.203:22: connect: no route to host
	I0806 07:31:21.544795   37395 status.go:257] ha-118173-m02 status: &{Name:ha-118173-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0806 07:31:21.544813   37395 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.203:22: connect: no route to host
	I0806 07:31:21.544821   37395 status.go:255] checking status of ha-118173-m03 ...
	I0806 07:31:21.545156   37395 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:31:21.545214   37395 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:31:21.559887   37395 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43535
	I0806 07:31:21.560349   37395 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:31:21.560787   37395 main.go:141] libmachine: Using API Version  1
	I0806 07:31:21.560813   37395 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:31:21.561132   37395 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:31:21.561307   37395 main.go:141] libmachine: (ha-118173-m03) Calling .GetState
	I0806 07:31:21.562973   37395 status.go:330] ha-118173-m03 host status = "Running" (err=<nil>)
	I0806 07:31:21.562991   37395 host.go:66] Checking if "ha-118173-m03" exists ...
	I0806 07:31:21.563276   37395 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:31:21.563310   37395 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:31:21.580673   37395 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42459
	I0806 07:31:21.581063   37395 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:31:21.581492   37395 main.go:141] libmachine: Using API Version  1
	I0806 07:31:21.581511   37395 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:31:21.581798   37395 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:31:21.582015   37395 main.go:141] libmachine: (ha-118173-m03) Calling .GetIP
	I0806 07:31:21.584908   37395 main.go:141] libmachine: (ha-118173-m03) DBG | domain ha-118173-m03 has defined MAC address 52:54:00:d8:90:55 in network mk-ha-118173
	I0806 07:31:21.585377   37395 main.go:141] libmachine: (ha-118173-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:90:55", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:25:53 +0000 UTC Type:0 Mac:52:54:00:d8:90:55 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-118173-m03 Clientid:01:52:54:00:d8:90:55}
	I0806 07:31:21.585406   37395 main.go:141] libmachine: (ha-118173-m03) DBG | domain ha-118173-m03 has defined IP address 192.168.39.5 and MAC address 52:54:00:d8:90:55 in network mk-ha-118173
	I0806 07:31:21.585591   37395 host.go:66] Checking if "ha-118173-m03" exists ...
	I0806 07:31:21.585967   37395 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:31:21.586095   37395 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:31:21.601067   37395 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39749
	I0806 07:31:21.601524   37395 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:31:21.602085   37395 main.go:141] libmachine: Using API Version  1
	I0806 07:31:21.602102   37395 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:31:21.602368   37395 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:31:21.602509   37395 main.go:141] libmachine: (ha-118173-m03) Calling .DriverName
	I0806 07:31:21.602652   37395 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0806 07:31:21.602670   37395 main.go:141] libmachine: (ha-118173-m03) Calling .GetSSHHostname
	I0806 07:31:21.605853   37395 main.go:141] libmachine: (ha-118173-m03) DBG | domain ha-118173-m03 has defined MAC address 52:54:00:d8:90:55 in network mk-ha-118173
	I0806 07:31:21.606383   37395 main.go:141] libmachine: (ha-118173-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:90:55", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:25:53 +0000 UTC Type:0 Mac:52:54:00:d8:90:55 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-118173-m03 Clientid:01:52:54:00:d8:90:55}
	I0806 07:31:21.606411   37395 main.go:141] libmachine: (ha-118173-m03) DBG | domain ha-118173-m03 has defined IP address 192.168.39.5 and MAC address 52:54:00:d8:90:55 in network mk-ha-118173
	I0806 07:31:21.606533   37395 main.go:141] libmachine: (ha-118173-m03) Calling .GetSSHPort
	I0806 07:31:21.606710   37395 main.go:141] libmachine: (ha-118173-m03) Calling .GetSSHKeyPath
	I0806 07:31:21.606889   37395 main.go:141] libmachine: (ha-118173-m03) Calling .GetSSHUsername
	I0806 07:31:21.606999   37395 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/ha-118173-m03/id_rsa Username:docker}
	I0806 07:31:21.688400   37395 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0806 07:31:21.707369   37395 kubeconfig.go:125] found "ha-118173" server: "https://192.168.39.254:8443"
	I0806 07:31:21.707401   37395 api_server.go:166] Checking apiserver status ...
	I0806 07:31:21.707441   37395 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 07:31:21.722851   37395 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1553/cgroup
	W0806 07:31:21.733549   37395 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1553/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0806 07:31:21.733604   37395 ssh_runner.go:195] Run: ls
	I0806 07:31:21.739852   37395 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0806 07:31:21.744812   37395 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0806 07:31:21.744839   37395 status.go:422] ha-118173-m03 apiserver status = Running (err=<nil>)
	I0806 07:31:21.744848   37395 status.go:257] ha-118173-m03 status: &{Name:ha-118173-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0806 07:31:21.744864   37395 status.go:255] checking status of ha-118173-m04 ...
	I0806 07:31:21.745280   37395 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:31:21.745320   37395 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:31:21.760248   37395 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44201
	I0806 07:31:21.760769   37395 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:31:21.761302   37395 main.go:141] libmachine: Using API Version  1
	I0806 07:31:21.761328   37395 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:31:21.761649   37395 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:31:21.761830   37395 main.go:141] libmachine: (ha-118173-m04) Calling .GetState
	I0806 07:31:21.763422   37395 status.go:330] ha-118173-m04 host status = "Running" (err=<nil>)
	I0806 07:31:21.763436   37395 host.go:66] Checking if "ha-118173-m04" exists ...
	I0806 07:31:21.763702   37395 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:31:21.763738   37395 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:31:21.778605   37395 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42869
	I0806 07:31:21.779100   37395 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:31:21.779698   37395 main.go:141] libmachine: Using API Version  1
	I0806 07:31:21.779727   37395 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:31:21.780050   37395 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:31:21.780260   37395 main.go:141] libmachine: (ha-118173-m04) Calling .GetIP
	I0806 07:31:21.783256   37395 main.go:141] libmachine: (ha-118173-m04) DBG | domain ha-118173-m04 has defined MAC address 52:54:00:8f:be:12 in network mk-ha-118173
	I0806 07:31:21.783772   37395 main.go:141] libmachine: (ha-118173-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:be:12", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:27:21 +0000 UTC Type:0 Mac:52:54:00:8f:be:12 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:ha-118173-m04 Clientid:01:52:54:00:8f:be:12}
	I0806 07:31:21.783789   37395 main.go:141] libmachine: (ha-118173-m04) DBG | domain ha-118173-m04 has defined IP address 192.168.39.107 and MAC address 52:54:00:8f:be:12 in network mk-ha-118173
	I0806 07:31:21.783966   37395 host.go:66] Checking if "ha-118173-m04" exists ...
	I0806 07:31:21.784295   37395 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:31:21.784337   37395 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:31:21.799306   37395 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32909
	I0806 07:31:21.799855   37395 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:31:21.800464   37395 main.go:141] libmachine: Using API Version  1
	I0806 07:31:21.800491   37395 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:31:21.800839   37395 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:31:21.801047   37395 main.go:141] libmachine: (ha-118173-m04) Calling .DriverName
	I0806 07:31:21.801250   37395 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0806 07:31:21.801273   37395 main.go:141] libmachine: (ha-118173-m04) Calling .GetSSHHostname
	I0806 07:31:21.805120   37395 main.go:141] libmachine: (ha-118173-m04) DBG | domain ha-118173-m04 has defined MAC address 52:54:00:8f:be:12 in network mk-ha-118173
	I0806 07:31:21.805539   37395 main.go:141] libmachine: (ha-118173-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:be:12", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:27:21 +0000 UTC Type:0 Mac:52:54:00:8f:be:12 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:ha-118173-m04 Clientid:01:52:54:00:8f:be:12}
	I0806 07:31:21.805565   37395 main.go:141] libmachine: (ha-118173-m04) DBG | domain ha-118173-m04 has defined IP address 192.168.39.107 and MAC address 52:54:00:8f:be:12 in network mk-ha-118173
	I0806 07:31:21.805698   37395 main.go:141] libmachine: (ha-118173-m04) Calling .GetSSHPort
	I0806 07:31:21.805887   37395 main.go:141] libmachine: (ha-118173-m04) Calling .GetSSHKeyPath
	I0806 07:31:21.806079   37395 main.go:141] libmachine: (ha-118173-m04) Calling .GetSSHUsername
	I0806 07:31:21.806225   37395 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/ha-118173-m04/id_rsa Username:docker}
	I0806 07:31:21.887953   37395 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0806 07:31:21.903014   37395 status.go:257] ha-118173-m04 status: &{Name:ha-118173-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
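
Each per-node block in the stderr log above follows the same probe sequence: open an SSH session to the node, read /var usage with df -h /var | awk 'NR==2{print $5}', check the kubelet with systemctl is-active --quiet service kubelet, locate the apiserver process, and finally poll https://192.168.39.254:8443/healthz. The sketch below is a minimal, hypothetical reproduction of that sequence run locally rather than over SSH: the healthz address is copied from this log, the 5-second timeout is an assumption, and TLS verification is skipped purely for illustration (the real status code authenticates against the cluster CA).

// Hypothetical, simplified sketch of the per-node status probe seen in the log
// above. Runs the checks locally instead of over SSH; not the minikube code.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"os/exec"
	"time"
)

func main() {
	// Disk usage of /var, as in: sh -c "df -h /var | awk 'NR==2{print $5}'"
	out, err := exec.Command("sh", "-c", `df -h /var | awk 'NR==2{print $5}'`).Output()
	fmt.Printf("/var usage: %s (err=%v)\n", string(out), err)

	// Kubelet check, simplified from: sudo systemctl is-active --quiet service kubelet
	kubeletActive := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run() == nil
	fmt.Printf("kubelet active: %v\n", kubeletActive)

	// Apiserver health, as in: Checking apiserver healthz at https://192.168.39.254:8443/healthz
	// InsecureSkipVerify and the 5s timeout are illustrative assumptions only.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.39.254:8443/healthz")
	if err != nil {
		fmt.Println("healthz check failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("healthz returned:", resp.Status)
}
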
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-118173 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-118173 status -v=7 --alsologtostderr: exit status 7 (622.167367ms)

                                                
                                                
-- stdout --
	ha-118173
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-118173-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-118173-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-118173-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0806 07:31:30.509556   37531 out.go:291] Setting OutFile to fd 1 ...
	I0806 07:31:30.509653   37531 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 07:31:30.509660   37531 out.go:304] Setting ErrFile to fd 2...
	I0806 07:31:30.509665   37531 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 07:31:30.509867   37531 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19370-13057/.minikube/bin
	I0806 07:31:30.510026   37531 out.go:298] Setting JSON to false
	I0806 07:31:30.510050   37531 mustload.go:65] Loading cluster: ha-118173
	I0806 07:31:30.510149   37531 notify.go:220] Checking for updates...
	I0806 07:31:30.510409   37531 config.go:182] Loaded profile config "ha-118173": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0806 07:31:30.510422   37531 status.go:255] checking status of ha-118173 ...
	I0806 07:31:30.510750   37531 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:31:30.510792   37531 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:31:30.525296   37531 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37377
	I0806 07:31:30.525722   37531 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:31:30.526333   37531 main.go:141] libmachine: Using API Version  1
	I0806 07:31:30.526350   37531 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:31:30.526889   37531 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:31:30.527127   37531 main.go:141] libmachine: (ha-118173) Calling .GetState
	I0806 07:31:30.528889   37531 status.go:330] ha-118173 host status = "Running" (err=<nil>)
	I0806 07:31:30.528912   37531 host.go:66] Checking if "ha-118173" exists ...
	I0806 07:31:30.529264   37531 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:31:30.529300   37531 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:31:30.544009   37531 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33259
	I0806 07:31:30.544573   37531 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:31:30.545046   37531 main.go:141] libmachine: Using API Version  1
	I0806 07:31:30.545064   37531 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:31:30.545406   37531 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:31:30.545597   37531 main.go:141] libmachine: (ha-118173) Calling .GetIP
	I0806 07:31:30.548026   37531 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:31:30.548474   37531 main.go:141] libmachine: (ha-118173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:df:f0", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:22:33 +0000 UTC Type:0 Mac:52:54:00:ef:df:f0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:ha-118173 Clientid:01:52:54:00:ef:df:f0}
	I0806 07:31:30.548500   37531 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined IP address 192.168.39.164 and MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:31:30.548629   37531 host.go:66] Checking if "ha-118173" exists ...
	I0806 07:31:30.548977   37531 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:31:30.549015   37531 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:31:30.565268   37531 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44885
	I0806 07:31:30.565752   37531 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:31:30.566270   37531 main.go:141] libmachine: Using API Version  1
	I0806 07:31:30.566297   37531 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:31:30.566648   37531 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:31:30.566854   37531 main.go:141] libmachine: (ha-118173) Calling .DriverName
	I0806 07:31:30.567077   37531 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0806 07:31:30.567108   37531 main.go:141] libmachine: (ha-118173) Calling .GetSSHHostname
	I0806 07:31:30.570160   37531 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:31:30.570697   37531 main.go:141] libmachine: (ha-118173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:df:f0", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:22:33 +0000 UTC Type:0 Mac:52:54:00:ef:df:f0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:ha-118173 Clientid:01:52:54:00:ef:df:f0}
	I0806 07:31:30.570731   37531 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined IP address 192.168.39.164 and MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:31:30.571005   37531 main.go:141] libmachine: (ha-118173) Calling .GetSSHPort
	I0806 07:31:30.571221   37531 main.go:141] libmachine: (ha-118173) Calling .GetSSHKeyPath
	I0806 07:31:30.571400   37531 main.go:141] libmachine: (ha-118173) Calling .GetSSHUsername
	I0806 07:31:30.571610   37531 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/ha-118173/id_rsa Username:docker}
	I0806 07:31:30.652505   37531 ssh_runner.go:195] Run: systemctl --version
	I0806 07:31:30.659765   37531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0806 07:31:30.680315   37531 kubeconfig.go:125] found "ha-118173" server: "https://192.168.39.254:8443"
	I0806 07:31:30.680342   37531 api_server.go:166] Checking apiserver status ...
	I0806 07:31:30.680387   37531 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 07:31:30.698267   37531 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1178/cgroup
	W0806 07:31:30.710014   37531 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1178/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0806 07:31:30.710092   37531 ssh_runner.go:195] Run: ls
	I0806 07:31:30.715735   37531 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0806 07:31:30.720173   37531 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0806 07:31:30.720203   37531 status.go:422] ha-118173 apiserver status = Running (err=<nil>)
	I0806 07:31:30.720216   37531 status.go:257] ha-118173 status: &{Name:ha-118173 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0806 07:31:30.720237   37531 status.go:255] checking status of ha-118173-m02 ...
	I0806 07:31:30.720626   37531 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:31:30.720661   37531 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:31:30.735277   37531 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35413
	I0806 07:31:30.735787   37531 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:31:30.736290   37531 main.go:141] libmachine: Using API Version  1
	I0806 07:31:30.736309   37531 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:31:30.736649   37531 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:31:30.736906   37531 main.go:141] libmachine: (ha-118173-m02) Calling .GetState
	I0806 07:31:30.738507   37531 status.go:330] ha-118173-m02 host status = "Stopped" (err=<nil>)
	I0806 07:31:30.738519   37531 status.go:343] host is not running, skipping remaining checks
	I0806 07:31:30.738525   37531 status.go:257] ha-118173-m02 status: &{Name:ha-118173-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0806 07:31:30.738539   37531 status.go:255] checking status of ha-118173-m03 ...
	I0806 07:31:30.738915   37531 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:31:30.738958   37531 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:31:30.753367   37531 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36479
	I0806 07:31:30.753894   37531 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:31:30.754389   37531 main.go:141] libmachine: Using API Version  1
	I0806 07:31:30.754411   37531 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:31:30.754777   37531 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:31:30.754960   37531 main.go:141] libmachine: (ha-118173-m03) Calling .GetState
	I0806 07:31:30.756942   37531 status.go:330] ha-118173-m03 host status = "Running" (err=<nil>)
	I0806 07:31:30.756959   37531 host.go:66] Checking if "ha-118173-m03" exists ...
	I0806 07:31:30.757239   37531 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:31:30.757278   37531 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:31:30.772731   37531 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46631
	I0806 07:31:30.773235   37531 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:31:30.773728   37531 main.go:141] libmachine: Using API Version  1
	I0806 07:31:30.773756   37531 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:31:30.774099   37531 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:31:30.774288   37531 main.go:141] libmachine: (ha-118173-m03) Calling .GetIP
	I0806 07:31:30.777188   37531 main.go:141] libmachine: (ha-118173-m03) DBG | domain ha-118173-m03 has defined MAC address 52:54:00:d8:90:55 in network mk-ha-118173
	I0806 07:31:30.777683   37531 main.go:141] libmachine: (ha-118173-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:90:55", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:25:53 +0000 UTC Type:0 Mac:52:54:00:d8:90:55 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-118173-m03 Clientid:01:52:54:00:d8:90:55}
	I0806 07:31:30.777706   37531 main.go:141] libmachine: (ha-118173-m03) DBG | domain ha-118173-m03 has defined IP address 192.168.39.5 and MAC address 52:54:00:d8:90:55 in network mk-ha-118173
	I0806 07:31:30.777866   37531 host.go:66] Checking if "ha-118173-m03" exists ...
	I0806 07:31:30.778274   37531 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:31:30.778314   37531 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:31:30.793245   37531 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35597
	I0806 07:31:30.793782   37531 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:31:30.794283   37531 main.go:141] libmachine: Using API Version  1
	I0806 07:31:30.794309   37531 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:31:30.794637   37531 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:31:30.794824   37531 main.go:141] libmachine: (ha-118173-m03) Calling .DriverName
	I0806 07:31:30.795026   37531 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0806 07:31:30.795045   37531 main.go:141] libmachine: (ha-118173-m03) Calling .GetSSHHostname
	I0806 07:31:30.798282   37531 main.go:141] libmachine: (ha-118173-m03) DBG | domain ha-118173-m03 has defined MAC address 52:54:00:d8:90:55 in network mk-ha-118173
	I0806 07:31:30.798840   37531 main.go:141] libmachine: (ha-118173-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:90:55", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:25:53 +0000 UTC Type:0 Mac:52:54:00:d8:90:55 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-118173-m03 Clientid:01:52:54:00:d8:90:55}
	I0806 07:31:30.798861   37531 main.go:141] libmachine: (ha-118173-m03) DBG | domain ha-118173-m03 has defined IP address 192.168.39.5 and MAC address 52:54:00:d8:90:55 in network mk-ha-118173
	I0806 07:31:30.799109   37531 main.go:141] libmachine: (ha-118173-m03) Calling .GetSSHPort
	I0806 07:31:30.799330   37531 main.go:141] libmachine: (ha-118173-m03) Calling .GetSSHKeyPath
	I0806 07:31:30.799502   37531 main.go:141] libmachine: (ha-118173-m03) Calling .GetSSHUsername
	I0806 07:31:30.799700   37531 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/ha-118173-m03/id_rsa Username:docker}
	I0806 07:31:30.876527   37531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0806 07:31:30.891581   37531 kubeconfig.go:125] found "ha-118173" server: "https://192.168.39.254:8443"
	I0806 07:31:30.891616   37531 api_server.go:166] Checking apiserver status ...
	I0806 07:31:30.891667   37531 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 07:31:30.906236   37531 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1553/cgroup
	W0806 07:31:30.916399   37531 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1553/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0806 07:31:30.916449   37531 ssh_runner.go:195] Run: ls
	I0806 07:31:30.921891   37531 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0806 07:31:30.928993   37531 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0806 07:31:30.929020   37531 status.go:422] ha-118173-m03 apiserver status = Running (err=<nil>)
	I0806 07:31:30.929029   37531 status.go:257] ha-118173-m03 status: &{Name:ha-118173-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0806 07:31:30.929044   37531 status.go:255] checking status of ha-118173-m04 ...
	I0806 07:31:30.929368   37531 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:31:30.929400   37531 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:31:30.944046   37531 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34083
	I0806 07:31:30.944553   37531 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:31:30.945133   37531 main.go:141] libmachine: Using API Version  1
	I0806 07:31:30.945162   37531 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:31:30.945556   37531 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:31:30.945750   37531 main.go:141] libmachine: (ha-118173-m04) Calling .GetState
	I0806 07:31:30.947457   37531 status.go:330] ha-118173-m04 host status = "Running" (err=<nil>)
	I0806 07:31:30.947483   37531 host.go:66] Checking if "ha-118173-m04" exists ...
	I0806 07:31:30.947738   37531 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:31:30.947768   37531 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:31:30.962567   37531 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45177
	I0806 07:31:30.962974   37531 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:31:30.963416   37531 main.go:141] libmachine: Using API Version  1
	I0806 07:31:30.963435   37531 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:31:30.963757   37531 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:31:30.963971   37531 main.go:141] libmachine: (ha-118173-m04) Calling .GetIP
	I0806 07:31:30.966885   37531 main.go:141] libmachine: (ha-118173-m04) DBG | domain ha-118173-m04 has defined MAC address 52:54:00:8f:be:12 in network mk-ha-118173
	I0806 07:31:30.967324   37531 main.go:141] libmachine: (ha-118173-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:be:12", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:27:21 +0000 UTC Type:0 Mac:52:54:00:8f:be:12 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:ha-118173-m04 Clientid:01:52:54:00:8f:be:12}
	I0806 07:31:30.967355   37531 main.go:141] libmachine: (ha-118173-m04) DBG | domain ha-118173-m04 has defined IP address 192.168.39.107 and MAC address 52:54:00:8f:be:12 in network mk-ha-118173
	I0806 07:31:30.967475   37531 host.go:66] Checking if "ha-118173-m04" exists ...
	I0806 07:31:30.967781   37531 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:31:30.967833   37531 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:31:30.983726   37531 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43013
	I0806 07:31:30.984154   37531 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:31:30.984661   37531 main.go:141] libmachine: Using API Version  1
	I0806 07:31:30.984680   37531 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:31:30.984993   37531 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:31:30.985175   37531 main.go:141] libmachine: (ha-118173-m04) Calling .DriverName
	I0806 07:31:30.985344   37531 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0806 07:31:30.985369   37531 main.go:141] libmachine: (ha-118173-m04) Calling .GetSSHHostname
	I0806 07:31:30.988197   37531 main.go:141] libmachine: (ha-118173-m04) DBG | domain ha-118173-m04 has defined MAC address 52:54:00:8f:be:12 in network mk-ha-118173
	I0806 07:31:30.988647   37531 main.go:141] libmachine: (ha-118173-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:be:12", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:27:21 +0000 UTC Type:0 Mac:52:54:00:8f:be:12 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:ha-118173-m04 Clientid:01:52:54:00:8f:be:12}
	I0806 07:31:30.988683   37531 main.go:141] libmachine: (ha-118173-m04) DBG | domain ha-118173-m04 has defined IP address 192.168.39.107 and MAC address 52:54:00:8f:be:12 in network mk-ha-118173
	I0806 07:31:30.988813   37531 main.go:141] libmachine: (ha-118173-m04) Calling .GetSSHPort
	I0806 07:31:30.988977   37531 main.go:141] libmachine: (ha-118173-m04) Calling .GetSSHKeyPath
	I0806 07:31:30.989136   37531 main.go:141] libmachine: (ha-118173-m04) Calling .GetSSHUsername
	I0806 07:31:30.989276   37531 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/ha-118173-m04/id_rsa Username:docker}
	I0806 07:31:31.072566   37531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0806 07:31:31.088338   37531 status.go:257] ha-118173-m04 status: &{Name:ha-118173-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
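
The two status runs above surface the m02 failure differently: in the first run (pid 37395) the node was still being probed over SSH when the dial to 192.168.39.203:22 failed with "no route to host", so it was reported as Host:Error with Kubelet and APIServer Nonexistent; in the later runs (pids 37531 and 37636) the driver's GetState call reported the machine as stopped, so the same node shows Host:Stopped throughout. The hypothetical sketch below illustrates only the reachability half of that check; the address is taken from this log and the 3-second timeout is an assumption.

// Hypothetical reachability check mirroring the SSH dial seen in the first
// status run above; not the minikube implementation.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	addr := "192.168.39.203:22" // m02 SSH endpoint from this log
	conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
	if err != nil {
		// Mirrors the first run: host record present, but SSH unreachable.
		fmt.Printf("%s unreachable (%v) -> would be reported as Host:Error\n", addr, err)
		return
	}
	conn.Close()
	fmt.Printf("%s reachable -> continue with kubelet/apiserver checks\n", addr)
}
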
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-118173 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-118173 status -v=7 --alsologtostderr: exit status 7 (614.717226ms)

                                                
                                                
-- stdout --
	ha-118173
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-118173-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-118173-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-118173-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0806 07:31:40.108937   37636 out.go:291] Setting OutFile to fd 1 ...
	I0806 07:31:40.109189   37636 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 07:31:40.109198   37636 out.go:304] Setting ErrFile to fd 2...
	I0806 07:31:40.109201   37636 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 07:31:40.109363   37636 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19370-13057/.minikube/bin
	I0806 07:31:40.109524   37636 out.go:298] Setting JSON to false
	I0806 07:31:40.109548   37636 mustload.go:65] Loading cluster: ha-118173
	I0806 07:31:40.109658   37636 notify.go:220] Checking for updates...
	I0806 07:31:40.110003   37636 config.go:182] Loaded profile config "ha-118173": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0806 07:31:40.110019   37636 status.go:255] checking status of ha-118173 ...
	I0806 07:31:40.110411   37636 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:31:40.110461   37636 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:31:40.127991   37636 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43849
	I0806 07:31:40.128389   37636 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:31:40.128899   37636 main.go:141] libmachine: Using API Version  1
	I0806 07:31:40.128919   37636 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:31:40.129325   37636 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:31:40.129533   37636 main.go:141] libmachine: (ha-118173) Calling .GetState
	I0806 07:31:40.131101   37636 status.go:330] ha-118173 host status = "Running" (err=<nil>)
	I0806 07:31:40.131121   37636 host.go:66] Checking if "ha-118173" exists ...
	I0806 07:31:40.131407   37636 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:31:40.131446   37636 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:31:40.146322   37636 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40047
	I0806 07:31:40.146850   37636 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:31:40.147364   37636 main.go:141] libmachine: Using API Version  1
	I0806 07:31:40.147385   37636 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:31:40.147683   37636 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:31:40.147842   37636 main.go:141] libmachine: (ha-118173) Calling .GetIP
	I0806 07:31:40.150748   37636 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:31:40.151228   37636 main.go:141] libmachine: (ha-118173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:df:f0", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:22:33 +0000 UTC Type:0 Mac:52:54:00:ef:df:f0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:ha-118173 Clientid:01:52:54:00:ef:df:f0}
	I0806 07:31:40.151255   37636 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined IP address 192.168.39.164 and MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:31:40.151400   37636 host.go:66] Checking if "ha-118173" exists ...
	I0806 07:31:40.151690   37636 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:31:40.151732   37636 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:31:40.166174   37636 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33879
	I0806 07:31:40.166553   37636 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:31:40.167060   37636 main.go:141] libmachine: Using API Version  1
	I0806 07:31:40.167088   37636 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:31:40.167438   37636 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:31:40.167632   37636 main.go:141] libmachine: (ha-118173) Calling .DriverName
	I0806 07:31:40.167861   37636 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0806 07:31:40.167879   37636 main.go:141] libmachine: (ha-118173) Calling .GetSSHHostname
	I0806 07:31:40.171063   37636 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:31:40.171575   37636 main.go:141] libmachine: (ha-118173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:df:f0", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:22:33 +0000 UTC Type:0 Mac:52:54:00:ef:df:f0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:ha-118173 Clientid:01:52:54:00:ef:df:f0}
	I0806 07:31:40.171609   37636 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined IP address 192.168.39.164 and MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:31:40.171772   37636 main.go:141] libmachine: (ha-118173) Calling .GetSSHPort
	I0806 07:31:40.171956   37636 main.go:141] libmachine: (ha-118173) Calling .GetSSHKeyPath
	I0806 07:31:40.172195   37636 main.go:141] libmachine: (ha-118173) Calling .GetSSHUsername
	I0806 07:31:40.172380   37636 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/ha-118173/id_rsa Username:docker}
	I0806 07:31:40.254576   37636 ssh_runner.go:195] Run: systemctl --version
	I0806 07:31:40.260860   37636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0806 07:31:40.277667   37636 kubeconfig.go:125] found "ha-118173" server: "https://192.168.39.254:8443"
	I0806 07:31:40.277692   37636 api_server.go:166] Checking apiserver status ...
	I0806 07:31:40.277729   37636 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 07:31:40.293379   37636 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1178/cgroup
	W0806 07:31:40.303279   37636 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1178/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0806 07:31:40.303333   37636 ssh_runner.go:195] Run: ls
	I0806 07:31:40.308241   37636 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0806 07:31:40.314488   37636 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0806 07:31:40.314515   37636 status.go:422] ha-118173 apiserver status = Running (err=<nil>)
	I0806 07:31:40.314528   37636 status.go:257] ha-118173 status: &{Name:ha-118173 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0806 07:31:40.314558   37636 status.go:255] checking status of ha-118173-m02 ...
	I0806 07:31:40.314963   37636 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:31:40.315006   37636 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:31:40.330529   37636 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42909
	I0806 07:31:40.331004   37636 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:31:40.331475   37636 main.go:141] libmachine: Using API Version  1
	I0806 07:31:40.331497   37636 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:31:40.331814   37636 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:31:40.332015   37636 main.go:141] libmachine: (ha-118173-m02) Calling .GetState
	I0806 07:31:40.333586   37636 status.go:330] ha-118173-m02 host status = "Stopped" (err=<nil>)
	I0806 07:31:40.333601   37636 status.go:343] host is not running, skipping remaining checks
	I0806 07:31:40.333608   37636 status.go:257] ha-118173-m02 status: &{Name:ha-118173-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0806 07:31:40.333627   37636 status.go:255] checking status of ha-118173-m03 ...
	I0806 07:31:40.333947   37636 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:31:40.333985   37636 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:31:40.348942   37636 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37511
	I0806 07:31:40.349470   37636 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:31:40.349933   37636 main.go:141] libmachine: Using API Version  1
	I0806 07:31:40.349962   37636 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:31:40.350255   37636 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:31:40.350425   37636 main.go:141] libmachine: (ha-118173-m03) Calling .GetState
	I0806 07:31:40.351885   37636 status.go:330] ha-118173-m03 host status = "Running" (err=<nil>)
	I0806 07:31:40.351900   37636 host.go:66] Checking if "ha-118173-m03" exists ...
	I0806 07:31:40.352212   37636 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:31:40.352251   37636 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:31:40.366931   37636 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45701
	I0806 07:31:40.367379   37636 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:31:40.367815   37636 main.go:141] libmachine: Using API Version  1
	I0806 07:31:40.367830   37636 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:31:40.368232   37636 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:31:40.368493   37636 main.go:141] libmachine: (ha-118173-m03) Calling .GetIP
	I0806 07:31:40.371406   37636 main.go:141] libmachine: (ha-118173-m03) DBG | domain ha-118173-m03 has defined MAC address 52:54:00:d8:90:55 in network mk-ha-118173
	I0806 07:31:40.371859   37636 main.go:141] libmachine: (ha-118173-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:90:55", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:25:53 +0000 UTC Type:0 Mac:52:54:00:d8:90:55 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-118173-m03 Clientid:01:52:54:00:d8:90:55}
	I0806 07:31:40.371884   37636 main.go:141] libmachine: (ha-118173-m03) DBG | domain ha-118173-m03 has defined IP address 192.168.39.5 and MAC address 52:54:00:d8:90:55 in network mk-ha-118173
	I0806 07:31:40.372080   37636 host.go:66] Checking if "ha-118173-m03" exists ...
	I0806 07:31:40.372534   37636 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:31:40.372593   37636 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:31:40.387195   37636 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36177
	I0806 07:31:40.387587   37636 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:31:40.388003   37636 main.go:141] libmachine: Using API Version  1
	I0806 07:31:40.388017   37636 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:31:40.388327   37636 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:31:40.388543   37636 main.go:141] libmachine: (ha-118173-m03) Calling .DriverName
	I0806 07:31:40.388722   37636 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0806 07:31:40.388743   37636 main.go:141] libmachine: (ha-118173-m03) Calling .GetSSHHostname
	I0806 07:31:40.391515   37636 main.go:141] libmachine: (ha-118173-m03) DBG | domain ha-118173-m03 has defined MAC address 52:54:00:d8:90:55 in network mk-ha-118173
	I0806 07:31:40.391969   37636 main.go:141] libmachine: (ha-118173-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:90:55", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:25:53 +0000 UTC Type:0 Mac:52:54:00:d8:90:55 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-118173-m03 Clientid:01:52:54:00:d8:90:55}
	I0806 07:31:40.391996   37636 main.go:141] libmachine: (ha-118173-m03) DBG | domain ha-118173-m03 has defined IP address 192.168.39.5 and MAC address 52:54:00:d8:90:55 in network mk-ha-118173
	I0806 07:31:40.392160   37636 main.go:141] libmachine: (ha-118173-m03) Calling .GetSSHPort
	I0806 07:31:40.392336   37636 main.go:141] libmachine: (ha-118173-m03) Calling .GetSSHKeyPath
	I0806 07:31:40.392535   37636 main.go:141] libmachine: (ha-118173-m03) Calling .GetSSHUsername
	I0806 07:31:40.392725   37636 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/ha-118173-m03/id_rsa Username:docker}
	I0806 07:31:40.473399   37636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0806 07:31:40.488725   37636 kubeconfig.go:125] found "ha-118173" server: "https://192.168.39.254:8443"
	I0806 07:31:40.488757   37636 api_server.go:166] Checking apiserver status ...
	I0806 07:31:40.488801   37636 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 07:31:40.503567   37636 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1553/cgroup
	W0806 07:31:40.513816   37636 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1553/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0806 07:31:40.513881   37636 ssh_runner.go:195] Run: ls
	I0806 07:31:40.518386   37636 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0806 07:31:40.523494   37636 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0806 07:31:40.523518   37636 status.go:422] ha-118173-m03 apiserver status = Running (err=<nil>)
	I0806 07:31:40.523526   37636 status.go:257] ha-118173-m03 status: &{Name:ha-118173-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0806 07:31:40.523540   37636 status.go:255] checking status of ha-118173-m04 ...
	I0806 07:31:40.523832   37636 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:31:40.523863   37636 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:31:40.539144   37636 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45317
	I0806 07:31:40.539577   37636 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:31:40.540029   37636 main.go:141] libmachine: Using API Version  1
	I0806 07:31:40.540048   37636 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:31:40.540388   37636 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:31:40.540589   37636 main.go:141] libmachine: (ha-118173-m04) Calling .GetState
	I0806 07:31:40.542300   37636 status.go:330] ha-118173-m04 host status = "Running" (err=<nil>)
	I0806 07:31:40.542313   37636 host.go:66] Checking if "ha-118173-m04" exists ...
	I0806 07:31:40.542607   37636 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:31:40.542652   37636 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:31:40.557226   37636 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33285
	I0806 07:31:40.557631   37636 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:31:40.558067   37636 main.go:141] libmachine: Using API Version  1
	I0806 07:31:40.558088   37636 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:31:40.558361   37636 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:31:40.558567   37636 main.go:141] libmachine: (ha-118173-m04) Calling .GetIP
	I0806 07:31:40.561401   37636 main.go:141] libmachine: (ha-118173-m04) DBG | domain ha-118173-m04 has defined MAC address 52:54:00:8f:be:12 in network mk-ha-118173
	I0806 07:31:40.561930   37636 main.go:141] libmachine: (ha-118173-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:be:12", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:27:21 +0000 UTC Type:0 Mac:52:54:00:8f:be:12 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:ha-118173-m04 Clientid:01:52:54:00:8f:be:12}
	I0806 07:31:40.561956   37636 main.go:141] libmachine: (ha-118173-m04) DBG | domain ha-118173-m04 has defined IP address 192.168.39.107 and MAC address 52:54:00:8f:be:12 in network mk-ha-118173
	I0806 07:31:40.562113   37636 host.go:66] Checking if "ha-118173-m04" exists ...
	I0806 07:31:40.562542   37636 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:31:40.562588   37636 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:31:40.578643   37636 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34917
	I0806 07:31:40.579062   37636 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:31:40.579444   37636 main.go:141] libmachine: Using API Version  1
	I0806 07:31:40.579463   37636 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:31:40.579774   37636 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:31:40.579977   37636 main.go:141] libmachine: (ha-118173-m04) Calling .DriverName
	I0806 07:31:40.580169   37636 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0806 07:31:40.580187   37636 main.go:141] libmachine: (ha-118173-m04) Calling .GetSSHHostname
	I0806 07:31:40.582893   37636 main.go:141] libmachine: (ha-118173-m04) DBG | domain ha-118173-m04 has defined MAC address 52:54:00:8f:be:12 in network mk-ha-118173
	I0806 07:31:40.583322   37636 main.go:141] libmachine: (ha-118173-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:be:12", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:27:21 +0000 UTC Type:0 Mac:52:54:00:8f:be:12 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:ha-118173-m04 Clientid:01:52:54:00:8f:be:12}
	I0806 07:31:40.583349   37636 main.go:141] libmachine: (ha-118173-m04) DBG | domain ha-118173-m04 has defined IP address 192.168.39.107 and MAC address 52:54:00:8f:be:12 in network mk-ha-118173
	I0806 07:31:40.583461   37636 main.go:141] libmachine: (ha-118173-m04) Calling .GetSSHPort
	I0806 07:31:40.583624   37636 main.go:141] libmachine: (ha-118173-m04) Calling .GetSSHKeyPath
	I0806 07:31:40.583768   37636 main.go:141] libmachine: (ha-118173-m04) Calling .GetSSHUsername
	I0806 07:31:40.583909   37636 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/ha-118173-m04/id_rsa Username:docker}
	I0806 07:31:40.664840   37636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0806 07:31:40.679984   37636 status.go:257] ha-118173-m04 status: &{Name:ha-118173-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-118173 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-118173 -n ha-118173
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-118173 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-118173 logs -n 25: (1.519202996s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-118173 ssh -n                                                                 | ha-118173 | jenkins | v1.33.1 | 06 Aug 24 07:28 UTC | 06 Aug 24 07:28 UTC |
	|         | ha-118173-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-118173 cp ha-118173-m03:/home/docker/cp-test.txt                              | ha-118173 | jenkins | v1.33.1 | 06 Aug 24 07:28 UTC | 06 Aug 24 07:28 UTC |
	|         | ha-118173:/home/docker/cp-test_ha-118173-m03_ha-118173.txt                       |           |         |         |                     |                     |
	| ssh     | ha-118173 ssh -n                                                                 | ha-118173 | jenkins | v1.33.1 | 06 Aug 24 07:28 UTC | 06 Aug 24 07:28 UTC |
	|         | ha-118173-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-118173 ssh -n ha-118173 sudo cat                                              | ha-118173 | jenkins | v1.33.1 | 06 Aug 24 07:28 UTC | 06 Aug 24 07:28 UTC |
	|         | /home/docker/cp-test_ha-118173-m03_ha-118173.txt                                 |           |         |         |                     |                     |
	| cp      | ha-118173 cp ha-118173-m03:/home/docker/cp-test.txt                              | ha-118173 | jenkins | v1.33.1 | 06 Aug 24 07:28 UTC | 06 Aug 24 07:28 UTC |
	|         | ha-118173-m02:/home/docker/cp-test_ha-118173-m03_ha-118173-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-118173 ssh -n                                                                 | ha-118173 | jenkins | v1.33.1 | 06 Aug 24 07:28 UTC | 06 Aug 24 07:28 UTC |
	|         | ha-118173-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-118173 ssh -n ha-118173-m02 sudo cat                                          | ha-118173 | jenkins | v1.33.1 | 06 Aug 24 07:28 UTC | 06 Aug 24 07:28 UTC |
	|         | /home/docker/cp-test_ha-118173-m03_ha-118173-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-118173 cp ha-118173-m03:/home/docker/cp-test.txt                              | ha-118173 | jenkins | v1.33.1 | 06 Aug 24 07:28 UTC | 06 Aug 24 07:28 UTC |
	|         | ha-118173-m04:/home/docker/cp-test_ha-118173-m03_ha-118173-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-118173 ssh -n                                                                 | ha-118173 | jenkins | v1.33.1 | 06 Aug 24 07:28 UTC | 06 Aug 24 07:28 UTC |
	|         | ha-118173-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-118173 ssh -n ha-118173-m04 sudo cat                                          | ha-118173 | jenkins | v1.33.1 | 06 Aug 24 07:28 UTC | 06 Aug 24 07:28 UTC |
	|         | /home/docker/cp-test_ha-118173-m03_ha-118173-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-118173 cp testdata/cp-test.txt                                                | ha-118173 | jenkins | v1.33.1 | 06 Aug 24 07:28 UTC | 06 Aug 24 07:28 UTC |
	|         | ha-118173-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-118173 ssh -n                                                                 | ha-118173 | jenkins | v1.33.1 | 06 Aug 24 07:28 UTC | 06 Aug 24 07:28 UTC |
	|         | ha-118173-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-118173 cp ha-118173-m04:/home/docker/cp-test.txt                              | ha-118173 | jenkins | v1.33.1 | 06 Aug 24 07:28 UTC | 06 Aug 24 07:28 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile4204706009/001/cp-test_ha-118173-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-118173 ssh -n                                                                 | ha-118173 | jenkins | v1.33.1 | 06 Aug 24 07:28 UTC | 06 Aug 24 07:28 UTC |
	|         | ha-118173-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-118173 cp ha-118173-m04:/home/docker/cp-test.txt                              | ha-118173 | jenkins | v1.33.1 | 06 Aug 24 07:28 UTC | 06 Aug 24 07:28 UTC |
	|         | ha-118173:/home/docker/cp-test_ha-118173-m04_ha-118173.txt                       |           |         |         |                     |                     |
	| ssh     | ha-118173 ssh -n                                                                 | ha-118173 | jenkins | v1.33.1 | 06 Aug 24 07:28 UTC | 06 Aug 24 07:28 UTC |
	|         | ha-118173-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-118173 ssh -n ha-118173 sudo cat                                              | ha-118173 | jenkins | v1.33.1 | 06 Aug 24 07:28 UTC | 06 Aug 24 07:28 UTC |
	|         | /home/docker/cp-test_ha-118173-m04_ha-118173.txt                                 |           |         |         |                     |                     |
	| cp      | ha-118173 cp ha-118173-m04:/home/docker/cp-test.txt                              | ha-118173 | jenkins | v1.33.1 | 06 Aug 24 07:28 UTC | 06 Aug 24 07:28 UTC |
	|         | ha-118173-m02:/home/docker/cp-test_ha-118173-m04_ha-118173-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-118173 ssh -n                                                                 | ha-118173 | jenkins | v1.33.1 | 06 Aug 24 07:28 UTC | 06 Aug 24 07:28 UTC |
	|         | ha-118173-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-118173 ssh -n ha-118173-m02 sudo cat                                          | ha-118173 | jenkins | v1.33.1 | 06 Aug 24 07:28 UTC | 06 Aug 24 07:28 UTC |
	|         | /home/docker/cp-test_ha-118173-m04_ha-118173-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-118173 cp ha-118173-m04:/home/docker/cp-test.txt                              | ha-118173 | jenkins | v1.33.1 | 06 Aug 24 07:28 UTC | 06 Aug 24 07:28 UTC |
	|         | ha-118173-m03:/home/docker/cp-test_ha-118173-m04_ha-118173-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-118173 ssh -n                                                                 | ha-118173 | jenkins | v1.33.1 | 06 Aug 24 07:28 UTC | 06 Aug 24 07:28 UTC |
	|         | ha-118173-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-118173 ssh -n ha-118173-m03 sudo cat                                          | ha-118173 | jenkins | v1.33.1 | 06 Aug 24 07:28 UTC | 06 Aug 24 07:28 UTC |
	|         | /home/docker/cp-test_ha-118173-m04_ha-118173-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-118173 node stop m02 -v=7                                                     | ha-118173 | jenkins | v1.33.1 | 06 Aug 24 07:28 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-118173 node start m02 -v=7                                                    | ha-118173 | jenkins | v1.33.1 | 06 Aug 24 07:30 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/06 07:22:19
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0806 07:22:19.165619   31793 out.go:291] Setting OutFile to fd 1 ...
	I0806 07:22:19.165868   31793 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 07:22:19.165876   31793 out.go:304] Setting ErrFile to fd 2...
	I0806 07:22:19.165880   31793 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 07:22:19.166038   31793 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19370-13057/.minikube/bin
	I0806 07:22:19.166569   31793 out.go:298] Setting JSON to false
	I0806 07:22:19.167442   31793 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3885,"bootTime":1722925054,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0806 07:22:19.167497   31793 start.go:139] virtualization: kvm guest
	I0806 07:22:19.169559   31793 out.go:177] * [ha-118173] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0806 07:22:19.171087   31793 out.go:177]   - MINIKUBE_LOCATION=19370
	I0806 07:22:19.171085   31793 notify.go:220] Checking for updates...
	I0806 07:22:19.173961   31793 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0806 07:22:19.175568   31793 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19370-13057/kubeconfig
	I0806 07:22:19.177238   31793 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19370-13057/.minikube
	I0806 07:22:19.178649   31793 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0806 07:22:19.180538   31793 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0806 07:22:19.182046   31793 driver.go:392] Setting default libvirt URI to qemu:///system
	I0806 07:22:19.217814   31793 out.go:177] * Using the kvm2 driver based on user configuration
	I0806 07:22:19.219269   31793 start.go:297] selected driver: kvm2
	I0806 07:22:19.219281   31793 start.go:901] validating driver "kvm2" against <nil>
	I0806 07:22:19.219291   31793 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0806 07:22:19.219959   31793 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 07:22:19.220038   31793 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19370-13057/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0806 07:22:19.235588   31793 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0806 07:22:19.235636   31793 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0806 07:22:19.235826   31793 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0806 07:22:19.235882   31793 cni.go:84] Creating CNI manager for ""
	I0806 07:22:19.235900   31793 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0806 07:22:19.235907   31793 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0806 07:22:19.235952   31793 start.go:340] cluster config:
	{Name:ha-118173 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-118173 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 07:22:19.236037   31793 iso.go:125] acquiring lock: {Name:mke58d71de28babe305c5eb0c31c274e9713bb3f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 07:22:19.239086   31793 out.go:177] * Starting "ha-118173" primary control-plane node in "ha-118173" cluster
	I0806 07:22:19.240422   31793 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0806 07:22:19.240456   31793 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19370-13057/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0806 07:22:19.240464   31793 cache.go:56] Caching tarball of preloaded images
	I0806 07:22:19.240536   31793 preload.go:172] Found /home/jenkins/minikube-integration/19370-13057/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0806 07:22:19.240546   31793 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0806 07:22:19.240830   31793 profile.go:143] Saving config to /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/config.json ...
	I0806 07:22:19.240847   31793 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/config.json: {Name:mk9d17cd34cb906319ddb11b145cd3570462329e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 07:22:19.240990   31793 start.go:360] acquireMachinesLock for ha-118173: {Name:mkc602ac9c8cc3360dcc0fcb8104303837aaab61 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0806 07:22:19.241018   31793 start.go:364] duration metric: took 15.051µs to acquireMachinesLock for "ha-118173"
	I0806 07:22:19.241033   31793 start.go:93] Provisioning new machine with config: &{Name:ha-118173 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-118173 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0806 07:22:19.241086   31793 start.go:125] createHost starting for "" (driver="kvm2")
	I0806 07:22:19.243213   31793 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0806 07:22:19.243369   31793 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:22:19.243414   31793 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:22:19.257859   31793 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46593
	I0806 07:22:19.258258   31793 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:22:19.258748   31793 main.go:141] libmachine: Using API Version  1
	I0806 07:22:19.258769   31793 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:22:19.259082   31793 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:22:19.259259   31793 main.go:141] libmachine: (ha-118173) Calling .GetMachineName
	I0806 07:22:19.259386   31793 main.go:141] libmachine: (ha-118173) Calling .DriverName
	I0806 07:22:19.259535   31793 start.go:159] libmachine.API.Create for "ha-118173" (driver="kvm2")
	I0806 07:22:19.259560   31793 client.go:168] LocalClient.Create starting
	I0806 07:22:19.259587   31793 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem
	I0806 07:22:19.259623   31793 main.go:141] libmachine: Decoding PEM data...
	I0806 07:22:19.259639   31793 main.go:141] libmachine: Parsing certificate...
	I0806 07:22:19.259687   31793 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19370-13057/.minikube/certs/cert.pem
	I0806 07:22:19.259705   31793 main.go:141] libmachine: Decoding PEM data...
	I0806 07:22:19.259726   31793 main.go:141] libmachine: Parsing certificate...
	I0806 07:22:19.259741   31793 main.go:141] libmachine: Running pre-create checks...
	I0806 07:22:19.259754   31793 main.go:141] libmachine: (ha-118173) Calling .PreCreateCheck
	I0806 07:22:19.260262   31793 main.go:141] libmachine: (ha-118173) Calling .GetConfigRaw
	I0806 07:22:19.260642   31793 main.go:141] libmachine: Creating machine...
	I0806 07:22:19.260655   31793 main.go:141] libmachine: (ha-118173) Calling .Create
	I0806 07:22:19.260774   31793 main.go:141] libmachine: (ha-118173) Creating KVM machine...
	I0806 07:22:19.261978   31793 main.go:141] libmachine: (ha-118173) DBG | found existing default KVM network
	I0806 07:22:19.262624   31793 main.go:141] libmachine: (ha-118173) DBG | I0806 07:22:19.262492   31816 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ac0}
	I0806 07:22:19.262707   31793 main.go:141] libmachine: (ha-118173) DBG | created network xml: 
	I0806 07:22:19.262736   31793 main.go:141] libmachine: (ha-118173) DBG | <network>
	I0806 07:22:19.262748   31793 main.go:141] libmachine: (ha-118173) DBG |   <name>mk-ha-118173</name>
	I0806 07:22:19.262761   31793 main.go:141] libmachine: (ha-118173) DBG |   <dns enable='no'/>
	I0806 07:22:19.262772   31793 main.go:141] libmachine: (ha-118173) DBG |   
	I0806 07:22:19.262783   31793 main.go:141] libmachine: (ha-118173) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0806 07:22:19.262791   31793 main.go:141] libmachine: (ha-118173) DBG |     <dhcp>
	I0806 07:22:19.262801   31793 main.go:141] libmachine: (ha-118173) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0806 07:22:19.262817   31793 main.go:141] libmachine: (ha-118173) DBG |     </dhcp>
	I0806 07:22:19.262831   31793 main.go:141] libmachine: (ha-118173) DBG |   </ip>
	I0806 07:22:19.262841   31793 main.go:141] libmachine: (ha-118173) DBG |   
	I0806 07:22:19.262853   31793 main.go:141] libmachine: (ha-118173) DBG | </network>
	I0806 07:22:19.262867   31793 main.go:141] libmachine: (ha-118173) DBG | 
	I0806 07:22:19.267917   31793 main.go:141] libmachine: (ha-118173) DBG | trying to create private KVM network mk-ha-118173 192.168.39.0/24...
	I0806 07:22:19.335037   31793 main.go:141] libmachine: (ha-118173) DBG | private KVM network mk-ha-118173 192.168.39.0/24 created
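
	For reference, the private network created above can be reproduced by hand with virsh. This is only an illustrative sketch: mk-ha-118173-net.xml is a hypothetical file holding the <network> XML printed in the log, and the kvm2 driver actually goes through the libvirt API rather than shelling out to virsh.
	    # define and start the private network from the XML above (sketch, not part of the test run)
	    virsh net-define mk-ha-118173-net.xml   # register the persistent network definition
	    virsh net-start mk-ha-118173            # bring up the bridge (virbr1 in this run)
	    virsh net-list --all                    # confirm mk-ha-118173 shows as active
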
	I0806 07:22:19.335062   31793 main.go:141] libmachine: (ha-118173) Setting up store path in /home/jenkins/minikube-integration/19370-13057/.minikube/machines/ha-118173 ...
	I0806 07:22:19.335076   31793 main.go:141] libmachine: (ha-118173) DBG | I0806 07:22:19.335015   31816 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19370-13057/.minikube
	I0806 07:22:19.335086   31793 main.go:141] libmachine: (ha-118173) Building disk image from file:///home/jenkins/minikube-integration/19370-13057/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
	I0806 07:22:19.335264   31793 main.go:141] libmachine: (ha-118173) Downloading /home/jenkins/minikube-integration/19370-13057/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19370-13057/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0806 07:22:19.580519   31793 main.go:141] libmachine: (ha-118173) DBG | I0806 07:22:19.580396   31816 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19370-13057/.minikube/machines/ha-118173/id_rsa...
	I0806 07:22:19.759220   31793 main.go:141] libmachine: (ha-118173) DBG | I0806 07:22:19.759065   31816 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19370-13057/.minikube/machines/ha-118173/ha-118173.rawdisk...
	I0806 07:22:19.759248   31793 main.go:141] libmachine: (ha-118173) DBG | Writing magic tar header
	I0806 07:22:19.759262   31793 main.go:141] libmachine: (ha-118173) DBG | Writing SSH key tar header
	I0806 07:22:19.759274   31793 main.go:141] libmachine: (ha-118173) DBG | I0806 07:22:19.759202   31816 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19370-13057/.minikube/machines/ha-118173 ...
	I0806 07:22:19.759350   31793 main.go:141] libmachine: (ha-118173) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19370-13057/.minikube/machines/ha-118173
	I0806 07:22:19.759386   31793 main.go:141] libmachine: (ha-118173) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19370-13057/.minikube/machines
	I0806 07:22:19.759397   31793 main.go:141] libmachine: (ha-118173) Setting executable bit set on /home/jenkins/minikube-integration/19370-13057/.minikube/machines/ha-118173 (perms=drwx------)
	I0806 07:22:19.759406   31793 main.go:141] libmachine: (ha-118173) Setting executable bit set on /home/jenkins/minikube-integration/19370-13057/.minikube/machines (perms=drwxr-xr-x)
	I0806 07:22:19.759428   31793 main.go:141] libmachine: (ha-118173) Setting executable bit set on /home/jenkins/minikube-integration/19370-13057/.minikube (perms=drwxr-xr-x)
	I0806 07:22:19.759445   31793 main.go:141] libmachine: (ha-118173) Setting executable bit set on /home/jenkins/minikube-integration/19370-13057 (perms=drwxrwxr-x)
	I0806 07:22:19.759462   31793 main.go:141] libmachine: (ha-118173) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19370-13057/.minikube
	I0806 07:22:19.759475   31793 main.go:141] libmachine: (ha-118173) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0806 07:22:19.759491   31793 main.go:141] libmachine: (ha-118173) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0806 07:22:19.759499   31793 main.go:141] libmachine: (ha-118173) Creating domain...
	I0806 07:22:19.759507   31793 main.go:141] libmachine: (ha-118173) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19370-13057
	I0806 07:22:19.759515   31793 main.go:141] libmachine: (ha-118173) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0806 07:22:19.759526   31793 main.go:141] libmachine: (ha-118173) DBG | Checking permissions on dir: /home/jenkins
	I0806 07:22:19.759539   31793 main.go:141] libmachine: (ha-118173) DBG | Checking permissions on dir: /home
	I0806 07:22:19.759553   31793 main.go:141] libmachine: (ha-118173) DBG | Skipping /home - not owner
	I0806 07:22:19.760506   31793 main.go:141] libmachine: (ha-118173) define libvirt domain using xml: 
	I0806 07:22:19.760527   31793 main.go:141] libmachine: (ha-118173) <domain type='kvm'>
	I0806 07:22:19.760533   31793 main.go:141] libmachine: (ha-118173)   <name>ha-118173</name>
	I0806 07:22:19.760539   31793 main.go:141] libmachine: (ha-118173)   <memory unit='MiB'>2200</memory>
	I0806 07:22:19.760562   31793 main.go:141] libmachine: (ha-118173)   <vcpu>2</vcpu>
	I0806 07:22:19.760581   31793 main.go:141] libmachine: (ha-118173)   <features>
	I0806 07:22:19.760594   31793 main.go:141] libmachine: (ha-118173)     <acpi/>
	I0806 07:22:19.760603   31793 main.go:141] libmachine: (ha-118173)     <apic/>
	I0806 07:22:19.760612   31793 main.go:141] libmachine: (ha-118173)     <pae/>
	I0806 07:22:19.760622   31793 main.go:141] libmachine: (ha-118173)     
	I0806 07:22:19.760634   31793 main.go:141] libmachine: (ha-118173)   </features>
	I0806 07:22:19.760644   31793 main.go:141] libmachine: (ha-118173)   <cpu mode='host-passthrough'>
	I0806 07:22:19.760661   31793 main.go:141] libmachine: (ha-118173)   
	I0806 07:22:19.760677   31793 main.go:141] libmachine: (ha-118173)   </cpu>
	I0806 07:22:19.760686   31793 main.go:141] libmachine: (ha-118173)   <os>
	I0806 07:22:19.760691   31793 main.go:141] libmachine: (ha-118173)     <type>hvm</type>
	I0806 07:22:19.760696   31793 main.go:141] libmachine: (ha-118173)     <boot dev='cdrom'/>
	I0806 07:22:19.760707   31793 main.go:141] libmachine: (ha-118173)     <boot dev='hd'/>
	I0806 07:22:19.760715   31793 main.go:141] libmachine: (ha-118173)     <bootmenu enable='no'/>
	I0806 07:22:19.760719   31793 main.go:141] libmachine: (ha-118173)   </os>
	I0806 07:22:19.760726   31793 main.go:141] libmachine: (ha-118173)   <devices>
	I0806 07:22:19.760731   31793 main.go:141] libmachine: (ha-118173)     <disk type='file' device='cdrom'>
	I0806 07:22:19.760741   31793 main.go:141] libmachine: (ha-118173)       <source file='/home/jenkins/minikube-integration/19370-13057/.minikube/machines/ha-118173/boot2docker.iso'/>
	I0806 07:22:19.760750   31793 main.go:141] libmachine: (ha-118173)       <target dev='hdc' bus='scsi'/>
	I0806 07:22:19.760757   31793 main.go:141] libmachine: (ha-118173)       <readonly/>
	I0806 07:22:19.760761   31793 main.go:141] libmachine: (ha-118173)     </disk>
	I0806 07:22:19.760770   31793 main.go:141] libmachine: (ha-118173)     <disk type='file' device='disk'>
	I0806 07:22:19.760776   31793 main.go:141] libmachine: (ha-118173)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0806 07:22:19.760785   31793 main.go:141] libmachine: (ha-118173)       <source file='/home/jenkins/minikube-integration/19370-13057/.minikube/machines/ha-118173/ha-118173.rawdisk'/>
	I0806 07:22:19.760790   31793 main.go:141] libmachine: (ha-118173)       <target dev='hda' bus='virtio'/>
	I0806 07:22:19.760797   31793 main.go:141] libmachine: (ha-118173)     </disk>
	I0806 07:22:19.760802   31793 main.go:141] libmachine: (ha-118173)     <interface type='network'>
	I0806 07:22:19.760810   31793 main.go:141] libmachine: (ha-118173)       <source network='mk-ha-118173'/>
	I0806 07:22:19.760823   31793 main.go:141] libmachine: (ha-118173)       <model type='virtio'/>
	I0806 07:22:19.760831   31793 main.go:141] libmachine: (ha-118173)     </interface>
	I0806 07:22:19.760835   31793 main.go:141] libmachine: (ha-118173)     <interface type='network'>
	I0806 07:22:19.760841   31793 main.go:141] libmachine: (ha-118173)       <source network='default'/>
	I0806 07:22:19.760847   31793 main.go:141] libmachine: (ha-118173)       <model type='virtio'/>
	I0806 07:22:19.760853   31793 main.go:141] libmachine: (ha-118173)     </interface>
	I0806 07:22:19.760859   31793 main.go:141] libmachine: (ha-118173)     <serial type='pty'>
	I0806 07:22:19.760864   31793 main.go:141] libmachine: (ha-118173)       <target port='0'/>
	I0806 07:22:19.760869   31793 main.go:141] libmachine: (ha-118173)     </serial>
	I0806 07:22:19.760875   31793 main.go:141] libmachine: (ha-118173)     <console type='pty'>
	I0806 07:22:19.760884   31793 main.go:141] libmachine: (ha-118173)       <target type='serial' port='0'/>
	I0806 07:22:19.760892   31793 main.go:141] libmachine: (ha-118173)     </console>
	I0806 07:22:19.760896   31793 main.go:141] libmachine: (ha-118173)     <rng model='virtio'>
	I0806 07:22:19.760902   31793 main.go:141] libmachine: (ha-118173)       <backend model='random'>/dev/random</backend>
	I0806 07:22:19.760908   31793 main.go:141] libmachine: (ha-118173)     </rng>
	I0806 07:22:19.760912   31793 main.go:141] libmachine: (ha-118173)     
	I0806 07:22:19.760919   31793 main.go:141] libmachine: (ha-118173)     
	I0806 07:22:19.760923   31793 main.go:141] libmachine: (ha-118173)   </devices>
	I0806 07:22:19.760929   31793 main.go:141] libmachine: (ha-118173) </domain>
	I0806 07:22:19.760936   31793 main.go:141] libmachine: (ha-118173) 
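
	The domain XML above can likewise be fed to libvirt manually. A minimal sketch, assuming the XML is saved to a hypothetical ha-118173.xml and that the boot2docker.iso and ha-118173.rawdisk paths from the log already exist:
	    virsh define ha-118173.xml   # create the persistent domain from the XML above
	    virsh start ha-118173        # boot it; the first boot comes up from the ISO per the <boot> order
	    virsh dumpxml ha-118173      # the same XML the driver reads back at "Getting domain xml..."
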
	I0806 07:22:19.765478   31793 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined MAC address 52:54:00:c3:65:2c in network default
	I0806 07:22:19.766041   31793 main.go:141] libmachine: (ha-118173) Ensuring networks are active...
	I0806 07:22:19.766056   31793 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:22:19.766643   31793 main.go:141] libmachine: (ha-118173) Ensuring network default is active
	I0806 07:22:19.766933   31793 main.go:141] libmachine: (ha-118173) Ensuring network mk-ha-118173 is active
	I0806 07:22:19.767403   31793 main.go:141] libmachine: (ha-118173) Getting domain xml...
	I0806 07:22:19.767993   31793 main.go:141] libmachine: (ha-118173) Creating domain...
	I0806 07:22:20.969734   31793 main.go:141] libmachine: (ha-118173) Waiting to get IP...
	I0806 07:22:20.970482   31793 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:22:20.970863   31793 main.go:141] libmachine: (ha-118173) DBG | unable to find current IP address of domain ha-118173 in network mk-ha-118173
	I0806 07:22:20.970891   31793 main.go:141] libmachine: (ha-118173) DBG | I0806 07:22:20.970816   31816 retry.go:31] will retry after 237.153154ms: waiting for machine to come up
	I0806 07:22:21.209342   31793 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:22:21.209803   31793 main.go:141] libmachine: (ha-118173) DBG | unable to find current IP address of domain ha-118173 in network mk-ha-118173
	I0806 07:22:21.209831   31793 main.go:141] libmachine: (ha-118173) DBG | I0806 07:22:21.209764   31816 retry.go:31] will retry after 321.567483ms: waiting for machine to come up
	I0806 07:22:21.533243   31793 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:22:21.533703   31793 main.go:141] libmachine: (ha-118173) DBG | unable to find current IP address of domain ha-118173 in network mk-ha-118173
	I0806 07:22:21.533726   31793 main.go:141] libmachine: (ha-118173) DBG | I0806 07:22:21.533632   31816 retry.go:31] will retry after 468.04222ms: waiting for machine to come up
	I0806 07:22:22.003258   31793 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:22:22.003621   31793 main.go:141] libmachine: (ha-118173) DBG | unable to find current IP address of domain ha-118173 in network mk-ha-118173
	I0806 07:22:22.003648   31793 main.go:141] libmachine: (ha-118173) DBG | I0806 07:22:22.003599   31816 retry.go:31] will retry after 376.04295ms: waiting for machine to come up
	I0806 07:22:22.381111   31793 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:22:22.381516   31793 main.go:141] libmachine: (ha-118173) DBG | unable to find current IP address of domain ha-118173 in network mk-ha-118173
	I0806 07:22:22.381538   31793 main.go:141] libmachine: (ha-118173) DBG | I0806 07:22:22.381491   31816 retry.go:31] will retry after 578.39666ms: waiting for machine to come up
	I0806 07:22:22.961296   31793 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:22:22.961662   31793 main.go:141] libmachine: (ha-118173) DBG | unable to find current IP address of domain ha-118173 in network mk-ha-118173
	I0806 07:22:22.961687   31793 main.go:141] libmachine: (ha-118173) DBG | I0806 07:22:22.961615   31816 retry.go:31] will retry after 574.068902ms: waiting for machine to come up
	I0806 07:22:23.537289   31793 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:22:23.537732   31793 main.go:141] libmachine: (ha-118173) DBG | unable to find current IP address of domain ha-118173 in network mk-ha-118173
	I0806 07:22:23.537767   31793 main.go:141] libmachine: (ha-118173) DBG | I0806 07:22:23.537696   31816 retry.go:31] will retry after 1.126514021s: waiting for machine to come up
	I0806 07:22:24.665477   31793 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:22:24.665839   31793 main.go:141] libmachine: (ha-118173) DBG | unable to find current IP address of domain ha-118173 in network mk-ha-118173
	I0806 07:22:24.665873   31793 main.go:141] libmachine: (ha-118173) DBG | I0806 07:22:24.665801   31816 retry.go:31] will retry after 1.097616443s: waiting for machine to come up
	I0806 07:22:25.765277   31793 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:22:25.765676   31793 main.go:141] libmachine: (ha-118173) DBG | unable to find current IP address of domain ha-118173 in network mk-ha-118173
	I0806 07:22:25.765699   31793 main.go:141] libmachine: (ha-118173) DBG | I0806 07:22:25.765646   31816 retry.go:31] will retry after 1.51043032s: waiting for machine to come up
	I0806 07:22:27.277172   31793 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:22:27.277519   31793 main.go:141] libmachine: (ha-118173) DBG | unable to find current IP address of domain ha-118173 in network mk-ha-118173
	I0806 07:22:27.277546   31793 main.go:141] libmachine: (ha-118173) DBG | I0806 07:22:27.277476   31816 retry.go:31] will retry after 1.467155257s: waiting for machine to come up
	I0806 07:22:28.745913   31793 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:22:28.746506   31793 main.go:141] libmachine: (ha-118173) DBG | unable to find current IP address of domain ha-118173 in network mk-ha-118173
	I0806 07:22:28.746614   31793 main.go:141] libmachine: (ha-118173) DBG | I0806 07:22:28.746481   31816 retry.go:31] will retry after 2.73537464s: waiting for machine to come up
	I0806 07:22:31.484404   31793 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:22:31.484928   31793 main.go:141] libmachine: (ha-118173) DBG | unable to find current IP address of domain ha-118173 in network mk-ha-118173
	I0806 07:22:31.484961   31793 main.go:141] libmachine: (ha-118173) DBG | I0806 07:22:31.484896   31816 retry.go:31] will retry after 2.580809537s: waiting for machine to come up
	I0806 07:22:34.068424   31793 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:22:34.068889   31793 main.go:141] libmachine: (ha-118173) DBG | unable to find current IP address of domain ha-118173 in network mk-ha-118173
	I0806 07:22:34.068913   31793 main.go:141] libmachine: (ha-118173) DBG | I0806 07:22:34.068861   31816 retry.go:31] will retry after 2.864411811s: waiting for machine to come up
	I0806 07:22:36.936160   31793 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:22:36.936439   31793 main.go:141] libmachine: (ha-118173) DBG | unable to find current IP address of domain ha-118173 in network mk-ha-118173
	I0806 07:22:36.936459   31793 main.go:141] libmachine: (ha-118173) DBG | I0806 07:22:36.936426   31816 retry.go:31] will retry after 5.39122106s: waiting for machine to come up
	I0806 07:22:42.331757   31793 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:22:42.332181   31793 main.go:141] libmachine: (ha-118173) Found IP for machine: 192.168.39.164
	I0806 07:22:42.332206   31793 main.go:141] libmachine: (ha-118173) Reserving static IP address...
	I0806 07:22:42.332217   31793 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has current primary IP address 192.168.39.164 and MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:22:42.332542   31793 main.go:141] libmachine: (ha-118173) DBG | unable to find host DHCP lease matching {name: "ha-118173", mac: "52:54:00:ef:df:f0", ip: "192.168.39.164"} in network mk-ha-118173
	I0806 07:22:42.404958   31793 main.go:141] libmachine: (ha-118173) DBG | Getting to WaitForSSH function...
	I0806 07:22:42.404976   31793 main.go:141] libmachine: (ha-118173) Reserved static IP address: 192.168.39.164
	I0806 07:22:42.404985   31793 main.go:141] libmachine: (ha-118173) Waiting for SSH to be available...
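
	The "waiting for machine to come up" retries above are the driver polling libvirt for a DHCP lease on the private network. A rough shell equivalent, using the network name and MAC address from this run:
	    # poll until the guest's MAC appears in the network's DHCP leases (sketch only)
	    until virsh net-dhcp-leases mk-ha-118173 | grep -q '52:54:00:ef:df:f0'; do
	      sleep 2
	    done
	    virsh net-dhcp-leases mk-ha-118173   # shows 192.168.39.164 for this run
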
	I0806 07:22:42.407425   31793 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:22:42.407848   31793 main.go:141] libmachine: (ha-118173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:df:f0", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:22:33 +0000 UTC Type:0 Mac:52:54:00:ef:df:f0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:minikube Clientid:01:52:54:00:ef:df:f0}
	I0806 07:22:42.407870   31793 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined IP address 192.168.39.164 and MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:22:42.408023   31793 main.go:141] libmachine: (ha-118173) DBG | Using SSH client type: external
	I0806 07:22:42.408042   31793 main.go:141] libmachine: (ha-118173) DBG | Using SSH private key: /home/jenkins/minikube-integration/19370-13057/.minikube/machines/ha-118173/id_rsa (-rw-------)
	I0806 07:22:42.408084   31793 main.go:141] libmachine: (ha-118173) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.164 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19370-13057/.minikube/machines/ha-118173/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0806 07:22:42.408110   31793 main.go:141] libmachine: (ha-118173) DBG | About to run SSH command:
	I0806 07:22:42.408144   31793 main.go:141] libmachine: (ha-118173) DBG | exit 0
	I0806 07:22:42.532516   31793 main.go:141] libmachine: (ha-118173) DBG | SSH cmd err, output: <nil>: 
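
	The WaitForSSH step simply runs "exit 0" through an external ssh client with the options listed above. An equivalent one-off check, using the key path and address from this run, would look roughly like:
	    ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
	        -o ConnectTimeout=10 -o IdentitiesOnly=yes \
	        -i /home/jenkins/minikube-integration/19370-13057/.minikube/machines/ha-118173/id_rsa \
	        docker@192.168.39.164 'exit 0' && echo "SSH is up"
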
	I0806 07:22:42.532772   31793 main.go:141] libmachine: (ha-118173) KVM machine creation complete!
	I0806 07:22:42.533108   31793 main.go:141] libmachine: (ha-118173) Calling .GetConfigRaw
	I0806 07:22:42.533610   31793 main.go:141] libmachine: (ha-118173) Calling .DriverName
	I0806 07:22:42.533842   31793 main.go:141] libmachine: (ha-118173) Calling .DriverName
	I0806 07:22:42.534064   31793 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0806 07:22:42.534081   31793 main.go:141] libmachine: (ha-118173) Calling .GetState
	I0806 07:22:42.535521   31793 main.go:141] libmachine: Detecting operating system of created instance...
	I0806 07:22:42.535535   31793 main.go:141] libmachine: Waiting for SSH to be available...
	I0806 07:22:42.535541   31793 main.go:141] libmachine: Getting to WaitForSSH function...
	I0806 07:22:42.535547   31793 main.go:141] libmachine: (ha-118173) Calling .GetSSHHostname
	I0806 07:22:42.538167   31793 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:22:42.538529   31793 main.go:141] libmachine: (ha-118173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:df:f0", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:22:33 +0000 UTC Type:0 Mac:52:54:00:ef:df:f0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:ha-118173 Clientid:01:52:54:00:ef:df:f0}
	I0806 07:22:42.538551   31793 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined IP address 192.168.39.164 and MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:22:42.538692   31793 main.go:141] libmachine: (ha-118173) Calling .GetSSHPort
	I0806 07:22:42.538896   31793 main.go:141] libmachine: (ha-118173) Calling .GetSSHKeyPath
	I0806 07:22:42.539040   31793 main.go:141] libmachine: (ha-118173) Calling .GetSSHKeyPath
	I0806 07:22:42.539196   31793 main.go:141] libmachine: (ha-118173) Calling .GetSSHUsername
	I0806 07:22:42.539328   31793 main.go:141] libmachine: Using SSH client type: native
	I0806 07:22:42.539504   31793 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.164 22 <nil> <nil>}
	I0806 07:22:42.539513   31793 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0806 07:22:42.639729   31793 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0806 07:22:42.639751   31793 main.go:141] libmachine: Detecting the provisioner...
	I0806 07:22:42.639759   31793 main.go:141] libmachine: (ha-118173) Calling .GetSSHHostname
	I0806 07:22:42.642762   31793 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:22:42.643126   31793 main.go:141] libmachine: (ha-118173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:df:f0", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:22:33 +0000 UTC Type:0 Mac:52:54:00:ef:df:f0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:ha-118173 Clientid:01:52:54:00:ef:df:f0}
	I0806 07:22:42.643149   31793 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined IP address 192.168.39.164 and MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:22:42.643304   31793 main.go:141] libmachine: (ha-118173) Calling .GetSSHPort
	I0806 07:22:42.643520   31793 main.go:141] libmachine: (ha-118173) Calling .GetSSHKeyPath
	I0806 07:22:42.643700   31793 main.go:141] libmachine: (ha-118173) Calling .GetSSHKeyPath
	I0806 07:22:42.643875   31793 main.go:141] libmachine: (ha-118173) Calling .GetSSHUsername
	I0806 07:22:42.644062   31793 main.go:141] libmachine: Using SSH client type: native
	I0806 07:22:42.644248   31793 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.164 22 <nil> <nil>}
	I0806 07:22:42.644262   31793 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0806 07:22:42.745341   31793 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0806 07:22:42.745414   31793 main.go:141] libmachine: found compatible host: buildroot
	I0806 07:22:42.745428   31793 main.go:141] libmachine: Provisioning with buildroot...
	I0806 07:22:42.745439   31793 main.go:141] libmachine: (ha-118173) Calling .GetMachineName
	I0806 07:22:42.745695   31793 buildroot.go:166] provisioning hostname "ha-118173"
	I0806 07:22:42.745719   31793 main.go:141] libmachine: (ha-118173) Calling .GetMachineName
	I0806 07:22:42.745907   31793 main.go:141] libmachine: (ha-118173) Calling .GetSSHHostname
	I0806 07:22:42.748408   31793 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:22:42.748670   31793 main.go:141] libmachine: (ha-118173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:df:f0", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:22:33 +0000 UTC Type:0 Mac:52:54:00:ef:df:f0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:ha-118173 Clientid:01:52:54:00:ef:df:f0}
	I0806 07:22:42.748706   31793 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined IP address 192.168.39.164 and MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:22:42.748904   31793 main.go:141] libmachine: (ha-118173) Calling .GetSSHPort
	I0806 07:22:42.749035   31793 main.go:141] libmachine: (ha-118173) Calling .GetSSHKeyPath
	I0806 07:22:42.749160   31793 main.go:141] libmachine: (ha-118173) Calling .GetSSHKeyPath
	I0806 07:22:42.749310   31793 main.go:141] libmachine: (ha-118173) Calling .GetSSHUsername
	I0806 07:22:42.749487   31793 main.go:141] libmachine: Using SSH client type: native
	I0806 07:22:42.749647   31793 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.164 22 <nil> <nil>}
	I0806 07:22:42.749659   31793 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-118173 && echo "ha-118173" | sudo tee /etc/hostname
	I0806 07:22:42.863245   31793 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-118173
	
	I0806 07:22:42.863266   31793 main.go:141] libmachine: (ha-118173) Calling .GetSSHHostname
	I0806 07:22:42.865903   31793 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:22:42.866299   31793 main.go:141] libmachine: (ha-118173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:df:f0", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:22:33 +0000 UTC Type:0 Mac:52:54:00:ef:df:f0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:ha-118173 Clientid:01:52:54:00:ef:df:f0}
	I0806 07:22:42.866327   31793 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined IP address 192.168.39.164 and MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:22:42.866479   31793 main.go:141] libmachine: (ha-118173) Calling .GetSSHPort
	I0806 07:22:42.866683   31793 main.go:141] libmachine: (ha-118173) Calling .GetSSHKeyPath
	I0806 07:22:42.866862   31793 main.go:141] libmachine: (ha-118173) Calling .GetSSHKeyPath
	I0806 07:22:42.867019   31793 main.go:141] libmachine: (ha-118173) Calling .GetSSHUsername
	I0806 07:22:42.867200   31793 main.go:141] libmachine: Using SSH client type: native
	I0806 07:22:42.867410   31793 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.164 22 <nil> <nil>}
	I0806 07:22:42.867434   31793 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-118173' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-118173/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-118173' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0806 07:22:42.977503   31793 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0806 07:22:42.977535   31793 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19370-13057/.minikube CaCertPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19370-13057/.minikube}
	I0806 07:22:42.977579   31793 buildroot.go:174] setting up certificates
	I0806 07:22:42.977595   31793 provision.go:84] configureAuth start
	I0806 07:22:42.977605   31793 main.go:141] libmachine: (ha-118173) Calling .GetMachineName
	I0806 07:22:42.977928   31793 main.go:141] libmachine: (ha-118173) Calling .GetIP
	I0806 07:22:42.980709   31793 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:22:42.981057   31793 main.go:141] libmachine: (ha-118173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:df:f0", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:22:33 +0000 UTC Type:0 Mac:52:54:00:ef:df:f0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:ha-118173 Clientid:01:52:54:00:ef:df:f0}
	I0806 07:22:42.981082   31793 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined IP address 192.168.39.164 and MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:22:42.981227   31793 main.go:141] libmachine: (ha-118173) Calling .GetSSHHostname
	I0806 07:22:42.983204   31793 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:22:42.983618   31793 main.go:141] libmachine: (ha-118173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:df:f0", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:22:33 +0000 UTC Type:0 Mac:52:54:00:ef:df:f0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:ha-118173 Clientid:01:52:54:00:ef:df:f0}
	I0806 07:22:42.983633   31793 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined IP address 192.168.39.164 and MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:22:42.983775   31793 provision.go:143] copyHostCerts
	I0806 07:22:42.983805   31793 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19370-13057/.minikube/ca.pem
	I0806 07:22:42.983847   31793 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-13057/.minikube/ca.pem, removing ...
	I0806 07:22:42.983856   31793 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-13057/.minikube/ca.pem
	I0806 07:22:42.983933   31793 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19370-13057/.minikube/ca.pem (1078 bytes)
	I0806 07:22:42.984041   31793 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19370-13057/.minikube/cert.pem
	I0806 07:22:42.984068   31793 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-13057/.minikube/cert.pem, removing ...
	I0806 07:22:42.984078   31793 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-13057/.minikube/cert.pem
	I0806 07:22:42.984117   31793 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19370-13057/.minikube/cert.pem (1123 bytes)
	I0806 07:22:42.984192   31793 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19370-13057/.minikube/key.pem
	I0806 07:22:42.984216   31793 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-13057/.minikube/key.pem, removing ...
	I0806 07:22:42.984225   31793 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-13057/.minikube/key.pem
	I0806 07:22:42.984259   31793 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19370-13057/.minikube/key.pem (1675 bytes)
	I0806 07:22:42.984338   31793 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19370-13057/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca-key.pem org=jenkins.ha-118173 san=[127.0.0.1 192.168.39.164 ha-118173 localhost minikube]
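
	configureAuth generates a server certificate signed by the local minikube CA with the SANs listed above. minikube does this in Go, but a hand-rolled approximation with openssl (file names hypothetical) would be roughly:
	    openssl req -new -newkey rsa:2048 -nodes \
	      -keyout server-key.pem -out server.csr -subj "/O=jenkins.ha-118173"
	    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
	      -out server.pem -days 1095 \
	      -extfile <(printf "subjectAltName=IP:127.0.0.1,IP:192.168.39.164,DNS:ha-118173,DNS:localhost,DNS:minikube")
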
	I0806 07:22:43.227825   31793 provision.go:177] copyRemoteCerts
	I0806 07:22:43.227894   31793 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0806 07:22:43.227920   31793 main.go:141] libmachine: (ha-118173) Calling .GetSSHHostname
	I0806 07:22:43.230720   31793 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:22:43.231085   31793 main.go:141] libmachine: (ha-118173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:df:f0", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:22:33 +0000 UTC Type:0 Mac:52:54:00:ef:df:f0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:ha-118173 Clientid:01:52:54:00:ef:df:f0}
	I0806 07:22:43.231115   31793 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined IP address 192.168.39.164 and MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:22:43.231313   31793 main.go:141] libmachine: (ha-118173) Calling .GetSSHPort
	I0806 07:22:43.231483   31793 main.go:141] libmachine: (ha-118173) Calling .GetSSHKeyPath
	I0806 07:22:43.231667   31793 main.go:141] libmachine: (ha-118173) Calling .GetSSHUsername
	I0806 07:22:43.231803   31793 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/ha-118173/id_rsa Username:docker}
	I0806 07:22:43.310581   31793 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0806 07:22:43.310659   31793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0806 07:22:43.334641   31793 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0806 07:22:43.334719   31793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0806 07:22:43.359238   31793 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0806 07:22:43.359305   31793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
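
	copyRemoteCerts pushes the server certificate, key and CA into /etc/docker on the guest using the same SSH credentials. Done by hand it would be roughly the following (sketch; the staging via /tmp plus sudo is an assumption, minikube streams the files through its ssh_runner instead):
	    KEY=/home/jenkins/minikube-integration/19370-13057/.minikube/machines/ha-118173/id_rsa
	    scp -i "$KEY" server.pem server-key.pem ca.pem docker@192.168.39.164:/tmp/
	    ssh -i "$KEY" docker@192.168.39.164 \
	      'sudo mkdir -p /etc/docker && sudo mv /tmp/server.pem /tmp/server-key.pem /tmp/ca.pem /etc/docker/'
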
	I0806 07:22:43.383248   31793 provision.go:87] duration metric: took 405.641533ms to configureAuth
	I0806 07:22:43.383272   31793 buildroot.go:189] setting minikube options for container-runtime
	I0806 07:22:43.383432   31793 config.go:182] Loaded profile config "ha-118173": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0806 07:22:43.383512   31793 main.go:141] libmachine: (ha-118173) Calling .GetSSHHostname
	I0806 07:22:43.385959   31793 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:22:43.386318   31793 main.go:141] libmachine: (ha-118173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:df:f0", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:22:33 +0000 UTC Type:0 Mac:52:54:00:ef:df:f0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:ha-118173 Clientid:01:52:54:00:ef:df:f0}
	I0806 07:22:43.386342   31793 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined IP address 192.168.39.164 and MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:22:43.386514   31793 main.go:141] libmachine: (ha-118173) Calling .GetSSHPort
	I0806 07:22:43.386708   31793 main.go:141] libmachine: (ha-118173) Calling .GetSSHKeyPath
	I0806 07:22:43.386858   31793 main.go:141] libmachine: (ha-118173) Calling .GetSSHKeyPath
	I0806 07:22:43.386993   31793 main.go:141] libmachine: (ha-118173) Calling .GetSSHUsername
	I0806 07:22:43.387147   31793 main.go:141] libmachine: Using SSH client type: native
	I0806 07:22:43.387364   31793 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.164 22 <nil> <nil>}
	I0806 07:22:43.387382   31793 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0806 07:22:43.649612   31793 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
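The step above writes a one-line sysconfig fragment so CRI-O treats the service CIDR (10.96.0.0/12) as an insecure registry; that the guest's crio.service actually sources /etc/sysconfig/crio.minikube is an assumption about the minikube ISO, not something this log shows. A minimal check from inside the VM would be:

  $ cat /etc/sysconfig/crio.minikube
  CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '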
	I0806 07:22:43.649641   31793 main.go:141] libmachine: Checking connection to Docker...
	I0806 07:22:43.649651   31793 main.go:141] libmachine: (ha-118173) Calling .GetURL
	I0806 07:22:43.651060   31793 main.go:141] libmachine: (ha-118173) DBG | Using libvirt version 6000000
	I0806 07:22:43.653159   31793 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:22:43.653535   31793 main.go:141] libmachine: (ha-118173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:df:f0", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:22:33 +0000 UTC Type:0 Mac:52:54:00:ef:df:f0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:ha-118173 Clientid:01:52:54:00:ef:df:f0}
	I0806 07:22:43.653555   31793 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined IP address 192.168.39.164 and MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:22:43.653733   31793 main.go:141] libmachine: Docker is up and running!
	I0806 07:22:43.653753   31793 main.go:141] libmachine: Reticulating splines...
	I0806 07:22:43.653761   31793 client.go:171] duration metric: took 24.394192731s to LocalClient.Create
	I0806 07:22:43.653784   31793 start.go:167] duration metric: took 24.394250648s to libmachine.API.Create "ha-118173"
	I0806 07:22:43.653790   31793 start.go:293] postStartSetup for "ha-118173" (driver="kvm2")
	I0806 07:22:43.653799   31793 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0806 07:22:43.653811   31793 main.go:141] libmachine: (ha-118173) Calling .DriverName
	I0806 07:22:43.654028   31793 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0806 07:22:43.654050   31793 main.go:141] libmachine: (ha-118173) Calling .GetSSHHostname
	I0806 07:22:43.656729   31793 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:22:43.657098   31793 main.go:141] libmachine: (ha-118173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:df:f0", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:22:33 +0000 UTC Type:0 Mac:52:54:00:ef:df:f0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:ha-118173 Clientid:01:52:54:00:ef:df:f0}
	I0806 07:22:43.657122   31793 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined IP address 192.168.39.164 and MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:22:43.657296   31793 main.go:141] libmachine: (ha-118173) Calling .GetSSHPort
	I0806 07:22:43.657499   31793 main.go:141] libmachine: (ha-118173) Calling .GetSSHKeyPath
	I0806 07:22:43.657675   31793 main.go:141] libmachine: (ha-118173) Calling .GetSSHUsername
	I0806 07:22:43.657840   31793 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/ha-118173/id_rsa Username:docker}
	I0806 07:22:43.738928   31793 ssh_runner.go:195] Run: cat /etc/os-release
	I0806 07:22:43.743272   31793 info.go:137] Remote host: Buildroot 2023.02.9
	I0806 07:22:43.743305   31793 filesync.go:126] Scanning /home/jenkins/minikube-integration/19370-13057/.minikube/addons for local assets ...
	I0806 07:22:43.743380   31793 filesync.go:126] Scanning /home/jenkins/minikube-integration/19370-13057/.minikube/files for local assets ...
	I0806 07:22:43.743485   31793 filesync.go:149] local asset: /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem -> 202462.pem in /etc/ssl/certs
	I0806 07:22:43.743497   31793 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem -> /etc/ssl/certs/202462.pem
	I0806 07:22:43.743590   31793 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0806 07:22:43.752929   31793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem --> /etc/ssl/certs/202462.pem (1708 bytes)
	I0806 07:22:43.777273   31793 start.go:296] duration metric: took 123.469717ms for postStartSetup
	I0806 07:22:43.777320   31793 main.go:141] libmachine: (ha-118173) Calling .GetConfigRaw
	I0806 07:22:43.777825   31793 main.go:141] libmachine: (ha-118173) Calling .GetIP
	I0806 07:22:43.780421   31793 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:22:43.780747   31793 main.go:141] libmachine: (ha-118173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:df:f0", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:22:33 +0000 UTC Type:0 Mac:52:54:00:ef:df:f0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:ha-118173 Clientid:01:52:54:00:ef:df:f0}
	I0806 07:22:43.780778   31793 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined IP address 192.168.39.164 and MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:22:43.781069   31793 profile.go:143] Saving config to /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/config.json ...
	I0806 07:22:43.781248   31793 start.go:128] duration metric: took 24.540153885s to createHost
	I0806 07:22:43.781269   31793 main.go:141] libmachine: (ha-118173) Calling .GetSSHHostname
	I0806 07:22:43.783423   31793 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:22:43.783719   31793 main.go:141] libmachine: (ha-118173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:df:f0", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:22:33 +0000 UTC Type:0 Mac:52:54:00:ef:df:f0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:ha-118173 Clientid:01:52:54:00:ef:df:f0}
	I0806 07:22:43.783741   31793 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined IP address 192.168.39.164 and MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:22:43.783856   31793 main.go:141] libmachine: (ha-118173) Calling .GetSSHPort
	I0806 07:22:43.784027   31793 main.go:141] libmachine: (ha-118173) Calling .GetSSHKeyPath
	I0806 07:22:43.784196   31793 main.go:141] libmachine: (ha-118173) Calling .GetSSHKeyPath
	I0806 07:22:43.784349   31793 main.go:141] libmachine: (ha-118173) Calling .GetSSHUsername
	I0806 07:22:43.784536   31793 main.go:141] libmachine: Using SSH client type: native
	I0806 07:22:43.784740   31793 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.164 22 <nil> <nil>}
	I0806 07:22:43.784758   31793 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0806 07:22:43.885380   31793 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722928963.865301045
	
	I0806 07:22:43.885401   31793 fix.go:216] guest clock: 1722928963.865301045
	I0806 07:22:43.885409   31793 fix.go:229] Guest: 2024-08-06 07:22:43.865301045 +0000 UTC Remote: 2024-08-06 07:22:43.781259462 +0000 UTC m=+24.648998203 (delta=84.041583ms)
	I0806 07:22:43.885445   31793 fix.go:200] guest clock delta is within tolerance: 84.041583ms
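As a quick sanity check on the figures above: the guest clock reads 07:22:43.865301045 and the host-side reference is 07:22:43.781259462, so the skew is 0.865301045 - 0.781259462 = 0.084041583 s, i.e. the 84.041583ms delta the log reports, which falls within the tolerance applied here.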
	I0806 07:22:43.885453   31793 start.go:83] releasing machines lock for "ha-118173", held for 24.644426654s
	I0806 07:22:43.885472   31793 main.go:141] libmachine: (ha-118173) Calling .DriverName
	I0806 07:22:43.885748   31793 main.go:141] libmachine: (ha-118173) Calling .GetIP
	I0806 07:22:43.888641   31793 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:22:43.889013   31793 main.go:141] libmachine: (ha-118173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:df:f0", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:22:33 +0000 UTC Type:0 Mac:52:54:00:ef:df:f0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:ha-118173 Clientid:01:52:54:00:ef:df:f0}
	I0806 07:22:43.889034   31793 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined IP address 192.168.39.164 and MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:22:43.889178   31793 main.go:141] libmachine: (ha-118173) Calling .DriverName
	I0806 07:22:43.889589   31793 main.go:141] libmachine: (ha-118173) Calling .DriverName
	I0806 07:22:43.889778   31793 main.go:141] libmachine: (ha-118173) Calling .DriverName
	I0806 07:22:43.889891   31793 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0806 07:22:43.889930   31793 main.go:141] libmachine: (ha-118173) Calling .GetSSHHostname
	I0806 07:22:43.890025   31793 ssh_runner.go:195] Run: cat /version.json
	I0806 07:22:43.890052   31793 main.go:141] libmachine: (ha-118173) Calling .GetSSHHostname
	I0806 07:22:43.892656   31793 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:22:43.892876   31793 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:22:43.893089   31793 main.go:141] libmachine: (ha-118173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:df:f0", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:22:33 +0000 UTC Type:0 Mac:52:54:00:ef:df:f0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:ha-118173 Clientid:01:52:54:00:ef:df:f0}
	I0806 07:22:43.893120   31793 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined IP address 192.168.39.164 and MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:22:43.893217   31793 main.go:141] libmachine: (ha-118173) Calling .GetSSHPort
	I0806 07:22:43.893359   31793 main.go:141] libmachine: (ha-118173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:df:f0", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:22:33 +0000 UTC Type:0 Mac:52:54:00:ef:df:f0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:ha-118173 Clientid:01:52:54:00:ef:df:f0}
	I0806 07:22:43.893384   31793 main.go:141] libmachine: (ha-118173) Calling .GetSSHKeyPath
	I0806 07:22:43.893388   31793 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined IP address 192.168.39.164 and MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:22:43.893569   31793 main.go:141] libmachine: (ha-118173) Calling .GetSSHUsername
	I0806 07:22:43.893572   31793 main.go:141] libmachine: (ha-118173) Calling .GetSSHPort
	I0806 07:22:43.893758   31793 main.go:141] libmachine: (ha-118173) Calling .GetSSHKeyPath
	I0806 07:22:43.893756   31793 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/ha-118173/id_rsa Username:docker}
	I0806 07:22:43.893965   31793 main.go:141] libmachine: (ha-118173) Calling .GetSSHUsername
	I0806 07:22:43.894118   31793 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/ha-118173/id_rsa Username:docker}
	I0806 07:22:43.993907   31793 ssh_runner.go:195] Run: systemctl --version
	I0806 07:22:43.999975   31793 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0806 07:22:44.161213   31793 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0806 07:22:44.167486   31793 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0806 07:22:44.167558   31793 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0806 07:22:44.183937   31793 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0806 07:22:44.183961   31793 start.go:495] detecting cgroup driver to use...
	I0806 07:22:44.184015   31793 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0806 07:22:44.200171   31793 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0806 07:22:44.213909   31793 docker.go:217] disabling cri-docker service (if available) ...
	I0806 07:22:44.213960   31793 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0806 07:22:44.227305   31793 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0806 07:22:44.241338   31793 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0806 07:22:44.360983   31793 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0806 07:22:44.532055   31793 docker.go:233] disabling docker service ...
	I0806 07:22:44.532156   31793 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0806 07:22:44.546746   31793 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0806 07:22:44.559760   31793 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0806 07:22:44.679096   31793 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0806 07:22:44.796850   31793 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0806 07:22:44.810975   31793 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0806 07:22:44.829608   31793 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0806 07:22:44.829674   31793 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 07:22:44.840501   31793 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0806 07:22:44.840567   31793 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 07:22:44.851272   31793 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 07:22:44.861943   31793 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 07:22:44.872594   31793 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0806 07:22:44.883104   31793 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 07:22:44.894221   31793 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 07:22:44.912514   31793 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
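Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following fragment (a sketch reconstructed from the commands in this log, not a dump of the actual file on the node):

  pause_image = "registry.k8s.io/pause:3.9"
  cgroup_manager = "cgroupfs"
  conmon_cgroup = "pod"
  default_sysctls = [
    "net.ipv4.ip_unprivileged_port_start=0",
  ]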
	I0806 07:22:44.922924   31793 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0806 07:22:44.933095   31793 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0806 07:22:44.933159   31793 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0806 07:22:44.946489   31793 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0806 07:22:44.956242   31793 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 07:22:45.071281   31793 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0806 07:22:45.217139   31793 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0806 07:22:45.217236   31793 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0806 07:22:45.222664   31793 start.go:563] Will wait 60s for crictl version
	I0806 07:22:45.222725   31793 ssh_runner.go:195] Run: which crictl
	I0806 07:22:45.227164   31793 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0806 07:22:45.268682   31793 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0806 07:22:45.268751   31793 ssh_runner.go:195] Run: crio --version
	I0806 07:22:45.297199   31793 ssh_runner.go:195] Run: crio --version
	I0806 07:22:45.327614   31793 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0806 07:22:45.328747   31793 main.go:141] libmachine: (ha-118173) Calling .GetIP
	I0806 07:22:45.331387   31793 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:22:45.331766   31793 main.go:141] libmachine: (ha-118173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:df:f0", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:22:33 +0000 UTC Type:0 Mac:52:54:00:ef:df:f0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:ha-118173 Clientid:01:52:54:00:ef:df:f0}
	I0806 07:22:45.331789   31793 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined IP address 192.168.39.164 and MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:22:45.332026   31793 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0806 07:22:45.336081   31793 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0806 07:22:45.349240   31793 kubeadm.go:883] updating cluster {Name:ha-118173 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-118173 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.164 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0806 07:22:45.349334   31793 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0806 07:22:45.349381   31793 ssh_runner.go:195] Run: sudo crictl images --output json
	I0806 07:22:45.382326   31793 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0806 07:22:45.382385   31793 ssh_runner.go:195] Run: which lz4
	I0806 07:22:45.386376   31793 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0806 07:22:45.386482   31793 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0806 07:22:45.390588   31793 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0806 07:22:45.390624   31793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0806 07:22:46.801233   31793 crio.go:462] duration metric: took 1.414781252s to copy over tarball
	I0806 07:22:46.801305   31793 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0806 07:22:48.974176   31793 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.172841008s)
	I0806 07:22:48.974201   31793 crio.go:469] duration metric: took 2.172941498s to extract the tarball
	I0806 07:22:48.974210   31793 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0806 07:22:49.012818   31793 ssh_runner.go:195] Run: sudo crictl images --output json
	I0806 07:22:49.059997   31793 crio.go:514] all images are preloaded for cri-o runtime.
	I0806 07:22:49.060065   31793 cache_images.go:84] Images are preloaded, skipping loading
	I0806 07:22:49.060081   31793 kubeadm.go:934] updating node { 192.168.39.164 8443 v1.30.3 crio true true} ...
	I0806 07:22:49.060247   31793 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-118173 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.164
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-118173 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0806 07:22:49.060340   31793 ssh_runner.go:195] Run: crio config
	I0806 07:22:49.116968   31793 cni.go:84] Creating CNI manager for ""
	I0806 07:22:49.116985   31793 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0806 07:22:49.116994   31793 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0806 07:22:49.117012   31793 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.164 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-118173 NodeName:ha-118173 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.164"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.164 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0806 07:22:49.117137   31793 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.164
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-118173"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.164
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.164"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
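The generated config is later copied to /var/tmp/minikube/kubeadm.yaml on the node. If one wanted to validate a config like this by hand before initialization, a dry run is one option (illustrative only, not a step minikube performs in this log):

  $ sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run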
	I0806 07:22:49.117159   31793 kube-vip.go:115] generating kube-vip config ...
	I0806 07:22:49.117199   31793 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0806 07:22:49.133977   31793 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0806 07:22:49.134071   31793 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
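Once this static pod is running and wins leader election, kube-vip should bind the control-plane VIP 192.168.39.254 on eth0 of the leading node and serve the API through port 8443 (values taken from the manifest above). Two illustrative checks, not commands from this run:

  $ ip addr show eth0 | grep 192.168.39.254
  $ curl -k https://192.168.39.254:8443/healthz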
	I0806 07:22:49.134129   31793 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0806 07:22:49.145001   31793 binaries.go:44] Found k8s binaries, skipping transfer
	I0806 07:22:49.145059   31793 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0806 07:22:49.155390   31793 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0806 07:22:49.173597   31793 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0806 07:22:49.190947   31793 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0806 07:22:49.208738   31793 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0806 07:22:49.225473   31793 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0806 07:22:49.229532   31793 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0806 07:22:49.242073   31793 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 07:22:49.356832   31793 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0806 07:22:49.374197   31793 certs.go:68] Setting up /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173 for IP: 192.168.39.164
	I0806 07:22:49.374220   31793 certs.go:194] generating shared ca certs ...
	I0806 07:22:49.374235   31793 certs.go:226] acquiring lock for ca certs: {Name:mkf5c5aed91f4108054d9f2c742cad66c86afbed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 07:22:49.374367   31793 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19370-13057/.minikube/ca.key
	I0806 07:22:49.374415   31793 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.key
	I0806 07:22:49.374424   31793 certs.go:256] generating profile certs ...
	I0806 07:22:49.374469   31793 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/client.key
	I0806 07:22:49.374482   31793 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/client.crt with IP's: []
	I0806 07:22:49.468767   31793 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/client.crt ...
	I0806 07:22:49.468796   31793 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/client.crt: {Name:mkdb6504129ad3f0008cbaa25ad6e428ceef3f50 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 07:22:49.469000   31793 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/client.key ...
	I0806 07:22:49.469017   31793 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/client.key: {Name:mk6c635edafde6192639fe4ec7e973a177087c23 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 07:22:49.469109   31793 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/apiserver.key.2ac65cee
	I0806 07:22:49.469126   31793 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/apiserver.crt.2ac65cee with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.164 192.168.39.254]
	I0806 07:22:49.578237   31793 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/apiserver.crt.2ac65cee ...
	I0806 07:22:49.578265   31793 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/apiserver.crt.2ac65cee: {Name:mke131565c18c2601dc5eab7c24faae128de16f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 07:22:49.578415   31793 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/apiserver.key.2ac65cee ...
	I0806 07:22:49.578428   31793 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/apiserver.key.2ac65cee: {Name:mk7bc1c46e2a06fbc7907c0f77b4010f21a4b3bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 07:22:49.578503   31793 certs.go:381] copying /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/apiserver.crt.2ac65cee -> /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/apiserver.crt
	I0806 07:22:49.578579   31793 certs.go:385] copying /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/apiserver.key.2ac65cee -> /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/apiserver.key
	I0806 07:22:49.578630   31793 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/proxy-client.key
	I0806 07:22:49.578643   31793 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/proxy-client.crt with IP's: []
	I0806 07:22:49.824928   31793 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/proxy-client.crt ...
	I0806 07:22:49.824956   31793 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/proxy-client.crt: {Name:mk1d3336f6bea6ba58f669d05a304e02d240dab1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 07:22:49.825113   31793 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/proxy-client.key ...
	I0806 07:22:49.825123   31793 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/proxy-client.key: {Name:mk5400a9c46c36ea7c2478043ec2ddcc14a42c9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 07:22:49.825191   31793 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0806 07:22:49.825208   31793 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0806 07:22:49.825218   31793 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0806 07:22:49.825231   31793 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0806 07:22:49.825243   31793 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0806 07:22:49.825259   31793 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0806 07:22:49.825273   31793 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0806 07:22:49.825286   31793 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0806 07:22:49.825333   31793 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/20246.pem (1338 bytes)
	W0806 07:22:49.825364   31793 certs.go:480] ignoring /home/jenkins/minikube-integration/19370-13057/.minikube/certs/20246_empty.pem, impossibly tiny 0 bytes
	I0806 07:22:49.825374   31793 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca-key.pem (1675 bytes)
	I0806 07:22:49.825399   31793 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem (1078 bytes)
	I0806 07:22:49.825442   31793 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/cert.pem (1123 bytes)
	I0806 07:22:49.825467   31793 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/key.pem (1675 bytes)
	I0806 07:22:49.825505   31793 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem (1708 bytes)
	I0806 07:22:49.825531   31793 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem -> /usr/share/ca-certificates/202462.pem
	I0806 07:22:49.825544   31793 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0806 07:22:49.825556   31793 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/20246.pem -> /usr/share/ca-certificates/20246.pem
	I0806 07:22:49.826059   31793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0806 07:22:49.852228   31793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0806 07:22:49.876745   31793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0806 07:22:49.900976   31793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0806 07:22:49.924627   31793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0806 07:22:49.949332   31793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0806 07:22:49.972966   31793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0806 07:22:49.997847   31793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0806 07:22:50.021034   31793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem --> /usr/share/ca-certificates/202462.pem (1708 bytes)
	I0806 07:22:50.044994   31793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0806 07:22:50.068735   31793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/certs/20246.pem --> /usr/share/ca-certificates/20246.pem (1338 bytes)
	I0806 07:22:50.093218   31793 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0806 07:22:50.110297   31793 ssh_runner.go:195] Run: openssl version
	I0806 07:22:50.116593   31793 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202462.pem && ln -fs /usr/share/ca-certificates/202462.pem /etc/ssl/certs/202462.pem"
	I0806 07:22:50.127451   31793 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202462.pem
	I0806 07:22:50.132026   31793 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  6 07:17 /usr/share/ca-certificates/202462.pem
	I0806 07:22:50.132084   31793 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202462.pem
	I0806 07:22:50.137994   31793 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202462.pem /etc/ssl/certs/3ec20f2e.0"
	I0806 07:22:50.148960   31793 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0806 07:22:50.163116   31793 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0806 07:22:50.171691   31793 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  6 07:07 /usr/share/ca-certificates/minikubeCA.pem
	I0806 07:22:50.171755   31793 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0806 07:22:50.177861   31793 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0806 07:22:50.189174   31793 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20246.pem && ln -fs /usr/share/ca-certificates/20246.pem /etc/ssl/certs/20246.pem"
	I0806 07:22:50.206985   31793 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20246.pem
	I0806 07:22:50.215266   31793 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  6 07:17 /usr/share/ca-certificates/20246.pem
	I0806 07:22:50.215333   31793 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20246.pem
	I0806 07:22:50.222067   31793 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20246.pem /etc/ssl/certs/51391683.0"
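The ln -fs targets above follow OpenSSL's subject-hash convention: each CA is linked as <hash>.0 under /etc/ssl/certs so that hash-based lookup can find it. The hash is whatever `openssl x509 -hash -noout` prints for the certificate, e.g. for the minikube CA used in this run it matches the b5213941.0 link created above:

  $ openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
  b5213941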
	I0806 07:22:50.234620   31793 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0806 07:22:50.238723   31793 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0806 07:22:50.238786   31793 kubeadm.go:392] StartCluster: {Name:ha-118173 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-118173 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.164 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 07:22:50.238882   31793 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0806 07:22:50.238946   31793 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0806 07:22:50.283743   31793 cri.go:89] found id: ""
	I0806 07:22:50.283808   31793 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0806 07:22:50.294119   31793 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0806 07:22:50.306527   31793 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0806 07:22:50.318875   31793 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0806 07:22:50.318890   31793 kubeadm.go:157] found existing configuration files:
	
	I0806 07:22:50.318941   31793 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0806 07:22:50.328339   31793 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0806 07:22:50.328407   31793 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0806 07:22:50.338391   31793 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0806 07:22:50.347873   31793 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0806 07:22:50.347931   31793 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0806 07:22:50.358017   31793 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0806 07:22:50.367780   31793 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0806 07:22:50.367827   31793 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0806 07:22:50.377570   31793 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0806 07:22:50.386717   31793 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0806 07:22:50.386784   31793 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0806 07:22:50.396426   31793 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0806 07:22:50.503109   31793 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0806 07:22:50.503212   31793 kubeadm.go:310] [preflight] Running pre-flight checks
	I0806 07:22:50.632413   31793 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0806 07:22:50.632554   31793 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0806 07:22:50.632684   31793 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0806 07:22:50.837479   31793 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0806 07:22:50.940130   31793 out.go:204]   - Generating certificates and keys ...
	I0806 07:22:50.940235   31793 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0806 07:22:50.940304   31793 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0806 07:22:51.015566   31793 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0806 07:22:51.180117   31793 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0806 07:22:51.247206   31793 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0806 07:22:51.403988   31793 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0806 07:22:51.605886   31793 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0806 07:22:51.606072   31793 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-118173 localhost] and IPs [192.168.39.164 127.0.0.1 ::1]
	I0806 07:22:51.691308   31793 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0806 07:22:51.691444   31793 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-118173 localhost] and IPs [192.168.39.164 127.0.0.1 ::1]
	I0806 07:22:51.911058   31793 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0806 07:22:52.155713   31793 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0806 07:22:52.333276   31793 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0806 07:22:52.333340   31793 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0806 07:22:52.699252   31793 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0806 07:22:52.889643   31793 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0806 07:22:53.005971   31793 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0806 07:22:53.103664   31793 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0806 07:22:53.292265   31793 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0806 07:22:53.293138   31793 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0806 07:22:53.297138   31793 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0806 07:22:53.354587   31793 out.go:204]   - Booting up control plane ...
	I0806 07:22:53.354730   31793 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0806 07:22:53.354836   31793 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0806 07:22:53.354936   31793 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0806 07:22:53.355109   31793 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0806 07:22:53.355273   31793 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0806 07:22:53.355339   31793 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0806 07:22:53.441301   31793 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0806 07:22:53.441393   31793 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0806 07:22:53.942653   31793 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.740948ms
	I0806 07:22:53.942756   31793 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0806 07:22:59.881669   31793 kubeadm.go:310] [api-check] The API server is healthy after 5.942191318s
	I0806 07:22:59.901087   31793 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0806 07:22:59.920346   31793 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0806 07:22:59.954894   31793 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0806 07:22:59.955114   31793 kubeadm.go:310] [mark-control-plane] Marking the node ha-118173 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0806 07:22:59.977487   31793 kubeadm.go:310] [bootstrap-token] Using token: fqcri1.u6s5b3pjujut08vh
	I0806 07:22:59.978720   31793 out.go:204]   - Configuring RBAC rules ...
	I0806 07:22:59.978854   31793 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0806 07:22:59.989786   31793 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0806 07:23:00.003618   31793 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0806 07:23:00.013102   31793 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0806 07:23:00.018335   31793 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0806 07:23:00.022863   31793 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0806 07:23:00.289905   31793 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0806 07:23:00.746683   31793 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0806 07:23:01.289532   31793 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0806 07:23:01.290604   31793 kubeadm.go:310] 
	I0806 07:23:01.290685   31793 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0806 07:23:01.290693   31793 kubeadm.go:310] 
	I0806 07:23:01.290771   31793 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0806 07:23:01.290779   31793 kubeadm.go:310] 
	I0806 07:23:01.290827   31793 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0806 07:23:01.290905   31793 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0806 07:23:01.290979   31793 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0806 07:23:01.290999   31793 kubeadm.go:310] 
	I0806 07:23:01.291065   31793 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0806 07:23:01.291075   31793 kubeadm.go:310] 
	I0806 07:23:01.291131   31793 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0806 07:23:01.291141   31793 kubeadm.go:310] 
	I0806 07:23:01.291205   31793 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0806 07:23:01.291310   31793 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0806 07:23:01.291412   31793 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0806 07:23:01.291432   31793 kubeadm.go:310] 
	I0806 07:23:01.291580   31793 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0806 07:23:01.291705   31793 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0806 07:23:01.291716   31793 kubeadm.go:310] 
	I0806 07:23:01.291850   31793 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token fqcri1.u6s5b3pjujut08vh \
	I0806 07:23:01.292001   31793 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d36e5c1dfe989c7d561c809d7c06e0fc13926ac8f6d664cc5d6865ea5f79931b \
	I0806 07:23:01.292024   31793 kubeadm.go:310] 	--control-plane 
	I0806 07:23:01.292032   31793 kubeadm.go:310] 
	I0806 07:23:01.292132   31793 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0806 07:23:01.292142   31793 kubeadm.go:310] 
	I0806 07:23:01.292255   31793 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token fqcri1.u6s5b3pjujut08vh \
	I0806 07:23:01.292411   31793 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d36e5c1dfe989c7d561c809d7c06e0fc13926ac8f6d664cc5d6865ea5f79931b 
	I0806 07:23:01.294053   31793 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
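The kubeadm init output above ends with the join commands for additional control-plane and worker nodes, driven automatically by minikube for the remaining ha-118173 machines. As a hedged aside (not something this test run executes), the same join line can be regenerated by hand on the primary control plane if the bootstrap token fqcri1.u6s5b3pjujut08vh has expired, since bootstrap tokens are short-lived by default:

	# Run as root on ha-118173; prints a fresh "kubeadm join <endpoint> --token ... --discovery-token-ca-cert-hash ..." line
	kubeadm token create --print-join-command
	# For another control-plane node, append --control-plane to the printed command
	# (and distribute the certificate authorities / service account keys as the output above notes).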
	I0806 07:23:01.294077   31793 cni.go:84] Creating CNI manager for ""
	I0806 07:23:01.294085   31793 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0806 07:23:01.295774   31793 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0806 07:23:01.297227   31793 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0806 07:23:01.302891   31793 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
	I0806 07:23:01.302908   31793 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0806 07:23:01.323321   31793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0806 07:23:01.649030   31793 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0806 07:23:01.649123   31793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 07:23:01.649128   31793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-118173 minikube.k8s.io/updated_at=2024_08_06T07_23_01_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=e92cb06692f5ea1ba801d10d148e5e92e807f9c8 minikube.k8s.io/name=ha-118173 minikube.k8s.io/primary=true
	I0806 07:23:01.662300   31793 ops.go:34] apiserver oom_adj: -16
	I0806 07:23:01.837688   31793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 07:23:02.337917   31793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 07:23:02.838694   31793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 07:23:03.337851   31793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 07:23:03.837800   31793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 07:23:04.338252   31793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 07:23:04.838153   31793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 07:23:05.338045   31793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 07:23:05.838541   31793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 07:23:06.338506   31793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 07:23:06.838409   31793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 07:23:07.338469   31793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 07:23:07.837864   31793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 07:23:08.337724   31793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 07:23:08.838540   31793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 07:23:09.338677   31793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 07:23:09.838346   31793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 07:23:10.337750   31793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 07:23:10.838657   31793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 07:23:11.337722   31793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 07:23:11.838695   31793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 07:23:12.337840   31793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 07:23:12.837740   31793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 07:23:13.338482   31793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 07:23:13.415858   31793 kubeadm.go:1113] duration metric: took 11.766796251s to wait for elevateKubeSystemPrivileges
	I0806 07:23:13.415909   31793 kubeadm.go:394] duration metric: took 23.177126815s to StartCluster
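The burst of "kubectl get sa default" calls above, issued roughly every 500ms, appears to be minikube waiting for the "default" ServiceAccount to exist before it finishes the elevateKubeSystemPrivileges step (the cluster-admin binding for kube-system:default was created just before). A rough shell equivalent of that wait, reusing the binary and kubeconfig paths from the log; the loop itself is an illustrative assumption, not minikube's actual code:

	KUBECTL=/var/lib/minikube/binaries/v1.30.3/kubectl
	KUBECONFIG_PATH=/var/lib/minikube/kubeconfig
	# Poll until the "default" ServiceAccount exists in the default namespace.
	until sudo "$KUBECTL" get sa default --kubeconfig="$KUBECONFIG_PATH" >/dev/null 2>&1; do
	    sleep 0.5
	done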
	I0806 07:23:13.415933   31793 settings.go:142] acquiring lock: {Name:mkb0bfbcae3b4205ef43487a9de84cfd8afd2e5e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 07:23:13.416026   31793 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19370-13057/kubeconfig
	I0806 07:23:13.417013   31793 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-13057/kubeconfig: {Name:mk5c066914acf8e36556b29792f75b98076ca34a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 07:23:13.417279   31793 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0806 07:23:13.417288   31793 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.164 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0806 07:23:13.417312   31793 start.go:241] waiting for startup goroutines ...
	I0806 07:23:13.417325   31793 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0806 07:23:13.417389   31793 addons.go:69] Setting storage-provisioner=true in profile "ha-118173"
	I0806 07:23:13.417413   31793 addons.go:69] Setting default-storageclass=true in profile "ha-118173"
	I0806 07:23:13.417420   31793 addons.go:234] Setting addon storage-provisioner=true in "ha-118173"
	I0806 07:23:13.417444   31793 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-118173"
	I0806 07:23:13.417448   31793 host.go:66] Checking if "ha-118173" exists ...
	I0806 07:23:13.417527   31793 config.go:182] Loaded profile config "ha-118173": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0806 07:23:13.417841   31793 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:23:13.417864   31793 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:23:13.417874   31793 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:23:13.417895   31793 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:23:13.432576   31793 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46107
	I0806 07:23:13.432584   31793 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33159
	I0806 07:23:13.432988   31793 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:23:13.433080   31793 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:23:13.433451   31793 main.go:141] libmachine: Using API Version  1
	I0806 07:23:13.433466   31793 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:23:13.433624   31793 main.go:141] libmachine: Using API Version  1
	I0806 07:23:13.433645   31793 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:23:13.433808   31793 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:23:13.434018   31793 main.go:141] libmachine: (ha-118173) Calling .GetState
	I0806 07:23:13.434035   31793 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:23:13.434522   31793 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:23:13.434563   31793 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:23:13.436293   31793 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19370-13057/kubeconfig
	I0806 07:23:13.436651   31793 kapi.go:59] client config for ha-118173: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/client.crt", KeyFile:"/home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/client.key", CAFile:"/home/jenkins/minikube-integration/19370-13057/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02a80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0806 07:23:13.437101   31793 cert_rotation.go:137] Starting client certificate rotation controller
	I0806 07:23:13.437355   31793 addons.go:234] Setting addon default-storageclass=true in "ha-118173"
	I0806 07:23:13.437401   31793 host.go:66] Checking if "ha-118173" exists ...
	I0806 07:23:13.437767   31793 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:23:13.437814   31793 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:23:13.450209   31793 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45857
	I0806 07:23:13.450656   31793 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:23:13.451208   31793 main.go:141] libmachine: Using API Version  1
	I0806 07:23:13.451232   31793 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:23:13.451610   31793 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:23:13.451803   31793 main.go:141] libmachine: (ha-118173) Calling .GetState
	I0806 07:23:13.452631   31793 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44311
	I0806 07:23:13.453234   31793 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:23:13.453700   31793 main.go:141] libmachine: (ha-118173) Calling .DriverName
	I0806 07:23:13.453727   31793 main.go:141] libmachine: Using API Version  1
	I0806 07:23:13.453747   31793 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:23:13.454060   31793 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:23:13.454512   31793 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:23:13.454539   31793 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:23:13.455504   31793 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0806 07:23:13.457284   31793 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0806 07:23:13.457309   31793 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0806 07:23:13.457331   31793 main.go:141] libmachine: (ha-118173) Calling .GetSSHHostname
	I0806 07:23:13.460218   31793 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:23:13.460629   31793 main.go:141] libmachine: (ha-118173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:df:f0", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:22:33 +0000 UTC Type:0 Mac:52:54:00:ef:df:f0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:ha-118173 Clientid:01:52:54:00:ef:df:f0}
	I0806 07:23:13.460651   31793 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined IP address 192.168.39.164 and MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:23:13.460855   31793 main.go:141] libmachine: (ha-118173) Calling .GetSSHPort
	I0806 07:23:13.461047   31793 main.go:141] libmachine: (ha-118173) Calling .GetSSHKeyPath
	I0806 07:23:13.461214   31793 main.go:141] libmachine: (ha-118173) Calling .GetSSHUsername
	I0806 07:23:13.461338   31793 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/ha-118173/id_rsa Username:docker}
	I0806 07:23:13.473573   31793 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42685
	I0806 07:23:13.474048   31793 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:23:13.474556   31793 main.go:141] libmachine: Using API Version  1
	I0806 07:23:13.474589   31793 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:23:13.474961   31793 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:23:13.475167   31793 main.go:141] libmachine: (ha-118173) Calling .GetState
	I0806 07:23:13.477062   31793 main.go:141] libmachine: (ha-118173) Calling .DriverName
	I0806 07:23:13.477280   31793 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0806 07:23:13.477294   31793 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0806 07:23:13.477308   31793 main.go:141] libmachine: (ha-118173) Calling .GetSSHHostname
	I0806 07:23:13.480393   31793 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:23:13.480895   31793 main.go:141] libmachine: (ha-118173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:df:f0", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:22:33 +0000 UTC Type:0 Mac:52:54:00:ef:df:f0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:ha-118173 Clientid:01:52:54:00:ef:df:f0}
	I0806 07:23:13.480929   31793 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined IP address 192.168.39.164 and MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:23:13.481068   31793 main.go:141] libmachine: (ha-118173) Calling .GetSSHPort
	I0806 07:23:13.481292   31793 main.go:141] libmachine: (ha-118173) Calling .GetSSHKeyPath
	I0806 07:23:13.481439   31793 main.go:141] libmachine: (ha-118173) Calling .GetSSHUsername
	I0806 07:23:13.481582   31793 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/ha-118173/id_rsa Username:docker}
	I0806 07:23:13.551371   31793 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0806 07:23:13.591964   31793 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0806 07:23:13.695834   31793 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0806 07:23:14.162916   31793 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
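The sed pipeline a few lines above rewrites the coredns ConfigMap so that host.minikube.internal resolves to the host-side address 192.168.39.1; the "host record injected" message confirms the replace succeeded. The result can be checked afterwards with the same on-node kubectl invocation; a hedged manual verification, not a step the test performs:

	sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	    -n kube-system get configmap coredns -o yaml
	# The Corefile should now contain the injected block ahead of the forward plugin:
	#     hosts {
	#        192.168.39.1 host.minikube.internal
	#        fallthrough
	#     }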
	I0806 07:23:14.515978   31793 main.go:141] libmachine: Making call to close driver server
	I0806 07:23:14.516007   31793 main.go:141] libmachine: (ha-118173) Calling .Close
	I0806 07:23:14.516016   31793 main.go:141] libmachine: Making call to close driver server
	I0806 07:23:14.516035   31793 main.go:141] libmachine: (ha-118173) Calling .Close
	I0806 07:23:14.516341   31793 main.go:141] libmachine: Successfully made call to close driver server
	I0806 07:23:14.516371   31793 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 07:23:14.516380   31793 main.go:141] libmachine: Making call to close driver server
	I0806 07:23:14.516388   31793 main.go:141] libmachine: (ha-118173) Calling .Close
	I0806 07:23:14.516388   31793 main.go:141] libmachine: Successfully made call to close driver server
	I0806 07:23:14.516401   31793 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 07:23:14.516412   31793 main.go:141] libmachine: Making call to close driver server
	I0806 07:23:14.516420   31793 main.go:141] libmachine: (ha-118173) Calling .Close
	I0806 07:23:14.516462   31793 main.go:141] libmachine: (ha-118173) DBG | Closing plugin on server side
	I0806 07:23:14.516544   31793 main.go:141] libmachine: (ha-118173) DBG | Closing plugin on server side
	I0806 07:23:14.516596   31793 main.go:141] libmachine: (ha-118173) DBG | Closing plugin on server side
	I0806 07:23:14.516620   31793 main.go:141] libmachine: Successfully made call to close driver server
	I0806 07:23:14.516628   31793 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 07:23:14.517941   31793 main.go:141] libmachine: (ha-118173) DBG | Closing plugin on server side
	I0806 07:23:14.517979   31793 main.go:141] libmachine: Successfully made call to close driver server
	I0806 07:23:14.517997   31793 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 07:23:14.518151   31793 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0806 07:23:14.518165   31793 round_trippers.go:469] Request Headers:
	I0806 07:23:14.518177   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:23:14.518191   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:23:14.528720   31793 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0806 07:23:14.530107   31793 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0806 07:23:14.530131   31793 round_trippers.go:469] Request Headers:
	I0806 07:23:14.530138   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:23:14.530141   31793 round_trippers.go:473]     Content-Type: application/json
	I0806 07:23:14.530144   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:23:14.532918   31793 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 07:23:14.533101   31793 main.go:141] libmachine: Making call to close driver server
	I0806 07:23:14.533126   31793 main.go:141] libmachine: (ha-118173) Calling .Close
	I0806 07:23:14.533390   31793 main.go:141] libmachine: Successfully made call to close driver server
	I0806 07:23:14.533405   31793 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 07:23:14.535118   31793 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0806 07:23:14.536337   31793 addons.go:510] duration metric: took 1.119007516s for enable addons: enabled=[storage-provisioner default-storageclass]
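Only storage-provisioner and default-storageclass are enabled here; every other entry in the toEnable map above is false. As a hedged convenience (not executed by this test), the resulting addon state for the profile can be listed with the standard minikube CLI:

	# Lists each addon and whether it is enabled for the ha-118173 profile.
	minikube addons list -p ha-118173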
	I0806 07:23:14.536388   31793 start.go:246] waiting for cluster config update ...
	I0806 07:23:14.536406   31793 start.go:255] writing updated cluster config ...
	I0806 07:23:14.538220   31793 out.go:177] 
	I0806 07:23:14.539561   31793 config.go:182] Loaded profile config "ha-118173": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0806 07:23:14.539664   31793 profile.go:143] Saving config to /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/config.json ...
	I0806 07:23:14.541328   31793 out.go:177] * Starting "ha-118173-m02" control-plane node in "ha-118173" cluster
	I0806 07:23:14.542630   31793 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0806 07:23:14.542665   31793 cache.go:56] Caching tarball of preloaded images
	I0806 07:23:14.542777   31793 preload.go:172] Found /home/jenkins/minikube-integration/19370-13057/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0806 07:23:14.542798   31793 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0806 07:23:14.542923   31793 profile.go:143] Saving config to /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/config.json ...
	I0806 07:23:14.543152   31793 start.go:360] acquireMachinesLock for ha-118173-m02: {Name:mkc602ac9c8cc3360dcc0fcb8104303837aaab61 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0806 07:23:14.543214   31793 start.go:364] duration metric: took 36.596µs to acquireMachinesLock for "ha-118173-m02"
	I0806 07:23:14.543237   31793 start.go:93] Provisioning new machine with config: &{Name:ha-118173 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-118173 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.164 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0806 07:23:14.543331   31793 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0806 07:23:14.544956   31793 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0806 07:23:14.545089   31793 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:23:14.545119   31793 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:23:14.560354   31793 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35915
	I0806 07:23:14.560785   31793 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:23:14.561300   31793 main.go:141] libmachine: Using API Version  1
	I0806 07:23:14.561318   31793 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:23:14.561639   31793 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:23:14.561889   31793 main.go:141] libmachine: (ha-118173-m02) Calling .GetMachineName
	I0806 07:23:14.562052   31793 main.go:141] libmachine: (ha-118173-m02) Calling .DriverName
	I0806 07:23:14.562234   31793 start.go:159] libmachine.API.Create for "ha-118173" (driver="kvm2")
	I0806 07:23:14.562255   31793 client.go:168] LocalClient.Create starting
	I0806 07:23:14.562291   31793 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem
	I0806 07:23:14.562331   31793 main.go:141] libmachine: Decoding PEM data...
	I0806 07:23:14.562344   31793 main.go:141] libmachine: Parsing certificate...
	I0806 07:23:14.562394   31793 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19370-13057/.minikube/certs/cert.pem
	I0806 07:23:14.562412   31793 main.go:141] libmachine: Decoding PEM data...
	I0806 07:23:14.562422   31793 main.go:141] libmachine: Parsing certificate...
	I0806 07:23:14.562437   31793 main.go:141] libmachine: Running pre-create checks...
	I0806 07:23:14.562445   31793 main.go:141] libmachine: (ha-118173-m02) Calling .PreCreateCheck
	I0806 07:23:14.562616   31793 main.go:141] libmachine: (ha-118173-m02) Calling .GetConfigRaw
	I0806 07:23:14.563050   31793 main.go:141] libmachine: Creating machine...
	I0806 07:23:14.563069   31793 main.go:141] libmachine: (ha-118173-m02) Calling .Create
	I0806 07:23:14.563233   31793 main.go:141] libmachine: (ha-118173-m02) Creating KVM machine...
	I0806 07:23:14.564414   31793 main.go:141] libmachine: (ha-118173-m02) DBG | found existing default KVM network
	I0806 07:23:14.564527   31793 main.go:141] libmachine: (ha-118173-m02) DBG | found existing private KVM network mk-ha-118173
	I0806 07:23:14.564689   31793 main.go:141] libmachine: (ha-118173-m02) Setting up store path in /home/jenkins/minikube-integration/19370-13057/.minikube/machines/ha-118173-m02 ...
	I0806 07:23:14.564713   31793 main.go:141] libmachine: (ha-118173-m02) Building disk image from file:///home/jenkins/minikube-integration/19370-13057/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
	I0806 07:23:14.564790   31793 main.go:141] libmachine: (ha-118173-m02) DBG | I0806 07:23:14.564685   32199 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19370-13057/.minikube
	I0806 07:23:14.564891   31793 main.go:141] libmachine: (ha-118173-m02) Downloading /home/jenkins/minikube-integration/19370-13057/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19370-13057/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0806 07:23:14.794040   31793 main.go:141] libmachine: (ha-118173-m02) DBG | I0806 07:23:14.793908   32199 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19370-13057/.minikube/machines/ha-118173-m02/id_rsa...
	I0806 07:23:14.896464   31793 main.go:141] libmachine: (ha-118173-m02) DBG | I0806 07:23:14.896307   32199 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19370-13057/.minikube/machines/ha-118173-m02/ha-118173-m02.rawdisk...
	I0806 07:23:14.896495   31793 main.go:141] libmachine: (ha-118173-m02) DBG | Writing magic tar header
	I0806 07:23:14.896506   31793 main.go:141] libmachine: (ha-118173-m02) DBG | Writing SSH key tar header
	I0806 07:23:14.896520   31793 main.go:141] libmachine: (ha-118173-m02) DBG | I0806 07:23:14.896451   32199 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19370-13057/.minikube/machines/ha-118173-m02 ...
	I0806 07:23:14.896616   31793 main.go:141] libmachine: (ha-118173-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19370-13057/.minikube/machines/ha-118173-m02
	I0806 07:23:14.896649   31793 main.go:141] libmachine: (ha-118173-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19370-13057/.minikube/machines
	I0806 07:23:14.896661   31793 main.go:141] libmachine: (ha-118173-m02) Setting executable bit set on /home/jenkins/minikube-integration/19370-13057/.minikube/machines/ha-118173-m02 (perms=drwx------)
	I0806 07:23:14.896678   31793 main.go:141] libmachine: (ha-118173-m02) Setting executable bit set on /home/jenkins/minikube-integration/19370-13057/.minikube/machines (perms=drwxr-xr-x)
	I0806 07:23:14.896690   31793 main.go:141] libmachine: (ha-118173-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19370-13057/.minikube
	I0806 07:23:14.896704   31793 main.go:141] libmachine: (ha-118173-m02) Setting executable bit set on /home/jenkins/minikube-integration/19370-13057/.minikube (perms=drwxr-xr-x)
	I0806 07:23:14.896719   31793 main.go:141] libmachine: (ha-118173-m02) Setting executable bit set on /home/jenkins/minikube-integration/19370-13057 (perms=drwxrwxr-x)
	I0806 07:23:14.896732   31793 main.go:141] libmachine: (ha-118173-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0806 07:23:14.896745   31793 main.go:141] libmachine: (ha-118173-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19370-13057
	I0806 07:23:14.896755   31793 main.go:141] libmachine: (ha-118173-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0806 07:23:14.896770   31793 main.go:141] libmachine: (ha-118173-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0806 07:23:14.896782   31793 main.go:141] libmachine: (ha-118173-m02) Creating domain...
	I0806 07:23:14.896803   31793 main.go:141] libmachine: (ha-118173-m02) DBG | Checking permissions on dir: /home/jenkins
	I0806 07:23:14.896816   31793 main.go:141] libmachine: (ha-118173-m02) DBG | Checking permissions on dir: /home
	I0806 07:23:14.896862   31793 main.go:141] libmachine: (ha-118173-m02) DBG | Skipping /home - not owner
	I0806 07:23:14.897812   31793 main.go:141] libmachine: (ha-118173-m02) define libvirt domain using xml: 
	I0806 07:23:14.897835   31793 main.go:141] libmachine: (ha-118173-m02) <domain type='kvm'>
	I0806 07:23:14.897845   31793 main.go:141] libmachine: (ha-118173-m02)   <name>ha-118173-m02</name>
	I0806 07:23:14.897857   31793 main.go:141] libmachine: (ha-118173-m02)   <memory unit='MiB'>2200</memory>
	I0806 07:23:14.897868   31793 main.go:141] libmachine: (ha-118173-m02)   <vcpu>2</vcpu>
	I0806 07:23:14.897876   31793 main.go:141] libmachine: (ha-118173-m02)   <features>
	I0806 07:23:14.897884   31793 main.go:141] libmachine: (ha-118173-m02)     <acpi/>
	I0806 07:23:14.897893   31793 main.go:141] libmachine: (ha-118173-m02)     <apic/>
	I0806 07:23:14.897930   31793 main.go:141] libmachine: (ha-118173-m02)     <pae/>
	I0806 07:23:14.897956   31793 main.go:141] libmachine: (ha-118173-m02)     
	I0806 07:23:14.897979   31793 main.go:141] libmachine: (ha-118173-m02)   </features>
	I0806 07:23:14.897997   31793 main.go:141] libmachine: (ha-118173-m02)   <cpu mode='host-passthrough'>
	I0806 07:23:14.898009   31793 main.go:141] libmachine: (ha-118173-m02)   
	I0806 07:23:14.898019   31793 main.go:141] libmachine: (ha-118173-m02)   </cpu>
	I0806 07:23:14.898029   31793 main.go:141] libmachine: (ha-118173-m02)   <os>
	I0806 07:23:14.898034   31793 main.go:141] libmachine: (ha-118173-m02)     <type>hvm</type>
	I0806 07:23:14.898040   31793 main.go:141] libmachine: (ha-118173-m02)     <boot dev='cdrom'/>
	I0806 07:23:14.898047   31793 main.go:141] libmachine: (ha-118173-m02)     <boot dev='hd'/>
	I0806 07:23:14.898057   31793 main.go:141] libmachine: (ha-118173-m02)     <bootmenu enable='no'/>
	I0806 07:23:14.898066   31793 main.go:141] libmachine: (ha-118173-m02)   </os>
	I0806 07:23:14.898076   31793 main.go:141] libmachine: (ha-118173-m02)   <devices>
	I0806 07:23:14.898087   31793 main.go:141] libmachine: (ha-118173-m02)     <disk type='file' device='cdrom'>
	I0806 07:23:14.898100   31793 main.go:141] libmachine: (ha-118173-m02)       <source file='/home/jenkins/minikube-integration/19370-13057/.minikube/machines/ha-118173-m02/boot2docker.iso'/>
	I0806 07:23:14.898111   31793 main.go:141] libmachine: (ha-118173-m02)       <target dev='hdc' bus='scsi'/>
	I0806 07:23:14.898125   31793 main.go:141] libmachine: (ha-118173-m02)       <readonly/>
	I0806 07:23:14.898138   31793 main.go:141] libmachine: (ha-118173-m02)     </disk>
	I0806 07:23:14.898145   31793 main.go:141] libmachine: (ha-118173-m02)     <disk type='file' device='disk'>
	I0806 07:23:14.898157   31793 main.go:141] libmachine: (ha-118173-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0806 07:23:14.898166   31793 main.go:141] libmachine: (ha-118173-m02)       <source file='/home/jenkins/minikube-integration/19370-13057/.minikube/machines/ha-118173-m02/ha-118173-m02.rawdisk'/>
	I0806 07:23:14.898174   31793 main.go:141] libmachine: (ha-118173-m02)       <target dev='hda' bus='virtio'/>
	I0806 07:23:14.898182   31793 main.go:141] libmachine: (ha-118173-m02)     </disk>
	I0806 07:23:14.898192   31793 main.go:141] libmachine: (ha-118173-m02)     <interface type='network'>
	I0806 07:23:14.898201   31793 main.go:141] libmachine: (ha-118173-m02)       <source network='mk-ha-118173'/>
	I0806 07:23:14.898211   31793 main.go:141] libmachine: (ha-118173-m02)       <model type='virtio'/>
	I0806 07:23:14.898219   31793 main.go:141] libmachine: (ha-118173-m02)     </interface>
	I0806 07:23:14.898229   31793 main.go:141] libmachine: (ha-118173-m02)     <interface type='network'>
	I0806 07:23:14.898237   31793 main.go:141] libmachine: (ha-118173-m02)       <source network='default'/>
	I0806 07:23:14.898248   31793 main.go:141] libmachine: (ha-118173-m02)       <model type='virtio'/>
	I0806 07:23:14.898262   31793 main.go:141] libmachine: (ha-118173-m02)     </interface>
	I0806 07:23:14.898275   31793 main.go:141] libmachine: (ha-118173-m02)     <serial type='pty'>
	I0806 07:23:14.898287   31793 main.go:141] libmachine: (ha-118173-m02)       <target port='0'/>
	I0806 07:23:14.898295   31793 main.go:141] libmachine: (ha-118173-m02)     </serial>
	I0806 07:23:14.898302   31793 main.go:141] libmachine: (ha-118173-m02)     <console type='pty'>
	I0806 07:23:14.898313   31793 main.go:141] libmachine: (ha-118173-m02)       <target type='serial' port='0'/>
	I0806 07:23:14.898326   31793 main.go:141] libmachine: (ha-118173-m02)     </console>
	I0806 07:23:14.898337   31793 main.go:141] libmachine: (ha-118173-m02)     <rng model='virtio'>
	I0806 07:23:14.898361   31793 main.go:141] libmachine: (ha-118173-m02)       <backend model='random'>/dev/random</backend>
	I0806 07:23:14.898383   31793 main.go:141] libmachine: (ha-118173-m02)     </rng>
	I0806 07:23:14.898396   31793 main.go:141] libmachine: (ha-118173-m02)     
	I0806 07:23:14.898405   31793 main.go:141] libmachine: (ha-118173-m02)     
	I0806 07:23:14.898414   31793 main.go:141] libmachine: (ha-118173-m02)   </devices>
	I0806 07:23:14.898423   31793 main.go:141] libmachine: (ha-118173-m02) </domain>
	I0806 07:23:14.898433   31793 main.go:141] libmachine: (ha-118173-m02) 
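The XML block above is the libvirt domain minikube defines for ha-118173-m02: 2 vCPUs, 2200 MiB of memory, the boot2docker ISO attached as a cdrom, the rawdisk image as the primary virtio disk, and one NIC on the private mk-ha-118173 network plus one on libvirt's default network. Assuming virsh is available on the host and using the qemu:///system URI from the machine config above, the defined domain can be inspected by hand:

	virsh -c qemu:///system list --all              # ha-118173 and ha-118173-m02 should both be listed
	virsh -c qemu:///system dumpxml ha-118173-m02   # prints the domain XML shown above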
	I0806 07:23:14.904881   31793 main.go:141] libmachine: (ha-118173-m02) DBG | domain ha-118173-m02 has defined MAC address 52:54:00:9e:4c:42 in network default
	I0806 07:23:14.905379   31793 main.go:141] libmachine: (ha-118173-m02) Ensuring networks are active...
	I0806 07:23:14.905400   31793 main.go:141] libmachine: (ha-118173-m02) DBG | domain ha-118173-m02 has defined MAC address 52:54:00:b4:e9:38 in network mk-ha-118173
	I0806 07:23:14.906112   31793 main.go:141] libmachine: (ha-118173-m02) Ensuring network default is active
	I0806 07:23:14.906564   31793 main.go:141] libmachine: (ha-118173-m02) Ensuring network mk-ha-118173 is active
	I0806 07:23:14.906920   31793 main.go:141] libmachine: (ha-118173-m02) Getting domain xml...
	I0806 07:23:14.907661   31793 main.go:141] libmachine: (ha-118173-m02) Creating domain...
	I0806 07:23:16.128703   31793 main.go:141] libmachine: (ha-118173-m02) Waiting to get IP...
	I0806 07:23:16.129605   31793 main.go:141] libmachine: (ha-118173-m02) DBG | domain ha-118173-m02 has defined MAC address 52:54:00:b4:e9:38 in network mk-ha-118173
	I0806 07:23:16.129962   31793 main.go:141] libmachine: (ha-118173-m02) DBG | unable to find current IP address of domain ha-118173-m02 in network mk-ha-118173
	I0806 07:23:16.130007   31793 main.go:141] libmachine: (ha-118173-m02) DBG | I0806 07:23:16.129959   32199 retry.go:31] will retry after 241.520623ms: waiting for machine to come up
	I0806 07:23:16.373661   31793 main.go:141] libmachine: (ha-118173-m02) DBG | domain ha-118173-m02 has defined MAC address 52:54:00:b4:e9:38 in network mk-ha-118173
	I0806 07:23:16.374151   31793 main.go:141] libmachine: (ha-118173-m02) DBG | unable to find current IP address of domain ha-118173-m02 in network mk-ha-118173
	I0806 07:23:16.374181   31793 main.go:141] libmachine: (ha-118173-m02) DBG | I0806 07:23:16.374107   32199 retry.go:31] will retry after 260.223274ms: waiting for machine to come up
	I0806 07:23:16.635735   31793 main.go:141] libmachine: (ha-118173-m02) DBG | domain ha-118173-m02 has defined MAC address 52:54:00:b4:e9:38 in network mk-ha-118173
	I0806 07:23:16.636249   31793 main.go:141] libmachine: (ha-118173-m02) DBG | unable to find current IP address of domain ha-118173-m02 in network mk-ha-118173
	I0806 07:23:16.636271   31793 main.go:141] libmachine: (ha-118173-m02) DBG | I0806 07:23:16.636212   32199 retry.go:31] will retry after 435.280892ms: waiting for machine to come up
	I0806 07:23:17.072887   31793 main.go:141] libmachine: (ha-118173-m02) DBG | domain ha-118173-m02 has defined MAC address 52:54:00:b4:e9:38 in network mk-ha-118173
	I0806 07:23:17.073260   31793 main.go:141] libmachine: (ha-118173-m02) DBG | unable to find current IP address of domain ha-118173-m02 in network mk-ha-118173
	I0806 07:23:17.073280   31793 main.go:141] libmachine: (ha-118173-m02) DBG | I0806 07:23:17.073239   32199 retry.go:31] will retry after 455.559843ms: waiting for machine to come up
	I0806 07:23:17.530537   31793 main.go:141] libmachine: (ha-118173-m02) DBG | domain ha-118173-m02 has defined MAC address 52:54:00:b4:e9:38 in network mk-ha-118173
	I0806 07:23:17.530979   31793 main.go:141] libmachine: (ha-118173-m02) DBG | unable to find current IP address of domain ha-118173-m02 in network mk-ha-118173
	I0806 07:23:17.531007   31793 main.go:141] libmachine: (ha-118173-m02) DBG | I0806 07:23:17.530930   32199 retry.go:31] will retry after 660.172837ms: waiting for machine to come up
	I0806 07:23:18.192659   31793 main.go:141] libmachine: (ha-118173-m02) DBG | domain ha-118173-m02 has defined MAC address 52:54:00:b4:e9:38 in network mk-ha-118173
	I0806 07:23:18.193143   31793 main.go:141] libmachine: (ha-118173-m02) DBG | unable to find current IP address of domain ha-118173-m02 in network mk-ha-118173
	I0806 07:23:18.193177   31793 main.go:141] libmachine: (ha-118173-m02) DBG | I0806 07:23:18.193087   32199 retry.go:31] will retry after 876.754505ms: waiting for machine to come up
	I0806 07:23:19.071078   31793 main.go:141] libmachine: (ha-118173-m02) DBG | domain ha-118173-m02 has defined MAC address 52:54:00:b4:e9:38 in network mk-ha-118173
	I0806 07:23:19.071460   31793 main.go:141] libmachine: (ha-118173-m02) DBG | unable to find current IP address of domain ha-118173-m02 in network mk-ha-118173
	I0806 07:23:19.071514   31793 main.go:141] libmachine: (ha-118173-m02) DBG | I0806 07:23:19.071409   32199 retry.go:31] will retry after 1.111120578s: waiting for machine to come up
	I0806 07:23:20.184316   31793 main.go:141] libmachine: (ha-118173-m02) DBG | domain ha-118173-m02 has defined MAC address 52:54:00:b4:e9:38 in network mk-ha-118173
	I0806 07:23:20.184805   31793 main.go:141] libmachine: (ha-118173-m02) DBG | unable to find current IP address of domain ha-118173-m02 in network mk-ha-118173
	I0806 07:23:20.184881   31793 main.go:141] libmachine: (ha-118173-m02) DBG | I0806 07:23:20.184764   32199 retry.go:31] will retry after 1.325458338s: waiting for machine to come up
	I0806 07:23:21.512349   31793 main.go:141] libmachine: (ha-118173-m02) DBG | domain ha-118173-m02 has defined MAC address 52:54:00:b4:e9:38 in network mk-ha-118173
	I0806 07:23:21.512851   31793 main.go:141] libmachine: (ha-118173-m02) DBG | unable to find current IP address of domain ha-118173-m02 in network mk-ha-118173
	I0806 07:23:21.512878   31793 main.go:141] libmachine: (ha-118173-m02) DBG | I0806 07:23:21.512812   32199 retry.go:31] will retry after 1.231406403s: waiting for machine to come up
	I0806 07:23:22.746319   31793 main.go:141] libmachine: (ha-118173-m02) DBG | domain ha-118173-m02 has defined MAC address 52:54:00:b4:e9:38 in network mk-ha-118173
	I0806 07:23:22.746760   31793 main.go:141] libmachine: (ha-118173-m02) DBG | unable to find current IP address of domain ha-118173-m02 in network mk-ha-118173
	I0806 07:23:22.746788   31793 main.go:141] libmachine: (ha-118173-m02) DBG | I0806 07:23:22.746710   32199 retry.go:31] will retry after 1.568319665s: waiting for machine to come up
	I0806 07:23:24.317506   31793 main.go:141] libmachine: (ha-118173-m02) DBG | domain ha-118173-m02 has defined MAC address 52:54:00:b4:e9:38 in network mk-ha-118173
	I0806 07:23:24.318026   31793 main.go:141] libmachine: (ha-118173-m02) DBG | unable to find current IP address of domain ha-118173-m02 in network mk-ha-118173
	I0806 07:23:24.318059   31793 main.go:141] libmachine: (ha-118173-m02) DBG | I0806 07:23:24.317972   32199 retry.go:31] will retry after 2.818119412s: waiting for machine to come up
	I0806 07:23:27.138815   31793 main.go:141] libmachine: (ha-118173-m02) DBG | domain ha-118173-m02 has defined MAC address 52:54:00:b4:e9:38 in network mk-ha-118173
	I0806 07:23:27.139209   31793 main.go:141] libmachine: (ha-118173-m02) DBG | unable to find current IP address of domain ha-118173-m02 in network mk-ha-118173
	I0806 07:23:27.139236   31793 main.go:141] libmachine: (ha-118173-m02) DBG | I0806 07:23:27.139166   32199 retry.go:31] will retry after 3.589573898s: waiting for machine to come up
	I0806 07:23:30.730336   31793 main.go:141] libmachine: (ha-118173-m02) DBG | domain ha-118173-m02 has defined MAC address 52:54:00:b4:e9:38 in network mk-ha-118173
	I0806 07:23:30.730764   31793 main.go:141] libmachine: (ha-118173-m02) DBG | unable to find current IP address of domain ha-118173-m02 in network mk-ha-118173
	I0806 07:23:30.730797   31793 main.go:141] libmachine: (ha-118173-m02) DBG | I0806 07:23:30.730715   32199 retry.go:31] will retry after 3.029021985s: waiting for machine to come up
	I0806 07:23:33.762866   31793 main.go:141] libmachine: (ha-118173-m02) DBG | domain ha-118173-m02 has defined MAC address 52:54:00:b4:e9:38 in network mk-ha-118173
	I0806 07:23:33.763260   31793 main.go:141] libmachine: (ha-118173-m02) DBG | unable to find current IP address of domain ha-118173-m02 in network mk-ha-118173
	I0806 07:23:33.763286   31793 main.go:141] libmachine: (ha-118173-m02) DBG | I0806 07:23:33.763228   32199 retry.go:31] will retry after 4.735805318s: waiting for machine to come up
	I0806 07:23:38.501919   31793 main.go:141] libmachine: (ha-118173-m02) DBG | domain ha-118173-m02 has defined MAC address 52:54:00:b4:e9:38 in network mk-ha-118173
	I0806 07:23:38.502361   31793 main.go:141] libmachine: (ha-118173-m02) Found IP for machine: 192.168.39.203
	I0806 07:23:38.502388   31793 main.go:141] libmachine: (ha-118173-m02) DBG | domain ha-118173-m02 has current primary IP address 192.168.39.203 and MAC address 52:54:00:b4:e9:38 in network mk-ha-118173
	I0806 07:23:38.502398   31793 main.go:141] libmachine: (ha-118173-m02) Reserving static IP address...
	I0806 07:23:38.502761   31793 main.go:141] libmachine: (ha-118173-m02) DBG | unable to find host DHCP lease matching {name: "ha-118173-m02", mac: "52:54:00:b4:e9:38", ip: "192.168.39.203"} in network mk-ha-118173
	I0806 07:23:38.573170   31793 main.go:141] libmachine: (ha-118173-m02) DBG | Getting to WaitForSSH function...
	I0806 07:23:38.573210   31793 main.go:141] libmachine: (ha-118173-m02) Reserved static IP address: 192.168.39.203
	I0806 07:23:38.573280   31793 main.go:141] libmachine: (ha-118173-m02) Waiting for SSH to be available...
	I0806 07:23:38.576055   31793 main.go:141] libmachine: (ha-118173-m02) DBG | domain ha-118173-m02 has defined MAC address 52:54:00:b4:e9:38 in network mk-ha-118173
	I0806 07:23:38.576497   31793 main.go:141] libmachine: (ha-118173-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:e9:38", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:23:29 +0000 UTC Type:0 Mac:52:54:00:b4:e9:38 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:minikube Clientid:01:52:54:00:b4:e9:38}
	I0806 07:23:38.576525   31793 main.go:141] libmachine: (ha-118173-m02) DBG | domain ha-118173-m02 has defined IP address 192.168.39.203 and MAC address 52:54:00:b4:e9:38 in network mk-ha-118173
	I0806 07:23:38.576690   31793 main.go:141] libmachine: (ha-118173-m02) DBG | Using SSH client type: external
	I0806 07:23:38.576717   31793 main.go:141] libmachine: (ha-118173-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19370-13057/.minikube/machines/ha-118173-m02/id_rsa (-rw-------)
	I0806 07:23:38.576743   31793 main.go:141] libmachine: (ha-118173-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.203 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19370-13057/.minikube/machines/ha-118173-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0806 07:23:38.576758   31793 main.go:141] libmachine: (ha-118173-m02) DBG | About to run SSH command:
	I0806 07:23:38.576775   31793 main.go:141] libmachine: (ha-118173-m02) DBG | exit 0
	I0806 07:23:38.704670   31793 main.go:141] libmachine: (ha-118173-m02) DBG | SSH cmd err, output: <nil>: 
	I0806 07:23:38.704958   31793 main.go:141] libmachine: (ha-118173-m02) KVM machine creation complete!
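The "unable to find current IP address ... will retry" loop above is libmachine polling the DHCP leases of the mk-ha-118173 network, with growing backoff, until the new domain's MAC 52:54:00:b4:e9:38 is handed 192.168.39.203. Roughly the same check can be run by hand; a hedged sketch, assuming virsh access to qemu:///system:

	# Shows active DHCP leases on the private network; the new machine's MAC should appear once the guest has booted.
	virsh -c qemu:///system net-dhcp-leases mk-ha-118173 | grep -i 52:54:00:b4:e9:38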
	I0806 07:23:38.705236   31793 main.go:141] libmachine: (ha-118173-m02) Calling .GetConfigRaw
	I0806 07:23:38.705780   31793 main.go:141] libmachine: (ha-118173-m02) Calling .DriverName
	I0806 07:23:38.705989   31793 main.go:141] libmachine: (ha-118173-m02) Calling .DriverName
	I0806 07:23:38.706130   31793 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0806 07:23:38.706144   31793 main.go:141] libmachine: (ha-118173-m02) Calling .GetState
	I0806 07:23:38.707347   31793 main.go:141] libmachine: Detecting operating system of created instance...
	I0806 07:23:38.707360   31793 main.go:141] libmachine: Waiting for SSH to be available...
	I0806 07:23:38.707370   31793 main.go:141] libmachine: Getting to WaitForSSH function...
	I0806 07:23:38.707376   31793 main.go:141] libmachine: (ha-118173-m02) Calling .GetSSHHostname
	I0806 07:23:38.709658   31793 main.go:141] libmachine: (ha-118173-m02) DBG | domain ha-118173-m02 has defined MAC address 52:54:00:b4:e9:38 in network mk-ha-118173
	I0806 07:23:38.710063   31793 main.go:141] libmachine: (ha-118173-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:e9:38", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:23:29 +0000 UTC Type:0 Mac:52:54:00:b4:e9:38 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-118173-m02 Clientid:01:52:54:00:b4:e9:38}
	I0806 07:23:38.710088   31793 main.go:141] libmachine: (ha-118173-m02) DBG | domain ha-118173-m02 has defined IP address 192.168.39.203 and MAC address 52:54:00:b4:e9:38 in network mk-ha-118173
	I0806 07:23:38.710237   31793 main.go:141] libmachine: (ha-118173-m02) Calling .GetSSHPort
	I0806 07:23:38.710410   31793 main.go:141] libmachine: (ha-118173-m02) Calling .GetSSHKeyPath
	I0806 07:23:38.710579   31793 main.go:141] libmachine: (ha-118173-m02) Calling .GetSSHKeyPath
	I0806 07:23:38.710702   31793 main.go:141] libmachine: (ha-118173-m02) Calling .GetSSHUsername
	I0806 07:23:38.710891   31793 main.go:141] libmachine: Using SSH client type: native
	I0806 07:23:38.711083   31793 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.203 22 <nil> <nil>}
	I0806 07:23:38.711096   31793 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0806 07:23:38.811830   31793 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0806 07:23:38.811851   31793 main.go:141] libmachine: Detecting the provisioner...
	I0806 07:23:38.811858   31793 main.go:141] libmachine: (ha-118173-m02) Calling .GetSSHHostname
	I0806 07:23:38.814481   31793 main.go:141] libmachine: (ha-118173-m02) DBG | domain ha-118173-m02 has defined MAC address 52:54:00:b4:e9:38 in network mk-ha-118173
	I0806 07:23:38.814894   31793 main.go:141] libmachine: (ha-118173-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:e9:38", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:23:29 +0000 UTC Type:0 Mac:52:54:00:b4:e9:38 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-118173-m02 Clientid:01:52:54:00:b4:e9:38}
	I0806 07:23:38.814919   31793 main.go:141] libmachine: (ha-118173-m02) DBG | domain ha-118173-m02 has defined IP address 192.168.39.203 and MAC address 52:54:00:b4:e9:38 in network mk-ha-118173
	I0806 07:23:38.815118   31793 main.go:141] libmachine: (ha-118173-m02) Calling .GetSSHPort
	I0806 07:23:38.815319   31793 main.go:141] libmachine: (ha-118173-m02) Calling .GetSSHKeyPath
	I0806 07:23:38.815502   31793 main.go:141] libmachine: (ha-118173-m02) Calling .GetSSHKeyPath
	I0806 07:23:38.815683   31793 main.go:141] libmachine: (ha-118173-m02) Calling .GetSSHUsername
	I0806 07:23:38.815922   31793 main.go:141] libmachine: Using SSH client type: native
	I0806 07:23:38.816122   31793 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.203 22 <nil> <nil>}
	I0806 07:23:38.816138   31793 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0806 07:23:38.921318   31793 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0806 07:23:38.921369   31793 main.go:141] libmachine: found compatible host: buildroot
	I0806 07:23:38.921375   31793 main.go:141] libmachine: Provisioning with buildroot...
	I0806 07:23:38.921384   31793 main.go:141] libmachine: (ha-118173-m02) Calling .GetMachineName
	I0806 07:23:38.921682   31793 buildroot.go:166] provisioning hostname "ha-118173-m02"
	I0806 07:23:38.921709   31793 main.go:141] libmachine: (ha-118173-m02) Calling .GetMachineName
	I0806 07:23:38.921900   31793 main.go:141] libmachine: (ha-118173-m02) Calling .GetSSHHostname
	I0806 07:23:38.924353   31793 main.go:141] libmachine: (ha-118173-m02) DBG | domain ha-118173-m02 has defined MAC address 52:54:00:b4:e9:38 in network mk-ha-118173
	I0806 07:23:38.924774   31793 main.go:141] libmachine: (ha-118173-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:e9:38", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:23:29 +0000 UTC Type:0 Mac:52:54:00:b4:e9:38 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-118173-m02 Clientid:01:52:54:00:b4:e9:38}
	I0806 07:23:38.924797   31793 main.go:141] libmachine: (ha-118173-m02) DBG | domain ha-118173-m02 has defined IP address 192.168.39.203 and MAC address 52:54:00:b4:e9:38 in network mk-ha-118173
	I0806 07:23:38.924944   31793 main.go:141] libmachine: (ha-118173-m02) Calling .GetSSHPort
	I0806 07:23:38.925131   31793 main.go:141] libmachine: (ha-118173-m02) Calling .GetSSHKeyPath
	I0806 07:23:38.925276   31793 main.go:141] libmachine: (ha-118173-m02) Calling .GetSSHKeyPath
	I0806 07:23:38.925394   31793 main.go:141] libmachine: (ha-118173-m02) Calling .GetSSHUsername
	I0806 07:23:38.925567   31793 main.go:141] libmachine: Using SSH client type: native
	I0806 07:23:38.925723   31793 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.203 22 <nil> <nil>}
	I0806 07:23:38.925734   31793 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-118173-m02 && echo "ha-118173-m02" | sudo tee /etc/hostname
	I0806 07:23:39.049943   31793 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-118173-m02
	
	I0806 07:23:39.049968   31793 main.go:141] libmachine: (ha-118173-m02) Calling .GetSSHHostname
	I0806 07:23:39.052707   31793 main.go:141] libmachine: (ha-118173-m02) DBG | domain ha-118173-m02 has defined MAC address 52:54:00:b4:e9:38 in network mk-ha-118173
	I0806 07:23:39.053074   31793 main.go:141] libmachine: (ha-118173-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:e9:38", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:23:29 +0000 UTC Type:0 Mac:52:54:00:b4:e9:38 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-118173-m02 Clientid:01:52:54:00:b4:e9:38}
	I0806 07:23:39.053116   31793 main.go:141] libmachine: (ha-118173-m02) DBG | domain ha-118173-m02 has defined IP address 192.168.39.203 and MAC address 52:54:00:b4:e9:38 in network mk-ha-118173
	I0806 07:23:39.053251   31793 main.go:141] libmachine: (ha-118173-m02) Calling .GetSSHPort
	I0806 07:23:39.053452   31793 main.go:141] libmachine: (ha-118173-m02) Calling .GetSSHKeyPath
	I0806 07:23:39.053676   31793 main.go:141] libmachine: (ha-118173-m02) Calling .GetSSHKeyPath
	I0806 07:23:39.053849   31793 main.go:141] libmachine: (ha-118173-m02) Calling .GetSSHUsername
	I0806 07:23:39.053979   31793 main.go:141] libmachine: Using SSH client type: native
	I0806 07:23:39.054134   31793 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.203 22 <nil> <nil>}
	I0806 07:23:39.054150   31793 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-118173-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-118173-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-118173-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0806 07:23:39.165468   31793 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0806 07:23:39.165500   31793 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19370-13057/.minikube CaCertPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19370-13057/.minikube}
	I0806 07:23:39.165536   31793 buildroot.go:174] setting up certificates
	I0806 07:23:39.165552   31793 provision.go:84] configureAuth start
	I0806 07:23:39.165567   31793 main.go:141] libmachine: (ha-118173-m02) Calling .GetMachineName
	I0806 07:23:39.165824   31793 main.go:141] libmachine: (ha-118173-m02) Calling .GetIP
	I0806 07:23:39.168581   31793 main.go:141] libmachine: (ha-118173-m02) DBG | domain ha-118173-m02 has defined MAC address 52:54:00:b4:e9:38 in network mk-ha-118173
	I0806 07:23:39.168901   31793 main.go:141] libmachine: (ha-118173-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:e9:38", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:23:29 +0000 UTC Type:0 Mac:52:54:00:b4:e9:38 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-118173-m02 Clientid:01:52:54:00:b4:e9:38}
	I0806 07:23:39.168922   31793 main.go:141] libmachine: (ha-118173-m02) DBG | domain ha-118173-m02 has defined IP address 192.168.39.203 and MAC address 52:54:00:b4:e9:38 in network mk-ha-118173
	I0806 07:23:39.169112   31793 main.go:141] libmachine: (ha-118173-m02) Calling .GetSSHHostname
	I0806 07:23:39.171068   31793 main.go:141] libmachine: (ha-118173-m02) DBG | domain ha-118173-m02 has defined MAC address 52:54:00:b4:e9:38 in network mk-ha-118173
	I0806 07:23:39.171373   31793 main.go:141] libmachine: (ha-118173-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:e9:38", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:23:29 +0000 UTC Type:0 Mac:52:54:00:b4:e9:38 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-118173-m02 Clientid:01:52:54:00:b4:e9:38}
	I0806 07:23:39.171395   31793 main.go:141] libmachine: (ha-118173-m02) DBG | domain ha-118173-m02 has defined IP address 192.168.39.203 and MAC address 52:54:00:b4:e9:38 in network mk-ha-118173
	I0806 07:23:39.171569   31793 provision.go:143] copyHostCerts
	I0806 07:23:39.171619   31793 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19370-13057/.minikube/ca.pem
	I0806 07:23:39.171659   31793 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-13057/.minikube/ca.pem, removing ...
	I0806 07:23:39.171670   31793 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-13057/.minikube/ca.pem
	I0806 07:23:39.171744   31793 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19370-13057/.minikube/ca.pem (1078 bytes)
	I0806 07:23:39.171869   31793 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19370-13057/.minikube/cert.pem
	I0806 07:23:39.171902   31793 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-13057/.minikube/cert.pem, removing ...
	I0806 07:23:39.171912   31793 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-13057/.minikube/cert.pem
	I0806 07:23:39.171957   31793 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19370-13057/.minikube/cert.pem (1123 bytes)
	I0806 07:23:39.172026   31793 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19370-13057/.minikube/key.pem
	I0806 07:23:39.172050   31793 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-13057/.minikube/key.pem, removing ...
	I0806 07:23:39.172058   31793 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-13057/.minikube/key.pem
	I0806 07:23:39.172093   31793 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19370-13057/.minikube/key.pem (1675 bytes)
	I0806 07:23:39.172160   31793 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19370-13057/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca-key.pem org=jenkins.ha-118173-m02 san=[127.0.0.1 192.168.39.203 ha-118173-m02 localhost minikube]
	I0806 07:23:39.428272   31793 provision.go:177] copyRemoteCerts
	I0806 07:23:39.428336   31793 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0806 07:23:39.428381   31793 main.go:141] libmachine: (ha-118173-m02) Calling .GetSSHHostname
	I0806 07:23:39.431425   31793 main.go:141] libmachine: (ha-118173-m02) DBG | domain ha-118173-m02 has defined MAC address 52:54:00:b4:e9:38 in network mk-ha-118173
	I0806 07:23:39.431759   31793 main.go:141] libmachine: (ha-118173-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:e9:38", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:23:29 +0000 UTC Type:0 Mac:52:54:00:b4:e9:38 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-118173-m02 Clientid:01:52:54:00:b4:e9:38}
	I0806 07:23:39.431790   31793 main.go:141] libmachine: (ha-118173-m02) DBG | domain ha-118173-m02 has defined IP address 192.168.39.203 and MAC address 52:54:00:b4:e9:38 in network mk-ha-118173
	I0806 07:23:39.431946   31793 main.go:141] libmachine: (ha-118173-m02) Calling .GetSSHPort
	I0806 07:23:39.432186   31793 main.go:141] libmachine: (ha-118173-m02) Calling .GetSSHKeyPath
	I0806 07:23:39.432408   31793 main.go:141] libmachine: (ha-118173-m02) Calling .GetSSHUsername
	I0806 07:23:39.432547   31793 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/ha-118173-m02/id_rsa Username:docker}
	I0806 07:23:39.515304   31793 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0806 07:23:39.515394   31793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0806 07:23:39.540013   31793 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0806 07:23:39.540074   31793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0806 07:23:39.564148   31793 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0806 07:23:39.564220   31793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0806 07:23:39.589345   31793 provision.go:87] duration metric: took 423.777348ms to configureAuth
	I0806 07:23:39.589376   31793 buildroot.go:189] setting minikube options for container-runtime
	I0806 07:23:39.589625   31793 config.go:182] Loaded profile config "ha-118173": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0806 07:23:39.589711   31793 main.go:141] libmachine: (ha-118173-m02) Calling .GetSSHHostname
	I0806 07:23:39.592583   31793 main.go:141] libmachine: (ha-118173-m02) DBG | domain ha-118173-m02 has defined MAC address 52:54:00:b4:e9:38 in network mk-ha-118173
	I0806 07:23:39.593013   31793 main.go:141] libmachine: (ha-118173-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:e9:38", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:23:29 +0000 UTC Type:0 Mac:52:54:00:b4:e9:38 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-118173-m02 Clientid:01:52:54:00:b4:e9:38}
	I0806 07:23:39.593043   31793 main.go:141] libmachine: (ha-118173-m02) DBG | domain ha-118173-m02 has defined IP address 192.168.39.203 and MAC address 52:54:00:b4:e9:38 in network mk-ha-118173
	I0806 07:23:39.593244   31793 main.go:141] libmachine: (ha-118173-m02) Calling .GetSSHPort
	I0806 07:23:39.593439   31793 main.go:141] libmachine: (ha-118173-m02) Calling .GetSSHKeyPath
	I0806 07:23:39.593605   31793 main.go:141] libmachine: (ha-118173-m02) Calling .GetSSHKeyPath
	I0806 07:23:39.593757   31793 main.go:141] libmachine: (ha-118173-m02) Calling .GetSSHUsername
	I0806 07:23:39.593968   31793 main.go:141] libmachine: Using SSH client type: native
	I0806 07:23:39.594126   31793 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.203 22 <nil> <nil>}
	I0806 07:23:39.594141   31793 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0806 07:23:39.856759   31793 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0806 07:23:39.856787   31793 main.go:141] libmachine: Checking connection to Docker...
	I0806 07:23:39.856798   31793 main.go:141] libmachine: (ha-118173-m02) Calling .GetURL
	I0806 07:23:39.858208   31793 main.go:141] libmachine: (ha-118173-m02) DBG | Using libvirt version 6000000
	I0806 07:23:39.860425   31793 main.go:141] libmachine: (ha-118173-m02) DBG | domain ha-118173-m02 has defined MAC address 52:54:00:b4:e9:38 in network mk-ha-118173
	I0806 07:23:39.860808   31793 main.go:141] libmachine: (ha-118173-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:e9:38", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:23:29 +0000 UTC Type:0 Mac:52:54:00:b4:e9:38 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-118173-m02 Clientid:01:52:54:00:b4:e9:38}
	I0806 07:23:39.860838   31793 main.go:141] libmachine: (ha-118173-m02) DBG | domain ha-118173-m02 has defined IP address 192.168.39.203 and MAC address 52:54:00:b4:e9:38 in network mk-ha-118173
	I0806 07:23:39.861014   31793 main.go:141] libmachine: Docker is up and running!
	I0806 07:23:39.861025   31793 main.go:141] libmachine: Reticulating splines...
	I0806 07:23:39.861032   31793 client.go:171] duration metric: took 25.298767274s to LocalClient.Create
	I0806 07:23:39.861066   31793 start.go:167] duration metric: took 25.298821987s to libmachine.API.Create "ha-118173"
	I0806 07:23:39.861080   31793 start.go:293] postStartSetup for "ha-118173-m02" (driver="kvm2")
	I0806 07:23:39.861093   31793 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0806 07:23:39.861126   31793 main.go:141] libmachine: (ha-118173-m02) Calling .DriverName
	I0806 07:23:39.861394   31793 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0806 07:23:39.861422   31793 main.go:141] libmachine: (ha-118173-m02) Calling .GetSSHHostname
	I0806 07:23:39.863275   31793 main.go:141] libmachine: (ha-118173-m02) DBG | domain ha-118173-m02 has defined MAC address 52:54:00:b4:e9:38 in network mk-ha-118173
	I0806 07:23:39.863555   31793 main.go:141] libmachine: (ha-118173-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:e9:38", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:23:29 +0000 UTC Type:0 Mac:52:54:00:b4:e9:38 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-118173-m02 Clientid:01:52:54:00:b4:e9:38}
	I0806 07:23:39.863578   31793 main.go:141] libmachine: (ha-118173-m02) DBG | domain ha-118173-m02 has defined IP address 192.168.39.203 and MAC address 52:54:00:b4:e9:38 in network mk-ha-118173
	I0806 07:23:39.863740   31793 main.go:141] libmachine: (ha-118173-m02) Calling .GetSSHPort
	I0806 07:23:39.863897   31793 main.go:141] libmachine: (ha-118173-m02) Calling .GetSSHKeyPath
	I0806 07:23:39.864034   31793 main.go:141] libmachine: (ha-118173-m02) Calling .GetSSHUsername
	I0806 07:23:39.864174   31793 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/ha-118173-m02/id_rsa Username:docker}
	I0806 07:23:39.943366   31793 ssh_runner.go:195] Run: cat /etc/os-release
	I0806 07:23:39.947556   31793 info.go:137] Remote host: Buildroot 2023.02.9
	I0806 07:23:39.947581   31793 filesync.go:126] Scanning /home/jenkins/minikube-integration/19370-13057/.minikube/addons for local assets ...
	I0806 07:23:39.947645   31793 filesync.go:126] Scanning /home/jenkins/minikube-integration/19370-13057/.minikube/files for local assets ...
	I0806 07:23:39.947715   31793 filesync.go:149] local asset: /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem -> 202462.pem in /etc/ssl/certs
	I0806 07:23:39.947725   31793 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem -> /etc/ssl/certs/202462.pem
	I0806 07:23:39.947804   31793 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0806 07:23:39.957875   31793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem --> /etc/ssl/certs/202462.pem (1708 bytes)
	I0806 07:23:39.980916   31793 start.go:296] duration metric: took 119.823767ms for postStartSetup
	I0806 07:23:39.980972   31793 main.go:141] libmachine: (ha-118173-m02) Calling .GetConfigRaw
	I0806 07:23:39.981487   31793 main.go:141] libmachine: (ha-118173-m02) Calling .GetIP
	I0806 07:23:39.983974   31793 main.go:141] libmachine: (ha-118173-m02) DBG | domain ha-118173-m02 has defined MAC address 52:54:00:b4:e9:38 in network mk-ha-118173
	I0806 07:23:39.984290   31793 main.go:141] libmachine: (ha-118173-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:e9:38", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:23:29 +0000 UTC Type:0 Mac:52:54:00:b4:e9:38 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-118173-m02 Clientid:01:52:54:00:b4:e9:38}
	I0806 07:23:39.984310   31793 main.go:141] libmachine: (ha-118173-m02) DBG | domain ha-118173-m02 has defined IP address 192.168.39.203 and MAC address 52:54:00:b4:e9:38 in network mk-ha-118173
	I0806 07:23:39.984545   31793 profile.go:143] Saving config to /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/config.json ...
	I0806 07:23:39.984788   31793 start.go:128] duration metric: took 25.441444463s to createHost
	I0806 07:23:39.984816   31793 main.go:141] libmachine: (ha-118173-m02) Calling .GetSSHHostname
	I0806 07:23:39.986891   31793 main.go:141] libmachine: (ha-118173-m02) DBG | domain ha-118173-m02 has defined MAC address 52:54:00:b4:e9:38 in network mk-ha-118173
	I0806 07:23:39.987261   31793 main.go:141] libmachine: (ha-118173-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:e9:38", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:23:29 +0000 UTC Type:0 Mac:52:54:00:b4:e9:38 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-118173-m02 Clientid:01:52:54:00:b4:e9:38}
	I0806 07:23:39.987285   31793 main.go:141] libmachine: (ha-118173-m02) DBG | domain ha-118173-m02 has defined IP address 192.168.39.203 and MAC address 52:54:00:b4:e9:38 in network mk-ha-118173
	I0806 07:23:39.987437   31793 main.go:141] libmachine: (ha-118173-m02) Calling .GetSSHPort
	I0806 07:23:39.987624   31793 main.go:141] libmachine: (ha-118173-m02) Calling .GetSSHKeyPath
	I0806 07:23:39.987778   31793 main.go:141] libmachine: (ha-118173-m02) Calling .GetSSHKeyPath
	I0806 07:23:39.987916   31793 main.go:141] libmachine: (ha-118173-m02) Calling .GetSSHUsername
	I0806 07:23:39.988070   31793 main.go:141] libmachine: Using SSH client type: native
	I0806 07:23:39.988260   31793 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.203 22 <nil> <nil>}
	I0806 07:23:39.988271   31793 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0806 07:23:40.089389   31793 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722929020.047280056
	
	I0806 07:23:40.089406   31793 fix.go:216] guest clock: 1722929020.047280056
	I0806 07:23:40.089414   31793 fix.go:229] Guest: 2024-08-06 07:23:40.047280056 +0000 UTC Remote: 2024-08-06 07:23:39.984804217 +0000 UTC m=+80.852542965 (delta=62.475839ms)
	I0806 07:23:40.089429   31793 fix.go:200] guest clock delta is within tolerance: 62.475839ms
	I0806 07:23:40.089434   31793 start.go:83] releasing machines lock for "ha-118173-m02", held for 25.546207878s
	I0806 07:23:40.089454   31793 main.go:141] libmachine: (ha-118173-m02) Calling .DriverName
	I0806 07:23:40.089709   31793 main.go:141] libmachine: (ha-118173-m02) Calling .GetIP
	I0806 07:23:40.092489   31793 main.go:141] libmachine: (ha-118173-m02) DBG | domain ha-118173-m02 has defined MAC address 52:54:00:b4:e9:38 in network mk-ha-118173
	I0806 07:23:40.092829   31793 main.go:141] libmachine: (ha-118173-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:e9:38", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:23:29 +0000 UTC Type:0 Mac:52:54:00:b4:e9:38 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-118173-m02 Clientid:01:52:54:00:b4:e9:38}
	I0806 07:23:40.092862   31793 main.go:141] libmachine: (ha-118173-m02) DBG | domain ha-118173-m02 has defined IP address 192.168.39.203 and MAC address 52:54:00:b4:e9:38 in network mk-ha-118173
	I0806 07:23:40.095214   31793 out.go:177] * Found network options:
	I0806 07:23:40.096417   31793 out.go:177]   - NO_PROXY=192.168.39.164
	W0806 07:23:40.097790   31793 proxy.go:119] fail to check proxy env: Error ip not in block
	I0806 07:23:40.097820   31793 main.go:141] libmachine: (ha-118173-m02) Calling .DriverName
	I0806 07:23:40.098370   31793 main.go:141] libmachine: (ha-118173-m02) Calling .DriverName
	I0806 07:23:40.098548   31793 main.go:141] libmachine: (ha-118173-m02) Calling .DriverName
	I0806 07:23:40.098624   31793 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0806 07:23:40.098665   31793 main.go:141] libmachine: (ha-118173-m02) Calling .GetSSHHostname
	W0806 07:23:40.098709   31793 proxy.go:119] fail to check proxy env: Error ip not in block
	I0806 07:23:40.098767   31793 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0806 07:23:40.098786   31793 main.go:141] libmachine: (ha-118173-m02) Calling .GetSSHHostname
	I0806 07:23:40.101347   31793 main.go:141] libmachine: (ha-118173-m02) DBG | domain ha-118173-m02 has defined MAC address 52:54:00:b4:e9:38 in network mk-ha-118173
	I0806 07:23:40.101606   31793 main.go:141] libmachine: (ha-118173-m02) DBG | domain ha-118173-m02 has defined MAC address 52:54:00:b4:e9:38 in network mk-ha-118173
	I0806 07:23:40.101764   31793 main.go:141] libmachine: (ha-118173-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:e9:38", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:23:29 +0000 UTC Type:0 Mac:52:54:00:b4:e9:38 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-118173-m02 Clientid:01:52:54:00:b4:e9:38}
	I0806 07:23:40.101799   31793 main.go:141] libmachine: (ha-118173-m02) DBG | domain ha-118173-m02 has defined IP address 192.168.39.203 and MAC address 52:54:00:b4:e9:38 in network mk-ha-118173
	I0806 07:23:40.101913   31793 main.go:141] libmachine: (ha-118173-m02) Calling .GetSSHPort
	I0806 07:23:40.102020   31793 main.go:141] libmachine: (ha-118173-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:e9:38", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:23:29 +0000 UTC Type:0 Mac:52:54:00:b4:e9:38 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-118173-m02 Clientid:01:52:54:00:b4:e9:38}
	I0806 07:23:40.102044   31793 main.go:141] libmachine: (ha-118173-m02) DBG | domain ha-118173-m02 has defined IP address 192.168.39.203 and MAC address 52:54:00:b4:e9:38 in network mk-ha-118173
	I0806 07:23:40.102125   31793 main.go:141] libmachine: (ha-118173-m02) Calling .GetSSHKeyPath
	I0806 07:23:40.102264   31793 main.go:141] libmachine: (ha-118173-m02) Calling .GetSSHPort
	I0806 07:23:40.102339   31793 main.go:141] libmachine: (ha-118173-m02) Calling .GetSSHUsername
	I0806 07:23:40.102401   31793 main.go:141] libmachine: (ha-118173-m02) Calling .GetSSHKeyPath
	I0806 07:23:40.102467   31793 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/ha-118173-m02/id_rsa Username:docker}
	I0806 07:23:40.102543   31793 main.go:141] libmachine: (ha-118173-m02) Calling .GetSSHUsername
	I0806 07:23:40.102688   31793 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/ha-118173-m02/id_rsa Username:docker}
	I0806 07:23:40.342985   31793 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0806 07:23:40.349096   31793 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0806 07:23:40.349155   31793 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0806 07:23:40.365655   31793 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0806 07:23:40.365676   31793 start.go:495] detecting cgroup driver to use...
	I0806 07:23:40.365733   31793 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0806 07:23:40.381889   31793 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0806 07:23:40.396850   31793 docker.go:217] disabling cri-docker service (if available) ...
	I0806 07:23:40.396913   31793 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0806 07:23:40.410497   31793 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0806 07:23:40.424845   31793 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0806 07:23:40.539692   31793 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0806 07:23:40.685184   31793 docker.go:233] disabling docker service ...
	I0806 07:23:40.685273   31793 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0806 07:23:40.700727   31793 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0806 07:23:40.714382   31793 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0806 07:23:40.850462   31793 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0806 07:23:40.964754   31793 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0806 07:23:40.980912   31793 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0806 07:23:40.999347   31793 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0806 07:23:40.999409   31793 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 07:23:41.010272   31793 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0806 07:23:41.010334   31793 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 07:23:41.021257   31793 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 07:23:41.032088   31793 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 07:23:41.042973   31793 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0806 07:23:41.053877   31793 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 07:23:41.065099   31793 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 07:23:41.082457   31793 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 07:23:41.093443   31793 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0806 07:23:41.103313   31793 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0806 07:23:41.103365   31793 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0806 07:23:41.117155   31793 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0806 07:23:41.126922   31793 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 07:23:41.247215   31793 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0806 07:23:41.388168   31793 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0806 07:23:41.388244   31793 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0806 07:23:41.393399   31793 start.go:563] Will wait 60s for crictl version
	I0806 07:23:41.393463   31793 ssh_runner.go:195] Run: which crictl
	I0806 07:23:41.397232   31793 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0806 07:23:41.438841   31793 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0806 07:23:41.438927   31793 ssh_runner.go:195] Run: crio --version
	I0806 07:23:41.468576   31793 ssh_runner.go:195] Run: crio --version
	I0806 07:23:41.499325   31793 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0806 07:23:41.500991   31793 out.go:177]   - env NO_PROXY=192.168.39.164
	I0806 07:23:41.502673   31793 main.go:141] libmachine: (ha-118173-m02) Calling .GetIP
	I0806 07:23:41.505533   31793 main.go:141] libmachine: (ha-118173-m02) DBG | domain ha-118173-m02 has defined MAC address 52:54:00:b4:e9:38 in network mk-ha-118173
	I0806 07:23:41.505877   31793 main.go:141] libmachine: (ha-118173-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:e9:38", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:23:29 +0000 UTC Type:0 Mac:52:54:00:b4:e9:38 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-118173-m02 Clientid:01:52:54:00:b4:e9:38}
	I0806 07:23:41.505906   31793 main.go:141] libmachine: (ha-118173-m02) DBG | domain ha-118173-m02 has defined IP address 192.168.39.203 and MAC address 52:54:00:b4:e9:38 in network mk-ha-118173
	I0806 07:23:41.506119   31793 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0806 07:23:41.510695   31793 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0806 07:23:41.524289   31793 mustload.go:65] Loading cluster: ha-118173
	I0806 07:23:41.524528   31793 config.go:182] Loaded profile config "ha-118173": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0806 07:23:41.524834   31793 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:23:41.524862   31793 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:23:41.539057   31793 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35173
	I0806 07:23:41.539472   31793 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:23:41.539888   31793 main.go:141] libmachine: Using API Version  1
	I0806 07:23:41.539905   31793 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:23:41.540191   31793 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:23:41.540399   31793 main.go:141] libmachine: (ha-118173) Calling .GetState
	I0806 07:23:41.541915   31793 host.go:66] Checking if "ha-118173" exists ...
	I0806 07:23:41.542205   31793 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:23:41.542230   31793 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:23:41.556556   31793 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43703
	I0806 07:23:41.556964   31793 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:23:41.557543   31793 main.go:141] libmachine: Using API Version  1
	I0806 07:23:41.557566   31793 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:23:41.557904   31793 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:23:41.558089   31793 main.go:141] libmachine: (ha-118173) Calling .DriverName
	I0806 07:23:41.558293   31793 certs.go:68] Setting up /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173 for IP: 192.168.39.203
	I0806 07:23:41.558303   31793 certs.go:194] generating shared ca certs ...
	I0806 07:23:41.558319   31793 certs.go:226] acquiring lock for ca certs: {Name:mkf5c5aed91f4108054d9f2c742cad66c86afbed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 07:23:41.558436   31793 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19370-13057/.minikube/ca.key
	I0806 07:23:41.558473   31793 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.key
	I0806 07:23:41.558481   31793 certs.go:256] generating profile certs ...
	I0806 07:23:41.558546   31793 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/client.key
	I0806 07:23:41.558571   31793 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/apiserver.key.d5d45db8
	I0806 07:23:41.558585   31793 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/apiserver.crt.d5d45db8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.164 192.168.39.203 192.168.39.254]
	I0806 07:23:41.718862   31793 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/apiserver.crt.d5d45db8 ...
	I0806 07:23:41.718888   31793 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/apiserver.crt.d5d45db8: {Name:mkd112cb4f72e74564806ce8d17c2ab9a5a6a539 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 07:23:41.719039   31793 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/apiserver.key.d5d45db8 ...
	I0806 07:23:41.719051   31793 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/apiserver.key.d5d45db8: {Name:mk1db63b82844c3bee083fbe16f3f90a3f7a181d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 07:23:41.719114   31793 certs.go:381] copying /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/apiserver.crt.d5d45db8 -> /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/apiserver.crt
	I0806 07:23:41.719243   31793 certs.go:385] copying /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/apiserver.key.d5d45db8 -> /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/apiserver.key
	I0806 07:23:41.719363   31793 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/proxy-client.key
	I0806 07:23:41.719378   31793 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0806 07:23:41.719390   31793 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0806 07:23:41.719401   31793 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0806 07:23:41.719414   31793 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0806 07:23:41.719424   31793 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0806 07:23:41.719436   31793 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0806 07:23:41.719446   31793 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0806 07:23:41.719458   31793 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0806 07:23:41.719501   31793 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/20246.pem (1338 bytes)
	W0806 07:23:41.719527   31793 certs.go:480] ignoring /home/jenkins/minikube-integration/19370-13057/.minikube/certs/20246_empty.pem, impossibly tiny 0 bytes
	I0806 07:23:41.719535   31793 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca-key.pem (1675 bytes)
	I0806 07:23:41.719555   31793 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem (1078 bytes)
	I0806 07:23:41.719574   31793 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/cert.pem (1123 bytes)
	I0806 07:23:41.719595   31793 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/key.pem (1675 bytes)
	I0806 07:23:41.719633   31793 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem (1708 bytes)
	I0806 07:23:41.719657   31793 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0806 07:23:41.719670   31793 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/20246.pem -> /usr/share/ca-certificates/20246.pem
	I0806 07:23:41.719682   31793 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem -> /usr/share/ca-certificates/202462.pem
	I0806 07:23:41.719712   31793 main.go:141] libmachine: (ha-118173) Calling .GetSSHHostname
	I0806 07:23:41.722946   31793 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:23:41.723406   31793 main.go:141] libmachine: (ha-118173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:df:f0", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:22:33 +0000 UTC Type:0 Mac:52:54:00:ef:df:f0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:ha-118173 Clientid:01:52:54:00:ef:df:f0}
	I0806 07:23:41.723426   31793 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined IP address 192.168.39.164 and MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:23:41.723686   31793 main.go:141] libmachine: (ha-118173) Calling .GetSSHPort
	I0806 07:23:41.723916   31793 main.go:141] libmachine: (ha-118173) Calling .GetSSHKeyPath
	I0806 07:23:41.724091   31793 main.go:141] libmachine: (ha-118173) Calling .GetSSHUsername
	I0806 07:23:41.724219   31793 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/ha-118173/id_rsa Username:docker}
	I0806 07:23:41.792714   31793 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0806 07:23:41.797774   31793 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0806 07:23:41.815063   31793 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0806 07:23:41.820824   31793 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0806 07:23:41.837161   31793 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0806 07:23:41.842160   31793 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0806 07:23:41.852850   31793 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0806 07:23:41.856991   31793 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0806 07:23:41.867945   31793 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0806 07:23:41.872003   31793 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0806 07:23:41.882560   31793 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0806 07:23:41.886785   31793 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0806 07:23:41.896850   31793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0806 07:23:41.922379   31793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0806 07:23:41.946524   31793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0806 07:23:41.970191   31793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0806 07:23:41.992950   31793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0806 07:23:42.017118   31793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0806 07:23:42.041296   31793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0806 07:23:42.065962   31793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0806 07:23:42.090847   31793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0806 07:23:42.115845   31793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/certs/20246.pem --> /usr/share/ca-certificates/20246.pem (1338 bytes)
	I0806 07:23:42.145312   31793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem --> /usr/share/ca-certificates/202462.pem (1708 bytes)
	I0806 07:23:42.182159   31793 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0806 07:23:42.199461   31793 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0806 07:23:42.217013   31793 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0806 07:23:42.234444   31793 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0806 07:23:42.251274   31793 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0806 07:23:42.269243   31793 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0806 07:23:42.286419   31793 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0806 07:23:42.303920   31793 ssh_runner.go:195] Run: openssl version
	I0806 07:23:42.310241   31793 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0806 07:23:42.322691   31793 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0806 07:23:42.328134   31793 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  6 07:07 /usr/share/ca-certificates/minikubeCA.pem
	I0806 07:23:42.328189   31793 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0806 07:23:42.334083   31793 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0806 07:23:42.345785   31793 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20246.pem && ln -fs /usr/share/ca-certificates/20246.pem /etc/ssl/certs/20246.pem"
	I0806 07:23:42.357553   31793 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20246.pem
	I0806 07:23:42.362666   31793 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  6 07:17 /usr/share/ca-certificates/20246.pem
	I0806 07:23:42.362723   31793 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20246.pem
	I0806 07:23:42.368742   31793 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20246.pem /etc/ssl/certs/51391683.0"
	I0806 07:23:42.380083   31793 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202462.pem && ln -fs /usr/share/ca-certificates/202462.pem /etc/ssl/certs/202462.pem"
	I0806 07:23:42.391907   31793 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202462.pem
	I0806 07:23:42.396595   31793 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  6 07:17 /usr/share/ca-certificates/202462.pem
	I0806 07:23:42.396640   31793 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202462.pem
	I0806 07:23:42.402341   31793 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202462.pem /etc/ssl/certs/3ec20f2e.0"
	I0806 07:23:42.413138   31793 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0806 07:23:42.417256   31793 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0806 07:23:42.417312   31793 kubeadm.go:934] updating node {m02 192.168.39.203 8443 v1.30.3 crio true true} ...
	I0806 07:23:42.417405   31793 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-118173-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.203
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-118173 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0806 07:23:42.417427   31793 kube-vip.go:115] generating kube-vip config ...
	I0806 07:23:42.417459   31793 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0806 07:23:42.434308   31793 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0806 07:23:42.434367   31793 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0806 07:23:42.434427   31793 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0806 07:23:42.444954   31793 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0806 07:23:42.445004   31793 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0806 07:23:42.455394   31793 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256
	I0806 07:23:42.455421   31793 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/cache/linux/amd64/v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0806 07:23:42.455484   31793 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19370-13057/.minikube/cache/linux/amd64/v1.30.3/kubeadm
	I0806 07:23:42.455484   31793 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19370-13057/.minikube/cache/linux/amd64/v1.30.3/kubelet
	I0806 07:23:42.455491   31793 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0806 07:23:42.460303   31793 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0806 07:23:42.460331   31793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/cache/linux/amd64/v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (51454104 bytes)
	I0806 07:24:21.437846   31793 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/cache/linux/amd64/v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0806 07:24:21.437925   31793 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0806 07:24:21.443805   31793 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0806 07:24:21.443838   31793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/cache/linux/amd64/v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (50249880 bytes)
	I0806 07:24:52.474908   31793 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0806 07:24:52.492777   31793 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/cache/linux/amd64/v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0806 07:24:52.492881   31793 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0806 07:24:52.497512   31793 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0806 07:24:52.497544   31793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/cache/linux/amd64/v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (100125080 bytes)
	I0806 07:24:52.908416   31793 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0806 07:24:52.919832   31793 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0806 07:24:52.938153   31793 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0806 07:24:52.956563   31793 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0806 07:24:52.976542   31793 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0806 07:24:52.981271   31793 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0806 07:24:52.995889   31793 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 07:24:53.127774   31793 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0806 07:24:53.145529   31793 host.go:66] Checking if "ha-118173" exists ...
	I0806 07:24:53.145838   31793 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:24:53.145882   31793 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:24:53.160853   31793 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36913
	I0806 07:24:53.161309   31793 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:24:53.161774   31793 main.go:141] libmachine: Using API Version  1
	I0806 07:24:53.161794   31793 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:24:53.162254   31793 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:24:53.162449   31793 main.go:141] libmachine: (ha-118173) Calling .DriverName
	I0806 07:24:53.162622   31793 start.go:317] joinCluster: &{Name:ha-118173 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-118173 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.164 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.203 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 07:24:53.162744   31793 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0806 07:24:53.162761   31793 main.go:141] libmachine: (ha-118173) Calling .GetSSHHostname
	I0806 07:24:53.165603   31793 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:24:53.166017   31793 main.go:141] libmachine: (ha-118173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:df:f0", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:22:33 +0000 UTC Type:0 Mac:52:54:00:ef:df:f0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:ha-118173 Clientid:01:52:54:00:ef:df:f0}
	I0806 07:24:53.166042   31793 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined IP address 192.168.39.164 and MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:24:53.166240   31793 main.go:141] libmachine: (ha-118173) Calling .GetSSHPort
	I0806 07:24:53.166416   31793 main.go:141] libmachine: (ha-118173) Calling .GetSSHKeyPath
	I0806 07:24:53.166555   31793 main.go:141] libmachine: (ha-118173) Calling .GetSSHUsername
	I0806 07:24:53.166686   31793 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/ha-118173/id_rsa Username:docker}
	I0806 07:24:53.333140   31793 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.203 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0806 07:24:53.333189   31793 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token avyxt8.s35nwnnva82f8kjh --discovery-token-ca-cert-hash sha256:d36e5c1dfe989c7d561c809d7c06e0fc13926ac8f6d664cc5d6865ea5f79931b --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-118173-m02 --control-plane --apiserver-advertise-address=192.168.39.203 --apiserver-bind-port=8443"
	I0806 07:25:16.031076   31793 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token avyxt8.s35nwnnva82f8kjh --discovery-token-ca-cert-hash sha256:d36e5c1dfe989c7d561c809d7c06e0fc13926ac8f6d664cc5d6865ea5f79931b --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-118173-m02 --control-plane --apiserver-advertise-address=192.168.39.203 --apiserver-bind-port=8443": (22.697849276s)
	I0806 07:25:16.031117   31793 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0806 07:25:16.664310   31793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-118173-m02 minikube.k8s.io/updated_at=2024_08_06T07_25_16_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=e92cb06692f5ea1ba801d10d148e5e92e807f9c8 minikube.k8s.io/name=ha-118173 minikube.k8s.io/primary=false
	I0806 07:25:16.770689   31793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-118173-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0806 07:25:16.893611   31793 start.go:319] duration metric: took 23.730984068s to joinCluster
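The join step above runs kubeadm and kubectl over SSH on the new node; the label and taint commands are visible in the log lines just before this point. Purely as an illustration of the same label operation done through the API (this is not what minikube runs, and the kubeconfig path is a hypothetical placeholder), a client-go sketch could look like:

    // Sketch only: apply the minikube.k8s.io/primary=false label that the log
    // applies with kubectl. Node name taken from the log; kubeconfig path assumed.
    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/types"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // hypothetical path
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        patch := []byte(`{"metadata":{"labels":{"minikube.k8s.io/primary":"false"}}}`)
        if _, err := cs.CoreV1().Nodes().Patch(context.TODO(), "ha-118173-m02",
            types.StrategicMergePatchType, patch, metav1.PatchOptions{}); err != nil {
            panic(err)
        }
        fmt.Println("label applied")
    }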
	I0806 07:25:16.893746   31793 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.203 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0806 07:25:16.894005   31793 config.go:182] Loaded profile config "ha-118173": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0806 07:25:16.895507   31793 out.go:177] * Verifying Kubernetes components...
	I0806 07:25:16.896985   31793 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 07:25:17.162479   31793 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0806 07:25:17.230606   31793 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19370-13057/kubeconfig
	I0806 07:25:17.230961   31793 kapi.go:59] client config for ha-118173: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/client.crt", KeyFile:"/home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/client.key", CAFile:"/home/jenkins/minikube-integration/19370-13057/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02a80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0806 07:25:17.231049   31793 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.164:8443
	I0806 07:25:17.231317   31793 node_ready.go:35] waiting up to 6m0s for node "ha-118173-m02" to be "Ready" ...
	I0806 07:25:17.231431   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m02
	I0806 07:25:17.231441   31793 round_trippers.go:469] Request Headers:
	I0806 07:25:17.231453   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:25:17.231465   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:25:17.240891   31793 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0806 07:25:17.731957   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m02
	I0806 07:25:17.731979   31793 round_trippers.go:469] Request Headers:
	I0806 07:25:17.731987   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:25:17.731991   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:25:17.735860   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:25:18.232222   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m02
	I0806 07:25:18.232240   31793 round_trippers.go:469] Request Headers:
	I0806 07:25:18.232248   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:25:18.232253   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:25:18.235036   31793 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 07:25:18.731997   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m02
	I0806 07:25:18.732018   31793 round_trippers.go:469] Request Headers:
	I0806 07:25:18.732027   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:25:18.732031   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:25:18.735954   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:25:19.232194   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m02
	I0806 07:25:19.232220   31793 round_trippers.go:469] Request Headers:
	I0806 07:25:19.232231   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:25:19.232236   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:25:19.235505   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:25:19.236223   31793 node_ready.go:53] node "ha-118173-m02" has status "Ready":"False"
	I0806 07:25:19.732527   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m02
	I0806 07:25:19.732549   31793 round_trippers.go:469] Request Headers:
	I0806 07:25:19.732557   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:25:19.732561   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:25:19.736607   31793 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0806 07:25:20.231627   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m02
	I0806 07:25:20.231654   31793 round_trippers.go:469] Request Headers:
	I0806 07:25:20.231666   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:25:20.231674   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:25:20.234999   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:25:20.731732   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m02
	I0806 07:25:20.731754   31793 round_trippers.go:469] Request Headers:
	I0806 07:25:20.731762   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:25:20.731768   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:25:20.735635   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:25:21.232487   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m02
	I0806 07:25:21.232510   31793 round_trippers.go:469] Request Headers:
	I0806 07:25:21.232522   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:25:21.232528   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:25:21.235987   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:25:21.236526   31793 node_ready.go:53] node "ha-118173-m02" has status "Ready":"False"
	I0806 07:25:21.731825   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m02
	I0806 07:25:21.731848   31793 round_trippers.go:469] Request Headers:
	I0806 07:25:21.731856   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:25:21.731861   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:25:21.735273   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:25:22.232516   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m02
	I0806 07:25:22.232544   31793 round_trippers.go:469] Request Headers:
	I0806 07:25:22.232556   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:25:22.232565   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:25:22.237203   31793 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0806 07:25:22.732292   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m02
	I0806 07:25:22.732311   31793 round_trippers.go:469] Request Headers:
	I0806 07:25:22.732319   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:25:22.732324   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:25:22.736115   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:25:23.232186   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m02
	I0806 07:25:23.232212   31793 round_trippers.go:469] Request Headers:
	I0806 07:25:23.232223   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:25:23.232228   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:25:23.235284   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:25:23.731507   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m02
	I0806 07:25:23.731528   31793 round_trippers.go:469] Request Headers:
	I0806 07:25:23.731536   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:25:23.731539   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:25:23.734683   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:25:23.735299   31793 node_ready.go:53] node "ha-118173-m02" has status "Ready":"False"
	I0806 07:25:24.232015   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m02
	I0806 07:25:24.232037   31793 round_trippers.go:469] Request Headers:
	I0806 07:25:24.232045   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:25:24.232050   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:25:24.235359   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:25:24.731509   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m02
	I0806 07:25:24.731531   31793 round_trippers.go:469] Request Headers:
	I0806 07:25:24.731539   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:25:24.731544   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:25:24.735451   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:25:25.231659   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m02
	I0806 07:25:25.231684   31793 round_trippers.go:469] Request Headers:
	I0806 07:25:25.231692   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:25:25.231695   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:25:25.235553   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:25:25.731997   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m02
	I0806 07:25:25.732024   31793 round_trippers.go:469] Request Headers:
	I0806 07:25:25.732035   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:25:25.732040   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:25:25.735579   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:25:25.736264   31793 node_ready.go:53] node "ha-118173-m02" has status "Ready":"False"
	I0806 07:25:26.231536   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m02
	I0806 07:25:26.231577   31793 round_trippers.go:469] Request Headers:
	I0806 07:25:26.231585   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:25:26.231589   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:25:26.237718   31793 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0806 07:25:26.731570   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m02
	I0806 07:25:26.731591   31793 round_trippers.go:469] Request Headers:
	I0806 07:25:26.731599   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:25:26.731604   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:25:26.734799   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:25:27.231635   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m02
	I0806 07:25:27.231657   31793 round_trippers.go:469] Request Headers:
	I0806 07:25:27.231665   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:25:27.231670   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:25:27.235147   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:25:27.731608   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m02
	I0806 07:25:27.731632   31793 round_trippers.go:469] Request Headers:
	I0806 07:25:27.731644   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:25:27.731651   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:25:27.734845   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:25:28.232292   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m02
	I0806 07:25:28.232316   31793 round_trippers.go:469] Request Headers:
	I0806 07:25:28.232324   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:25:28.232329   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:25:28.235996   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:25:28.236527   31793 node_ready.go:53] node "ha-118173-m02" has status "Ready":"False"
	I0806 07:25:28.731928   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m02
	I0806 07:25:28.731951   31793 round_trippers.go:469] Request Headers:
	I0806 07:25:28.731959   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:25:28.731963   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:25:28.735401   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:25:29.232520   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m02
	I0806 07:25:29.232542   31793 round_trippers.go:469] Request Headers:
	I0806 07:25:29.232550   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:25:29.232555   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:25:29.237426   31793 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0806 07:25:29.732192   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m02
	I0806 07:25:29.732213   31793 round_trippers.go:469] Request Headers:
	I0806 07:25:29.732221   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:25:29.732225   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:25:29.736147   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:25:30.232194   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m02
	I0806 07:25:30.232218   31793 round_trippers.go:469] Request Headers:
	I0806 07:25:30.232226   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:25:30.232229   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:25:30.235717   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:25:30.731754   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m02
	I0806 07:25:30.731775   31793 round_trippers.go:469] Request Headers:
	I0806 07:25:30.731783   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:25:30.731786   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:25:30.735011   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:25:30.735571   31793 node_ready.go:53] node "ha-118173-m02" has status "Ready":"False"
	I0806 07:25:31.231922   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m02
	I0806 07:25:31.231947   31793 round_trippers.go:469] Request Headers:
	I0806 07:25:31.231959   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:25:31.231967   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:25:31.235232   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:25:31.731880   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m02
	I0806 07:25:31.731905   31793 round_trippers.go:469] Request Headers:
	I0806 07:25:31.731912   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:25:31.731916   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:25:31.735005   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:25:32.232000   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m02
	I0806 07:25:32.232023   31793 round_trippers.go:469] Request Headers:
	I0806 07:25:32.232030   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:25:32.232035   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:25:32.235450   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:25:32.732344   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m02
	I0806 07:25:32.732377   31793 round_trippers.go:469] Request Headers:
	I0806 07:25:32.732388   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:25:32.732393   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:25:32.736800   31793 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0806 07:25:32.737518   31793 node_ready.go:53] node "ha-118173-m02" has status "Ready":"False"
	I0806 07:25:33.232391   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m02
	I0806 07:25:33.232411   31793 round_trippers.go:469] Request Headers:
	I0806 07:25:33.232417   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:25:33.232421   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:25:33.235952   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:25:33.732111   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m02
	I0806 07:25:33.732219   31793 round_trippers.go:469] Request Headers:
	I0806 07:25:33.732245   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:25:33.732256   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:25:33.736463   31793 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0806 07:25:34.231988   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m02
	I0806 07:25:34.232011   31793 round_trippers.go:469] Request Headers:
	I0806 07:25:34.232020   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:25:34.232025   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:25:34.237949   31793 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0806 07:25:34.238523   31793 node_ready.go:49] node "ha-118173-m02" has status "Ready":"True"
	I0806 07:25:34.238550   31793 node_ready.go:38] duration metric: took 17.007202474s for node "ha-118173-m02" to be "Ready" ...
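The burst of GET requests above is the readiness poll behind node_ready.go: fetch the Node object roughly every 500ms until its Ready condition reports True. A minimal client-go sketch of that loop (not minikube's own code; the kubeconfig path is an assumed placeholder, the node name comes from the log) is:

    // Sketch: poll a node until its Ready condition reports True.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func nodeReady(cs *kubernetes.Clientset, name string) (bool, error) {
        node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, c := range node.Status.Conditions {
            if c.Type == corev1.NodeReady {
                return c.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // hypothetical path
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        for {
            if ok, err := nodeReady(cs, "ha-118173-m02"); err == nil && ok {
                fmt.Println("node Ready")
                return
            }
            time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence visible in the log
        }
    }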
	I0806 07:25:34.238559   31793 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0806 07:25:34.238632   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/namespaces/kube-system/pods
	I0806 07:25:34.238640   31793 round_trippers.go:469] Request Headers:
	I0806 07:25:34.238647   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:25:34.238651   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:25:34.242925   31793 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0806 07:25:34.249754   31793 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-t24ww" in "kube-system" namespace to be "Ready" ...
	I0806 07:25:34.249849   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-t24ww
	I0806 07:25:34.249862   31793 round_trippers.go:469] Request Headers:
	I0806 07:25:34.249872   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:25:34.249881   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:25:34.252542   31793 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 07:25:34.253126   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173
	I0806 07:25:34.253147   31793 round_trippers.go:469] Request Headers:
	I0806 07:25:34.253156   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:25:34.253162   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:25:34.255680   31793 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 07:25:34.256221   31793 pod_ready.go:92] pod "coredns-7db6d8ff4d-t24ww" in "kube-system" namespace has status "Ready":"True"
	I0806 07:25:34.256245   31793 pod_ready.go:81] duration metric: took 6.464038ms for pod "coredns-7db6d8ff4d-t24ww" in "kube-system" namespace to be "Ready" ...
	I0806 07:25:34.256262   31793 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-zscz4" in "kube-system" namespace to be "Ready" ...
	I0806 07:25:34.256333   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-zscz4
	I0806 07:25:34.256344   31793 round_trippers.go:469] Request Headers:
	I0806 07:25:34.256355   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:25:34.256376   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:25:34.258788   31793 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 07:25:34.259613   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173
	I0806 07:25:34.259625   31793 round_trippers.go:469] Request Headers:
	I0806 07:25:34.259632   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:25:34.259636   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:25:34.261936   31793 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 07:25:34.262707   31793 pod_ready.go:92] pod "coredns-7db6d8ff4d-zscz4" in "kube-system" namespace has status "Ready":"True"
	I0806 07:25:34.262723   31793 pod_ready.go:81] duration metric: took 6.449074ms for pod "coredns-7db6d8ff4d-zscz4" in "kube-system" namespace to be "Ready" ...
	I0806 07:25:34.262734   31793 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-118173" in "kube-system" namespace to be "Ready" ...
	I0806 07:25:34.262797   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/namespaces/kube-system/pods/etcd-ha-118173
	I0806 07:25:34.262806   31793 round_trippers.go:469] Request Headers:
	I0806 07:25:34.262817   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:25:34.262826   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:25:34.268435   31793 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0806 07:25:34.268983   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173
	I0806 07:25:34.268996   31793 round_trippers.go:469] Request Headers:
	I0806 07:25:34.269004   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:25:34.269008   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:25:34.271298   31793 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 07:25:34.271793   31793 pod_ready.go:92] pod "etcd-ha-118173" in "kube-system" namespace has status "Ready":"True"
	I0806 07:25:34.271806   31793 pod_ready.go:81] duration metric: took 9.061638ms for pod "etcd-ha-118173" in "kube-system" namespace to be "Ready" ...
	I0806 07:25:34.271816   31793 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-118173-m02" in "kube-system" namespace to be "Ready" ...
	I0806 07:25:34.271864   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/namespaces/kube-system/pods/etcd-ha-118173-m02
	I0806 07:25:34.271873   31793 round_trippers.go:469] Request Headers:
	I0806 07:25:34.271881   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:25:34.271884   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:25:34.274574   31793 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 07:25:34.275005   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m02
	I0806 07:25:34.275017   31793 round_trippers.go:469] Request Headers:
	I0806 07:25:34.275023   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:25:34.275027   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:25:34.277285   31793 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 07:25:34.277675   31793 pod_ready.go:92] pod "etcd-ha-118173-m02" in "kube-system" namespace has status "Ready":"True"
	I0806 07:25:34.277690   31793 pod_ready.go:81] duration metric: took 5.863353ms for pod "etcd-ha-118173-m02" in "kube-system" namespace to be "Ready" ...
	I0806 07:25:34.277702   31793 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-118173" in "kube-system" namespace to be "Ready" ...
	I0806 07:25:34.432109   31793 request.go:629] Waited for 154.339738ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.164:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-118173
	I0806 07:25:34.432176   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-118173
	I0806 07:25:34.432184   31793 round_trippers.go:469] Request Headers:
	I0806 07:25:34.432191   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:25:34.432194   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:25:34.435540   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:25:34.632520   31793 request.go:629] Waited for 196.269991ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.164:8443/api/v1/nodes/ha-118173
	I0806 07:25:34.632578   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173
	I0806 07:25:34.632591   31793 round_trippers.go:469] Request Headers:
	I0806 07:25:34.632603   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:25:34.632611   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:25:34.635963   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:25:34.636695   31793 pod_ready.go:92] pod "kube-apiserver-ha-118173" in "kube-system" namespace has status "Ready":"True"
	I0806 07:25:34.636719   31793 pod_ready.go:81] duration metric: took 359.009332ms for pod "kube-apiserver-ha-118173" in "kube-system" namespace to be "Ready" ...
	I0806 07:25:34.636731   31793 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-118173-m02" in "kube-system" namespace to be "Ready" ...
	I0806 07:25:34.832805   31793 request.go:629] Waited for 196.001383ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.164:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-118173-m02
	I0806 07:25:34.832882   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-118173-m02
	I0806 07:25:34.832889   31793 round_trippers.go:469] Request Headers:
	I0806 07:25:34.832899   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:25:34.832909   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:25:34.836332   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:25:35.032727   31793 request.go:629] Waited for 195.35856ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.164:8443/api/v1/nodes/ha-118173-m02
	I0806 07:25:35.032799   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m02
	I0806 07:25:35.032806   31793 round_trippers.go:469] Request Headers:
	I0806 07:25:35.032814   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:25:35.032823   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:25:35.035990   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:25:35.036627   31793 pod_ready.go:92] pod "kube-apiserver-ha-118173-m02" in "kube-system" namespace has status "Ready":"True"
	I0806 07:25:35.036645   31793 pod_ready.go:81] duration metric: took 399.907011ms for pod "kube-apiserver-ha-118173-m02" in "kube-system" namespace to be "Ready" ...
	I0806 07:25:35.036656   31793 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-118173" in "kube-system" namespace to be "Ready" ...
	I0806 07:25:35.232774   31793 request.go:629] Waited for 196.044112ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.164:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-118173
	I0806 07:25:35.232829   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-118173
	I0806 07:25:35.232834   31793 round_trippers.go:469] Request Headers:
	I0806 07:25:35.232840   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:25:35.232843   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:25:35.236478   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:25:35.432555   31793 request.go:629] Waited for 195.362892ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.164:8443/api/v1/nodes/ha-118173
	I0806 07:25:35.432628   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173
	I0806 07:25:35.432633   31793 round_trippers.go:469] Request Headers:
	I0806 07:25:35.432640   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:25:35.432646   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:25:35.436864   31793 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0806 07:25:35.437312   31793 pod_ready.go:92] pod "kube-controller-manager-ha-118173" in "kube-system" namespace has status "Ready":"True"
	I0806 07:25:35.437331   31793 pod_ready.go:81] duration metric: took 400.668512ms for pod "kube-controller-manager-ha-118173" in "kube-system" namespace to be "Ready" ...
	I0806 07:25:35.437341   31793 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-118173-m02" in "kube-system" namespace to be "Ready" ...
	I0806 07:25:35.632448   31793 request.go:629] Waited for 195.038471ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.164:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-118173-m02
	I0806 07:25:35.632511   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-118173-m02
	I0806 07:25:35.632516   31793 round_trippers.go:469] Request Headers:
	I0806 07:25:35.632523   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:25:35.632528   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:25:35.635789   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:25:35.832759   31793 request.go:629] Waited for 196.377115ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.164:8443/api/v1/nodes/ha-118173-m02
	I0806 07:25:35.832816   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m02
	I0806 07:25:35.832821   31793 round_trippers.go:469] Request Headers:
	I0806 07:25:35.832829   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:25:35.832832   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:25:35.837090   31793 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0806 07:25:35.838172   31793 pod_ready.go:92] pod "kube-controller-manager-ha-118173-m02" in "kube-system" namespace has status "Ready":"True"
	I0806 07:25:35.838191   31793 pod_ready.go:81] duration metric: took 400.844363ms for pod "kube-controller-manager-ha-118173-m02" in "kube-system" namespace to be "Ready" ...
	I0806 07:25:35.838200   31793 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-k8sq6" in "kube-system" namespace to be "Ready" ...
	I0806 07:25:36.032421   31793 request.go:629] Waited for 194.163572ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.164:8443/api/v1/namespaces/kube-system/pods/kube-proxy-k8sq6
	I0806 07:25:36.032479   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/namespaces/kube-system/pods/kube-proxy-k8sq6
	I0806 07:25:36.032485   31793 round_trippers.go:469] Request Headers:
	I0806 07:25:36.032492   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:25:36.032496   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:25:36.036194   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:25:36.232451   31793 request.go:629] Waited for 195.377758ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.164:8443/api/v1/nodes/ha-118173-m02
	I0806 07:25:36.232514   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m02
	I0806 07:25:36.232520   31793 round_trippers.go:469] Request Headers:
	I0806 07:25:36.232527   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:25:36.232531   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:25:36.235770   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:25:36.236203   31793 pod_ready.go:92] pod "kube-proxy-k8sq6" in "kube-system" namespace has status "Ready":"True"
	I0806 07:25:36.236222   31793 pod_ready.go:81] duration metric: took 398.015496ms for pod "kube-proxy-k8sq6" in "kube-system" namespace to be "Ready" ...
	I0806 07:25:36.236230   31793 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-ksm6v" in "kube-system" namespace to be "Ready" ...
	I0806 07:25:36.432475   31793 request.go:629] Waited for 196.164598ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.164:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ksm6v
	I0806 07:25:36.432539   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ksm6v
	I0806 07:25:36.432544   31793 round_trippers.go:469] Request Headers:
	I0806 07:25:36.432552   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:25:36.432555   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:25:36.436514   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:25:36.632167   31793 request.go:629] Waited for 194.780345ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.164:8443/api/v1/nodes/ha-118173
	I0806 07:25:36.632250   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173
	I0806 07:25:36.632257   31793 round_trippers.go:469] Request Headers:
	I0806 07:25:36.632267   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:25:36.632271   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:25:36.635472   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:25:36.636150   31793 pod_ready.go:92] pod "kube-proxy-ksm6v" in "kube-system" namespace has status "Ready":"True"
	I0806 07:25:36.636167   31793 pod_ready.go:81] duration metric: took 399.931961ms for pod "kube-proxy-ksm6v" in "kube-system" namespace to be "Ready" ...
	I0806 07:25:36.636176   31793 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-118173" in "kube-system" namespace to be "Ready" ...
	I0806 07:25:36.832171   31793 request.go:629] Waited for 195.934613ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.164:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-118173
	I0806 07:25:36.832232   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-118173
	I0806 07:25:36.832237   31793 round_trippers.go:469] Request Headers:
	I0806 07:25:36.832244   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:25:36.832249   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:25:36.835668   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:25:37.032717   31793 request.go:629] Waited for 196.372822ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.164:8443/api/v1/nodes/ha-118173
	I0806 07:25:37.032802   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173
	I0806 07:25:37.032812   31793 round_trippers.go:469] Request Headers:
	I0806 07:25:37.032821   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:25:37.032830   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:25:37.036271   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:25:37.037079   31793 pod_ready.go:92] pod "kube-scheduler-ha-118173" in "kube-system" namespace has status "Ready":"True"
	I0806 07:25:37.037107   31793 pod_ready.go:81] duration metric: took 400.922437ms for pod "kube-scheduler-ha-118173" in "kube-system" namespace to be "Ready" ...
	I0806 07:25:37.037136   31793 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-118173-m02" in "kube-system" namespace to be "Ready" ...
	I0806 07:25:37.232073   31793 request.go:629] Waited for 194.871904ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.164:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-118173-m02
	I0806 07:25:37.232163   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-118173-m02
	I0806 07:25:37.232169   31793 round_trippers.go:469] Request Headers:
	I0806 07:25:37.232177   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:25:37.232182   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:25:37.235509   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:25:37.432425   31793 request.go:629] Waited for 196.3954ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.164:8443/api/v1/nodes/ha-118173-m02
	I0806 07:25:37.432508   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m02
	I0806 07:25:37.432519   31793 round_trippers.go:469] Request Headers:
	I0806 07:25:37.432533   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:25:37.432541   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:25:37.435728   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:25:37.436289   31793 pod_ready.go:92] pod "kube-scheduler-ha-118173-m02" in "kube-system" namespace has status "Ready":"True"
	I0806 07:25:37.436307   31793 pod_ready.go:81] duration metric: took 399.162402ms for pod "kube-scheduler-ha-118173-m02" in "kube-system" namespace to be "Ready" ...
	I0806 07:25:37.436317   31793 pod_ready.go:38] duration metric: took 3.197729224s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0806 07:25:37.436329   31793 api_server.go:52] waiting for apiserver process to appear ...
	I0806 07:25:37.436400   31793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 07:25:37.452668   31793 api_server.go:72] duration metric: took 20.558880334s to wait for apiserver process to appear ...
	I0806 07:25:37.452691   31793 api_server.go:88] waiting for apiserver healthz status ...
	I0806 07:25:37.452713   31793 api_server.go:253] Checking apiserver healthz at https://192.168.39.164:8443/healthz ...
	I0806 07:25:37.458850   31793 api_server.go:279] https://192.168.39.164:8443/healthz returned 200:
	ok
	I0806 07:25:37.458921   31793 round_trippers.go:463] GET https://192.168.39.164:8443/version
	I0806 07:25:37.458930   31793 round_trippers.go:469] Request Headers:
	I0806 07:25:37.458937   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:25:37.458941   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:25:37.459754   31793 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0806 07:25:37.459851   31793 api_server.go:141] control plane version: v1.30.3
	I0806 07:25:37.459867   31793 api_server.go:131] duration metric: took 7.16968ms to wait for apiserver health ...
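The health check above is a plain GET against the apiserver's /healthz endpoint, expecting the body "ok". A sketch of the same probe through a client-go discovery REST client (again with an assumed kubeconfig path, not minikube's actual implementation):

    // Sketch: GET /healthz via the authenticated discovery REST client.
    package main

    import (
        "context"
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // hypothetical path
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        raw, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.TODO())
        if err != nil {
            panic(err)
        }
        fmt.Println(string(raw)) // "ok" when the apiserver is healthy
    }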
	I0806 07:25:37.459875   31793 system_pods.go:43] waiting for kube-system pods to appear ...
	I0806 07:25:37.632308   31793 request.go:629] Waited for 172.341978ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.164:8443/api/v1/namespaces/kube-system/pods
	I0806 07:25:37.632373   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/namespaces/kube-system/pods
	I0806 07:25:37.632380   31793 round_trippers.go:469] Request Headers:
	I0806 07:25:37.632390   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:25:37.632396   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:25:37.638640   31793 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0806 07:25:37.642715   31793 system_pods.go:59] 17 kube-system pods found
	I0806 07:25:37.642746   31793 system_pods.go:61] "coredns-7db6d8ff4d-t24ww" [8effbe6c-3107-409e-9c94-dee4103d12b0] Running
	I0806 07:25:37.642752   31793 system_pods.go:61] "coredns-7db6d8ff4d-zscz4" [ef588a32-ac32-4e38-926e-6ee7a662a399] Running
	I0806 07:25:37.642758   31793 system_pods.go:61] "etcd-ha-118173" [a72e8b83-1e33-429a-8d98-98936a0e3ab8] Running
	I0806 07:25:37.642763   31793 system_pods.go:61] "etcd-ha-118173-m02" [cb090a5d-5760-48c6-abea-735b88daf0be] Running
	I0806 07:25:37.642767   31793 system_pods.go:61] "kindnet-n7jmb" [fa496bfb-7bf0-4ebc-adbd-36d992d8e279] Running
	I0806 07:25:37.642771   31793 system_pods.go:61] "kindnet-z9w6l" [c02fe720-31c6-4146-af1a-0ada3aa7e530] Running
	I0806 07:25:37.642776   31793 system_pods.go:61] "kube-apiserver-ha-118173" [902898f9-8ba6-4d78-a648-49d412dce202] Running
	I0806 07:25:37.642780   31793 system_pods.go:61] "kube-apiserver-ha-118173-m02" [1b9cea4a-7755-4cd8-a240-b4911dc45edf] Running
	I0806 07:25:37.642785   31793 system_pods.go:61] "kube-controller-manager-ha-118173" [1af832f2-bceb-4c89-afa8-09a78b7782e1] Running
	I0806 07:25:37.642790   31793 system_pods.go:61] "kube-controller-manager-ha-118173-m02" [7be7c963-d9af-4cda-95dd-9a953b40f284] Running
	I0806 07:25:37.642795   31793 system_pods.go:61] "kube-proxy-k8sq6" [b1ec850b-6ce0-4710-aa99-419bda3b3e93] Running
	I0806 07:25:37.642802   31793 system_pods.go:61] "kube-proxy-ksm6v" [594bc178-8319-4922-9132-69a9c6890b95] Running
	I0806 07:25:37.642809   31793 system_pods.go:61] "kube-scheduler-ha-118173" [6fda2f13-0df0-42ed-9d67-8dd30b33fa3d] Running
	I0806 07:25:37.642814   31793 system_pods.go:61] "kube-scheduler-ha-118173-m02" [f097af54-fb79-40f2-aaf4-6cfbb0d8c449] Running
	I0806 07:25:37.642821   31793 system_pods.go:61] "kube-vip-ha-118173" [594d77d2-83e6-420d-8f0a-f54e9fd704f3] Running
	I0806 07:25:37.642826   31793 system_pods.go:61] "kube-vip-ha-118173-m02" [66b58218-4fc2-4a96-81b9-31ade7dda807] Running
	I0806 07:25:37.642834   31793 system_pods.go:61] "storage-provisioner" [8379466d-0241-40e4-a262-d762b3cda2eb] Running
	I0806 07:25:37.642847   31793 system_pods.go:74] duration metric: took 182.962704ms to wait for pod list to return data ...
	I0806 07:25:37.642861   31793 default_sa.go:34] waiting for default service account to be created ...
	I0806 07:25:37.832084   31793 request.go:629] Waited for 189.137863ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.164:8443/api/v1/namespaces/default/serviceaccounts
	I0806 07:25:37.832155   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/namespaces/default/serviceaccounts
	I0806 07:25:37.832164   31793 round_trippers.go:469] Request Headers:
	I0806 07:25:37.832178   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:25:37.832187   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:25:37.835212   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:25:37.835448   31793 default_sa.go:45] found service account: "default"
	I0806 07:25:37.835464   31793 default_sa.go:55] duration metric: took 192.597186ms for default service account to be created ...
	I0806 07:25:37.835473   31793 system_pods.go:116] waiting for k8s-apps to be running ...
	I0806 07:25:38.032946   31793 request.go:629] Waited for 197.387462ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.164:8443/api/v1/namespaces/kube-system/pods
	I0806 07:25:38.033012   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/namespaces/kube-system/pods
	I0806 07:25:38.033018   31793 round_trippers.go:469] Request Headers:
	I0806 07:25:38.033026   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:25:38.033031   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:25:38.038210   31793 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0806 07:25:38.042288   31793 system_pods.go:86] 17 kube-system pods found
	I0806 07:25:38.042313   31793 system_pods.go:89] "coredns-7db6d8ff4d-t24ww" [8effbe6c-3107-409e-9c94-dee4103d12b0] Running
	I0806 07:25:38.042318   31793 system_pods.go:89] "coredns-7db6d8ff4d-zscz4" [ef588a32-ac32-4e38-926e-6ee7a662a399] Running
	I0806 07:25:38.042323   31793 system_pods.go:89] "etcd-ha-118173" [a72e8b83-1e33-429a-8d98-98936a0e3ab8] Running
	I0806 07:25:38.042327   31793 system_pods.go:89] "etcd-ha-118173-m02" [cb090a5d-5760-48c6-abea-735b88daf0be] Running
	I0806 07:25:38.042331   31793 system_pods.go:89] "kindnet-n7jmb" [fa496bfb-7bf0-4ebc-adbd-36d992d8e279] Running
	I0806 07:25:38.042335   31793 system_pods.go:89] "kindnet-z9w6l" [c02fe720-31c6-4146-af1a-0ada3aa7e530] Running
	I0806 07:25:38.042339   31793 system_pods.go:89] "kube-apiserver-ha-118173" [902898f9-8ba6-4d78-a648-49d412dce202] Running
	I0806 07:25:38.042343   31793 system_pods.go:89] "kube-apiserver-ha-118173-m02" [1b9cea4a-7755-4cd8-a240-b4911dc45edf] Running
	I0806 07:25:38.042347   31793 system_pods.go:89] "kube-controller-manager-ha-118173" [1af832f2-bceb-4c89-afa8-09a78b7782e1] Running
	I0806 07:25:38.042354   31793 system_pods.go:89] "kube-controller-manager-ha-118173-m02" [7be7c963-d9af-4cda-95dd-9a953b40f284] Running
	I0806 07:25:38.042357   31793 system_pods.go:89] "kube-proxy-k8sq6" [b1ec850b-6ce0-4710-aa99-419bda3b3e93] Running
	I0806 07:25:38.042364   31793 system_pods.go:89] "kube-proxy-ksm6v" [594bc178-8319-4922-9132-69a9c6890b95] Running
	I0806 07:25:38.042368   31793 system_pods.go:89] "kube-scheduler-ha-118173" [6fda2f13-0df0-42ed-9d67-8dd30b33fa3d] Running
	I0806 07:25:38.042375   31793 system_pods.go:89] "kube-scheduler-ha-118173-m02" [f097af54-fb79-40f2-aaf4-6cfbb0d8c449] Running
	I0806 07:25:38.042379   31793 system_pods.go:89] "kube-vip-ha-118173" [594d77d2-83e6-420d-8f0a-f54e9fd704f3] Running
	I0806 07:25:38.042382   31793 system_pods.go:89] "kube-vip-ha-118173-m02" [66b58218-4fc2-4a96-81b9-31ade7dda807] Running
	I0806 07:25:38.042385   31793 system_pods.go:89] "storage-provisioner" [8379466d-0241-40e4-a262-d762b3cda2eb] Running
	I0806 07:25:38.042394   31793 system_pods.go:126] duration metric: took 206.915419ms to wait for k8s-apps to be running ...
	I0806 07:25:38.042415   31793 system_svc.go:44] waiting for kubelet service to be running ....
	I0806 07:25:38.042458   31793 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0806 07:25:38.059099   31793 system_svc.go:56] duration metric: took 16.685529ms WaitForService to wait for kubelet
	I0806 07:25:38.059130   31793 kubeadm.go:582] duration metric: took 21.165344066s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0806 07:25:38.059158   31793 node_conditions.go:102] verifying NodePressure condition ...
	I0806 07:25:38.232061   31793 request.go:629] Waited for 172.812266ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.164:8443/api/v1/nodes
	I0806 07:25:38.232132   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes
	I0806 07:25:38.232139   31793 round_trippers.go:469] Request Headers:
	I0806 07:25:38.232150   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:25:38.232157   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:25:38.235719   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:25:38.236625   31793 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0806 07:25:38.236652   31793 node_conditions.go:123] node cpu capacity is 2
	I0806 07:25:38.236665   31793 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0806 07:25:38.236670   31793 node_conditions.go:123] node cpu capacity is 2
	I0806 07:25:38.236679   31793 node_conditions.go:105] duration metric: took 177.515342ms to run NodePressure ...
	I0806 07:25:38.236696   31793 start.go:241] waiting for startup goroutines ...
	I0806 07:25:38.236725   31793 start.go:255] writing updated cluster config ...
	I0806 07:25:38.238847   31793 out.go:177] 
	I0806 07:25:38.240286   31793 config.go:182] Loaded profile config "ha-118173": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0806 07:25:38.240397   31793 profile.go:143] Saving config to /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/config.json ...
	I0806 07:25:38.242101   31793 out.go:177] * Starting "ha-118173-m03" control-plane node in "ha-118173" cluster
	I0806 07:25:38.243383   31793 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0806 07:25:38.243402   31793 cache.go:56] Caching tarball of preloaded images
	I0806 07:25:38.243486   31793 preload.go:172] Found /home/jenkins/minikube-integration/19370-13057/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0806 07:25:38.243496   31793 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0806 07:25:38.243572   31793 profile.go:143] Saving config to /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/config.json ...
	I0806 07:25:38.243722   31793 start.go:360] acquireMachinesLock for ha-118173-m03: {Name:mkc602ac9c8cc3360dcc0fcb8104303837aaab61 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0806 07:25:38.243760   31793 start.go:364] duration metric: took 21.021µs to acquireMachinesLock for "ha-118173-m03"
	I0806 07:25:38.243775   31793 start.go:93] Provisioning new machine with config: &{Name:ha-118173 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-118173 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.164 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.203 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0806 07:25:38.243863   31793 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0806 07:25:38.245301   31793 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0806 07:25:38.245373   31793 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:25:38.245413   31793 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:25:38.259751   31793 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35003
	I0806 07:25:38.260142   31793 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:25:38.260638   31793 main.go:141] libmachine: Using API Version  1
	I0806 07:25:38.260658   31793 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:25:38.260969   31793 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:25:38.261159   31793 main.go:141] libmachine: (ha-118173-m03) Calling .GetMachineName
	I0806 07:25:38.261340   31793 main.go:141] libmachine: (ha-118173-m03) Calling .DriverName
	I0806 07:25:38.261601   31793 start.go:159] libmachine.API.Create for "ha-118173" (driver="kvm2")
	I0806 07:25:38.261631   31793 client.go:168] LocalClient.Create starting
	I0806 07:25:38.261680   31793 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem
	I0806 07:25:38.261720   31793 main.go:141] libmachine: Decoding PEM data...
	I0806 07:25:38.261741   31793 main.go:141] libmachine: Parsing certificate...
	I0806 07:25:38.261804   31793 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19370-13057/.minikube/certs/cert.pem
	I0806 07:25:38.261831   31793 main.go:141] libmachine: Decoding PEM data...
	I0806 07:25:38.261845   31793 main.go:141] libmachine: Parsing certificate...
	I0806 07:25:38.261871   31793 main.go:141] libmachine: Running pre-create checks...
	I0806 07:25:38.261883   31793 main.go:141] libmachine: (ha-118173-m03) Calling .PreCreateCheck
	I0806 07:25:38.262125   31793 main.go:141] libmachine: (ha-118173-m03) Calling .GetConfigRaw
	I0806 07:25:38.262505   31793 main.go:141] libmachine: Creating machine...
	I0806 07:25:38.262521   31793 main.go:141] libmachine: (ha-118173-m03) Calling .Create
	I0806 07:25:38.262648   31793 main.go:141] libmachine: (ha-118173-m03) Creating KVM machine...
	I0806 07:25:38.263779   31793 main.go:141] libmachine: (ha-118173-m03) DBG | found existing default KVM network
	I0806 07:25:38.263871   31793 main.go:141] libmachine: (ha-118173-m03) DBG | found existing private KVM network mk-ha-118173
	I0806 07:25:38.263996   31793 main.go:141] libmachine: (ha-118173-m03) Setting up store path in /home/jenkins/minikube-integration/19370-13057/.minikube/machines/ha-118173-m03 ...
	I0806 07:25:38.264020   31793 main.go:141] libmachine: (ha-118173-m03) Building disk image from file:///home/jenkins/minikube-integration/19370-13057/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
	I0806 07:25:38.264073   31793 main.go:141] libmachine: (ha-118173-m03) DBG | I0806 07:25:38.263979   32846 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19370-13057/.minikube
	I0806 07:25:38.264175   31793 main.go:141] libmachine: (ha-118173-m03) Downloading /home/jenkins/minikube-integration/19370-13057/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19370-13057/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0806 07:25:38.519561   31793 main.go:141] libmachine: (ha-118173-m03) DBG | I0806 07:25:38.519444   32846 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19370-13057/.minikube/machines/ha-118173-m03/id_rsa...
	I0806 07:25:38.699444   31793 main.go:141] libmachine: (ha-118173-m03) DBG | I0806 07:25:38.699332   32846 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19370-13057/.minikube/machines/ha-118173-m03/ha-118173-m03.rawdisk...
	I0806 07:25:38.699476   31793 main.go:141] libmachine: (ha-118173-m03) DBG | Writing magic tar header
	I0806 07:25:38.699489   31793 main.go:141] libmachine: (ha-118173-m03) DBG | Writing SSH key tar header
	I0806 07:25:38.699644   31793 main.go:141] libmachine: (ha-118173-m03) DBG | I0806 07:25:38.699561   32846 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19370-13057/.minikube/machines/ha-118173-m03 ...
	I0806 07:25:38.699720   31793 main.go:141] libmachine: (ha-118173-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19370-13057/.minikube/machines/ha-118173-m03
	I0806 07:25:38.699753   31793 main.go:141] libmachine: (ha-118173-m03) Setting executable bit set on /home/jenkins/minikube-integration/19370-13057/.minikube/machines/ha-118173-m03 (perms=drwx------)
	I0806 07:25:38.699774   31793 main.go:141] libmachine: (ha-118173-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19370-13057/.minikube/machines
	I0806 07:25:38.699790   31793 main.go:141] libmachine: (ha-118173-m03) Setting executable bit set on /home/jenkins/minikube-integration/19370-13057/.minikube/machines (perms=drwxr-xr-x)
	I0806 07:25:38.699823   31793 main.go:141] libmachine: (ha-118173-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19370-13057/.minikube
	I0806 07:25:38.699879   31793 main.go:141] libmachine: (ha-118173-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19370-13057
	I0806 07:25:38.699895   31793 main.go:141] libmachine: (ha-118173-m03) Setting executable bit set on /home/jenkins/minikube-integration/19370-13057/.minikube (perms=drwxr-xr-x)
	I0806 07:25:38.699909   31793 main.go:141] libmachine: (ha-118173-m03) Setting executable bit set on /home/jenkins/minikube-integration/19370-13057 (perms=drwxrwxr-x)
	I0806 07:25:38.699920   31793 main.go:141] libmachine: (ha-118173-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0806 07:25:38.699937   31793 main.go:141] libmachine: (ha-118173-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0806 07:25:38.699947   31793 main.go:141] libmachine: (ha-118173-m03) Creating domain...
	I0806 07:25:38.699960   31793 main.go:141] libmachine: (ha-118173-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0806 07:25:38.699973   31793 main.go:141] libmachine: (ha-118173-m03) DBG | Checking permissions on dir: /home/jenkins
	I0806 07:25:38.699986   31793 main.go:141] libmachine: (ha-118173-m03) DBG | Checking permissions on dir: /home
	I0806 07:25:38.699996   31793 main.go:141] libmachine: (ha-118173-m03) DBG | Skipping /home - not owner
	I0806 07:25:38.701096   31793 main.go:141] libmachine: (ha-118173-m03) define libvirt domain using xml: 
	I0806 07:25:38.701113   31793 main.go:141] libmachine: (ha-118173-m03) <domain type='kvm'>
	I0806 07:25:38.701124   31793 main.go:141] libmachine: (ha-118173-m03)   <name>ha-118173-m03</name>
	I0806 07:25:38.701132   31793 main.go:141] libmachine: (ha-118173-m03)   <memory unit='MiB'>2200</memory>
	I0806 07:25:38.701139   31793 main.go:141] libmachine: (ha-118173-m03)   <vcpu>2</vcpu>
	I0806 07:25:38.701145   31793 main.go:141] libmachine: (ha-118173-m03)   <features>
	I0806 07:25:38.701153   31793 main.go:141] libmachine: (ha-118173-m03)     <acpi/>
	I0806 07:25:38.701161   31793 main.go:141] libmachine: (ha-118173-m03)     <apic/>
	I0806 07:25:38.701169   31793 main.go:141] libmachine: (ha-118173-m03)     <pae/>
	I0806 07:25:38.701183   31793 main.go:141] libmachine: (ha-118173-m03)     
	I0806 07:25:38.701190   31793 main.go:141] libmachine: (ha-118173-m03)   </features>
	I0806 07:25:38.701196   31793 main.go:141] libmachine: (ha-118173-m03)   <cpu mode='host-passthrough'>
	I0806 07:25:38.701224   31793 main.go:141] libmachine: (ha-118173-m03)   
	I0806 07:25:38.701244   31793 main.go:141] libmachine: (ha-118173-m03)   </cpu>
	I0806 07:25:38.701254   31793 main.go:141] libmachine: (ha-118173-m03)   <os>
	I0806 07:25:38.701266   31793 main.go:141] libmachine: (ha-118173-m03)     <type>hvm</type>
	I0806 07:25:38.701277   31793 main.go:141] libmachine: (ha-118173-m03)     <boot dev='cdrom'/>
	I0806 07:25:38.701288   31793 main.go:141] libmachine: (ha-118173-m03)     <boot dev='hd'/>
	I0806 07:25:38.701301   31793 main.go:141] libmachine: (ha-118173-m03)     <bootmenu enable='no'/>
	I0806 07:25:38.701312   31793 main.go:141] libmachine: (ha-118173-m03)   </os>
	I0806 07:25:38.701334   31793 main.go:141] libmachine: (ha-118173-m03)   <devices>
	I0806 07:25:38.701356   31793 main.go:141] libmachine: (ha-118173-m03)     <disk type='file' device='cdrom'>
	I0806 07:25:38.701367   31793 main.go:141] libmachine: (ha-118173-m03)       <source file='/home/jenkins/minikube-integration/19370-13057/.minikube/machines/ha-118173-m03/boot2docker.iso'/>
	I0806 07:25:38.701375   31793 main.go:141] libmachine: (ha-118173-m03)       <target dev='hdc' bus='scsi'/>
	I0806 07:25:38.701382   31793 main.go:141] libmachine: (ha-118173-m03)       <readonly/>
	I0806 07:25:38.701390   31793 main.go:141] libmachine: (ha-118173-m03)     </disk>
	I0806 07:25:38.701397   31793 main.go:141] libmachine: (ha-118173-m03)     <disk type='file' device='disk'>
	I0806 07:25:38.701407   31793 main.go:141] libmachine: (ha-118173-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0806 07:25:38.701418   31793 main.go:141] libmachine: (ha-118173-m03)       <source file='/home/jenkins/minikube-integration/19370-13057/.minikube/machines/ha-118173-m03/ha-118173-m03.rawdisk'/>
	I0806 07:25:38.701428   31793 main.go:141] libmachine: (ha-118173-m03)       <target dev='hda' bus='virtio'/>
	I0806 07:25:38.701443   31793 main.go:141] libmachine: (ha-118173-m03)     </disk>
	I0806 07:25:38.701460   31793 main.go:141] libmachine: (ha-118173-m03)     <interface type='network'>
	I0806 07:25:38.701475   31793 main.go:141] libmachine: (ha-118173-m03)       <source network='mk-ha-118173'/>
	I0806 07:25:38.701485   31793 main.go:141] libmachine: (ha-118173-m03)       <model type='virtio'/>
	I0806 07:25:38.701496   31793 main.go:141] libmachine: (ha-118173-m03)     </interface>
	I0806 07:25:38.701502   31793 main.go:141] libmachine: (ha-118173-m03)     <interface type='network'>
	I0806 07:25:38.701513   31793 main.go:141] libmachine: (ha-118173-m03)       <source network='default'/>
	I0806 07:25:38.701521   31793 main.go:141] libmachine: (ha-118173-m03)       <model type='virtio'/>
	I0806 07:25:38.701541   31793 main.go:141] libmachine: (ha-118173-m03)     </interface>
	I0806 07:25:38.701560   31793 main.go:141] libmachine: (ha-118173-m03)     <serial type='pty'>
	I0806 07:25:38.701570   31793 main.go:141] libmachine: (ha-118173-m03)       <target port='0'/>
	I0806 07:25:38.701577   31793 main.go:141] libmachine: (ha-118173-m03)     </serial>
	I0806 07:25:38.701582   31793 main.go:141] libmachine: (ha-118173-m03)     <console type='pty'>
	I0806 07:25:38.701589   31793 main.go:141] libmachine: (ha-118173-m03)       <target type='serial' port='0'/>
	I0806 07:25:38.701597   31793 main.go:141] libmachine: (ha-118173-m03)     </console>
	I0806 07:25:38.701617   31793 main.go:141] libmachine: (ha-118173-m03)     <rng model='virtio'>
	I0806 07:25:38.701646   31793 main.go:141] libmachine: (ha-118173-m03)       <backend model='random'>/dev/random</backend>
	I0806 07:25:38.701668   31793 main.go:141] libmachine: (ha-118173-m03)     </rng>
	I0806 07:25:38.701680   31793 main.go:141] libmachine: (ha-118173-m03)     
	I0806 07:25:38.701687   31793 main.go:141] libmachine: (ha-118173-m03)     
	I0806 07:25:38.701699   31793 main.go:141] libmachine: (ha-118173-m03)   </devices>
	I0806 07:25:38.701725   31793 main.go:141] libmachine: (ha-118173-m03) </domain>
	I0806 07:25:38.701735   31793 main.go:141] libmachine: (ha-118173-m03) 
	I0806 07:25:38.708186   31793 main.go:141] libmachine: (ha-118173-m03) DBG | domain ha-118173-m03 has defined MAC address 52:54:00:3d:80:df in network default
	I0806 07:25:38.708909   31793 main.go:141] libmachine: (ha-118173-m03) Ensuring networks are active...
	I0806 07:25:38.708935   31793 main.go:141] libmachine: (ha-118173-m03) DBG | domain ha-118173-m03 has defined MAC address 52:54:00:d8:90:55 in network mk-ha-118173
	I0806 07:25:38.709762   31793 main.go:141] libmachine: (ha-118173-m03) Ensuring network default is active
	I0806 07:25:38.710102   31793 main.go:141] libmachine: (ha-118173-m03) Ensuring network mk-ha-118173 is active
	I0806 07:25:38.710565   31793 main.go:141] libmachine: (ha-118173-m03) Getting domain xml...
	I0806 07:25:38.711391   31793 main.go:141] libmachine: (ha-118173-m03) Creating domain...
	I0806 07:25:39.962106   31793 main.go:141] libmachine: (ha-118173-m03) Waiting to get IP...
	I0806 07:25:39.962843   31793 main.go:141] libmachine: (ha-118173-m03) DBG | domain ha-118173-m03 has defined MAC address 52:54:00:d8:90:55 in network mk-ha-118173
	I0806 07:25:39.963231   31793 main.go:141] libmachine: (ha-118173-m03) DBG | unable to find current IP address of domain ha-118173-m03 in network mk-ha-118173
	I0806 07:25:39.963252   31793 main.go:141] libmachine: (ha-118173-m03) DBG | I0806 07:25:39.963211   32846 retry.go:31] will retry after 253.81788ms: waiting for machine to come up
	I0806 07:25:40.218554   31793 main.go:141] libmachine: (ha-118173-m03) DBG | domain ha-118173-m03 has defined MAC address 52:54:00:d8:90:55 in network mk-ha-118173
	I0806 07:25:40.219002   31793 main.go:141] libmachine: (ha-118173-m03) DBG | unable to find current IP address of domain ha-118173-m03 in network mk-ha-118173
	I0806 07:25:40.219027   31793 main.go:141] libmachine: (ha-118173-m03) DBG | I0806 07:25:40.218954   32846 retry.go:31] will retry after 329.858258ms: waiting for machine to come up
	I0806 07:25:40.550462   31793 main.go:141] libmachine: (ha-118173-m03) DBG | domain ha-118173-m03 has defined MAC address 52:54:00:d8:90:55 in network mk-ha-118173
	I0806 07:25:40.550944   31793 main.go:141] libmachine: (ha-118173-m03) DBG | unable to find current IP address of domain ha-118173-m03 in network mk-ha-118173
	I0806 07:25:40.550974   31793 main.go:141] libmachine: (ha-118173-m03) DBG | I0806 07:25:40.550896   32846 retry.go:31] will retry after 424.863606ms: waiting for machine to come up
	I0806 07:25:40.977595   31793 main.go:141] libmachine: (ha-118173-m03) DBG | domain ha-118173-m03 has defined MAC address 52:54:00:d8:90:55 in network mk-ha-118173
	I0806 07:25:40.978124   31793 main.go:141] libmachine: (ha-118173-m03) DBG | unable to find current IP address of domain ha-118173-m03 in network mk-ha-118173
	I0806 07:25:40.978153   31793 main.go:141] libmachine: (ha-118173-m03) DBG | I0806 07:25:40.978078   32846 retry.go:31] will retry after 507.514765ms: waiting for machine to come up
	I0806 07:25:41.487166   31793 main.go:141] libmachine: (ha-118173-m03) DBG | domain ha-118173-m03 has defined MAC address 52:54:00:d8:90:55 in network mk-ha-118173
	I0806 07:25:41.487571   31793 main.go:141] libmachine: (ha-118173-m03) DBG | unable to find current IP address of domain ha-118173-m03 in network mk-ha-118173
	I0806 07:25:41.487603   31793 main.go:141] libmachine: (ha-118173-m03) DBG | I0806 07:25:41.487518   32846 retry.go:31] will retry after 519.071599ms: waiting for machine to come up
	I0806 07:25:42.008220   31793 main.go:141] libmachine: (ha-118173-m03) DBG | domain ha-118173-m03 has defined MAC address 52:54:00:d8:90:55 in network mk-ha-118173
	I0806 07:25:42.008682   31793 main.go:141] libmachine: (ha-118173-m03) DBG | unable to find current IP address of domain ha-118173-m03 in network mk-ha-118173
	I0806 07:25:42.008728   31793 main.go:141] libmachine: (ha-118173-m03) DBG | I0806 07:25:42.008623   32846 retry.go:31] will retry after 742.185102ms: waiting for machine to come up
	I0806 07:25:42.751904   31793 main.go:141] libmachine: (ha-118173-m03) DBG | domain ha-118173-m03 has defined MAC address 52:54:00:d8:90:55 in network mk-ha-118173
	I0806 07:25:42.752388   31793 main.go:141] libmachine: (ha-118173-m03) DBG | unable to find current IP address of domain ha-118173-m03 in network mk-ha-118173
	I0806 07:25:42.752421   31793 main.go:141] libmachine: (ha-118173-m03) DBG | I0806 07:25:42.752323   32846 retry.go:31] will retry after 997.028177ms: waiting for machine to come up
	I0806 07:25:43.751392   31793 main.go:141] libmachine: (ha-118173-m03) DBG | domain ha-118173-m03 has defined MAC address 52:54:00:d8:90:55 in network mk-ha-118173
	I0806 07:25:43.751833   31793 main.go:141] libmachine: (ha-118173-m03) DBG | unable to find current IP address of domain ha-118173-m03 in network mk-ha-118173
	I0806 07:25:43.751863   31793 main.go:141] libmachine: (ha-118173-m03) DBG | I0806 07:25:43.751795   32846 retry.go:31] will retry after 1.349314892s: waiting for machine to come up
	I0806 07:25:45.103401   31793 main.go:141] libmachine: (ha-118173-m03) DBG | domain ha-118173-m03 has defined MAC address 52:54:00:d8:90:55 in network mk-ha-118173
	I0806 07:25:45.103893   31793 main.go:141] libmachine: (ha-118173-m03) DBG | unable to find current IP address of domain ha-118173-m03 in network mk-ha-118173
	I0806 07:25:45.103925   31793 main.go:141] libmachine: (ha-118173-m03) DBG | I0806 07:25:45.103835   32846 retry.go:31] will retry after 1.43342486s: waiting for machine to come up
	I0806 07:25:46.538389   31793 main.go:141] libmachine: (ha-118173-m03) DBG | domain ha-118173-m03 has defined MAC address 52:54:00:d8:90:55 in network mk-ha-118173
	I0806 07:25:46.538917   31793 main.go:141] libmachine: (ha-118173-m03) DBG | unable to find current IP address of domain ha-118173-m03 in network mk-ha-118173
	I0806 07:25:46.538945   31793 main.go:141] libmachine: (ha-118173-m03) DBG | I0806 07:25:46.538866   32846 retry.go:31] will retry after 1.86809666s: waiting for machine to come up
	I0806 07:25:48.408706   31793 main.go:141] libmachine: (ha-118173-m03) DBG | domain ha-118173-m03 has defined MAC address 52:54:00:d8:90:55 in network mk-ha-118173
	I0806 07:25:48.409180   31793 main.go:141] libmachine: (ha-118173-m03) DBG | unable to find current IP address of domain ha-118173-m03 in network mk-ha-118173
	I0806 07:25:48.409213   31793 main.go:141] libmachine: (ha-118173-m03) DBG | I0806 07:25:48.409118   32846 retry.go:31] will retry after 2.833688321s: waiting for machine to come up
	I0806 07:25:51.245831   31793 main.go:141] libmachine: (ha-118173-m03) DBG | domain ha-118173-m03 has defined MAC address 52:54:00:d8:90:55 in network mk-ha-118173
	I0806 07:25:51.246286   31793 main.go:141] libmachine: (ha-118173-m03) DBG | unable to find current IP address of domain ha-118173-m03 in network mk-ha-118173
	I0806 07:25:51.246310   31793 main.go:141] libmachine: (ha-118173-m03) DBG | I0806 07:25:51.246235   32846 retry.go:31] will retry after 2.559468863s: waiting for machine to come up
	I0806 07:25:53.806894   31793 main.go:141] libmachine: (ha-118173-m03) DBG | domain ha-118173-m03 has defined MAC address 52:54:00:d8:90:55 in network mk-ha-118173
	I0806 07:25:53.807177   31793 main.go:141] libmachine: (ha-118173-m03) DBG | unable to find current IP address of domain ha-118173-m03 in network mk-ha-118173
	I0806 07:25:53.807203   31793 main.go:141] libmachine: (ha-118173-m03) DBG | I0806 07:25:53.807132   32846 retry.go:31] will retry after 2.791250888s: waiting for machine to come up
	I0806 07:25:56.601795   31793 main.go:141] libmachine: (ha-118173-m03) DBG | domain ha-118173-m03 has defined MAC address 52:54:00:d8:90:55 in network mk-ha-118173
	I0806 07:25:56.602245   31793 main.go:141] libmachine: (ha-118173-m03) DBG | unable to find current IP address of domain ha-118173-m03 in network mk-ha-118173
	I0806 07:25:56.602277   31793 main.go:141] libmachine: (ha-118173-m03) DBG | I0806 07:25:56.602203   32846 retry.go:31] will retry after 3.476094859s: waiting for machine to come up
	I0806 07:26:00.080126   31793 main.go:141] libmachine: (ha-118173-m03) DBG | domain ha-118173-m03 has defined MAC address 52:54:00:d8:90:55 in network mk-ha-118173
	I0806 07:26:00.080656   31793 main.go:141] libmachine: (ha-118173-m03) Found IP for machine: 192.168.39.5
	I0806 07:26:00.080686   31793 main.go:141] libmachine: (ha-118173-m03) DBG | domain ha-118173-m03 has current primary IP address 192.168.39.5 and MAC address 52:54:00:d8:90:55 in network mk-ha-118173
	I0806 07:26:00.080695   31793 main.go:141] libmachine: (ha-118173-m03) Reserving static IP address...
	I0806 07:26:00.081120   31793 main.go:141] libmachine: (ha-118173-m03) DBG | unable to find host DHCP lease matching {name: "ha-118173-m03", mac: "52:54:00:d8:90:55", ip: "192.168.39.5"} in network mk-ha-118173
	I0806 07:26:00.156514   31793 main.go:141] libmachine: (ha-118173-m03) DBG | Getting to WaitForSSH function...
	I0806 07:26:00.156541   31793 main.go:141] libmachine: (ha-118173-m03) Reserved static IP address: 192.168.39.5
	I0806 07:26:00.156587   31793 main.go:141] libmachine: (ha-118173-m03) Waiting for SSH to be available...
	I0806 07:26:00.159715   31793 main.go:141] libmachine: (ha-118173-m03) DBG | domain ha-118173-m03 has defined MAC address 52:54:00:d8:90:55 in network mk-ha-118173
	I0806 07:26:00.160006   31793 main.go:141] libmachine: (ha-118173-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:90:55", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:25:53 +0000 UTC Type:0 Mac:52:54:00:d8:90:55 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:minikube Clientid:01:52:54:00:d8:90:55}
	I0806 07:26:00.160031   31793 main.go:141] libmachine: (ha-118173-m03) DBG | domain ha-118173-m03 has defined IP address 192.168.39.5 and MAC address 52:54:00:d8:90:55 in network mk-ha-118173
	I0806 07:26:00.160229   31793 main.go:141] libmachine: (ha-118173-m03) DBG | Using SSH client type: external
	I0806 07:26:00.160256   31793 main.go:141] libmachine: (ha-118173-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19370-13057/.minikube/machines/ha-118173-m03/id_rsa (-rw-------)
	I0806 07:26:00.160286   31793 main.go:141] libmachine: (ha-118173-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.5 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19370-13057/.minikube/machines/ha-118173-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0806 07:26:00.160302   31793 main.go:141] libmachine: (ha-118173-m03) DBG | About to run SSH command:
	I0806 07:26:00.160319   31793 main.go:141] libmachine: (ha-118173-m03) DBG | exit 0
	I0806 07:26:00.284464   31793 main.go:141] libmachine: (ha-118173-m03) DBG | SSH cmd err, output: <nil>: 
	I0806 07:26:00.284733   31793 main.go:141] libmachine: (ha-118173-m03) KVM machine creation complete!
	I0806 07:26:00.285098   31793 main.go:141] libmachine: (ha-118173-m03) Calling .GetConfigRaw
	I0806 07:26:00.285617   31793 main.go:141] libmachine: (ha-118173-m03) Calling .DriverName
	I0806 07:26:00.285808   31793 main.go:141] libmachine: (ha-118173-m03) Calling .DriverName
	I0806 07:26:00.286024   31793 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0806 07:26:00.286038   31793 main.go:141] libmachine: (ha-118173-m03) Calling .GetState
	I0806 07:26:00.287085   31793 main.go:141] libmachine: Detecting operating system of created instance...
	I0806 07:26:00.287100   31793 main.go:141] libmachine: Waiting for SSH to be available...
	I0806 07:26:00.287108   31793 main.go:141] libmachine: Getting to WaitForSSH function...
	I0806 07:26:00.287115   31793 main.go:141] libmachine: (ha-118173-m03) Calling .GetSSHHostname
	I0806 07:26:00.289837   31793 main.go:141] libmachine: (ha-118173-m03) DBG | domain ha-118173-m03 has defined MAC address 52:54:00:d8:90:55 in network mk-ha-118173
	I0806 07:26:00.290335   31793 main.go:141] libmachine: (ha-118173-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:90:55", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:25:53 +0000 UTC Type:0 Mac:52:54:00:d8:90:55 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-118173-m03 Clientid:01:52:54:00:d8:90:55}
	I0806 07:26:00.290365   31793 main.go:141] libmachine: (ha-118173-m03) DBG | domain ha-118173-m03 has defined IP address 192.168.39.5 and MAC address 52:54:00:d8:90:55 in network mk-ha-118173
	I0806 07:26:00.290528   31793 main.go:141] libmachine: (ha-118173-m03) Calling .GetSSHPort
	I0806 07:26:00.290699   31793 main.go:141] libmachine: (ha-118173-m03) Calling .GetSSHKeyPath
	I0806 07:26:00.290859   31793 main.go:141] libmachine: (ha-118173-m03) Calling .GetSSHKeyPath
	I0806 07:26:00.291008   31793 main.go:141] libmachine: (ha-118173-m03) Calling .GetSSHUsername
	I0806 07:26:00.291214   31793 main.go:141] libmachine: Using SSH client type: native
	I0806 07:26:00.291432   31793 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.5 22 <nil> <nil>}
	I0806 07:26:00.291446   31793 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0806 07:26:00.387933   31793 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0806 07:26:00.387962   31793 main.go:141] libmachine: Detecting the provisioner...
	I0806 07:26:00.387973   31793 main.go:141] libmachine: (ha-118173-m03) Calling .GetSSHHostname
	I0806 07:26:00.390810   31793 main.go:141] libmachine: (ha-118173-m03) DBG | domain ha-118173-m03 has defined MAC address 52:54:00:d8:90:55 in network mk-ha-118173
	I0806 07:26:00.391158   31793 main.go:141] libmachine: (ha-118173-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:90:55", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:25:53 +0000 UTC Type:0 Mac:52:54:00:d8:90:55 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-118173-m03 Clientid:01:52:54:00:d8:90:55}
	I0806 07:26:00.391194   31793 main.go:141] libmachine: (ha-118173-m03) DBG | domain ha-118173-m03 has defined IP address 192.168.39.5 and MAC address 52:54:00:d8:90:55 in network mk-ha-118173
	I0806 07:26:00.391294   31793 main.go:141] libmachine: (ha-118173-m03) Calling .GetSSHPort
	I0806 07:26:00.391521   31793 main.go:141] libmachine: (ha-118173-m03) Calling .GetSSHKeyPath
	I0806 07:26:00.391706   31793 main.go:141] libmachine: (ha-118173-m03) Calling .GetSSHKeyPath
	I0806 07:26:00.391834   31793 main.go:141] libmachine: (ha-118173-m03) Calling .GetSSHUsername
	I0806 07:26:00.392051   31793 main.go:141] libmachine: Using SSH client type: native
	I0806 07:26:00.392233   31793 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.5 22 <nil> <nil>}
	I0806 07:26:00.392245   31793 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0806 07:26:00.489407   31793 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0806 07:26:00.489472   31793 main.go:141] libmachine: found compatible host: buildroot
	I0806 07:26:00.489481   31793 main.go:141] libmachine: Provisioning with buildroot...
	I0806 07:26:00.489489   31793 main.go:141] libmachine: (ha-118173-m03) Calling .GetMachineName
	I0806 07:26:00.489721   31793 buildroot.go:166] provisioning hostname "ha-118173-m03"
	I0806 07:26:00.489744   31793 main.go:141] libmachine: (ha-118173-m03) Calling .GetMachineName
	I0806 07:26:00.489940   31793 main.go:141] libmachine: (ha-118173-m03) Calling .GetSSHHostname
	I0806 07:26:00.492500   31793 main.go:141] libmachine: (ha-118173-m03) DBG | domain ha-118173-m03 has defined MAC address 52:54:00:d8:90:55 in network mk-ha-118173
	I0806 07:26:00.492853   31793 main.go:141] libmachine: (ha-118173-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:90:55", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:25:53 +0000 UTC Type:0 Mac:52:54:00:d8:90:55 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-118173-m03 Clientid:01:52:54:00:d8:90:55}
	I0806 07:26:00.492872   31793 main.go:141] libmachine: (ha-118173-m03) DBG | domain ha-118173-m03 has defined IP address 192.168.39.5 and MAC address 52:54:00:d8:90:55 in network mk-ha-118173
	I0806 07:26:00.493049   31793 main.go:141] libmachine: (ha-118173-m03) Calling .GetSSHPort
	I0806 07:26:00.493228   31793 main.go:141] libmachine: (ha-118173-m03) Calling .GetSSHKeyPath
	I0806 07:26:00.493363   31793 main.go:141] libmachine: (ha-118173-m03) Calling .GetSSHKeyPath
	I0806 07:26:00.493526   31793 main.go:141] libmachine: (ha-118173-m03) Calling .GetSSHUsername
	I0806 07:26:00.493716   31793 main.go:141] libmachine: Using SSH client type: native
	I0806 07:26:00.493865   31793 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.5 22 <nil> <nil>}
	I0806 07:26:00.493877   31793 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-118173-m03 && echo "ha-118173-m03" | sudo tee /etc/hostname
	I0806 07:26:00.608866   31793 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-118173-m03
	
	I0806 07:26:00.608897   31793 main.go:141] libmachine: (ha-118173-m03) Calling .GetSSHHostname
	I0806 07:26:00.611843   31793 main.go:141] libmachine: (ha-118173-m03) DBG | domain ha-118173-m03 has defined MAC address 52:54:00:d8:90:55 in network mk-ha-118173
	I0806 07:26:00.612310   31793 main.go:141] libmachine: (ha-118173-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:90:55", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:25:53 +0000 UTC Type:0 Mac:52:54:00:d8:90:55 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-118173-m03 Clientid:01:52:54:00:d8:90:55}
	I0806 07:26:00.612336   31793 main.go:141] libmachine: (ha-118173-m03) DBG | domain ha-118173-m03 has defined IP address 192.168.39.5 and MAC address 52:54:00:d8:90:55 in network mk-ha-118173
	I0806 07:26:00.612542   31793 main.go:141] libmachine: (ha-118173-m03) Calling .GetSSHPort
	I0806 07:26:00.612712   31793 main.go:141] libmachine: (ha-118173-m03) Calling .GetSSHKeyPath
	I0806 07:26:00.612948   31793 main.go:141] libmachine: (ha-118173-m03) Calling .GetSSHKeyPath
	I0806 07:26:00.613125   31793 main.go:141] libmachine: (ha-118173-m03) Calling .GetSSHUsername
	I0806 07:26:00.613329   31793 main.go:141] libmachine: Using SSH client type: native
	I0806 07:26:00.613538   31793 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.5 22 <nil> <nil>}
	I0806 07:26:00.613563   31793 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-118173-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-118173-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-118173-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0806 07:26:00.729098   31793 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0806 07:26:00.729126   31793 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19370-13057/.minikube CaCertPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19370-13057/.minikube}
	I0806 07:26:00.729148   31793 buildroot.go:174] setting up certificates
	I0806 07:26:00.729158   31793 provision.go:84] configureAuth start
	I0806 07:26:00.729167   31793 main.go:141] libmachine: (ha-118173-m03) Calling .GetMachineName
	I0806 07:26:00.729449   31793 main.go:141] libmachine: (ha-118173-m03) Calling .GetIP
	I0806 07:26:00.731911   31793 main.go:141] libmachine: (ha-118173-m03) DBG | domain ha-118173-m03 has defined MAC address 52:54:00:d8:90:55 in network mk-ha-118173
	I0806 07:26:00.732347   31793 main.go:141] libmachine: (ha-118173-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:90:55", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:25:53 +0000 UTC Type:0 Mac:52:54:00:d8:90:55 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-118173-m03 Clientid:01:52:54:00:d8:90:55}
	I0806 07:26:00.732404   31793 main.go:141] libmachine: (ha-118173-m03) DBG | domain ha-118173-m03 has defined IP address 192.168.39.5 and MAC address 52:54:00:d8:90:55 in network mk-ha-118173
	I0806 07:26:00.732591   31793 main.go:141] libmachine: (ha-118173-m03) Calling .GetSSHHostname
	I0806 07:26:00.734978   31793 main.go:141] libmachine: (ha-118173-m03) DBG | domain ha-118173-m03 has defined MAC address 52:54:00:d8:90:55 in network mk-ha-118173
	I0806 07:26:00.735326   31793 main.go:141] libmachine: (ha-118173-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:90:55", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:25:53 +0000 UTC Type:0 Mac:52:54:00:d8:90:55 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-118173-m03 Clientid:01:52:54:00:d8:90:55}
	I0806 07:26:00.735348   31793 main.go:141] libmachine: (ha-118173-m03) DBG | domain ha-118173-m03 has defined IP address 192.168.39.5 and MAC address 52:54:00:d8:90:55 in network mk-ha-118173
	I0806 07:26:00.735575   31793 provision.go:143] copyHostCerts
	I0806 07:26:00.735613   31793 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19370-13057/.minikube/key.pem
	I0806 07:26:00.735647   31793 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-13057/.minikube/key.pem, removing ...
	I0806 07:26:00.735657   31793 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-13057/.minikube/key.pem
	I0806 07:26:00.735722   31793 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19370-13057/.minikube/key.pem (1675 bytes)
	I0806 07:26:00.735799   31793 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19370-13057/.minikube/ca.pem
	I0806 07:26:00.735817   31793 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-13057/.minikube/ca.pem, removing ...
	I0806 07:26:00.735823   31793 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-13057/.minikube/ca.pem
	I0806 07:26:00.735848   31793 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19370-13057/.minikube/ca.pem (1078 bytes)
	I0806 07:26:00.735894   31793 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19370-13057/.minikube/cert.pem
	I0806 07:26:00.735910   31793 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-13057/.minikube/cert.pem, removing ...
	I0806 07:26:00.735917   31793 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-13057/.minikube/cert.pem
	I0806 07:26:00.735937   31793 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19370-13057/.minikube/cert.pem (1123 bytes)
	I0806 07:26:00.735986   31793 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19370-13057/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca-key.pem org=jenkins.ha-118173-m03 san=[127.0.0.1 192.168.39.5 ha-118173-m03 localhost minikube]
	I0806 07:26:00.984026   31793 provision.go:177] copyRemoteCerts
	I0806 07:26:00.984092   31793 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0806 07:26:00.984121   31793 main.go:141] libmachine: (ha-118173-m03) Calling .GetSSHHostname
	I0806 07:26:00.986810   31793 main.go:141] libmachine: (ha-118173-m03) DBG | domain ha-118173-m03 has defined MAC address 52:54:00:d8:90:55 in network mk-ha-118173
	I0806 07:26:00.987181   31793 main.go:141] libmachine: (ha-118173-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:90:55", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:25:53 +0000 UTC Type:0 Mac:52:54:00:d8:90:55 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-118173-m03 Clientid:01:52:54:00:d8:90:55}
	I0806 07:26:00.987206   31793 main.go:141] libmachine: (ha-118173-m03) DBG | domain ha-118173-m03 has defined IP address 192.168.39.5 and MAC address 52:54:00:d8:90:55 in network mk-ha-118173
	I0806 07:26:00.987324   31793 main.go:141] libmachine: (ha-118173-m03) Calling .GetSSHPort
	I0806 07:26:00.987524   31793 main.go:141] libmachine: (ha-118173-m03) Calling .GetSSHKeyPath
	I0806 07:26:00.987692   31793 main.go:141] libmachine: (ha-118173-m03) Calling .GetSSHUsername
	I0806 07:26:00.987864   31793 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/ha-118173-m03/id_rsa Username:docker}
	I0806 07:26:01.067777   31793 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0806 07:26:01.067854   31793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0806 07:26:01.094064   31793 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0806 07:26:01.094141   31793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0806 07:26:01.120123   31793 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0806 07:26:01.120212   31793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0806 07:26:01.150115   31793 provision.go:87] duration metric: took 420.942527ms to configureAuth
	I0806 07:26:01.150151   31793 buildroot.go:189] setting minikube options for container-runtime
	I0806 07:26:01.150461   31793 config.go:182] Loaded profile config "ha-118173": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0806 07:26:01.150558   31793 main.go:141] libmachine: (ha-118173-m03) Calling .GetSSHHostname
	I0806 07:26:01.153106   31793 main.go:141] libmachine: (ha-118173-m03) DBG | domain ha-118173-m03 has defined MAC address 52:54:00:d8:90:55 in network mk-ha-118173
	I0806 07:26:01.153396   31793 main.go:141] libmachine: (ha-118173-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:90:55", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:25:53 +0000 UTC Type:0 Mac:52:54:00:d8:90:55 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-118173-m03 Clientid:01:52:54:00:d8:90:55}
	I0806 07:26:01.153422   31793 main.go:141] libmachine: (ha-118173-m03) DBG | domain ha-118173-m03 has defined IP address 192.168.39.5 and MAC address 52:54:00:d8:90:55 in network mk-ha-118173
	I0806 07:26:01.153643   31793 main.go:141] libmachine: (ha-118173-m03) Calling .GetSSHPort
	I0806 07:26:01.153848   31793 main.go:141] libmachine: (ha-118173-m03) Calling .GetSSHKeyPath
	I0806 07:26:01.154014   31793 main.go:141] libmachine: (ha-118173-m03) Calling .GetSSHKeyPath
	I0806 07:26:01.154160   31793 main.go:141] libmachine: (ha-118173-m03) Calling .GetSSHUsername
	I0806 07:26:01.154305   31793 main.go:141] libmachine: Using SSH client type: native
	I0806 07:26:01.154543   31793 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.5 22 <nil> <nil>}
	I0806 07:26:01.154565   31793 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0806 07:26:01.421958   31793 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0806 07:26:01.421989   31793 main.go:141] libmachine: Checking connection to Docker...
	I0806 07:26:01.421997   31793 main.go:141] libmachine: (ha-118173-m03) Calling .GetURL
	I0806 07:26:01.423367   31793 main.go:141] libmachine: (ha-118173-m03) DBG | Using libvirt version 6000000
	I0806 07:26:01.425864   31793 main.go:141] libmachine: (ha-118173-m03) DBG | domain ha-118173-m03 has defined MAC address 52:54:00:d8:90:55 in network mk-ha-118173
	I0806 07:26:01.426219   31793 main.go:141] libmachine: (ha-118173-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:90:55", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:25:53 +0000 UTC Type:0 Mac:52:54:00:d8:90:55 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-118173-m03 Clientid:01:52:54:00:d8:90:55}
	I0806 07:26:01.426236   31793 main.go:141] libmachine: (ha-118173-m03) DBG | domain ha-118173-m03 has defined IP address 192.168.39.5 and MAC address 52:54:00:d8:90:55 in network mk-ha-118173
	I0806 07:26:01.426448   31793 main.go:141] libmachine: Docker is up and running!
	I0806 07:26:01.426466   31793 main.go:141] libmachine: Reticulating splines...
	I0806 07:26:01.426475   31793 client.go:171] duration metric: took 23.164820044s to LocalClient.Create
	I0806 07:26:01.426502   31793 start.go:167] duration metric: took 23.164903305s to libmachine.API.Create "ha-118173"
	I0806 07:26:01.426513   31793 start.go:293] postStartSetup for "ha-118173-m03" (driver="kvm2")
	I0806 07:26:01.426529   31793 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0806 07:26:01.426550   31793 main.go:141] libmachine: (ha-118173-m03) Calling .DriverName
	I0806 07:26:01.426769   31793 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0806 07:26:01.426796   31793 main.go:141] libmachine: (ha-118173-m03) Calling .GetSSHHostname
	I0806 07:26:01.429026   31793 main.go:141] libmachine: (ha-118173-m03) DBG | domain ha-118173-m03 has defined MAC address 52:54:00:d8:90:55 in network mk-ha-118173
	I0806 07:26:01.429415   31793 main.go:141] libmachine: (ha-118173-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:90:55", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:25:53 +0000 UTC Type:0 Mac:52:54:00:d8:90:55 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-118173-m03 Clientid:01:52:54:00:d8:90:55}
	I0806 07:26:01.429442   31793 main.go:141] libmachine: (ha-118173-m03) DBG | domain ha-118173-m03 has defined IP address 192.168.39.5 and MAC address 52:54:00:d8:90:55 in network mk-ha-118173
	I0806 07:26:01.429570   31793 main.go:141] libmachine: (ha-118173-m03) Calling .GetSSHPort
	I0806 07:26:01.429896   31793 main.go:141] libmachine: (ha-118173-m03) Calling .GetSSHKeyPath
	I0806 07:26:01.430061   31793 main.go:141] libmachine: (ha-118173-m03) Calling .GetSSHUsername
	I0806 07:26:01.430206   31793 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/ha-118173-m03/id_rsa Username:docker}
	I0806 07:26:01.506828   31793 ssh_runner.go:195] Run: cat /etc/os-release
	I0806 07:26:01.511729   31793 info.go:137] Remote host: Buildroot 2023.02.9
	I0806 07:26:01.511756   31793 filesync.go:126] Scanning /home/jenkins/minikube-integration/19370-13057/.minikube/addons for local assets ...
	I0806 07:26:01.511837   31793 filesync.go:126] Scanning /home/jenkins/minikube-integration/19370-13057/.minikube/files for local assets ...
	I0806 07:26:01.511933   31793 filesync.go:149] local asset: /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem -> 202462.pem in /etc/ssl/certs
	I0806 07:26:01.511943   31793 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem -> /etc/ssl/certs/202462.pem
	I0806 07:26:01.512024   31793 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0806 07:26:01.522139   31793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem --> /etc/ssl/certs/202462.pem (1708 bytes)
	I0806 07:26:01.546443   31793 start.go:296] duration metric: took 119.913363ms for postStartSetup
	I0806 07:26:01.546501   31793 main.go:141] libmachine: (ha-118173-m03) Calling .GetConfigRaw
	I0806 07:26:01.547117   31793 main.go:141] libmachine: (ha-118173-m03) Calling .GetIP
	I0806 07:26:01.549806   31793 main.go:141] libmachine: (ha-118173-m03) DBG | domain ha-118173-m03 has defined MAC address 52:54:00:d8:90:55 in network mk-ha-118173
	I0806 07:26:01.550193   31793 main.go:141] libmachine: (ha-118173-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:90:55", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:25:53 +0000 UTC Type:0 Mac:52:54:00:d8:90:55 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-118173-m03 Clientid:01:52:54:00:d8:90:55}
	I0806 07:26:01.550218   31793 main.go:141] libmachine: (ha-118173-m03) DBG | domain ha-118173-m03 has defined IP address 192.168.39.5 and MAC address 52:54:00:d8:90:55 in network mk-ha-118173
	I0806 07:26:01.550469   31793 profile.go:143] Saving config to /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/config.json ...
	I0806 07:26:01.550702   31793 start.go:128] duration metric: took 23.306828362s to createHost
	I0806 07:26:01.550729   31793 main.go:141] libmachine: (ha-118173-m03) Calling .GetSSHHostname
	I0806 07:26:01.552946   31793 main.go:141] libmachine: (ha-118173-m03) DBG | domain ha-118173-m03 has defined MAC address 52:54:00:d8:90:55 in network mk-ha-118173
	I0806 07:26:01.553298   31793 main.go:141] libmachine: (ha-118173-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:90:55", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:25:53 +0000 UTC Type:0 Mac:52:54:00:d8:90:55 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-118173-m03 Clientid:01:52:54:00:d8:90:55}
	I0806 07:26:01.553322   31793 main.go:141] libmachine: (ha-118173-m03) DBG | domain ha-118173-m03 has defined IP address 192.168.39.5 and MAC address 52:54:00:d8:90:55 in network mk-ha-118173
	I0806 07:26:01.553499   31793 main.go:141] libmachine: (ha-118173-m03) Calling .GetSSHPort
	I0806 07:26:01.553674   31793 main.go:141] libmachine: (ha-118173-m03) Calling .GetSSHKeyPath
	I0806 07:26:01.553853   31793 main.go:141] libmachine: (ha-118173-m03) Calling .GetSSHKeyPath
	I0806 07:26:01.554007   31793 main.go:141] libmachine: (ha-118173-m03) Calling .GetSSHUsername
	I0806 07:26:01.554208   31793 main.go:141] libmachine: Using SSH client type: native
	I0806 07:26:01.554358   31793 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.5 22 <nil> <nil>}
	I0806 07:26:01.554368   31793 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0806 07:26:01.653350   31793 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722929161.626097717
	
	I0806 07:26:01.653375   31793 fix.go:216] guest clock: 1722929161.626097717
	I0806 07:26:01.653386   31793 fix.go:229] Guest: 2024-08-06 07:26:01.626097717 +0000 UTC Remote: 2024-08-06 07:26:01.550715976 +0000 UTC m=+222.418454735 (delta=75.381741ms)
	I0806 07:26:01.653402   31793 fix.go:200] guest clock delta is within tolerance: 75.381741ms
	I0806 07:26:01.653407   31793 start.go:83] releasing machines lock for "ha-118173-m03", held for 23.409638424s
	I0806 07:26:01.653427   31793 main.go:141] libmachine: (ha-118173-m03) Calling .DriverName
	I0806 07:26:01.653708   31793 main.go:141] libmachine: (ha-118173-m03) Calling .GetIP
	I0806 07:26:01.656171   31793 main.go:141] libmachine: (ha-118173-m03) DBG | domain ha-118173-m03 has defined MAC address 52:54:00:d8:90:55 in network mk-ha-118173
	I0806 07:26:01.656521   31793 main.go:141] libmachine: (ha-118173-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:90:55", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:25:53 +0000 UTC Type:0 Mac:52:54:00:d8:90:55 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-118173-m03 Clientid:01:52:54:00:d8:90:55}
	I0806 07:26:01.656565   31793 main.go:141] libmachine: (ha-118173-m03) DBG | domain ha-118173-m03 has defined IP address 192.168.39.5 and MAC address 52:54:00:d8:90:55 in network mk-ha-118173
	I0806 07:26:01.658586   31793 out.go:177] * Found network options:
	I0806 07:26:01.659857   31793 out.go:177]   - NO_PROXY=192.168.39.164,192.168.39.203
	W0806 07:26:01.661537   31793 proxy.go:119] fail to check proxy env: Error ip not in block
	W0806 07:26:01.661563   31793 proxy.go:119] fail to check proxy env: Error ip not in block
	I0806 07:26:01.661580   31793 main.go:141] libmachine: (ha-118173-m03) Calling .DriverName
	I0806 07:26:01.662093   31793 main.go:141] libmachine: (ha-118173-m03) Calling .DriverName
	I0806 07:26:01.662244   31793 main.go:141] libmachine: (ha-118173-m03) Calling .DriverName
	I0806 07:26:01.662317   31793 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0806 07:26:01.662350   31793 main.go:141] libmachine: (ha-118173-m03) Calling .GetSSHHostname
	W0806 07:26:01.662409   31793 proxy.go:119] fail to check proxy env: Error ip not in block
	W0806 07:26:01.662432   31793 proxy.go:119] fail to check proxy env: Error ip not in block
	I0806 07:26:01.662499   31793 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0806 07:26:01.662520   31793 main.go:141] libmachine: (ha-118173-m03) Calling .GetSSHHostname
	I0806 07:26:01.665281   31793 main.go:141] libmachine: (ha-118173-m03) DBG | domain ha-118173-m03 has defined MAC address 52:54:00:d8:90:55 in network mk-ha-118173
	I0806 07:26:01.665505   31793 main.go:141] libmachine: (ha-118173-m03) DBG | domain ha-118173-m03 has defined MAC address 52:54:00:d8:90:55 in network mk-ha-118173
	I0806 07:26:01.665754   31793 main.go:141] libmachine: (ha-118173-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:90:55", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:25:53 +0000 UTC Type:0 Mac:52:54:00:d8:90:55 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-118173-m03 Clientid:01:52:54:00:d8:90:55}
	I0806 07:26:01.665781   31793 main.go:141] libmachine: (ha-118173-m03) DBG | domain ha-118173-m03 has defined IP address 192.168.39.5 and MAC address 52:54:00:d8:90:55 in network mk-ha-118173
	I0806 07:26:01.665953   31793 main.go:141] libmachine: (ha-118173-m03) Calling .GetSSHPort
	I0806 07:26:01.666073   31793 main.go:141] libmachine: (ha-118173-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:90:55", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:25:53 +0000 UTC Type:0 Mac:52:54:00:d8:90:55 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-118173-m03 Clientid:01:52:54:00:d8:90:55}
	I0806 07:26:01.666098   31793 main.go:141] libmachine: (ha-118173-m03) DBG | domain ha-118173-m03 has defined IP address 192.168.39.5 and MAC address 52:54:00:d8:90:55 in network mk-ha-118173
	I0806 07:26:01.666114   31793 main.go:141] libmachine: (ha-118173-m03) Calling .GetSSHKeyPath
	I0806 07:26:01.666221   31793 main.go:141] libmachine: (ha-118173-m03) Calling .GetSSHPort
	I0806 07:26:01.666300   31793 main.go:141] libmachine: (ha-118173-m03) Calling .GetSSHUsername
	I0806 07:26:01.666372   31793 main.go:141] libmachine: (ha-118173-m03) Calling .GetSSHKeyPath
	I0806 07:26:01.666457   31793 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/ha-118173-m03/id_rsa Username:docker}
	I0806 07:26:01.666542   31793 main.go:141] libmachine: (ha-118173-m03) Calling .GetSSHUsername
	I0806 07:26:01.666767   31793 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/ha-118173-m03/id_rsa Username:docker}
	I0806 07:26:01.900625   31793 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0806 07:26:01.907540   31793 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0806 07:26:01.907612   31793 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0806 07:26:01.923703   31793 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0806 07:26:01.923735   31793 start.go:495] detecting cgroup driver to use...
	I0806 07:26:01.923802   31793 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0806 07:26:01.942642   31793 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0806 07:26:01.959561   31793 docker.go:217] disabling cri-docker service (if available) ...
	I0806 07:26:01.959620   31793 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0806 07:26:01.976256   31793 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0806 07:26:01.991578   31793 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0806 07:26:02.112813   31793 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0806 07:26:02.295165   31793 docker.go:233] disabling docker service ...
	I0806 07:26:02.295252   31793 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0806 07:26:02.310496   31793 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0806 07:26:02.323908   31793 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0806 07:26:02.457283   31793 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0806 07:26:02.595447   31793 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0806 07:26:02.610752   31793 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0806 07:26:02.630674   31793 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0806 07:26:02.630747   31793 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 07:26:02.642400   31793 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0806 07:26:02.642475   31793 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 07:26:02.655832   31793 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 07:26:02.667712   31793 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 07:26:02.678799   31793 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0806 07:26:02.690213   31793 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 07:26:02.701911   31793 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 07:26:02.720289   31793 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 07:26:02.731273   31793 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0806 07:26:02.741656   31793 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0806 07:26:02.741716   31793 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0806 07:26:02.755714   31793 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0806 07:26:02.766032   31793 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 07:26:02.889089   31793 ssh_runner.go:195] Run: sudo systemctl restart crio
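The commands above switch CRI-O on the new node to the cgroupfs cgroup manager and the registry.k8s.io/pause:3.9 pause image before restarting the service. A minimal manual equivalent, using the same drop-in file the log edits (a sketch, not the test harness itself):

    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo systemctl daemon-reload && sudo systemctl restart crio
    # confirm the values actually present in the drop-in
    grep -E 'pause_image|cgroup_manager' /etc/crio/crio.conf.d/02-crio.conf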
	I0806 07:26:03.035858   31793 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0806 07:26:03.035945   31793 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0806 07:26:03.040995   31793 start.go:563] Will wait 60s for crictl version
	I0806 07:26:03.041065   31793 ssh_runner.go:195] Run: which crictl
	I0806 07:26:03.045396   31793 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0806 07:26:03.085950   31793 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0806 07:26:03.086031   31793 ssh_runner.go:195] Run: crio --version
	I0806 07:26:03.120483   31793 ssh_runner.go:195] Run: crio --version
	I0806 07:26:03.155295   31793 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0806 07:26:03.156976   31793 out.go:177]   - env NO_PROXY=192.168.39.164
	I0806 07:26:03.158491   31793 out.go:177]   - env NO_PROXY=192.168.39.164,192.168.39.203
	I0806 07:26:03.160002   31793 main.go:141] libmachine: (ha-118173-m03) Calling .GetIP
	I0806 07:26:03.163210   31793 main.go:141] libmachine: (ha-118173-m03) DBG | domain ha-118173-m03 has defined MAC address 52:54:00:d8:90:55 in network mk-ha-118173
	I0806 07:26:03.163624   31793 main.go:141] libmachine: (ha-118173-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:90:55", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:25:53 +0000 UTC Type:0 Mac:52:54:00:d8:90:55 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-118173-m03 Clientid:01:52:54:00:d8:90:55}
	I0806 07:26:03.163652   31793 main.go:141] libmachine: (ha-118173-m03) DBG | domain ha-118173-m03 has defined IP address 192.168.39.5 and MAC address 52:54:00:d8:90:55 in network mk-ha-118173
	I0806 07:26:03.163939   31793 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0806 07:26:03.168887   31793 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0806 07:26:03.183278   31793 mustload.go:65] Loading cluster: ha-118173
	I0806 07:26:03.183505   31793 config.go:182] Loaded profile config "ha-118173": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0806 07:26:03.183798   31793 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:26:03.183840   31793 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:26:03.199392   31793 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39879
	I0806 07:26:03.199825   31793 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:26:03.200334   31793 main.go:141] libmachine: Using API Version  1
	I0806 07:26:03.200356   31793 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:26:03.200667   31793 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:26:03.200899   31793 main.go:141] libmachine: (ha-118173) Calling .GetState
	I0806 07:26:03.202517   31793 host.go:66] Checking if "ha-118173" exists ...
	I0806 07:26:03.202859   31793 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:26:03.202901   31793 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:26:03.217607   31793 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41951
	I0806 07:26:03.218077   31793 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:26:03.218543   31793 main.go:141] libmachine: Using API Version  1
	I0806 07:26:03.218572   31793 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:26:03.218849   31793 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:26:03.219022   31793 main.go:141] libmachine: (ha-118173) Calling .DriverName
	I0806 07:26:03.219170   31793 certs.go:68] Setting up /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173 for IP: 192.168.39.5
	I0806 07:26:03.219180   31793 certs.go:194] generating shared ca certs ...
	I0806 07:26:03.219200   31793 certs.go:226] acquiring lock for ca certs: {Name:mkf5c5aed91f4108054d9f2c742cad66c86afbed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 07:26:03.219352   31793 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19370-13057/.minikube/ca.key
	I0806 07:26:03.219405   31793 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.key
	I0806 07:26:03.219418   31793 certs.go:256] generating profile certs ...
	I0806 07:26:03.219512   31793 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/client.key
	I0806 07:26:03.219545   31793 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/apiserver.key.118ed01f
	I0806 07:26:03.219565   31793 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/apiserver.crt.118ed01f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.164 192.168.39.203 192.168.39.5 192.168.39.254]
	I0806 07:26:03.790973   31793 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/apiserver.crt.118ed01f ...
	I0806 07:26:03.791005   31793 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/apiserver.crt.118ed01f: {Name:mk160d5b503440707e900fe943ebfd12722f0369 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 07:26:03.791193   31793 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/apiserver.key.118ed01f ...
	I0806 07:26:03.791209   31793 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/apiserver.key.118ed01f: {Name:mk29b8e58ad4c290b5a99b264ef4e8d778b897a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 07:26:03.791577   31793 certs.go:381] copying /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/apiserver.crt.118ed01f -> /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/apiserver.crt
	I0806 07:26:03.791742   31793 certs.go:385] copying /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/apiserver.key.118ed01f -> /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/apiserver.key
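The profile's apiserver certificate is re-signed here so that its SAN list covers every control-plane IP plus the HA VIP 192.168.39.254. A quick way to inspect the SANs on the generated file (a sketch using plain openssl; the path is the one reported above):

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/apiserver.crt \
      | grep -A1 'Subject Alternative Name'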
	I0806 07:26:03.791922   31793 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/proxy-client.key
	I0806 07:26:03.791938   31793 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0806 07:26:03.791956   31793 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0806 07:26:03.791972   31793 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0806 07:26:03.791987   31793 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0806 07:26:03.792002   31793 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0806 07:26:03.792016   31793 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0806 07:26:03.792035   31793 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0806 07:26:03.792051   31793 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0806 07:26:03.792140   31793 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/20246.pem (1338 bytes)
	W0806 07:26:03.792174   31793 certs.go:480] ignoring /home/jenkins/minikube-integration/19370-13057/.minikube/certs/20246_empty.pem, impossibly tiny 0 bytes
	I0806 07:26:03.792184   31793 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca-key.pem (1675 bytes)
	I0806 07:26:03.792212   31793 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem (1078 bytes)
	I0806 07:26:03.792256   31793 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/cert.pem (1123 bytes)
	I0806 07:26:03.792287   31793 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/key.pem (1675 bytes)
	I0806 07:26:03.792337   31793 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem (1708 bytes)
	I0806 07:26:03.792395   31793 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/20246.pem -> /usr/share/ca-certificates/20246.pem
	I0806 07:26:03.792419   31793 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem -> /usr/share/ca-certificates/202462.pem
	I0806 07:26:03.792434   31793 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0806 07:26:03.792472   31793 main.go:141] libmachine: (ha-118173) Calling .GetSSHHostname
	I0806 07:26:03.795656   31793 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:26:03.796025   31793 main.go:141] libmachine: (ha-118173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:df:f0", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:22:33 +0000 UTC Type:0 Mac:52:54:00:ef:df:f0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:ha-118173 Clientid:01:52:54:00:ef:df:f0}
	I0806 07:26:03.796052   31793 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined IP address 192.168.39.164 and MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:26:03.796266   31793 main.go:141] libmachine: (ha-118173) Calling .GetSSHPort
	I0806 07:26:03.796517   31793 main.go:141] libmachine: (ha-118173) Calling .GetSSHKeyPath
	I0806 07:26:03.796716   31793 main.go:141] libmachine: (ha-118173) Calling .GetSSHUsername
	I0806 07:26:03.796892   31793 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/ha-118173/id_rsa Username:docker}
	I0806 07:26:03.868736   31793 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0806 07:26:03.874200   31793 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0806 07:26:03.886613   31793 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0806 07:26:03.891395   31793 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0806 07:26:03.902189   31793 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0806 07:26:03.906694   31793 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0806 07:26:03.918181   31793 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0806 07:26:03.923501   31793 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0806 07:26:03.933783   31793 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0806 07:26:03.938009   31793 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0806 07:26:03.948897   31793 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0806 07:26:03.953325   31793 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0806 07:26:03.973911   31793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0806 07:26:04.000150   31793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0806 07:26:04.027442   31793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0806 07:26:04.054785   31793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0806 07:26:04.085177   31793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0806 07:26:04.109453   31793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0806 07:26:04.133341   31793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0806 07:26:04.159382   31793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0806 07:26:04.187603   31793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/certs/20246.pem --> /usr/share/ca-certificates/20246.pem (1338 bytes)
	I0806 07:26:04.213249   31793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem --> /usr/share/ca-certificates/202462.pem (1708 bytes)
	I0806 07:26:04.238384   31793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0806 07:26:04.263823   31793 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0806 07:26:04.283177   31793 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0806 07:26:04.301534   31793 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0806 07:26:04.318498   31793 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0806 07:26:04.336549   31793 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0806 07:26:04.355271   31793 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0806 07:26:04.375707   31793 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0806 07:26:04.394842   31793 ssh_runner.go:195] Run: openssl version
	I0806 07:26:04.401153   31793 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20246.pem && ln -fs /usr/share/ca-certificates/20246.pem /etc/ssl/certs/20246.pem"
	I0806 07:26:04.413407   31793 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20246.pem
	I0806 07:26:04.418337   31793 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  6 07:17 /usr/share/ca-certificates/20246.pem
	I0806 07:26:04.418392   31793 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20246.pem
	I0806 07:26:04.424880   31793 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20246.pem /etc/ssl/certs/51391683.0"
	I0806 07:26:04.437017   31793 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202462.pem && ln -fs /usr/share/ca-certificates/202462.pem /etc/ssl/certs/202462.pem"
	I0806 07:26:04.449104   31793 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202462.pem
	I0806 07:26:04.454119   31793 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  6 07:17 /usr/share/ca-certificates/202462.pem
	I0806 07:26:04.454176   31793 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202462.pem
	I0806 07:26:04.460183   31793 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202462.pem /etc/ssl/certs/3ec20f2e.0"
	I0806 07:26:04.474665   31793 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0806 07:26:04.486694   31793 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0806 07:26:04.491177   31793 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  6 07:07 /usr/share/ca-certificates/minikubeCA.pem
	I0806 07:26:04.491232   31793 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0806 07:26:04.497002   31793 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
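The /etc/ssl/certs symlink names used above (51391683.0, 3ec20f2e.0, b5213941.0) are the OpenSSL subject hashes of the respective CA files, which is exactly what the openssl x509 -hash invocations compute. Reproducing one of the links by hand (a sketch):

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0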
	I0806 07:26:04.508071   31793 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0806 07:26:04.512272   31793 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0806 07:26:04.512326   31793 kubeadm.go:934] updating node {m03 192.168.39.5 8443 v1.30.3 crio true true} ...
	I0806 07:26:04.512448   31793 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-118173-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-118173 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0806 07:26:04.512481   31793 kube-vip.go:115] generating kube-vip config ...
	I0806 07:26:04.512538   31793 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0806 07:26:04.530021   31793 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0806 07:26:04.530095   31793 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
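This manifest is later written to /etc/kubernetes/manifests/kube-vip.yaml (see the scp below), so the kubelet runs kube-vip as a static pod that announces the control-plane VIP 192.168.39.254 on eth0 and load-balances port 8443. Two quick checks once the node is up (a sketch, assuming SSH access to the guest):

    ip -4 addr show dev eth0 | grep 192.168.39.254   # VIP is held by the current kube-vip leader
    sudo crictl ps --name kube-vip                   # static pod container is running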
	I0806 07:26:04.530149   31793 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0806 07:26:04.540726   31793 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0806 07:26:04.540775   31793 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0806 07:26:04.551763   31793 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256
	I0806 07:26:04.551783   31793 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256
	I0806 07:26:04.551792   31793 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/cache/linux/amd64/v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0806 07:26:04.551783   31793 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256
	I0806 07:26:04.551832   31793 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0806 07:26:04.551842   31793 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/cache/linux/amd64/v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0806 07:26:04.551873   31793 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0806 07:26:04.551923   31793 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0806 07:26:04.568440   31793 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/cache/linux/amd64/v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0806 07:26:04.568472   31793 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0806 07:26:04.568502   31793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/cache/linux/amd64/v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (50249880 bytes)
	I0806 07:26:04.568524   31793 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0806 07:26:04.568549   31793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/cache/linux/amd64/v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (51454104 bytes)
	I0806 07:26:04.568559   31793 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0806 07:26:04.588276   31793 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0806 07:26:04.588309   31793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/cache/linux/amd64/v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (100125080 bytes)
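The kubeadm, kubectl and kubelet binaries for v1.30.3 are fetched via the dl.k8s.io URLs logged above, verified against the published sha256 files named in those URLs, and copied into /var/lib/minikube/binaries/v1.30.3 on the guest. A minimal manual equivalent for one of them (a sketch, not the harness code):

    curl -LO https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet
    curl -LO https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256
    echo "$(cat kubelet.sha256)  kubelet" | sha256sum --check
    sudo install -m 0755 kubelet /var/lib/minikube/binaries/v1.30.3/kubelet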
	I0806 07:26:05.487753   31793 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0806 07:26:05.497735   31793 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0806 07:26:05.516951   31793 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0806 07:26:05.534727   31793 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0806 07:26:05.552747   31793 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0806 07:26:05.557382   31793 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0806 07:26:05.570646   31793 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 07:26:05.695533   31793 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0806 07:26:05.713632   31793 host.go:66] Checking if "ha-118173" exists ...
	I0806 07:26:05.714151   31793 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:26:05.714203   31793 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:26:05.730480   31793 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35975
	I0806 07:26:05.730971   31793 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:26:05.731548   31793 main.go:141] libmachine: Using API Version  1
	I0806 07:26:05.731573   31793 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:26:05.731912   31793 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:26:05.732099   31793 main.go:141] libmachine: (ha-118173) Calling .DriverName
	I0806 07:26:05.732224   31793 start.go:317] joinCluster: &{Name:ha-118173 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-118173 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.164 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.203 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 07:26:05.732350   31793 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0806 07:26:05.732385   31793 main.go:141] libmachine: (ha-118173) Calling .GetSSHHostname
	I0806 07:26:05.735411   31793 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:26:05.735896   31793 main.go:141] libmachine: (ha-118173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:df:f0", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:22:33 +0000 UTC Type:0 Mac:52:54:00:ef:df:f0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:ha-118173 Clientid:01:52:54:00:ef:df:f0}
	I0806 07:26:05.735921   31793 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined IP address 192.168.39.164 and MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:26:05.736112   31793 main.go:141] libmachine: (ha-118173) Calling .GetSSHPort
	I0806 07:26:05.736295   31793 main.go:141] libmachine: (ha-118173) Calling .GetSSHKeyPath
	I0806 07:26:05.736453   31793 main.go:141] libmachine: (ha-118173) Calling .GetSSHUsername
	I0806 07:26:05.736590   31793 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/ha-118173/id_rsa Username:docker}
	I0806 07:26:05.903963   31793 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0806 07:26:05.904008   31793 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token g3wtqj.q988fugfbkq4crnx --discovery-token-ca-cert-hash sha256:d36e5c1dfe989c7d561c809d7c06e0fc13926ac8f6d664cc5d6865ea5f79931b --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-118173-m03 --control-plane --apiserver-advertise-address=192.168.39.5 --apiserver-bind-port=8443"
	I0806 07:26:29.635900   31793 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token g3wtqj.q988fugfbkq4crnx --discovery-token-ca-cert-hash sha256:d36e5c1dfe989c7d561c809d7c06e0fc13926ac8f6d664cc5d6865ea5f79931b --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-118173-m03 --control-plane --apiserver-advertise-address=192.168.39.5 --apiserver-bind-port=8443": (23.731845764s)
	I0806 07:26:29.635941   31793 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0806 07:26:30.248222   31793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-118173-m03 minikube.k8s.io/updated_at=2024_08_06T07_26_30_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=e92cb06692f5ea1ba801d10d148e5e92e807f9c8 minikube.k8s.io/name=ha-118173 minikube.k8s.io/primary=false
	I0806 07:26:30.395083   31793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-118173-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0806 07:26:30.517574   31793 start.go:319] duration metric: took 24.785347039s to joinCluster
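The join above is the standard kubeadm flow for adding an extra control-plane member: a bootstrap token plus discovery hash is minted on the existing control plane and consumed on the new node; note that minikube distributes the control-plane certificates over scp beforehand (see the cert section above) rather than relying on kubeadm's --certificate-key upload. In hand-run form (a sketch; the real token and hash are the ones printed by the first command):

    # on an existing control-plane node
    sudo kubeadm token create --print-join-command
    # on the new node, adding the control-plane specific flags used above
    sudo kubeadm join control-plane.minikube.internal:8443 \
      --token <token> --discovery-token-ca-cert-hash sha256:<hash> \
      --control-plane --apiserver-advertise-address=192.168.39.5 --apiserver-bind-port=8443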
	I0806 07:26:30.517656   31793 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0806 07:26:30.517993   31793 config.go:182] Loaded profile config "ha-118173": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0806 07:26:30.519229   31793 out.go:177] * Verifying Kubernetes components...
	I0806 07:26:30.520557   31793 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 07:26:30.820588   31793 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0806 07:26:30.893001   31793 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19370-13057/kubeconfig
	I0806 07:26:30.893238   31793 kapi.go:59] client config for ha-118173: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/client.crt", KeyFile:"/home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/client.key", CAFile:"/home/jenkins/minikube-integration/19370-13057/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02a80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0806 07:26:30.893308   31793 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.164:8443
	I0806 07:26:30.893495   31793 node_ready.go:35] waiting up to 6m0s for node "ha-118173-m03" to be "Ready" ...
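The GET requests that follow poll /api/v1/nodes/ha-118173-m03 until its Ready condition turns True, which is roughly what this kubectl one-liner does against the same cluster (a sketch, assuming the profile's kubeconfig is active):

    kubectl wait --for=condition=Ready node/ha-118173-m03 --timeout=6m
    kubectl get node ha-118173-m03 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'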
	I0806 07:26:30.893561   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m03
	I0806 07:26:30.893568   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:30.893576   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:30.893581   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:30.898173   31793 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0806 07:26:31.394312   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m03
	I0806 07:26:31.394330   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:31.394339   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:31.394343   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:31.404531   31793 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0806 07:26:31.893911   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m03
	I0806 07:26:31.893932   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:31.893942   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:31.893946   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:31.897287   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:26:32.394296   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m03
	I0806 07:26:32.394317   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:32.394326   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:32.394329   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:32.397248   31793 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 07:26:32.894446   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m03
	I0806 07:26:32.894465   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:32.894473   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:32.894477   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:32.898223   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:26:32.898920   31793 node_ready.go:53] node "ha-118173-m03" has status "Ready":"False"
	I0806 07:26:33.394070   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m03
	I0806 07:26:33.394101   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:33.394113   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:33.394121   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:33.397822   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:26:33.894436   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m03
	I0806 07:26:33.894457   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:33.894467   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:33.894473   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:33.897835   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:26:34.393883   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m03
	I0806 07:26:34.393906   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:34.393913   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:34.393917   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:34.397798   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:26:34.894314   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m03
	I0806 07:26:34.894337   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:34.894346   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:34.894352   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:34.897374   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:26:35.394441   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m03
	I0806 07:26:35.394461   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:35.394470   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:35.394474   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:35.398022   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:26:35.398632   31793 node_ready.go:53] node "ha-118173-m03" has status "Ready":"False"
	I0806 07:26:35.894485   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m03
	I0806 07:26:35.894511   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:35.894522   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:35.894527   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:35.900795   31793 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0806 07:26:36.393723   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m03
	I0806 07:26:36.393749   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:36.393759   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:36.393764   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:36.397167   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:26:36.894035   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m03
	I0806 07:26:36.894056   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:36.894065   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:36.894069   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:36.897659   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:26:37.393672   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m03
	I0806 07:26:37.393693   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:37.393700   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:37.393704   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:37.397785   31793 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0806 07:26:37.399056   31793 node_ready.go:53] node "ha-118173-m03" has status "Ready":"False"
	I0806 07:26:37.894161   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m03
	I0806 07:26:37.894180   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:37.894188   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:37.894192   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:37.897420   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:26:38.394383   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m03
	I0806 07:26:38.394405   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:38.394412   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:38.394429   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:38.398194   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:26:38.894482   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m03
	I0806 07:26:38.894506   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:38.894514   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:38.894517   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:38.897948   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:26:39.393760   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m03
	I0806 07:26:39.393782   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:39.393790   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:39.393795   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:39.397172   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:26:39.894595   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m03
	I0806 07:26:39.894616   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:39.894624   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:39.894630   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:39.897283   31793 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 07:26:39.897870   31793 node_ready.go:53] node "ha-118173-m03" has status "Ready":"False"
	I0806 07:26:40.394307   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m03
	I0806 07:26:40.394327   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:40.394335   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:40.394338   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:40.397953   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:26:40.894692   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m03
	I0806 07:26:40.894714   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:40.894724   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:40.894730   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:40.898332   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:26:41.394367   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m03
	I0806 07:26:41.394387   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:41.394396   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:41.394398   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:41.398227   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:26:41.894258   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m03
	I0806 07:26:41.894278   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:41.894286   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:41.894292   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:41.897447   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:26:41.897975   31793 node_ready.go:53] node "ha-118173-m03" has status "Ready":"False"
	I0806 07:26:42.394339   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m03
	I0806 07:26:42.394362   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:42.394375   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:42.394396   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:42.397881   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:26:42.894463   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m03
	I0806 07:26:42.894485   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:42.894493   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:42.894497   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:42.897853   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:26:43.394582   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m03
	I0806 07:26:43.394602   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:43.394609   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:43.394612   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:43.398469   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:26:43.893791   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m03
	I0806 07:26:43.893817   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:43.893827   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:43.893832   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:43.896507   31793 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 07:26:44.393832   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m03
	I0806 07:26:44.393853   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:44.393859   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:44.393863   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:44.397200   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:26:44.397677   31793 node_ready.go:53] node "ha-118173-m03" has status "Ready":"False"
	I0806 07:26:44.894407   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m03
	I0806 07:26:44.894429   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:44.894437   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:44.894443   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:44.898032   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:26:45.394292   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m03
	I0806 07:26:45.394317   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:45.394328   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:45.394334   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:45.397652   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:26:45.894446   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m03
	I0806 07:26:45.894468   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:45.894477   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:45.894481   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:45.905209   31793 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0806 07:26:46.394412   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m03
	I0806 07:26:46.394435   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:46.394443   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:46.394449   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:46.398644   31793 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0806 07:26:46.399085   31793 node_ready.go:53] node "ha-118173-m03" has status "Ready":"False"
	I0806 07:26:46.894480   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m03
	I0806 07:26:46.894500   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:46.894512   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:46.894516   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:46.897733   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:26:47.393735   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m03
	I0806 07:26:47.393757   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:47.393765   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:47.393769   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:47.397905   31793 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0806 07:26:47.894101   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m03
	I0806 07:26:47.894122   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:47.894130   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:47.894136   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:47.897026   31793 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 07:26:48.394653   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m03
	I0806 07:26:48.394679   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:48.394686   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:48.394691   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:48.398489   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:26:48.399199   31793 node_ready.go:53] node "ha-118173-m03" has status "Ready":"False"
	I0806 07:26:48.894555   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m03
	I0806 07:26:48.894578   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:48.894586   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:48.894589   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:48.898291   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:26:49.394177   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m03
	I0806 07:26:49.394197   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:49.394206   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:49.394210   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:49.397263   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:26:49.893721   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m03
	I0806 07:26:49.893746   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:49.893756   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:49.893763   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:49.898053   31793 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0806 07:26:49.898806   31793 node_ready.go:49] node "ha-118173-m03" has status "Ready":"True"
	I0806 07:26:49.898832   31793 node_ready.go:38] duration metric: took 19.005324006s for node "ha-118173-m03" to be "Ready" ...
	I0806 07:26:49.898841   31793 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0806 07:26:49.898909   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/namespaces/kube-system/pods
	I0806 07:26:49.898917   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:49.898925   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:49.898931   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:49.905307   31793 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0806 07:26:49.911989   31793 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-t24ww" in "kube-system" namespace to be "Ready" ...
	I0806 07:26:49.912086   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-t24ww
	I0806 07:26:49.912093   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:49.912103   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:49.912114   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:49.916830   31793 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0806 07:26:49.917808   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173
	I0806 07:26:49.917824   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:49.917839   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:49.917857   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:49.921623   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:26:49.922142   31793 pod_ready.go:92] pod "coredns-7db6d8ff4d-t24ww" in "kube-system" namespace has status "Ready":"True"
	I0806 07:26:49.922160   31793 pod_ready.go:81] duration metric: took 10.141106ms for pod "coredns-7db6d8ff4d-t24ww" in "kube-system" namespace to be "Ready" ...
	I0806 07:26:49.922169   31793 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-zscz4" in "kube-system" namespace to be "Ready" ...
	I0806 07:26:49.922240   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-zscz4
	I0806 07:26:49.922250   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:49.922260   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:49.922265   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:49.926952   31793 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0806 07:26:49.927652   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173
	I0806 07:26:49.927673   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:49.927684   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:49.927689   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:49.933246   31793 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0806 07:26:49.934440   31793 pod_ready.go:92] pod "coredns-7db6d8ff4d-zscz4" in "kube-system" namespace has status "Ready":"True"
	I0806 07:26:49.934458   31793 pod_ready.go:81] duration metric: took 12.283667ms for pod "coredns-7db6d8ff4d-zscz4" in "kube-system" namespace to be "Ready" ...
	I0806 07:26:49.934467   31793 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-118173" in "kube-system" namespace to be "Ready" ...
	I0806 07:26:49.934523   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/namespaces/kube-system/pods/etcd-ha-118173
	I0806 07:26:49.934531   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:49.934539   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:49.934546   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:49.937746   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:26:49.938336   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173
	I0806 07:26:49.938352   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:49.938360   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:49.938366   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:49.940809   31793 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 07:26:49.941273   31793 pod_ready.go:92] pod "etcd-ha-118173" in "kube-system" namespace has status "Ready":"True"
	I0806 07:26:49.941292   31793 pod_ready.go:81] duration metric: took 6.818286ms for pod "etcd-ha-118173" in "kube-system" namespace to be "Ready" ...
	I0806 07:26:49.941305   31793 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-118173-m02" in "kube-system" namespace to be "Ready" ...
	I0806 07:26:49.941368   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/namespaces/kube-system/pods/etcd-ha-118173-m02
	I0806 07:26:49.941379   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:49.941389   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:49.941398   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:49.943935   31793 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 07:26:49.944548   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m02
	I0806 07:26:49.944565   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:49.944573   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:49.944579   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:49.947018   31793 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 07:26:49.947464   31793 pod_ready.go:92] pod "etcd-ha-118173-m02" in "kube-system" namespace has status "Ready":"True"
	I0806 07:26:49.947485   31793 pod_ready.go:81] duration metric: took 6.172581ms for pod "etcd-ha-118173-m02" in "kube-system" namespace to be "Ready" ...
	I0806 07:26:49.947498   31793 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-118173-m03" in "kube-system" namespace to be "Ready" ...
	I0806 07:26:50.093774   31793 request.go:629] Waited for 146.217844ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.164:8443/api/v1/namespaces/kube-system/pods/etcd-ha-118173-m03
	I0806 07:26:50.093836   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/namespaces/kube-system/pods/etcd-ha-118173-m03
	I0806 07:26:50.093843   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:50.093875   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:50.093890   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:50.097338   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:26:50.294585   31793 request.go:629] Waited for 196.342537ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.164:8443/api/v1/nodes/ha-118173-m03
	I0806 07:26:50.294658   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m03
	I0806 07:26:50.294668   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:50.294679   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:50.294686   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:50.298098   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:26:50.298714   31793 pod_ready.go:92] pod "etcd-ha-118173-m03" in "kube-system" namespace has status "Ready":"True"
	I0806 07:26:50.298733   31793 pod_ready.go:81] duration metric: took 351.227961ms for pod "etcd-ha-118173-m03" in "kube-system" namespace to be "Ready" ...
	I0806 07:26:50.298753   31793 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-118173" in "kube-system" namespace to be "Ready" ...
	I0806 07:26:50.494109   31793 request.go:629] Waited for 195.27208ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.164:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-118173
	I0806 07:26:50.494174   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-118173
	I0806 07:26:50.494180   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:50.494190   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:50.494197   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:50.497605   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:26:50.694672   31793 request.go:629] Waited for 196.360731ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.164:8443/api/v1/nodes/ha-118173
	I0806 07:26:50.694748   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173
	I0806 07:26:50.694756   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:50.694764   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:50.694769   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:50.698064   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:26:50.698765   31793 pod_ready.go:92] pod "kube-apiserver-ha-118173" in "kube-system" namespace has status "Ready":"True"
	I0806 07:26:50.698794   31793 pod_ready.go:81] duration metric: took 400.030299ms for pod "kube-apiserver-ha-118173" in "kube-system" namespace to be "Ready" ...
	I0806 07:26:50.698813   31793 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-118173-m02" in "kube-system" namespace to be "Ready" ...
	I0806 07:26:50.894310   31793 request.go:629] Waited for 195.425844ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.164:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-118173-m02
	I0806 07:26:50.894402   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-118173-m02
	I0806 07:26:50.894411   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:50.894423   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:50.894432   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:50.898426   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:26:51.094114   31793 request.go:629] Waited for 194.23602ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.164:8443/api/v1/nodes/ha-118173-m02
	I0806 07:26:51.094165   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m02
	I0806 07:26:51.094170   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:51.094177   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:51.094186   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:51.097389   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:26:51.098083   31793 pod_ready.go:92] pod "kube-apiserver-ha-118173-m02" in "kube-system" namespace has status "Ready":"True"
	I0806 07:26:51.098112   31793 pod_ready.go:81] duration metric: took 399.290638ms for pod "kube-apiserver-ha-118173-m02" in "kube-system" namespace to be "Ready" ...
	I0806 07:26:51.098124   31793 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-118173-m03" in "kube-system" namespace to be "Ready" ...
	I0806 07:26:51.294154   31793 request.go:629] Waited for 195.956609ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.164:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-118173-m03
	I0806 07:26:51.294220   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-118173-m03
	I0806 07:26:51.294225   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:51.294233   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:51.294238   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:51.297482   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:26:51.494507   31793 request.go:629] Waited for 196.312988ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.164:8443/api/v1/nodes/ha-118173-m03
	I0806 07:26:51.494579   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m03
	I0806 07:26:51.494585   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:51.494592   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:51.494599   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:51.497907   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:26:51.498505   31793 pod_ready.go:92] pod "kube-apiserver-ha-118173-m03" in "kube-system" namespace has status "Ready":"True"
	I0806 07:26:51.498525   31793 pod_ready.go:81] duration metric: took 400.392272ms for pod "kube-apiserver-ha-118173-m03" in "kube-system" namespace to be "Ready" ...
	I0806 07:26:51.498536   31793 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-118173" in "kube-system" namespace to be "Ready" ...
	I0806 07:26:51.694347   31793 request.go:629] Waited for 195.742134ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.164:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-118173
	I0806 07:26:51.694423   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-118173
	I0806 07:26:51.694433   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:51.694442   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:51.694452   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:51.697903   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:26:51.894150   31793 request.go:629] Waited for 195.450271ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.164:8443/api/v1/nodes/ha-118173
	I0806 07:26:51.894218   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173
	I0806 07:26:51.894225   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:51.894235   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:51.894243   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:51.897758   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:26:51.898472   31793 pod_ready.go:92] pod "kube-controller-manager-ha-118173" in "kube-system" namespace has status "Ready":"True"
	I0806 07:26:51.898489   31793 pod_ready.go:81] duration metric: took 399.945238ms for pod "kube-controller-manager-ha-118173" in "kube-system" namespace to be "Ready" ...
	I0806 07:26:51.898498   31793 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-118173-m02" in "kube-system" namespace to be "Ready" ...
	I0806 07:26:52.094678   31793 request.go:629] Waited for 196.122099ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.164:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-118173-m02
	I0806 07:26:52.094762   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-118173-m02
	I0806 07:26:52.094773   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:52.094784   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:52.094794   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:52.097635   31793 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 07:26:52.294550   31793 request.go:629] Waited for 196.349599ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.164:8443/api/v1/nodes/ha-118173-m02
	I0806 07:26:52.294599   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m02
	I0806 07:26:52.294604   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:52.294612   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:52.294617   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:52.297756   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:26:52.298622   31793 pod_ready.go:92] pod "kube-controller-manager-ha-118173-m02" in "kube-system" namespace has status "Ready":"True"
	I0806 07:26:52.298640   31793 pod_ready.go:81] duration metric: took 400.135914ms for pod "kube-controller-manager-ha-118173-m02" in "kube-system" namespace to be "Ready" ...
	I0806 07:26:52.298650   31793 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-118173-m03" in "kube-system" namespace to be "Ready" ...
	I0806 07:26:52.494669   31793 request.go:629] Waited for 195.954169ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.164:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-118173-m03
	I0806 07:26:52.494752   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-118173-m03
	I0806 07:26:52.494763   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:52.494772   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:52.494780   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:52.498294   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:26:52.694057   31793 request.go:629] Waited for 194.283931ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.164:8443/api/v1/nodes/ha-118173-m03
	I0806 07:26:52.694118   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m03
	I0806 07:26:52.694125   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:52.694135   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:52.694140   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:52.697249   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:26:52.697936   31793 pod_ready.go:92] pod "kube-controller-manager-ha-118173-m03" in "kube-system" namespace has status "Ready":"True"
	I0806 07:26:52.697955   31793 pod_ready.go:81] duration metric: took 399.29885ms for pod "kube-controller-manager-ha-118173-m03" in "kube-system" namespace to be "Ready" ...
	I0806 07:26:52.697964   31793 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-k8sq6" in "kube-system" namespace to be "Ready" ...
	I0806 07:26:52.894022   31793 request.go:629] Waited for 195.997855ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.164:8443/api/v1/namespaces/kube-system/pods/kube-proxy-k8sq6
	I0806 07:26:52.894085   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/namespaces/kube-system/pods/kube-proxy-k8sq6
	I0806 07:26:52.894090   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:52.894097   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:52.894102   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:52.897672   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:26:53.093725   31793 request.go:629] Waited for 195.278717ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.164:8443/api/v1/nodes/ha-118173-m02
	I0806 07:26:53.093775   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m02
	I0806 07:26:53.093780   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:53.093787   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:53.093792   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:53.097778   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:26:53.098338   31793 pod_ready.go:92] pod "kube-proxy-k8sq6" in "kube-system" namespace has status "Ready":"True"
	I0806 07:26:53.098362   31793 pod_ready.go:81] duration metric: took 400.389883ms for pod "kube-proxy-k8sq6" in "kube-system" namespace to be "Ready" ...
	I0806 07:26:53.098374   31793 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-ksm6v" in "kube-system" namespace to be "Ready" ...
	I0806 07:26:53.294181   31793 request.go:629] Waited for 195.738103ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.164:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ksm6v
	I0806 07:26:53.294260   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ksm6v
	I0806 07:26:53.294270   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:53.294278   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:53.294286   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:53.297478   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:26:53.494327   31793 request.go:629] Waited for 195.99984ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.164:8443/api/v1/nodes/ha-118173
	I0806 07:26:53.494398   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173
	I0806 07:26:53.494403   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:53.494410   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:53.494417   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:53.497588   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:26:53.498416   31793 pod_ready.go:92] pod "kube-proxy-ksm6v" in "kube-system" namespace has status "Ready":"True"
	I0806 07:26:53.498437   31793 pod_ready.go:81] duration metric: took 400.055677ms for pod "kube-proxy-ksm6v" in "kube-system" namespace to be "Ready" ...
	I0806 07:26:53.498446   31793 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-sw4nj" in "kube-system" namespace to be "Ready" ...
	I0806 07:26:53.694567   31793 request.go:629] Waited for 196.056434ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.164:8443/api/v1/namespaces/kube-system/pods/kube-proxy-sw4nj
	I0806 07:26:53.694638   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/namespaces/kube-system/pods/kube-proxy-sw4nj
	I0806 07:26:53.694643   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:53.694651   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:53.694655   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:53.698323   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:26:53.894528   31793 request.go:629] Waited for 195.375859ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.164:8443/api/v1/nodes/ha-118173-m03
	I0806 07:26:53.894586   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m03
	I0806 07:26:53.894591   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:53.894606   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:53.894614   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:53.898117   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:26:53.898684   31793 pod_ready.go:92] pod "kube-proxy-sw4nj" in "kube-system" namespace has status "Ready":"True"
	I0806 07:26:53.898707   31793 pod_ready.go:81] duration metric: took 400.253401ms for pod "kube-proxy-sw4nj" in "kube-system" namespace to be "Ready" ...
	I0806 07:26:53.898719   31793 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-118173" in "kube-system" namespace to be "Ready" ...
	I0806 07:26:54.093723   31793 request.go:629] Waited for 194.937669ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.164:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-118173
	I0806 07:26:54.093808   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-118173
	I0806 07:26:54.093820   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:54.093829   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:54.093835   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:54.097356   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:26:54.294184   31793 request.go:629] Waited for 196.178585ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.164:8443/api/v1/nodes/ha-118173
	I0806 07:26:54.294238   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173
	I0806 07:26:54.294242   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:54.294251   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:54.294255   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:54.297777   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:26:54.298275   31793 pod_ready.go:92] pod "kube-scheduler-ha-118173" in "kube-system" namespace has status "Ready":"True"
	I0806 07:26:54.298300   31793 pod_ready.go:81] duration metric: took 399.572268ms for pod "kube-scheduler-ha-118173" in "kube-system" namespace to be "Ready" ...
	I0806 07:26:54.298313   31793 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-118173-m02" in "kube-system" namespace to be "Ready" ...
	I0806 07:26:54.493783   31793 request.go:629] Waited for 195.383958ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.164:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-118173-m02
	I0806 07:26:54.493862   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-118173-m02
	I0806 07:26:54.493873   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:54.493884   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:54.493892   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:54.497242   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:26:54.694212   31793 request.go:629] Waited for 196.322219ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.164:8443/api/v1/nodes/ha-118173-m02
	I0806 07:26:54.694284   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m02
	I0806 07:26:54.694291   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:54.694302   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:54.694307   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:54.697506   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:26:54.698023   31793 pod_ready.go:92] pod "kube-scheduler-ha-118173-m02" in "kube-system" namespace has status "Ready":"True"
	I0806 07:26:54.698040   31793 pod_ready.go:81] duration metric: took 399.718807ms for pod "kube-scheduler-ha-118173-m02" in "kube-system" namespace to be "Ready" ...
	I0806 07:26:54.698052   31793 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-118173-m03" in "kube-system" namespace to be "Ready" ...
	I0806 07:26:54.894184   31793 request.go:629] Waited for 196.056976ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.164:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-118173-m03
	I0806 07:26:54.894248   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-118173-m03
	I0806 07:26:54.894255   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:54.894263   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:54.894266   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:54.897428   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:26:55.094353   31793 request.go:629] Waited for 196.221298ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.164:8443/api/v1/nodes/ha-118173-m03
	I0806 07:26:55.094421   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes/ha-118173-m03
	I0806 07:26:55.094435   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:55.094444   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:55.094455   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:55.097913   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:26:55.098569   31793 pod_ready.go:92] pod "kube-scheduler-ha-118173-m03" in "kube-system" namespace has status "Ready":"True"
	I0806 07:26:55.098587   31793 pod_ready.go:81] duration metric: took 400.528767ms for pod "kube-scheduler-ha-118173-m03" in "kube-system" namespace to be "Ready" ...
	I0806 07:26:55.098597   31793 pod_ready.go:38] duration metric: took 5.199746098s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0806 07:26:55.098611   31793 api_server.go:52] waiting for apiserver process to appear ...
	I0806 07:26:55.098669   31793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 07:26:55.115674   31793 api_server.go:72] duration metric: took 24.597982005s to wait for apiserver process to appear ...
	I0806 07:26:55.115742   31793 api_server.go:88] waiting for apiserver healthz status ...
	I0806 07:26:55.115771   31793 api_server.go:253] Checking apiserver healthz at https://192.168.39.164:8443/healthz ...
	I0806 07:26:55.121113   31793 api_server.go:279] https://192.168.39.164:8443/healthz returned 200:
	ok
	I0806 07:26:55.121190   31793 round_trippers.go:463] GET https://192.168.39.164:8443/version
	I0806 07:26:55.121200   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:55.121211   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:55.121219   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:55.122118   31793 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0806 07:26:55.122192   31793 api_server.go:141] control plane version: v1.30.3
	I0806 07:26:55.122211   31793 api_server.go:131] duration metric: took 6.456898ms to wait for apiserver health ...
	I0806 07:26:55.122224   31793 system_pods.go:43] waiting for kube-system pods to appear ...
	I0806 07:26:55.294643   31793 request.go:629] Waited for 172.315519ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.164:8443/api/v1/namespaces/kube-system/pods
	I0806 07:26:55.294709   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/namespaces/kube-system/pods
	I0806 07:26:55.294719   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:55.294727   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:55.294735   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:55.301274   31793 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0806 07:26:55.307768   31793 system_pods.go:59] 24 kube-system pods found
	I0806 07:26:55.307797   31793 system_pods.go:61] "coredns-7db6d8ff4d-t24ww" [8effbe6c-3107-409e-9c94-dee4103d12b0] Running
	I0806 07:26:55.307802   31793 system_pods.go:61] "coredns-7db6d8ff4d-zscz4" [ef588a32-ac32-4e38-926e-6ee7a662a399] Running
	I0806 07:26:55.307805   31793 system_pods.go:61] "etcd-ha-118173" [a72e8b83-1e33-429a-8d98-98936a0e3ab8] Running
	I0806 07:26:55.307814   31793 system_pods.go:61] "etcd-ha-118173-m02" [cb090a5d-5760-48c6-abea-735b88daf0be] Running
	I0806 07:26:55.307818   31793 system_pods.go:61] "etcd-ha-118173-m03" [176012b3-c835-4dcc-9adc-46ce943a3570] Running
	I0806 07:26:55.307821   31793 system_pods.go:61] "kindnet-2bmzh" [8b1ecbca-a98b-473c-949b-a8a6d3ba1354] Running
	I0806 07:26:55.307824   31793 system_pods.go:61] "kindnet-n7jmb" [fa496bfb-7bf0-4ebc-adbd-36d992d8e279] Running
	I0806 07:26:55.307827   31793 system_pods.go:61] "kindnet-z9w6l" [c02fe720-31c6-4146-af1a-0ada3aa7e530] Running
	I0806 07:26:55.307830   31793 system_pods.go:61] "kube-apiserver-ha-118173" [902898f9-8ba6-4d78-a648-49d412dce202] Running
	I0806 07:26:55.307833   31793 system_pods.go:61] "kube-apiserver-ha-118173-m02" [1b9cea4a-7755-4cd8-a240-b4911dc45edf] Running
	I0806 07:26:55.307836   31793 system_pods.go:61] "kube-apiserver-ha-118173-m03" [a1795be8-57bf-4e24-8165-f66d7b4778b4] Running
	I0806 07:26:55.307840   31793 system_pods.go:61] "kube-controller-manager-ha-118173" [1af832f2-bceb-4c89-afa8-09a78b7782e1] Running
	I0806 07:26:55.307843   31793 system_pods.go:61] "kube-controller-manager-ha-118173-m02" [7be7c963-d9af-4cda-95dd-9a953b40f284] Running
	I0806 07:26:55.307846   31793 system_pods.go:61] "kube-controller-manager-ha-118173-m03" [a777af70-7701-41aa-b149-ccbe376d6464] Running
	I0806 07:26:55.307850   31793 system_pods.go:61] "kube-proxy-k8sq6" [b1ec850b-6ce0-4710-aa99-419bda3b3e93] Running
	I0806 07:26:55.307853   31793 system_pods.go:61] "kube-proxy-ksm6v" [594bc178-8319-4922-9132-69a9c6890b95] Running
	I0806 07:26:55.307856   31793 system_pods.go:61] "kube-proxy-sw4nj" [a75ef027-13c0-49c6-8580-6adb8afa497b] Running
	I0806 07:26:55.307859   31793 system_pods.go:61] "kube-scheduler-ha-118173" [6fda2f13-0df0-42ed-9d67-8dd30b33fa3d] Running
	I0806 07:26:55.307862   31793 system_pods.go:61] "kube-scheduler-ha-118173-m02" [f097af54-fb79-40f2-aaf4-6cfbb0d8c449] Running
	I0806 07:26:55.307865   31793 system_pods.go:61] "kube-scheduler-ha-118173-m03" [2360b607-29c0-4731-afc7-c76d7499e1fb] Running
	I0806 07:26:55.307867   31793 system_pods.go:61] "kube-vip-ha-118173" [594d77d2-83e6-420d-8f0a-f54e9fd704f3] Running
	I0806 07:26:55.307870   31793 system_pods.go:61] "kube-vip-ha-118173-m02" [66b58218-4fc2-4a96-81b9-31ade7dda807] Running
	I0806 07:26:55.307873   31793 system_pods.go:61] "kube-vip-ha-118173-m03" [af675cae-7ed6-4e95-a6a9-8a6b16e4c450] Running
	I0806 07:26:55.307876   31793 system_pods.go:61] "storage-provisioner" [8379466d-0241-40e4-a262-d762b3cda2eb] Running
	I0806 07:26:55.307881   31793 system_pods.go:74] duration metric: took 185.648809ms to wait for pod list to return data ...
	I0806 07:26:55.307891   31793 default_sa.go:34] waiting for default service account to be created ...
	I0806 07:26:55.494342   31793 request.go:629] Waited for 186.376671ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.164:8443/api/v1/namespaces/default/serviceaccounts
	I0806 07:26:55.494397   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/namespaces/default/serviceaccounts
	I0806 07:26:55.494402   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:55.494409   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:55.494414   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:55.497896   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:26:55.498002   31793 default_sa.go:45] found service account: "default"
	I0806 07:26:55.498017   31793 default_sa.go:55] duration metric: took 190.118149ms for default service account to be created ...
	I0806 07:26:55.498024   31793 system_pods.go:116] waiting for k8s-apps to be running ...
	I0806 07:26:55.694421   31793 request.go:629] Waited for 196.305041ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.164:8443/api/v1/namespaces/kube-system/pods
	I0806 07:26:55.694487   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/namespaces/kube-system/pods
	I0806 07:26:55.694492   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:55.694499   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:55.694506   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:55.701186   31793 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0806 07:26:55.707131   31793 system_pods.go:86] 24 kube-system pods found
	I0806 07:26:55.707159   31793 system_pods.go:89] "coredns-7db6d8ff4d-t24ww" [8effbe6c-3107-409e-9c94-dee4103d12b0] Running
	I0806 07:26:55.707165   31793 system_pods.go:89] "coredns-7db6d8ff4d-zscz4" [ef588a32-ac32-4e38-926e-6ee7a662a399] Running
	I0806 07:26:55.707169   31793 system_pods.go:89] "etcd-ha-118173" [a72e8b83-1e33-429a-8d98-98936a0e3ab8] Running
	I0806 07:26:55.707173   31793 system_pods.go:89] "etcd-ha-118173-m02" [cb090a5d-5760-48c6-abea-735b88daf0be] Running
	I0806 07:26:55.707179   31793 system_pods.go:89] "etcd-ha-118173-m03" [176012b3-c835-4dcc-9adc-46ce943a3570] Running
	I0806 07:26:55.707184   31793 system_pods.go:89] "kindnet-2bmzh" [8b1ecbca-a98b-473c-949b-a8a6d3ba1354] Running
	I0806 07:26:55.707190   31793 system_pods.go:89] "kindnet-n7jmb" [fa496bfb-7bf0-4ebc-adbd-36d992d8e279] Running
	I0806 07:26:55.707196   31793 system_pods.go:89] "kindnet-z9w6l" [c02fe720-31c6-4146-af1a-0ada3aa7e530] Running
	I0806 07:26:55.707205   31793 system_pods.go:89] "kube-apiserver-ha-118173" [902898f9-8ba6-4d78-a648-49d412dce202] Running
	I0806 07:26:55.707213   31793 system_pods.go:89] "kube-apiserver-ha-118173-m02" [1b9cea4a-7755-4cd8-a240-b4911dc45edf] Running
	I0806 07:26:55.707223   31793 system_pods.go:89] "kube-apiserver-ha-118173-m03" [a1795be8-57bf-4e24-8165-f66d7b4778b4] Running
	I0806 07:26:55.707228   31793 system_pods.go:89] "kube-controller-manager-ha-118173" [1af832f2-bceb-4c89-afa8-09a78b7782e1] Running
	I0806 07:26:55.707233   31793 system_pods.go:89] "kube-controller-manager-ha-118173-m02" [7be7c963-d9af-4cda-95dd-9a953b40f284] Running
	I0806 07:26:55.707240   31793 system_pods.go:89] "kube-controller-manager-ha-118173-m03" [a777af70-7701-41aa-b149-ccbe376d6464] Running
	I0806 07:26:55.707244   31793 system_pods.go:89] "kube-proxy-k8sq6" [b1ec850b-6ce0-4710-aa99-419bda3b3e93] Running
	I0806 07:26:55.707250   31793 system_pods.go:89] "kube-proxy-ksm6v" [594bc178-8319-4922-9132-69a9c6890b95] Running
	I0806 07:26:55.707257   31793 system_pods.go:89] "kube-proxy-sw4nj" [a75ef027-13c0-49c6-8580-6adb8afa497b] Running
	I0806 07:26:55.707261   31793 system_pods.go:89] "kube-scheduler-ha-118173" [6fda2f13-0df0-42ed-9d67-8dd30b33fa3d] Running
	I0806 07:26:55.707266   31793 system_pods.go:89] "kube-scheduler-ha-118173-m02" [f097af54-fb79-40f2-aaf4-6cfbb0d8c449] Running
	I0806 07:26:55.707270   31793 system_pods.go:89] "kube-scheduler-ha-118173-m03" [2360b607-29c0-4731-afc7-c76d7499e1fb] Running
	I0806 07:26:55.707274   31793 system_pods.go:89] "kube-vip-ha-118173" [594d77d2-83e6-420d-8f0a-f54e9fd704f3] Running
	I0806 07:26:55.707278   31793 system_pods.go:89] "kube-vip-ha-118173-m02" [66b58218-4fc2-4a96-81b9-31ade7dda807] Running
	I0806 07:26:55.707283   31793 system_pods.go:89] "kube-vip-ha-118173-m03" [af675cae-7ed6-4e95-a6a9-8a6b16e4c450] Running
	I0806 07:26:55.707292   31793 system_pods.go:89] "storage-provisioner" [8379466d-0241-40e4-a262-d762b3cda2eb] Running
	I0806 07:26:55.707303   31793 system_pods.go:126] duration metric: took 209.272042ms to wait for k8s-apps to be running ...
	I0806 07:26:55.707315   31793 system_svc.go:44] waiting for kubelet service to be running ....
	I0806 07:26:55.707363   31793 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0806 07:26:55.724292   31793 system_svc.go:56] duration metric: took 16.970607ms WaitForService to wait for kubelet
	I0806 07:26:55.724317   31793 kubeadm.go:582] duration metric: took 25.206628498s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0806 07:26:55.724341   31793 node_conditions.go:102] verifying NodePressure condition ...
	I0806 07:26:55.894025   31793 request.go:629] Waited for 169.596181ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.164:8443/api/v1/nodes
	I0806 07:26:55.894077   31793 round_trippers.go:463] GET https://192.168.39.164:8443/api/v1/nodes
	I0806 07:26:55.894081   31793 round_trippers.go:469] Request Headers:
	I0806 07:26:55.894088   31793 round_trippers.go:473]     Accept: application/json, */*
	I0806 07:26:55.894095   31793 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0806 07:26:55.897684   31793 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 07:26:55.898994   31793 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0806 07:26:55.899018   31793 node_conditions.go:123] node cpu capacity is 2
	I0806 07:26:55.899054   31793 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0806 07:26:55.899063   31793 node_conditions.go:123] node cpu capacity is 2
	I0806 07:26:55.899070   31793 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0806 07:26:55.899074   31793 node_conditions.go:123] node cpu capacity is 2
	I0806 07:26:55.899083   31793 node_conditions.go:105] duration metric: took 174.735498ms to run NodePressure ...
	I0806 07:26:55.899098   31793 start.go:241] waiting for startup goroutines ...
	I0806 07:26:55.899124   31793 start.go:255] writing updated cluster config ...
	I0806 07:26:55.899554   31793 ssh_runner.go:195] Run: rm -f paused
	I0806 07:26:55.950770   31793 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0806 07:26:55.953632   31793 out.go:177] * Done! kubectl is now configured to use "ha-118173" cluster and "default" namespace by default
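	The log above ends with minikube's readiness sequence: it polls the node and system pods until they report "Ready", then probes the apiserver's /healthz endpoint (api_server.go:253/279) before declaring the cluster usable. As a rough illustration only, not minikube's actual implementation, the Go sketch below shows an equivalent healthz polling loop; the endpoint URL and the relaxed TLS configuration are assumptions made for the sketch (a real client would present the cluster's client certificates).

	// healthz_poll.go - minimal sketch of polling an apiserver /healthz endpoint.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForHealthz probes url until it returns HTTP 200 or the timeout expires.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// Assumption for this sketch only; do not skip verification in real code.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Printf("%s returned 200: %s\n", url, body)
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond) // back off between probes
		}
		return fmt.Errorf("apiserver at %s not healthy within %s", url, timeout)
	}

	func main() {
		// Hypothetical endpoint matching the address seen in the log above.
		if err := waitForHealthz("https://192.168.39.164:8443/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}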
	
	
	==> CRI-O <==
	Aug 06 07:31:41 ha-118173 crio[676]: time="2024-08-06 07:31:41.469922089Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722929501469894186,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154770,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d07364f0-e72a-4870-bdce-9ea1ad4f4165 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 06 07:31:41 ha-118173 crio[676]: time="2024-08-06 07:31:41.471138408Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ce8ffead-a61f-4113-963a-9e47f98eb928 name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 07:31:41 ha-118173 crio[676]: time="2024-08-06 07:31:41.471308871Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ce8ffead-a61f-4113-963a-9e47f98eb928 name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 07:31:41 ha-118173 crio[676]: time="2024-08-06 07:31:41.471690832Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:10437bdde23713cae1b222edffa34be0e9583f43a83c01f10aae555f0dd7542e,PodSandboxId:3f725d083d59ecb8b65c8b09466e36c8b01dd5742fa8d97e108de4a49f47de04,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722929221668199667,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-rml5d,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4a82c0ca-1596-4dfe-a2f9-3a34f22480e5,},Annotations:map[string]string{io.kubernetes.container.hash: 950719af,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c60313490ee322d54c5b3908e628bff77d0ca10249f7253eb9cbd44745a805af,PodSandboxId:b221107c8052d9be54b23c0ee2e882f4adc57ade5813f27e7dfc4e4f71006663,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722929010372163680,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zscz4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef588a32-ac32-4e38-926e-6ee7a662a399,},Annotations:map[string]string{io.kubernetes.container.hash: 38e96eed,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:321504a9bbbfc57376fccab559a6ebc7388670dc0bf123b5f9fd41b1da99adaa,PodSandboxId:4a9fe2c539265caf233fd5f3778ebc3e6092ce121ddf647e3f72622e31022e53,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722929010343273638,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-t24ww,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
8effbe6c-3107-409e-9c94-dee4103d12b0,},Annotations:map[string]string{io.kubernetes.container.hash: 7604f64,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44b9ff2f36bf90e89da591e552ad006221f4c5a515009116c79504170f19368e,PodSandboxId:cabb1138777e527cb2a559ae6e3b154f442e8d3b8d44cc59ed6a4b9a49ce65ea,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING
,CreatedAt:1722929010265192394,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8379466d-0241-40e4-a262-d762b3cda2eb,},Annotations:map[string]string{io.kubernetes.container.hash: d079fdad,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8f5ae0da505fc7f79225ba7115c83b9e909128bc28df767fce1fc3ba7a01392,PodSandboxId:5bbdb233d391e1127588924ac263113c9aa0148b24fc7113df3ca0891d7e0fdc,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CON
TAINER_RUNNING,CreatedAt:1722928998315681833,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-n7jmb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa496bfb-7bf0-4ebc-adbd-36d992d8e279,},Annotations:map[string]string{io.kubernetes.container.hash: db2b7a8c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5a5bdf0b2a62163f6aff6abe864eedc47b461e233989f23ef7770c226763325,PodSandboxId:2ed57fdbf65f1846c200a67b50a1043fe8c70dd79c90d1a1c67e5e0634c8088b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722928994
187870823,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ksm6v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 594bc178-8319-4922-9132-69a9c6890b95,},Annotations:map[string]string{io.kubernetes.container.hash: 50a78a81,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ea977593bf7ff53f7419e821deaa1cf5813b4c116730030cd8d417d078be7fa,PodSandboxId:eb00561ba0cb5a9d0f8cd015251b78530e734c81712da3c4aeb67fbd1dc5a274,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172292897749
4204189,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-118173,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cabf9e912268f41f81e7fabcaf2337ce,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a94703035c76b778779a89b7fc9176c7a956ebec52628aa1720d26bf2a1c31f5,PodSandboxId:33f4dab8a223724c5140e304cb2bb6b2c32716d0818935a0a1f10a3e73c27f55,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722928974564889139,Labels:map[strin
g]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-118173,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6346700b562b8d3f4c18c66b3813bc28,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27f4fdf6fa77a51e1295c12b459cda687ed6ae68875218e24ba7959fb37814ed,PodSandboxId:5ebdb09ee65d6de5671cacfdefaaebdf7b5b00f25aa61461d39015ab5cb5578b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722928974548891631,Labels:map[string]s
tring{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-118173,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 080589493a4905d0fb10f52a1a0e25a5,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ffee4f9f096a33fc08f1f6b266ea17ad2181a58a153f7fb1703ded978b23a46,PodSandboxId:e8d8f5e72e2b25ee44de736fd829832c7731bada1325d1781559721e92200f54,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722928974494900073,Labels:map[string]string{io.kubernetes.c
ontainer.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-118173,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f74055e61dcf5f2385b1934a1a8e3b73,},Annotations:map[string]string{io.kubernetes.container.hash: 27fe2ed3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:306b97b37955570f0a18d38ff1afab7bb09e01c47a3418dba2aeed0f00d6052d,PodSandboxId:5055641759bdb7b0e57568f6d5310c72587ea715571f3fff96ff5d200be59dc2,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722928974442532867,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernet
es.pod.name: etcd-ha-118173,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65b7cf59bb7c42e61e37dcbebce4eec4,},Annotations:map[string]string{io.kubernetes.container.hash: fa683f08,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ce8ffead-a61f-4113-963a-9e47f98eb928 name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 07:31:41 ha-118173 crio[676]: time="2024-08-06 07:31:41.531211533Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=da8d5978-4911-4f11-bcc8-3cefd8dd097c name=/runtime.v1.RuntimeService/Version
	Aug 06 07:31:41 ha-118173 crio[676]: time="2024-08-06 07:31:41.531285862Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=da8d5978-4911-4f11-bcc8-3cefd8dd097c name=/runtime.v1.RuntimeService/Version
	Aug 06 07:31:41 ha-118173 crio[676]: time="2024-08-06 07:31:41.532361110Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=832698ee-bffd-4a0b-885a-2fef3babe073 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 06 07:31:41 ha-118173 crio[676]: time="2024-08-06 07:31:41.532939522Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722929501532915447,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154770,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=832698ee-bffd-4a0b-885a-2fef3babe073 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 06 07:31:41 ha-118173 crio[676]: time="2024-08-06 07:31:41.533723454Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bcf266ca-9b1d-4016-bb69-bd940b99cefb name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 07:31:41 ha-118173 crio[676]: time="2024-08-06 07:31:41.533790645Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bcf266ca-9b1d-4016-bb69-bd940b99cefb name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 07:31:41 ha-118173 crio[676]: time="2024-08-06 07:31:41.534035725Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:10437bdde23713cae1b222edffa34be0e9583f43a83c01f10aae555f0dd7542e,PodSandboxId:3f725d083d59ecb8b65c8b09466e36c8b01dd5742fa8d97e108de4a49f47de04,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722929221668199667,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-rml5d,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4a82c0ca-1596-4dfe-a2f9-3a34f22480e5,},Annotations:map[string]string{io.kubernetes.container.hash: 950719af,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c60313490ee322d54c5b3908e628bff77d0ca10249f7253eb9cbd44745a805af,PodSandboxId:b221107c8052d9be54b23c0ee2e882f4adc57ade5813f27e7dfc4e4f71006663,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722929010372163680,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zscz4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef588a32-ac32-4e38-926e-6ee7a662a399,},Annotations:map[string]string{io.kubernetes.container.hash: 38e96eed,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:321504a9bbbfc57376fccab559a6ebc7388670dc0bf123b5f9fd41b1da99adaa,PodSandboxId:4a9fe2c539265caf233fd5f3778ebc3e6092ce121ddf647e3f72622e31022e53,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722929010343273638,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-t24ww,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
8effbe6c-3107-409e-9c94-dee4103d12b0,},Annotations:map[string]string{io.kubernetes.container.hash: 7604f64,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44b9ff2f36bf90e89da591e552ad006221f4c5a515009116c79504170f19368e,PodSandboxId:cabb1138777e527cb2a559ae6e3b154f442e8d3b8d44cc59ed6a4b9a49ce65ea,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING
,CreatedAt:1722929010265192394,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8379466d-0241-40e4-a262-d762b3cda2eb,},Annotations:map[string]string{io.kubernetes.container.hash: d079fdad,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8f5ae0da505fc7f79225ba7115c83b9e909128bc28df767fce1fc3ba7a01392,PodSandboxId:5bbdb233d391e1127588924ac263113c9aa0148b24fc7113df3ca0891d7e0fdc,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CON
TAINER_RUNNING,CreatedAt:1722928998315681833,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-n7jmb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa496bfb-7bf0-4ebc-adbd-36d992d8e279,},Annotations:map[string]string{io.kubernetes.container.hash: db2b7a8c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5a5bdf0b2a62163f6aff6abe864eedc47b461e233989f23ef7770c226763325,PodSandboxId:2ed57fdbf65f1846c200a67b50a1043fe8c70dd79c90d1a1c67e5e0634c8088b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722928994
187870823,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ksm6v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 594bc178-8319-4922-9132-69a9c6890b95,},Annotations:map[string]string{io.kubernetes.container.hash: 50a78a81,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ea977593bf7ff53f7419e821deaa1cf5813b4c116730030cd8d417d078be7fa,PodSandboxId:eb00561ba0cb5a9d0f8cd015251b78530e734c81712da3c4aeb67fbd1dc5a274,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172292897749
4204189,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-118173,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cabf9e912268f41f81e7fabcaf2337ce,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a94703035c76b778779a89b7fc9176c7a956ebec52628aa1720d26bf2a1c31f5,PodSandboxId:33f4dab8a223724c5140e304cb2bb6b2c32716d0818935a0a1f10a3e73c27f55,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722928974564889139,Labels:map[strin
g]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-118173,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6346700b562b8d3f4c18c66b3813bc28,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27f4fdf6fa77a51e1295c12b459cda687ed6ae68875218e24ba7959fb37814ed,PodSandboxId:5ebdb09ee65d6de5671cacfdefaaebdf7b5b00f25aa61461d39015ab5cb5578b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722928974548891631,Labels:map[string]s
tring{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-118173,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 080589493a4905d0fb10f52a1a0e25a5,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ffee4f9f096a33fc08f1f6b266ea17ad2181a58a153f7fb1703ded978b23a46,PodSandboxId:e8d8f5e72e2b25ee44de736fd829832c7731bada1325d1781559721e92200f54,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722928974494900073,Labels:map[string]string{io.kubernetes.c
ontainer.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-118173,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f74055e61dcf5f2385b1934a1a8e3b73,},Annotations:map[string]string{io.kubernetes.container.hash: 27fe2ed3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:306b97b37955570f0a18d38ff1afab7bb09e01c47a3418dba2aeed0f00d6052d,PodSandboxId:5055641759bdb7b0e57568f6d5310c72587ea715571f3fff96ff5d200be59dc2,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722928974442532867,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernet
es.pod.name: etcd-ha-118173,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65b7cf59bb7c42e61e37dcbebce4eec4,},Annotations:map[string]string{io.kubernetes.container.hash: fa683f08,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bcf266ca-9b1d-4016-bb69-bd940b99cefb name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 07:31:41 ha-118173 crio[676]: time="2024-08-06 07:31:41.579461867Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ffff596b-df74-43e4-8daf-728bef682fbc name=/runtime.v1.RuntimeService/Version
	Aug 06 07:31:41 ha-118173 crio[676]: time="2024-08-06 07:31:41.579701300Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ffff596b-df74-43e4-8daf-728bef682fbc name=/runtime.v1.RuntimeService/Version
	Aug 06 07:31:41 ha-118173 crio[676]: time="2024-08-06 07:31:41.581890231Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9910587c-8dc8-462d-ad4c-eb8a19743863 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 06 07:31:41 ha-118173 crio[676]: time="2024-08-06 07:31:41.582335861Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722929501582315138,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154770,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9910587c-8dc8-462d-ad4c-eb8a19743863 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 06 07:31:41 ha-118173 crio[676]: time="2024-08-06 07:31:41.583368337Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fb3d8b7b-8ee4-454c-80b1-29c099309a2f name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 07:31:41 ha-118173 crio[676]: time="2024-08-06 07:31:41.583439289Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fb3d8b7b-8ee4-454c-80b1-29c099309a2f name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 07:31:41 ha-118173 crio[676]: time="2024-08-06 07:31:41.583805201Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:10437bdde23713cae1b222edffa34be0e9583f43a83c01f10aae555f0dd7542e,PodSandboxId:3f725d083d59ecb8b65c8b09466e36c8b01dd5742fa8d97e108de4a49f47de04,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722929221668199667,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-rml5d,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4a82c0ca-1596-4dfe-a2f9-3a34f22480e5,},Annotations:map[string]string{io.kubernetes.container.hash: 950719af,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c60313490ee322d54c5b3908e628bff77d0ca10249f7253eb9cbd44745a805af,PodSandboxId:b221107c8052d9be54b23c0ee2e882f4adc57ade5813f27e7dfc4e4f71006663,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722929010372163680,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zscz4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef588a32-ac32-4e38-926e-6ee7a662a399,},Annotations:map[string]string{io.kubernetes.container.hash: 38e96eed,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:321504a9bbbfc57376fccab559a6ebc7388670dc0bf123b5f9fd41b1da99adaa,PodSandboxId:4a9fe2c539265caf233fd5f3778ebc3e6092ce121ddf647e3f72622e31022e53,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722929010343273638,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-t24ww,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
8effbe6c-3107-409e-9c94-dee4103d12b0,},Annotations:map[string]string{io.kubernetes.container.hash: 7604f64,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44b9ff2f36bf90e89da591e552ad006221f4c5a515009116c79504170f19368e,PodSandboxId:cabb1138777e527cb2a559ae6e3b154f442e8d3b8d44cc59ed6a4b9a49ce65ea,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING
,CreatedAt:1722929010265192394,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8379466d-0241-40e4-a262-d762b3cda2eb,},Annotations:map[string]string{io.kubernetes.container.hash: d079fdad,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8f5ae0da505fc7f79225ba7115c83b9e909128bc28df767fce1fc3ba7a01392,PodSandboxId:5bbdb233d391e1127588924ac263113c9aa0148b24fc7113df3ca0891d7e0fdc,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CON
TAINER_RUNNING,CreatedAt:1722928998315681833,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-n7jmb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa496bfb-7bf0-4ebc-adbd-36d992d8e279,},Annotations:map[string]string{io.kubernetes.container.hash: db2b7a8c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5a5bdf0b2a62163f6aff6abe864eedc47b461e233989f23ef7770c226763325,PodSandboxId:2ed57fdbf65f1846c200a67b50a1043fe8c70dd79c90d1a1c67e5e0634c8088b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722928994
187870823,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ksm6v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 594bc178-8319-4922-9132-69a9c6890b95,},Annotations:map[string]string{io.kubernetes.container.hash: 50a78a81,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ea977593bf7ff53f7419e821deaa1cf5813b4c116730030cd8d417d078be7fa,PodSandboxId:eb00561ba0cb5a9d0f8cd015251b78530e734c81712da3c4aeb67fbd1dc5a274,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172292897749
4204189,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-118173,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cabf9e912268f41f81e7fabcaf2337ce,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a94703035c76b778779a89b7fc9176c7a956ebec52628aa1720d26bf2a1c31f5,PodSandboxId:33f4dab8a223724c5140e304cb2bb6b2c32716d0818935a0a1f10a3e73c27f55,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722928974564889139,Labels:map[strin
g]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-118173,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6346700b562b8d3f4c18c66b3813bc28,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27f4fdf6fa77a51e1295c12b459cda687ed6ae68875218e24ba7959fb37814ed,PodSandboxId:5ebdb09ee65d6de5671cacfdefaaebdf7b5b00f25aa61461d39015ab5cb5578b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722928974548891631,Labels:map[string]s
tring{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-118173,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 080589493a4905d0fb10f52a1a0e25a5,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ffee4f9f096a33fc08f1f6b266ea17ad2181a58a153f7fb1703ded978b23a46,PodSandboxId:e8d8f5e72e2b25ee44de736fd829832c7731bada1325d1781559721e92200f54,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722928974494900073,Labels:map[string]string{io.kubernetes.c
ontainer.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-118173,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f74055e61dcf5f2385b1934a1a8e3b73,},Annotations:map[string]string{io.kubernetes.container.hash: 27fe2ed3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:306b97b37955570f0a18d38ff1afab7bb09e01c47a3418dba2aeed0f00d6052d,PodSandboxId:5055641759bdb7b0e57568f6d5310c72587ea715571f3fff96ff5d200be59dc2,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722928974442532867,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernet
es.pod.name: etcd-ha-118173,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65b7cf59bb7c42e61e37dcbebce4eec4,},Annotations:map[string]string{io.kubernetes.container.hash: fa683f08,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fb3d8b7b-8ee4-454c-80b1-29c099309a2f name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 07:31:41 ha-118173 crio[676]: time="2024-08-06 07:31:41.622833047Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=83195f7e-9f70-4ba6-9e3f-0f8384475bfd name=/runtime.v1.RuntimeService/Version
	Aug 06 07:31:41 ha-118173 crio[676]: time="2024-08-06 07:31:41.622909036Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=83195f7e-9f70-4ba6-9e3f-0f8384475bfd name=/runtime.v1.RuntimeService/Version
	Aug 06 07:31:41 ha-118173 crio[676]: time="2024-08-06 07:31:41.624012168Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3c57b0af-ae50-4c2b-9274-a5cb3b42e83c name=/runtime.v1.ImageService/ImageFsInfo
	Aug 06 07:31:41 ha-118173 crio[676]: time="2024-08-06 07:31:41.624498640Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722929501624472851,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154770,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3c57b0af-ae50-4c2b-9274-a5cb3b42e83c name=/runtime.v1.ImageService/ImageFsInfo
	Aug 06 07:31:41 ha-118173 crio[676]: time="2024-08-06 07:31:41.625187450Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3b3079ef-0ef8-4aeb-869f-bfaa576376f0 name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 07:31:41 ha-118173 crio[676]: time="2024-08-06 07:31:41.625273933Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3b3079ef-0ef8-4aeb-869f-bfaa576376f0 name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 07:31:41 ha-118173 crio[676]: time="2024-08-06 07:31:41.625507065Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:10437bdde23713cae1b222edffa34be0e9583f43a83c01f10aae555f0dd7542e,PodSandboxId:3f725d083d59ecb8b65c8b09466e36c8b01dd5742fa8d97e108de4a49f47de04,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722929221668199667,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-rml5d,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4a82c0ca-1596-4dfe-a2f9-3a34f22480e5,},Annotations:map[string]string{io.kubernetes.container.hash: 950719af,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c60313490ee322d54c5b3908e628bff77d0ca10249f7253eb9cbd44745a805af,PodSandboxId:b221107c8052d9be54b23c0ee2e882f4adc57ade5813f27e7dfc4e4f71006663,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722929010372163680,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zscz4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef588a32-ac32-4e38-926e-6ee7a662a399,},Annotations:map[string]string{io.kubernetes.container.hash: 38e96eed,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:321504a9bbbfc57376fccab559a6ebc7388670dc0bf123b5f9fd41b1da99adaa,PodSandboxId:4a9fe2c539265caf233fd5f3778ebc3e6092ce121ddf647e3f72622e31022e53,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722929010343273638,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-t24ww,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
8effbe6c-3107-409e-9c94-dee4103d12b0,},Annotations:map[string]string{io.kubernetes.container.hash: 7604f64,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44b9ff2f36bf90e89da591e552ad006221f4c5a515009116c79504170f19368e,PodSandboxId:cabb1138777e527cb2a559ae6e3b154f442e8d3b8d44cc59ed6a4b9a49ce65ea,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING
,CreatedAt:1722929010265192394,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8379466d-0241-40e4-a262-d762b3cda2eb,},Annotations:map[string]string{io.kubernetes.container.hash: d079fdad,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8f5ae0da505fc7f79225ba7115c83b9e909128bc28df767fce1fc3ba7a01392,PodSandboxId:5bbdb233d391e1127588924ac263113c9aa0148b24fc7113df3ca0891d7e0fdc,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CON
TAINER_RUNNING,CreatedAt:1722928998315681833,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-n7jmb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa496bfb-7bf0-4ebc-adbd-36d992d8e279,},Annotations:map[string]string{io.kubernetes.container.hash: db2b7a8c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5a5bdf0b2a62163f6aff6abe864eedc47b461e233989f23ef7770c226763325,PodSandboxId:2ed57fdbf65f1846c200a67b50a1043fe8c70dd79c90d1a1c67e5e0634c8088b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722928994
187870823,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ksm6v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 594bc178-8319-4922-9132-69a9c6890b95,},Annotations:map[string]string{io.kubernetes.container.hash: 50a78a81,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ea977593bf7ff53f7419e821deaa1cf5813b4c116730030cd8d417d078be7fa,PodSandboxId:eb00561ba0cb5a9d0f8cd015251b78530e734c81712da3c4aeb67fbd1dc5a274,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172292897749
4204189,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-118173,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cabf9e912268f41f81e7fabcaf2337ce,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a94703035c76b778779a89b7fc9176c7a956ebec52628aa1720d26bf2a1c31f5,PodSandboxId:33f4dab8a223724c5140e304cb2bb6b2c32716d0818935a0a1f10a3e73c27f55,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722928974564889139,Labels:map[strin
g]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-118173,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6346700b562b8d3f4c18c66b3813bc28,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27f4fdf6fa77a51e1295c12b459cda687ed6ae68875218e24ba7959fb37814ed,PodSandboxId:5ebdb09ee65d6de5671cacfdefaaebdf7b5b00f25aa61461d39015ab5cb5578b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722928974548891631,Labels:map[string]s
tring{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-118173,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 080589493a4905d0fb10f52a1a0e25a5,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ffee4f9f096a33fc08f1f6b266ea17ad2181a58a153f7fb1703ded978b23a46,PodSandboxId:e8d8f5e72e2b25ee44de736fd829832c7731bada1325d1781559721e92200f54,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722928974494900073,Labels:map[string]string{io.kubernetes.c
ontainer.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-118173,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f74055e61dcf5f2385b1934a1a8e3b73,},Annotations:map[string]string{io.kubernetes.container.hash: 27fe2ed3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:306b97b37955570f0a18d38ff1afab7bb09e01c47a3418dba2aeed0f00d6052d,PodSandboxId:5055641759bdb7b0e57568f6d5310c72587ea715571f3fff96ff5d200be59dc2,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722928974442532867,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernet
es.pod.name: etcd-ha-118173,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65b7cf59bb7c42e61e37dcbebce4eec4,},Annotations:map[string]string{io.kubernetes.container.hash: fa683f08,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3b3079ef-0ef8-4aeb-869f-bfaa576376f0 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	10437bdde2371       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   4 minutes ago       Running             busybox                   0                   3f725d083d59e       busybox-fc5497c4f-rml5d
	c60313490ee32       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      8 minutes ago       Running             coredns                   0                   b221107c8052d       coredns-7db6d8ff4d-zscz4
	321504a9bbbfc       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      8 minutes ago       Running             coredns                   0                   4a9fe2c539265       coredns-7db6d8ff4d-t24ww
	44b9ff2f36bf9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      8 minutes ago       Running             storage-provisioner       0                   cabb1138777e5       storage-provisioner
	b8f5ae0da505f       docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3    8 minutes ago       Running             kindnet-cni               0                   5bbdb233d391e       kindnet-n7jmb
	e5a5bdf0b2a62       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      8 minutes ago       Running             kube-proxy                0                   2ed57fdbf65f1       kube-proxy-ksm6v
	9ea977593bf7f       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     8 minutes ago       Running             kube-vip                  0                   eb00561ba0cb5       kube-vip-ha-118173
	a94703035c76b       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      8 minutes ago       Running             kube-controller-manager   0                   33f4dab8a2237       kube-controller-manager-ha-118173
	27f4fdf6fa77a       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      8 minutes ago       Running             kube-scheduler            0                   5ebdb09ee65d6       kube-scheduler-ha-118173
	1ffee4f9f096a       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      8 minutes ago       Running             kube-apiserver            0                   e8d8f5e72e2b2       kube-apiserver-ha-118173
	306b97b379555       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      8 minutes ago       Running             etcd                      0                   5055641759bdb       etcd-ha-118173
	
	
	==> coredns [321504a9bbbfc57376fccab559a6ebc7388670dc0bf123b5f9fd41b1da99adaa] <==
	[INFO] 127.0.0.1:52647 - 6927 "HINFO IN 6474732985926963853.8121263455686486253. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010761994s
	[INFO] 10.244.2.2:37657 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000745056s
	[INFO] 10.244.2.2:34304 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.01427884s
	[INFO] 10.244.1.2:50320 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.000621303s
	[INFO] 10.244.1.2:50307 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001920609s
	[INFO] 10.244.0.4:35382 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.001483476s
	[INFO] 10.244.2.2:42914 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003426884s
	[INFO] 10.244.2.2:54544 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002889491s
	[INFO] 10.244.2.2:56241 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000153019s
	[INFO] 10.244.1.2:42127 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000287649s
	[INFO] 10.244.1.2:57726 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001778651s
	[INFO] 10.244.1.2:48193 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000186214s
	[INFO] 10.244.1.2:39745 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001882s
	[INFO] 10.244.0.4:43115 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00007628s
	[INFO] 10.244.0.4:43987 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001861604s
	[INFO] 10.244.0.4:36070 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000175723s
	[INFO] 10.244.0.4:57521 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000070761s
	[INFO] 10.244.2.2:54569 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000133483s
	[INFO] 10.244.2.2:45084 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000078772s
	[INFO] 10.244.0.4:53171 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000115977s
	[INFO] 10.244.0.4:46499 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000151186s
	[INFO] 10.244.1.2:46474 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000192738s
	[INFO] 10.244.1.2:42611 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000163773s
	[INFO] 10.244.0.4:56252 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000104237s
	[INFO] 10.244.0.4:53725 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000119201s
	
	
	==> coredns [c60313490ee322d54c5b3908e628bff77d0ca10249f7253eb9cbd44745a805af] <==
	[INFO] 10.244.2.2:39039 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000306066s
	[INFO] 10.244.1.2:52358 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000125862s
	[INFO] 10.244.1.2:36231 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001558919s
	[INFO] 10.244.1.2:48070 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000161985s
	[INFO] 10.244.1.2:52908 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000230784s
	[INFO] 10.244.0.4:40510 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000107856s
	[INFO] 10.244.0.4:53202 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001964559s
	[INFO] 10.244.0.4:37382 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000120987s
	[INFO] 10.244.0.4:55432 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000068014s
	[INFO] 10.244.2.2:55791 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000121334s
	[INFO] 10.244.2.2:41409 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000170119s
	[INFO] 10.244.1.2:38348 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000159643s
	[INFO] 10.244.1.2:59261 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000120304s
	[INFO] 10.244.1.2:34766 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001388s
	[INFO] 10.244.1.2:51650 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00006659s
	[INFO] 10.244.0.4:60377 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000100376s
	[INFO] 10.244.0.4:36616 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000046708s
	[INFO] 10.244.2.2:40205 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000126264s
	[INFO] 10.244.2.2:43822 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000154718s
	[INFO] 10.244.2.2:55803 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000127422s
	[INFO] 10.244.2.2:36078 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000107543s
	[INFO] 10.244.1.2:57979 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000156818s
	[INFO] 10.244.1.2:46537 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000172879s
	[INFO] 10.244.0.4:53503 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000186753s
	[INFO] 10.244.0.4:59777 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000063489s
	
	
	==> describe nodes <==
	Name:               ha-118173
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-118173
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e92cb06692f5ea1ba801d10d148e5e92e807f9c8
	                    minikube.k8s.io/name=ha-118173
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_06T07_23_01_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 06 Aug 2024 07:22:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-118173
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 06 Aug 2024 07:31:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 06 Aug 2024 07:27:36 +0000   Tue, 06 Aug 2024 07:22:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 06 Aug 2024 07:27:36 +0000   Tue, 06 Aug 2024 07:22:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 06 Aug 2024 07:27:36 +0000   Tue, 06 Aug 2024 07:22:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 06 Aug 2024 07:27:36 +0000   Tue, 06 Aug 2024 07:23:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.164
	  Hostname:    ha-118173
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 bc4f3dc5ed054dcb94a91cb696ade310
	  System UUID:                bc4f3dc5-ed05-4dcb-94a9-1cb696ade310
	  Boot ID:                    34e88bd3-feb2-44fb-9f4d-def7e685eb93
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-rml5d              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m45s
	  kube-system                 coredns-7db6d8ff4d-t24ww             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m28s
	  kube-system                 coredns-7db6d8ff4d-zscz4             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m28s
	  kube-system                 etcd-ha-118173                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         8m42s
	  kube-system                 kindnet-n7jmb                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      8m28s
	  kube-system                 kube-apiserver-ha-118173             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m41s
	  kube-system                 kube-controller-manager-ha-118173    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m41s
	  kube-system                 kube-proxy-ksm6v                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m28s
	  kube-system                 kube-scheduler-ha-118173             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m43s
	  kube-system                 kube-vip-ha-118173                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m43s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 8m27s  kube-proxy       
	  Normal  NodeHasSufficientMemory  8m48s  kubelet          Node ha-118173 status is now: NodeHasSufficientMemory
	  Normal  Starting                 8m41s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  8m41s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m41s  kubelet          Node ha-118173 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m41s  kubelet          Node ha-118173 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m41s  kubelet          Node ha-118173 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           8m29s  node-controller  Node ha-118173 event: Registered Node ha-118173 in Controller
	  Normal  NodeReady                8m12s  kubelet          Node ha-118173 status is now: NodeReady
	  Normal  RegisteredNode           6m10s  node-controller  Node ha-118173 event: Registered Node ha-118173 in Controller
	  Normal  RegisteredNode           4m57s  node-controller  Node ha-118173 event: Registered Node ha-118173 in Controller
	
	
	Name:               ha-118173-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-118173-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e92cb06692f5ea1ba801d10d148e5e92e807f9c8
	                    minikube.k8s.io/name=ha-118173
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_06T07_25_16_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 06 Aug 2024 07:25:13 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-118173-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 06 Aug 2024 07:28:16 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Tue, 06 Aug 2024 07:27:15 +0000   Tue, 06 Aug 2024 07:28:57 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Tue, 06 Aug 2024 07:27:15 +0000   Tue, 06 Aug 2024 07:28:57 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Tue, 06 Aug 2024 07:27:15 +0000   Tue, 06 Aug 2024 07:28:57 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Tue, 06 Aug 2024 07:27:15 +0000   Tue, 06 Aug 2024 07:28:57 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.203
	  Hostname:    ha-118173-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 1d472203b4d8448095b6313de6675a72
	  System UUID:                1d472203-b4d8-4480-95b6-313de6675a72
	  Boot ID:                    6ab46e7a-6f1a-4acf-9384-ca4c93b30dfc
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-j9d92                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m45s
	  kube-system                 etcd-ha-118173-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m26s
	  kube-system                 kindnet-z9w6l                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m28s
	  kube-system                 kube-apiserver-ha-118173-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m27s
	  kube-system                 kube-controller-manager-ha-118173-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m27s
	  kube-system                 kube-proxy-k8sq6                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m28s
	  kube-system                 kube-scheduler-ha-118173-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m26s
	  kube-system                 kube-vip-ha-118173-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m23s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m25s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  6m28s (x8 over 6m29s)  kubelet          Node ha-118173-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m28s (x8 over 6m29s)  kubelet          Node ha-118173-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m28s (x7 over 6m29s)  kubelet          Node ha-118173-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m28s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m24s                  node-controller  Node ha-118173-m02 event: Registered Node ha-118173-m02 in Controller
	  Normal  RegisteredNode           6m10s                  node-controller  Node ha-118173-m02 event: Registered Node ha-118173-m02 in Controller
	  Normal  RegisteredNode           4m57s                  node-controller  Node ha-118173-m02 event: Registered Node ha-118173-m02 in Controller
	  Normal  NodeNotReady             2m44s                  node-controller  Node ha-118173-m02 status is now: NodeNotReady
	
	
	Name:               ha-118173-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-118173-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e92cb06692f5ea1ba801d10d148e5e92e807f9c8
	                    minikube.k8s.io/name=ha-118173
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_06T07_26_30_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 06 Aug 2024 07:26:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-118173-m03
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 06 Aug 2024 07:31:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 06 Aug 2024 07:27:28 +0000   Tue, 06 Aug 2024 07:26:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 06 Aug 2024 07:27:28 +0000   Tue, 06 Aug 2024 07:26:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 06 Aug 2024 07:27:28 +0000   Tue, 06 Aug 2024 07:26:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 06 Aug 2024 07:27:28 +0000   Tue, 06 Aug 2024 07:26:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.5
	  Hostname:    ha-118173-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 323fad8ca5474069b5e10e092f6ed68a
	  System UUID:                323fad8c-a547-4069-b5e1-0e092f6ed68a
	  Boot ID:                    c712c8de-a26d-41b9-8050-14da15aeb29b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-852mp                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m45s
	  kube-system                 etcd-ha-118173-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m13s
	  kube-system                 kindnet-2bmzh                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m15s
	  kube-system                 kube-apiserver-ha-118173-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m14s
	  kube-system                 kube-controller-manager-ha-118173-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m14s
	  kube-system                 kube-proxy-sw4nj                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m15s
	  kube-system                 kube-scheduler-ha-118173-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m13s
	  kube-system                 kube-vip-ha-118173-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m10s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m10s                  kube-proxy       
	  Normal  RegisteredNode           5m15s                  node-controller  Node ha-118173-m03 event: Registered Node ha-118173-m03 in Controller
	  Normal  NodeHasSufficientMemory  5m15s (x8 over 5m15s)  kubelet          Node ha-118173-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m15s (x8 over 5m15s)  kubelet          Node ha-118173-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m15s (x7 over 5m15s)  kubelet          Node ha-118173-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m15s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m11s                  node-controller  Node ha-118173-m03 event: Registered Node ha-118173-m03 in Controller
	  Normal  RegisteredNode           4m58s                  node-controller  Node ha-118173-m03 event: Registered Node ha-118173-m03 in Controller
	
	
	Name:               ha-118173-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-118173-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e92cb06692f5ea1ba801d10d148e5e92e807f9c8
	                    minikube.k8s.io/name=ha-118173
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_06T07_27_39_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 06 Aug 2024 07:27:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-118173-m04
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 06 Aug 2024 07:31:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 06 Aug 2024 07:28:09 +0000   Tue, 06 Aug 2024 07:27:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 06 Aug 2024 07:28:09 +0000   Tue, 06 Aug 2024 07:27:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 06 Aug 2024 07:28:09 +0000   Tue, 06 Aug 2024 07:27:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 06 Aug 2024 07:28:09 +0000   Tue, 06 Aug 2024 07:27:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.107
	  Hostname:    ha-118173-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a0510d6c5dda47be93ffda6ba4fd5419
	  System UUID:                a0510d6c-5dda-47be-93ff-da6ba4fd5419
	  Boot ID:                    8e2de43d-b0eb-44c3-8d61-ce7909ecbc3f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-97v8n       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m4s
	  kube-system                 kube-proxy-q4x6p    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m4s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 3m59s                kube-proxy       
	  Normal  NodeHasSufficientMemory  4m4s (x2 over 4m4s)  kubelet          Node ha-118173-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m4s (x2 over 4m4s)  kubelet          Node ha-118173-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m4s (x2 over 4m4s)  kubelet          Node ha-118173-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m4s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m3s                 node-controller  Node ha-118173-m04 event: Registered Node ha-118173-m04 in Controller
	  Normal  RegisteredNode           4m1s                 node-controller  Node ha-118173-m04 event: Registered Node ha-118173-m04 in Controller
	  Normal  RegisteredNode           4m                   node-controller  Node ha-118173-m04 event: Registered Node ha-118173-m04 in Controller
	  Normal  NodeReady                3m44s                kubelet          Node ha-118173-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Aug 6 07:22] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050778] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040080] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.786324] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.540494] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.635705] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.212519] systemd-fstab-generator[593]: Ignoring "noauto" option for root device
	[  +0.057280] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.055993] systemd-fstab-generator[605]: Ignoring "noauto" option for root device
	[  +0.205541] systemd-fstab-generator[619]: Ignoring "noauto" option for root device
	[  +0.119779] systemd-fstab-generator[631]: Ignoring "noauto" option for root device
	[  +0.271137] systemd-fstab-generator[661]: Ignoring "noauto" option for root device
	[  +4.288191] systemd-fstab-generator[759]: Ignoring "noauto" option for root device
	[  +0.057342] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.007696] systemd-fstab-generator[935]: Ignoring "noauto" option for root device
	[  +1.004774] kauditd_printk_skb: 62 callbacks suppressed
	[  +5.981141] systemd-fstab-generator[1356]: Ignoring "noauto" option for root device
	[  +0.087170] kauditd_printk_skb: 35 callbacks suppressed
	[Aug 6 07:23] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.080054] kauditd_printk_skb: 35 callbacks suppressed
	[Aug 6 07:25] kauditd_printk_skb: 24 callbacks suppressed
	
	
	==> etcd [306b97b37955570f0a18d38ff1afab7bb09e01c47a3418dba2aeed0f00d6052d] <==
	{"level":"warn","ts":"2024-08-06T07:31:41.898861Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6a00153c0a3e6122","from":"6a00153c0a3e6122","remote-peer-id":"9fed855c7863c2a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-06T07:31:41.903885Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6a00153c0a3e6122","from":"6a00153c0a3e6122","remote-peer-id":"9fed855c7863c2a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-06T07:31:41.9047Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6a00153c0a3e6122","from":"6a00153c0a3e6122","remote-peer-id":"9fed855c7863c2a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-06T07:31:41.917092Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6a00153c0a3e6122","from":"6a00153c0a3e6122","remote-peer-id":"9fed855c7863c2a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-06T07:31:41.917998Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6a00153c0a3e6122","from":"6a00153c0a3e6122","remote-peer-id":"9fed855c7863c2a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-06T07:31:41.927875Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6a00153c0a3e6122","from":"6a00153c0a3e6122","remote-peer-id":"9fed855c7863c2a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-06T07:31:41.937226Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6a00153c0a3e6122","from":"6a00153c0a3e6122","remote-peer-id":"9fed855c7863c2a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-06T07:31:41.941347Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6a00153c0a3e6122","from":"6a00153c0a3e6122","remote-peer-id":"9fed855c7863c2a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-06T07:31:41.944374Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6a00153c0a3e6122","from":"6a00153c0a3e6122","remote-peer-id":"9fed855c7863c2a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-06T07:31:41.951245Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6a00153c0a3e6122","from":"6a00153c0a3e6122","remote-peer-id":"9fed855c7863c2a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-06T07:31:41.955276Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6a00153c0a3e6122","from":"6a00153c0a3e6122","remote-peer-id":"9fed855c7863c2a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-06T07:31:41.959185Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6a00153c0a3e6122","from":"6a00153c0a3e6122","remote-peer-id":"9fed855c7863c2a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-06T07:31:41.992302Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6a00153c0a3e6122","from":"6a00153c0a3e6122","remote-peer-id":"9fed855c7863c2a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-06T07:31:42.003911Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6a00153c0a3e6122","from":"6a00153c0a3e6122","remote-peer-id":"9fed855c7863c2a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-06T07:31:42.012789Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6a00153c0a3e6122","from":"6a00153c0a3e6122","remote-peer-id":"9fed855c7863c2a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-06T07:31:42.023796Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6a00153c0a3e6122","from":"6a00153c0a3e6122","remote-peer-id":"9fed855c7863c2a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-06T07:31:42.036864Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6a00153c0a3e6122","from":"6a00153c0a3e6122","remote-peer-id":"9fed855c7863c2a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-06T07:31:42.055893Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6a00153c0a3e6122","from":"6a00153c0a3e6122","remote-peer-id":"9fed855c7863c2a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-06T07:31:42.063791Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6a00153c0a3e6122","from":"6a00153c0a3e6122","remote-peer-id":"9fed855c7863c2a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-06T07:31:42.068107Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6a00153c0a3e6122","from":"6a00153c0a3e6122","remote-peer-id":"9fed855c7863c2a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-06T07:31:42.073109Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6a00153c0a3e6122","from":"6a00153c0a3e6122","remote-peer-id":"9fed855c7863c2a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-06T07:31:42.083093Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6a00153c0a3e6122","from":"6a00153c0a3e6122","remote-peer-id":"9fed855c7863c2a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-06T07:31:42.091398Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6a00153c0a3e6122","from":"6a00153c0a3e6122","remote-peer-id":"9fed855c7863c2a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-06T07:31:42.098291Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6a00153c0a3e6122","from":"6a00153c0a3e6122","remote-peer-id":"9fed855c7863c2a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-06T07:31:42.155866Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6a00153c0a3e6122","from":"6a00153c0a3e6122","remote-peer-id":"9fed855c7863c2a","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 07:31:42 up 9 min,  0 users,  load average: 0.18, 0.32, 0.19
	Linux ha-118173 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [b8f5ae0da505fc7f79225ba7115c83b9e909128bc28df767fce1fc3ba7a01392] <==
	I0806 07:31:09.282259       1 main.go:322] Node ha-118173-m02 has CIDR [10.244.1.0/24] 
	I0806 07:31:19.272151       1 main.go:295] Handling node with IPs: map[192.168.39.164:{}]
	I0806 07:31:19.272256       1 main.go:299] handling current node
	I0806 07:31:19.272301       1 main.go:295] Handling node with IPs: map[192.168.39.203:{}]
	I0806 07:31:19.272307       1 main.go:322] Node ha-118173-m02 has CIDR [10.244.1.0/24] 
	I0806 07:31:19.272499       1 main.go:295] Handling node with IPs: map[192.168.39.5:{}]
	I0806 07:31:19.272524       1 main.go:322] Node ha-118173-m03 has CIDR [10.244.2.0/24] 
	I0806 07:31:19.272657       1 main.go:295] Handling node with IPs: map[192.168.39.107:{}]
	I0806 07:31:19.272692       1 main.go:322] Node ha-118173-m04 has CIDR [10.244.3.0/24] 
	I0806 07:31:29.271876       1 main.go:295] Handling node with IPs: map[192.168.39.107:{}]
	I0806 07:31:29.271928       1 main.go:322] Node ha-118173-m04 has CIDR [10.244.3.0/24] 
	I0806 07:31:29.272083       1 main.go:295] Handling node with IPs: map[192.168.39.164:{}]
	I0806 07:31:29.272985       1 main.go:299] handling current node
	I0806 07:31:29.273042       1 main.go:295] Handling node with IPs: map[192.168.39.203:{}]
	I0806 07:31:29.273066       1 main.go:322] Node ha-118173-m02 has CIDR [10.244.1.0/24] 
	I0806 07:31:29.273284       1 main.go:295] Handling node with IPs: map[192.168.39.5:{}]
	I0806 07:31:29.273308       1 main.go:322] Node ha-118173-m03 has CIDR [10.244.2.0/24] 
	I0806 07:31:39.271797       1 main.go:295] Handling node with IPs: map[192.168.39.164:{}]
	I0806 07:31:39.271833       1 main.go:299] handling current node
	I0806 07:31:39.271852       1 main.go:295] Handling node with IPs: map[192.168.39.203:{}]
	I0806 07:31:39.271857       1 main.go:322] Node ha-118173-m02 has CIDR [10.244.1.0/24] 
	I0806 07:31:39.271990       1 main.go:295] Handling node with IPs: map[192.168.39.5:{}]
	I0806 07:31:39.272014       1 main.go:322] Node ha-118173-m03 has CIDR [10.244.2.0/24] 
	I0806 07:31:39.272162       1 main.go:295] Handling node with IPs: map[192.168.39.107:{}]
	I0806 07:31:39.272170       1 main.go:322] Node ha-118173-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [1ffee4f9f096a33fc08f1f6b266ea17ad2181a58a153f7fb1703ded978b23a46] <==
	I0806 07:22:59.114813       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0806 07:22:59.149016       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.164]
	I0806 07:22:59.151002       1 controller.go:615] quota admission added evaluator for: endpoints
	I0806 07:22:59.173271       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0806 07:22:59.374819       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0806 07:23:00.696250       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0806 07:23:00.737911       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0806 07:23:00.761423       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0806 07:23:13.330094       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0806 07:23:13.583838       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0806 07:27:03.129943       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:59664: use of closed network connection
	E0806 07:27:03.330300       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:59684: use of closed network connection
	E0806 07:27:03.526011       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:59706: use of closed network connection
	E0806 07:27:03.732229       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:59720: use of closed network connection
	E0806 07:27:03.918349       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:59730: use of closed network connection
	E0806 07:27:04.104013       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:59752: use of closed network connection
	E0806 07:27:04.287914       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:59770: use of closed network connection
	E0806 07:27:04.498130       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:59790: use of closed network connection
	E0806 07:27:04.689488       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:59806: use of closed network connection
	E0806 07:27:05.194787       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:59844: use of closed network connection
	E0806 07:27:05.380854       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:59872: use of closed network connection
	E0806 07:27:05.557717       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:59896: use of closed network connection
	E0806 07:27:05.740816       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55210: use of closed network connection
	E0806 07:27:05.917075       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55230: use of closed network connection
	W0806 07:28:29.139362       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.164 192.168.39.5]
	
	
	==> kube-controller-manager [a94703035c76b778779a89b7fc9176c7a956ebec52628aa1720d26bf2a1c31f5] <==
	I0806 07:26:57.272494       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="59.028µs"
	I0806 07:26:57.318015       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="17.119403ms"
	I0806 07:26:57.318438       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="67.238µs"
	I0806 07:26:57.383441       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.446412ms"
	I0806 07:26:57.383608       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="58.39µs"
	I0806 07:26:58.313952       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="68.708µs"
	I0806 07:26:58.329219       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="113.525µs"
	I0806 07:26:58.343952       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="53.183µs"
	I0806 07:26:58.348477       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="49.841µs"
	I0806 07:26:58.352388       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="68.761µs"
	I0806 07:26:58.369449       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="103.908µs"
	I0806 07:27:00.463853       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.356244ms"
	I0806 07:27:00.463984       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="43.866µs"
	I0806 07:27:00.645958       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="72.548µs"
	I0806 07:27:02.344534       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.10763ms"
	I0806 07:27:02.344697       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="51.953µs"
	I0806 07:27:02.662283       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="16.857433ms"
	I0806 07:27:02.662405       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="63.709µs"
	I0806 07:27:38.505040       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-118173-m04\" does not exist"
	I0806 07:27:38.530464       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-118173-m04" podCIDRs=["10.244.3.0/24"]
	I0806 07:27:42.873398       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-118173-m04"
	I0806 07:27:58.034326       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-118173-m04"
	I0806 07:28:57.904050       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-118173-m04"
	I0806 07:28:58.073064       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="50.758979ms"
	I0806 07:28:58.073278       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="41.967µs"
	
	
	==> kube-proxy [e5a5bdf0b2a62163f6aff6abe864eedc47b461e233989f23ef7770c226763325] <==
	I0806 07:23:14.558114       1 server_linux.go:69] "Using iptables proxy"
	I0806 07:23:14.576147       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.164"]
	I0806 07:23:14.624208       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0806 07:23:14.624261       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0806 07:23:14.624277       1 server_linux.go:165] "Using iptables Proxier"
	I0806 07:23:14.627661       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0806 07:23:14.627905       1 server.go:872] "Version info" version="v1.30.3"
	I0806 07:23:14.627938       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0806 07:23:14.630373       1 config.go:192] "Starting service config controller"
	I0806 07:23:14.630640       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0806 07:23:14.630690       1 config.go:101] "Starting endpoint slice config controller"
	I0806 07:23:14.630711       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0806 07:23:14.632764       1 config.go:319] "Starting node config controller"
	I0806 07:23:14.632788       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0806 07:23:14.730828       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0806 07:23:14.730891       1 shared_informer.go:320] Caches are synced for service config
	I0806 07:23:14.733864       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [27f4fdf6fa77a51e1295c12b459cda687ed6ae68875218e24ba7959fb37814ed] <==
	W0806 07:22:58.668683       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0806 07:22:58.668848       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0806 07:22:58.886973       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0806 07:22:58.887021       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0806 07:23:01.830478       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0806 07:26:27.185971       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-sw4nj\": pod kube-proxy-sw4nj is already assigned to node \"ha-118173-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-sw4nj" node="ha-118173-m03"
	E0806 07:26:27.186652       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod a75ef027-13c0-49c6-8580-6adb8afa497b(kube-system/kube-proxy-sw4nj) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-sw4nj"
	E0806 07:26:27.186739       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-sw4nj\": pod kube-proxy-sw4nj is already assigned to node \"ha-118173-m03\"" pod="kube-system/kube-proxy-sw4nj"
	I0806 07:26:27.186813       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-sw4nj" node="ha-118173-m03"
	E0806 07:26:27.188161       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-dzls6\": pod kindnet-dzls6 is already assigned to node \"ha-118173-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-dzls6" node="ha-118173-m03"
	E0806 07:26:27.188282       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 3ba3729c-f95a-4c31-8930-69abe66f0aa1(kube-system/kindnet-dzls6) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-dzls6"
	E0806 07:26:27.188414       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-dzls6\": pod kindnet-dzls6 is already assigned to node \"ha-118173-m03\"" pod="kube-system/kindnet-dzls6"
	I0806 07:26:27.188520       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-dzls6" node="ha-118173-m03"
	E0806 07:26:56.860057       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-j9d92\": pod busybox-fc5497c4f-j9d92 is already assigned to node \"ha-118173-m02\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-j9d92" node="ha-118173-m02"
	E0806 07:26:56.861298       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 32b327ab-b417-4a6f-9933-de5b4366d910(default/busybox-fc5497c4f-j9d92) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-j9d92"
	E0806 07:26:56.861407       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-j9d92\": pod busybox-fc5497c4f-j9d92 is already assigned to node \"ha-118173-m02\"" pod="default/busybox-fc5497c4f-j9d92"
	I0806 07:26:56.861533       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-j9d92" node="ha-118173-m02"
	E0806 07:26:58.329378       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-852mp\": pod busybox-fc5497c4f-852mp is already assigned to node \"ha-118173-m03\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-852mp" node="ha-118173-m03"
	E0806 07:26:58.329483       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-852mp\": pod busybox-fc5497c4f-852mp is already assigned to node \"ha-118173-m03\"" pod="default/busybox-fc5497c4f-852mp"
	E0806 07:27:38.588805       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-97v8n\": pod kindnet-97v8n is already assigned to node \"ha-118173-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-97v8n" node="ha-118173-m04"
	E0806 07:27:38.590540       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 9fc002fa-f45d-473d-8dc8-f2ea9ede38f4(kube-system/kindnet-97v8n) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-97v8n"
	E0806 07:27:38.592148       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-97v8n\": pod kindnet-97v8n is already assigned to node \"ha-118173-m04\"" pod="kube-system/kindnet-97v8n"
	I0806 07:27:38.592274       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-97v8n" node="ha-118173-m04"
	E0806 07:27:38.592953       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-zldnr\": pod kube-proxy-zldnr is already assigned to node \"ha-118173-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-zldnr" node="ha-118173-m04"
	E0806 07:27:38.593060       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-zldnr\": pod kube-proxy-zldnr is already assigned to node \"ha-118173-m04\"" pod="kube-system/kube-proxy-zldnr"
	
	
	==> kubelet <==
	Aug 06 07:27:00 ha-118173 kubelet[1363]: E0806 07:27:00.655627    1363 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 06 07:27:00 ha-118173 kubelet[1363]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 06 07:27:00 ha-118173 kubelet[1363]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 06 07:27:00 ha-118173 kubelet[1363]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 06 07:27:00 ha-118173 kubelet[1363]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 06 07:28:00 ha-118173 kubelet[1363]: E0806 07:28:00.650896    1363 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 06 07:28:00 ha-118173 kubelet[1363]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 06 07:28:00 ha-118173 kubelet[1363]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 06 07:28:00 ha-118173 kubelet[1363]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 06 07:28:00 ha-118173 kubelet[1363]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 06 07:29:00 ha-118173 kubelet[1363]: E0806 07:29:00.651876    1363 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 06 07:29:00 ha-118173 kubelet[1363]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 06 07:29:00 ha-118173 kubelet[1363]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 06 07:29:00 ha-118173 kubelet[1363]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 06 07:29:00 ha-118173 kubelet[1363]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 06 07:30:00 ha-118173 kubelet[1363]: E0806 07:30:00.647829    1363 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 06 07:30:00 ha-118173 kubelet[1363]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 06 07:30:00 ha-118173 kubelet[1363]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 06 07:30:00 ha-118173 kubelet[1363]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 06 07:30:00 ha-118173 kubelet[1363]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 06 07:31:00 ha-118173 kubelet[1363]: E0806 07:31:00.654275    1363 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 06 07:31:00 ha-118173 kubelet[1363]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 06 07:31:00 ha-118173 kubelet[1363]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 06 07:31:00 ha-118173 kubelet[1363]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 06 07:31:00 ha-118173 kubelet[1363]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-118173 -n ha-118173
helpers_test.go:261: (dbg) Run:  kubectl --context ha-118173 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (59.92s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (376.43s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-118173 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-118173 -v=7 --alsologtostderr
E0806 07:31:49.508902   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/functional-811691/client.crt: no such file or directory
ha_test.go:462: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p ha-118173 -v=7 --alsologtostderr: exit status 82 (2m1.902378272s)

                                                
                                                
-- stdout --
	* Stopping node "ha-118173-m04"  ...
	* Stopping node "ha-118173-m03"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0806 07:31:43.560708   37882 out.go:291] Setting OutFile to fd 1 ...
	I0806 07:31:43.560955   37882 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 07:31:43.560964   37882 out.go:304] Setting ErrFile to fd 2...
	I0806 07:31:43.560968   37882 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 07:31:43.561188   37882 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19370-13057/.minikube/bin
	I0806 07:31:43.561393   37882 out.go:298] Setting JSON to false
	I0806 07:31:43.561478   37882 mustload.go:65] Loading cluster: ha-118173
	I0806 07:31:43.561802   37882 config.go:182] Loaded profile config "ha-118173": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0806 07:31:43.561905   37882 profile.go:143] Saving config to /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/config.json ...
	I0806 07:31:43.562072   37882 mustload.go:65] Loading cluster: ha-118173
	I0806 07:31:43.562205   37882 config.go:182] Loaded profile config "ha-118173": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0806 07:31:43.562237   37882 stop.go:39] StopHost: ha-118173-m04
	I0806 07:31:43.562614   37882 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:31:43.562654   37882 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:31:43.579313   37882 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37469
	I0806 07:31:43.579779   37882 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:31:43.580314   37882 main.go:141] libmachine: Using API Version  1
	I0806 07:31:43.580338   37882 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:31:43.580646   37882 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:31:43.583158   37882 out.go:177] * Stopping node "ha-118173-m04"  ...
	I0806 07:31:43.584276   37882 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0806 07:31:43.584308   37882 main.go:141] libmachine: (ha-118173-m04) Calling .DriverName
	I0806 07:31:43.584540   37882 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0806 07:31:43.584571   37882 main.go:141] libmachine: (ha-118173-m04) Calling .GetSSHHostname
	I0806 07:31:43.587429   37882 main.go:141] libmachine: (ha-118173-m04) DBG | domain ha-118173-m04 has defined MAC address 52:54:00:8f:be:12 in network mk-ha-118173
	I0806 07:31:43.587857   37882 main.go:141] libmachine: (ha-118173-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:be:12", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:27:21 +0000 UTC Type:0 Mac:52:54:00:8f:be:12 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:ha-118173-m04 Clientid:01:52:54:00:8f:be:12}
	I0806 07:31:43.587897   37882 main.go:141] libmachine: (ha-118173-m04) DBG | domain ha-118173-m04 has defined IP address 192.168.39.107 and MAC address 52:54:00:8f:be:12 in network mk-ha-118173
	I0806 07:31:43.588163   37882 main.go:141] libmachine: (ha-118173-m04) Calling .GetSSHPort
	I0806 07:31:43.588324   37882 main.go:141] libmachine: (ha-118173-m04) Calling .GetSSHKeyPath
	I0806 07:31:43.588541   37882 main.go:141] libmachine: (ha-118173-m04) Calling .GetSSHUsername
	I0806 07:31:43.588724   37882 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/ha-118173-m04/id_rsa Username:docker}
	I0806 07:31:43.675876   37882 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0806 07:31:43.729290   37882 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0806 07:31:43.782520   37882 main.go:141] libmachine: Stopping "ha-118173-m04"...
	I0806 07:31:43.782545   37882 main.go:141] libmachine: (ha-118173-m04) Calling .GetState
	I0806 07:31:43.784013   37882 main.go:141] libmachine: (ha-118173-m04) Calling .Stop
	I0806 07:31:43.787276   37882 main.go:141] libmachine: (ha-118173-m04) Waiting for machine to stop 0/120
	I0806 07:31:45.000232   37882 main.go:141] libmachine: (ha-118173-m04) Calling .GetState
	I0806 07:31:45.001631   37882 main.go:141] libmachine: Machine "ha-118173-m04" was stopped.
	I0806 07:31:45.001651   37882 stop.go:75] duration metric: took 1.417385062s to stop
	I0806 07:31:45.001675   37882 stop.go:39] StopHost: ha-118173-m03
	I0806 07:31:45.002079   37882 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:31:45.002117   37882 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:31:45.017249   37882 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37937
	I0806 07:31:45.017684   37882 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:31:45.018177   37882 main.go:141] libmachine: Using API Version  1
	I0806 07:31:45.018203   37882 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:31:45.018516   37882 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:31:45.020509   37882 out.go:177] * Stopping node "ha-118173-m03"  ...
	I0806 07:31:45.021744   37882 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0806 07:31:45.021765   37882 main.go:141] libmachine: (ha-118173-m03) Calling .DriverName
	I0806 07:31:45.021988   37882 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0806 07:31:45.022013   37882 main.go:141] libmachine: (ha-118173-m03) Calling .GetSSHHostname
	I0806 07:31:45.024812   37882 main.go:141] libmachine: (ha-118173-m03) DBG | domain ha-118173-m03 has defined MAC address 52:54:00:d8:90:55 in network mk-ha-118173
	I0806 07:31:45.025236   37882 main.go:141] libmachine: (ha-118173-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:90:55", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:25:53 +0000 UTC Type:0 Mac:52:54:00:d8:90:55 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-118173-m03 Clientid:01:52:54:00:d8:90:55}
	I0806 07:31:45.025271   37882 main.go:141] libmachine: (ha-118173-m03) DBG | domain ha-118173-m03 has defined IP address 192.168.39.5 and MAC address 52:54:00:d8:90:55 in network mk-ha-118173
	I0806 07:31:45.025417   37882 main.go:141] libmachine: (ha-118173-m03) Calling .GetSSHPort
	I0806 07:31:45.025596   37882 main.go:141] libmachine: (ha-118173-m03) Calling .GetSSHKeyPath
	I0806 07:31:45.025754   37882 main.go:141] libmachine: (ha-118173-m03) Calling .GetSSHUsername
	I0806 07:31:45.025913   37882 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/ha-118173-m03/id_rsa Username:docker}
	I0806 07:31:45.108074   37882 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0806 07:31:45.161132   37882 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0806 07:31:45.216325   37882 main.go:141] libmachine: Stopping "ha-118173-m03"...
	I0806 07:31:45.216347   37882 main.go:141] libmachine: (ha-118173-m03) Calling .GetState
	I0806 07:31:45.218033   37882 main.go:141] libmachine: (ha-118173-m03) Calling .Stop
	I0806 07:31:45.221380   37882 main.go:141] libmachine: (ha-118173-m03) Waiting for machine to stop 0/120
	I0806 07:31:46.222709   37882 main.go:141] libmachine: (ha-118173-m03) Waiting for machine to stop 1/120
	I0806 07:31:47.224149   37882 main.go:141] libmachine: (ha-118173-m03) Waiting for machine to stop 2/120
	I0806 07:31:48.226289   37882 main.go:141] libmachine: (ha-118173-m03) Waiting for machine to stop 3/120
	I0806 07:31:49.228415   37882 main.go:141] libmachine: (ha-118173-m03) Waiting for machine to stop 4/120
	I0806 07:31:50.230275   37882 main.go:141] libmachine: (ha-118173-m03) Waiting for machine to stop 5/120
	I0806 07:31:51.232831   37882 main.go:141] libmachine: (ha-118173-m03) Waiting for machine to stop 6/120
	I0806 07:31:52.234584   37882 main.go:141] libmachine: (ha-118173-m03) Waiting for machine to stop 7/120
	I0806 07:31:53.236132   37882 main.go:141] libmachine: (ha-118173-m03) Waiting for machine to stop 8/120
	I0806 07:31:54.237783   37882 main.go:141] libmachine: (ha-118173-m03) Waiting for machine to stop 9/120
	I0806 07:31:55.240152   37882 main.go:141] libmachine: (ha-118173-m03) Waiting for machine to stop 10/120
	I0806 07:31:56.242058   37882 main.go:141] libmachine: (ha-118173-m03) Waiting for machine to stop 11/120
	I0806 07:31:57.243511   37882 main.go:141] libmachine: (ha-118173-m03) Waiting for machine to stop 12/120
	I0806 07:31:58.245600   37882 main.go:141] libmachine: (ha-118173-m03) Waiting for machine to stop 13/120
	I0806 07:31:59.247202   37882 main.go:141] libmachine: (ha-118173-m03) Waiting for machine to stop 14/120
	I0806 07:32:00.248884   37882 main.go:141] libmachine: (ha-118173-m03) Waiting for machine to stop 15/120
	I0806 07:32:01.250881   37882 main.go:141] libmachine: (ha-118173-m03) Waiting for machine to stop 16/120
	I0806 07:32:02.252809   37882 main.go:141] libmachine: (ha-118173-m03) Waiting for machine to stop 17/120
	I0806 07:32:03.254493   37882 main.go:141] libmachine: (ha-118173-m03) Waiting for machine to stop 18/120
	I0806 07:32:04.256062   37882 main.go:141] libmachine: (ha-118173-m03) Waiting for machine to stop 19/120
	I0806 07:32:05.258321   37882 main.go:141] libmachine: (ha-118173-m03) Waiting for machine to stop 20/120
	I0806 07:32:06.259661   37882 main.go:141] libmachine: (ha-118173-m03) Waiting for machine to stop 21/120
	I0806 07:32:07.261213   37882 main.go:141] libmachine: (ha-118173-m03) Waiting for machine to stop 22/120
	I0806 07:32:08.262487   37882 main.go:141] libmachine: (ha-118173-m03) Waiting for machine to stop 23/120
	I0806 07:32:09.264002   37882 main.go:141] libmachine: (ha-118173-m03) Waiting for machine to stop 24/120
	I0806 07:32:10.265901   37882 main.go:141] libmachine: (ha-118173-m03) Waiting for machine to stop 25/120
	I0806 07:32:11.267292   37882 main.go:141] libmachine: (ha-118173-m03) Waiting for machine to stop 26/120
	I0806 07:32:12.268535   37882 main.go:141] libmachine: (ha-118173-m03) Waiting for machine to stop 27/120
	I0806 07:32:13.269905   37882 main.go:141] libmachine: (ha-118173-m03) Waiting for machine to stop 28/120
	I0806 07:32:14.271372   37882 main.go:141] libmachine: (ha-118173-m03) Waiting for machine to stop 29/120
	I0806 07:32:15.273289   37882 main.go:141] libmachine: (ha-118173-m03) Waiting for machine to stop 30/120
	I0806 07:32:16.275057   37882 main.go:141] libmachine: (ha-118173-m03) Waiting for machine to stop 31/120
	I0806 07:32:17.276299   37882 main.go:141] libmachine: (ha-118173-m03) Waiting for machine to stop 32/120
	I0806 07:32:18.278040   37882 main.go:141] libmachine: (ha-118173-m03) Waiting for machine to stop 33/120
	I0806 07:32:19.279195   37882 main.go:141] libmachine: (ha-118173-m03) Waiting for machine to stop 34/120
	I0806 07:32:20.280506   37882 main.go:141] libmachine: (ha-118173-m03) Waiting for machine to stop 35/120
	I0806 07:32:21.281832   37882 main.go:141] libmachine: (ha-118173-m03) Waiting for machine to stop 36/120
	I0806 07:32:22.282968   37882 main.go:141] libmachine: (ha-118173-m03) Waiting for machine to stop 37/120
	I0806 07:32:23.284281   37882 main.go:141] libmachine: (ha-118173-m03) Waiting for machine to stop 38/120
	I0806 07:32:24.285722   37882 main.go:141] libmachine: (ha-118173-m03) Waiting for machine to stop 39/120
	I0806 07:32:25.287538   37882 main.go:141] libmachine: (ha-118173-m03) Waiting for machine to stop 40/120
	I0806 07:32:26.288670   37882 main.go:141] libmachine: (ha-118173-m03) Waiting for machine to stop 41/120
	I0806 07:32:27.290848   37882 main.go:141] libmachine: (ha-118173-m03) Waiting for machine to stop 42/120
	I0806 07:32:28.292260   37882 main.go:141] libmachine: (ha-118173-m03) Waiting for machine to stop 43/120
	I0806 07:32:29.293863   37882 main.go:141] libmachine: (ha-118173-m03) Waiting for machine to stop 44/120
	I0806 07:32:30.295340   37882 main.go:141] libmachine: (ha-118173-m03) Waiting for machine to stop 45/120
	I0806 07:32:31.296813   37882 main.go:141] libmachine: (ha-118173-m03) Waiting for machine to stop 46/120
	I0806 07:32:32.298951   37882 main.go:141] libmachine: (ha-118173-m03) Waiting for machine to stop 47/120
	I0806 07:32:33.301207   37882 main.go:141] libmachine: (ha-118173-m03) Waiting for machine to stop 48/120
	I0806 07:32:34.303050   37882 main.go:141] libmachine: (ha-118173-m03) Waiting for machine to stop 49/120
	I0806 07:32:35.304313   37882 main.go:141] libmachine: (ha-118173-m03) Waiting for machine to stop 50/120
	I0806 07:32:36.305733   37882 main.go:141] libmachine: (ha-118173-m03) Waiting for machine to stop 51/120
	I0806 07:32:37.307017   37882 main.go:141] libmachine: (ha-118173-m03) Waiting for machine to stop 52/120
	I0806 07:32:38.308472   37882 main.go:141] libmachine: (ha-118173-m03) Waiting for machine to stop 53/120
	I0806 07:32:39.309785   37882 main.go:141] libmachine: (ha-118173-m03) Waiting for machine to stop 54/120
	I0806 07:32:40.311373   37882 main.go:141] libmachine: (ha-118173-m03) Waiting for machine to stop 55/120
	I0806 07:32:41.312787   37882 main.go:141] libmachine: (ha-118173-m03) Waiting for machine to stop 56/120
	I0806 07:32:42.314742   37882 main.go:141] libmachine: (ha-118173-m03) Waiting for machine to stop 57/120
	I0806 07:32:43.315988   37882 main.go:141] libmachine: (ha-118173-m03) Waiting for machine to stop 58/120
	I0806 07:32:44.317487   37882 main.go:141] libmachine: (ha-118173-m03) Waiting for machine to stop 59/120
	I0806 07:32:45.319408   37882 main.go:141] libmachine: (ha-118173-m03) Waiting for machine to stop 60/120
	I0806 07:32:46.320909   37882 main.go:141] libmachine: (ha-118173-m03) Waiting for machine to stop 61/120
	I0806 07:32:47.322187   37882 main.go:141] libmachine: (ha-118173-m03) Waiting for machine to stop 62/120
	I0806 07:32:48.323629   37882 main.go:141] libmachine: (ha-118173-m03) Waiting for machine to stop 63/120
	I0806 07:32:49.325148   37882 main.go:141] libmachine: (ha-118173-m03) Waiting for machine to stop 64/120
	I0806 07:32:50.326884   37882 main.go:141] libmachine: (ha-118173-m03) Waiting for machine to stop 65/120
	I0806 07:32:51.328252   37882 main.go:141] libmachine: (ha-118173-m03) Waiting for machine to stop 66/120
	I0806 07:32:52.329721   37882 main.go:141] libmachine: (ha-118173-m03) Waiting for machine to stop 67/120
	I0806 07:32:53.331449   37882 main.go:141] libmachine: (ha-118173-m03) Waiting for machine to stop 68/120
	I0806 07:32:54.333089   37882 main.go:141] libmachine: (ha-118173-m03) Waiting for machine to stop 69/120
	I0806 07:32:55.334850   37882 main.go:141] libmachine: (ha-118173-m03) Waiting for machine to stop 70/120
	I0806 07:32:56.336192   37882 main.go:141] libmachine: (ha-118173-m03) Waiting for machine to stop 71/120
	I0806 07:32:57.337552   37882 main.go:141] libmachine: (ha-118173-m03) Waiting for machine to stop 72/120
	I0806 07:32:58.339067   37882 main.go:141] libmachine: (ha-118173-m03) Waiting for machine to stop 73/120
	I0806 07:32:59.340680   37882 main.go:141] libmachine: (ha-118173-m03) Waiting for machine to stop 74/120
	I0806 07:33:00.342997   37882 main.go:141] libmachine: (ha-118173-m03) Waiting for machine to stop 75/120
	I0806 07:33:01.344456   37882 main.go:141] libmachine: (ha-118173-m03) Waiting for machine to stop 76/120
	I0806 07:33:02.345782   37882 main.go:141] libmachine: (ha-118173-m03) Waiting for machine to stop 77/120
	I0806 07:33:03.347113   37882 main.go:141] libmachine: (ha-118173-m03) Waiting for machine to stop 78/120
	I0806 07:33:04.348556   37882 main.go:141] libmachine: (ha-118173-m03) Waiting for machine to stop 79/120
	I0806 07:33:05.350805   37882 main.go:141] libmachine: (ha-118173-m03) Waiting for machine to stop 80/120
	I0806 07:33:06.352457   37882 main.go:141] libmachine: (ha-118173-m03) Waiting for machine to stop 81/120
	I0806 07:33:07.353933   37882 main.go:141] libmachine: (ha-118173-m03) Waiting for machine to stop 82/120
	I0806 07:33:08.355441   37882 main.go:141] libmachine: (ha-118173-m03) Waiting for machine to stop 83/120
	I0806 07:33:09.357270   37882 main.go:141] libmachine: (ha-118173-m03) Waiting for machine to stop 84/120
	I0806 07:33:10.359058   37882 main.go:141] libmachine: (ha-118173-m03) Waiting for machine to stop 85/120
	I0806 07:33:11.360430   37882 main.go:141] libmachine: (ha-118173-m03) Waiting for machine to stop 86/120
	I0806 07:33:12.361803   37882 main.go:141] libmachine: (ha-118173-m03) Waiting for machine to stop 87/120
	I0806 07:33:13.363084   37882 main.go:141] libmachine: (ha-118173-m03) Waiting for machine to stop 88/120
	I0806 07:33:14.364913   37882 main.go:141] libmachine: (ha-118173-m03) Waiting for machine to stop 89/120
	I0806 07:33:15.366894   37882 main.go:141] libmachine: (ha-118173-m03) Waiting for machine to stop 90/120
	I0806 07:33:16.368216   37882 main.go:141] libmachine: (ha-118173-m03) Waiting for machine to stop 91/120
	I0806 07:33:17.369588   37882 main.go:141] libmachine: (ha-118173-m03) Waiting for machine to stop 92/120
	I0806 07:33:18.370925   37882 main.go:141] libmachine: (ha-118173-m03) Waiting for machine to stop 93/120
	I0806 07:33:19.373094   37882 main.go:141] libmachine: (ha-118173-m03) Waiting for machine to stop 94/120
	I0806 07:33:20.375173   37882 main.go:141] libmachine: (ha-118173-m03) Waiting for machine to stop 95/120
	I0806 07:33:21.376553   37882 main.go:141] libmachine: (ha-118173-m03) Waiting for machine to stop 96/120
	I0806 07:33:22.377883   37882 main.go:141] libmachine: (ha-118173-m03) Waiting for machine to stop 97/120
	I0806 07:33:23.379389   37882 main.go:141] libmachine: (ha-118173-m03) Waiting for machine to stop 98/120
	I0806 07:33:24.380835   37882 main.go:141] libmachine: (ha-118173-m03) Waiting for machine to stop 99/120
	I0806 07:33:25.382884   37882 main.go:141] libmachine: (ha-118173-m03) Waiting for machine to stop 100/120
	I0806 07:33:26.384576   37882 main.go:141] libmachine: (ha-118173-m03) Waiting for machine to stop 101/120
	I0806 07:33:27.386357   37882 main.go:141] libmachine: (ha-118173-m03) Waiting for machine to stop 102/120
	I0806 07:33:28.387833   37882 main.go:141] libmachine: (ha-118173-m03) Waiting for machine to stop 103/120
	I0806 07:33:29.389279   37882 main.go:141] libmachine: (ha-118173-m03) Waiting for machine to stop 104/120
	I0806 07:33:30.391046   37882 main.go:141] libmachine: (ha-118173-m03) Waiting for machine to stop 105/120
	I0806 07:33:31.392697   37882 main.go:141] libmachine: (ha-118173-m03) Waiting for machine to stop 106/120
	I0806 07:33:32.395131   37882 main.go:141] libmachine: (ha-118173-m03) Waiting for machine to stop 107/120
	I0806 07:33:33.396813   37882 main.go:141] libmachine: (ha-118173-m03) Waiting for machine to stop 108/120
	I0806 07:33:34.398370   37882 main.go:141] libmachine: (ha-118173-m03) Waiting for machine to stop 109/120
	I0806 07:33:35.400109   37882 main.go:141] libmachine: (ha-118173-m03) Waiting for machine to stop 110/120
	I0806 07:33:36.401530   37882 main.go:141] libmachine: (ha-118173-m03) Waiting for machine to stop 111/120
	I0806 07:33:37.402884   37882 main.go:141] libmachine: (ha-118173-m03) Waiting for machine to stop 112/120
	I0806 07:33:38.404673   37882 main.go:141] libmachine: (ha-118173-m03) Waiting for machine to stop 113/120
	I0806 07:33:39.406325   37882 main.go:141] libmachine: (ha-118173-m03) Waiting for machine to stop 114/120
	I0806 07:33:40.408166   37882 main.go:141] libmachine: (ha-118173-m03) Waiting for machine to stop 115/120
	I0806 07:33:41.409652   37882 main.go:141] libmachine: (ha-118173-m03) Waiting for machine to stop 116/120
	I0806 07:33:42.411081   37882 main.go:141] libmachine: (ha-118173-m03) Waiting for machine to stop 117/120
	I0806 07:33:43.412624   37882 main.go:141] libmachine: (ha-118173-m03) Waiting for machine to stop 118/120
	I0806 07:33:44.414167   37882 main.go:141] libmachine: (ha-118173-m03) Waiting for machine to stop 119/120
	I0806 07:33:45.415108   37882 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0806 07:33:45.415166   37882 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0806 07:33:45.417163   37882 out.go:177] 
	W0806 07:33:45.418461   37882 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0806 07:33:45.418477   37882 out.go:239] * 
	* 
	W0806 07:33:45.420671   37882 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0806 07:33:45.423489   37882 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:464: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p ha-118173 -v=7 --alsologtostderr" : exit status 82
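The args echoed in the failure message above come from the preceding node list step; the command that actually exited with status 82 is the stop invocation shown at the top of the stderr block. That stderr shows minikube backing up /etc/cni and /etc/kubernetes to /var/lib/minikube/backup, requesting a stop, and then polling the machine state up to 120 times ("Waiting for machine to stop N/120") before giving up with GUEST_STOP_TIMEOUT. The following is a minimal Go sketch of that wait-loop pattern; the function names and the one-second poll interval are assumptions for illustration, not minikube's actual stop.go code.

	package main
	
	import (
		"errors"
		"fmt"
		"time"
	)
	
	// vmState stands in for the state string a libmachine driver reports.
	type vmState string
	
	const (
		stateRunning vmState = "Running"
		stateStopped vmState = "Stopped"
	)
	
	// requestStop and getState are hypothetical placeholders for the driver's
	// Stop and GetState calls; a real driver would issue a guest shutdown and
	// query the hypervisor for the current state.
	func requestStop()      {}
	func getState() vmState { return stateRunning }
	
	// stopWithTimeout mirrors the pattern visible in the log: ask the machine
	// to stop, then poll once per second for up to maxAttempts before
	// reporting a timeout error like the one behind exit status 82.
	func stopWithTimeout(maxAttempts int) error {
		requestStop()
		for i := 0; i < maxAttempts; i++ {
			fmt.Printf("Waiting for machine to stop %d/%d\n", i, maxAttempts)
			if getState() == stateStopped {
				return nil
			}
			time.Sleep(1 * time.Second)
		}
		return errors.New(`unable to stop vm, current state "Running"`)
	}
	
	func main() {
		// The run above used 120 attempts; use a small number when experimenting.
		if err := stopWithTimeout(3); err != nil {
			fmt.Println("stop err:", err)
		}
	}

In the log, the counter runs from 0/120 through 119/120 for node ha-118173-m03 and the state is still "Running" at the end, which is what produces the GUEST_STOP_TIMEOUT exit.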
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-118173 --wait=true -v=7 --alsologtostderr
E0806 07:34:13.203749   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/addons-505457/client.crt: no such file or directory
E0806 07:35:36.252332   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/addons-505457/client.crt: no such file or directory
E0806 07:36:21.825047   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/functional-811691/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-118173 --wait=true -v=7 --alsologtostderr: (4m11.755292312s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-118173
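For reference, the command sequence this test exercises (node list, stop, start --wait=true, node list) can be reproduced outside the harness. The sketch below is a hypothetical stand-in for the test's run helper; the binary path, profile name, and flags are taken from the "(dbg) Run:" lines in this report, and everything else is illustrative.

	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	// run executes one command and returns its combined output, roughly what
	// the "(dbg) Run:" helper lines above wrap.
	func run(name string, args ...string) (string, error) {
		out, err := exec.Command(name, args...).CombinedOutput()
		return string(out), err
	}
	
	func main() {
		// Binary path and profile name as they appear in this report; adjust as needed.
		bin := "out/minikube-linux-amd64"
		profile := "ha-118173"
	
		// Same sequence the test exercises: list nodes, stop, restart with --wait, list again.
		steps := [][]string{
			{"node", "list", "-p", profile, "-v=7", "--alsologtostderr"},
			{"stop", "-p", profile, "-v=7", "--alsologtostderr"},
			{"start", "-p", profile, "--wait=true", "-v=7", "--alsologtostderr"},
			{"node", "list", "-p", profile},
		}
		for _, args := range steps {
			out, err := run(bin, args...)
			if err != nil {
				fmt.Printf("command %v failed: %v\n", args, err)
			}
			fmt.Print(out)
		}
	}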
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-118173 -n ha-118173
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-118173 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-118173 logs -n 25: (1.991632763s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-118173 cp ha-118173-m03:/home/docker/cp-test.txt                              | ha-118173 | jenkins | v1.33.1 | 06 Aug 24 07:28 UTC | 06 Aug 24 07:28 UTC |
	|         | ha-118173-m02:/home/docker/cp-test_ha-118173-m03_ha-118173-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-118173 ssh -n                                                                 | ha-118173 | jenkins | v1.33.1 | 06 Aug 24 07:28 UTC | 06 Aug 24 07:28 UTC |
	|         | ha-118173-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-118173 ssh -n ha-118173-m02 sudo cat                                          | ha-118173 | jenkins | v1.33.1 | 06 Aug 24 07:28 UTC | 06 Aug 24 07:28 UTC |
	|         | /home/docker/cp-test_ha-118173-m03_ha-118173-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-118173 cp ha-118173-m03:/home/docker/cp-test.txt                              | ha-118173 | jenkins | v1.33.1 | 06 Aug 24 07:28 UTC | 06 Aug 24 07:28 UTC |
	|         | ha-118173-m04:/home/docker/cp-test_ha-118173-m03_ha-118173-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-118173 ssh -n                                                                 | ha-118173 | jenkins | v1.33.1 | 06 Aug 24 07:28 UTC | 06 Aug 24 07:28 UTC |
	|         | ha-118173-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-118173 ssh -n ha-118173-m04 sudo cat                                          | ha-118173 | jenkins | v1.33.1 | 06 Aug 24 07:28 UTC | 06 Aug 24 07:28 UTC |
	|         | /home/docker/cp-test_ha-118173-m03_ha-118173-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-118173 cp testdata/cp-test.txt                                                | ha-118173 | jenkins | v1.33.1 | 06 Aug 24 07:28 UTC | 06 Aug 24 07:28 UTC |
	|         | ha-118173-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-118173 ssh -n                                                                 | ha-118173 | jenkins | v1.33.1 | 06 Aug 24 07:28 UTC | 06 Aug 24 07:28 UTC |
	|         | ha-118173-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-118173 cp ha-118173-m04:/home/docker/cp-test.txt                              | ha-118173 | jenkins | v1.33.1 | 06 Aug 24 07:28 UTC | 06 Aug 24 07:28 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile4204706009/001/cp-test_ha-118173-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-118173 ssh -n                                                                 | ha-118173 | jenkins | v1.33.1 | 06 Aug 24 07:28 UTC | 06 Aug 24 07:28 UTC |
	|         | ha-118173-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-118173 cp ha-118173-m04:/home/docker/cp-test.txt                              | ha-118173 | jenkins | v1.33.1 | 06 Aug 24 07:28 UTC | 06 Aug 24 07:28 UTC |
	|         | ha-118173:/home/docker/cp-test_ha-118173-m04_ha-118173.txt                       |           |         |         |                     |                     |
	| ssh     | ha-118173 ssh -n                                                                 | ha-118173 | jenkins | v1.33.1 | 06 Aug 24 07:28 UTC | 06 Aug 24 07:28 UTC |
	|         | ha-118173-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-118173 ssh -n ha-118173 sudo cat                                              | ha-118173 | jenkins | v1.33.1 | 06 Aug 24 07:28 UTC | 06 Aug 24 07:28 UTC |
	|         | /home/docker/cp-test_ha-118173-m04_ha-118173.txt                                 |           |         |         |                     |                     |
	| cp      | ha-118173 cp ha-118173-m04:/home/docker/cp-test.txt                              | ha-118173 | jenkins | v1.33.1 | 06 Aug 24 07:28 UTC | 06 Aug 24 07:28 UTC |
	|         | ha-118173-m02:/home/docker/cp-test_ha-118173-m04_ha-118173-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-118173 ssh -n                                                                 | ha-118173 | jenkins | v1.33.1 | 06 Aug 24 07:28 UTC | 06 Aug 24 07:28 UTC |
	|         | ha-118173-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-118173 ssh -n ha-118173-m02 sudo cat                                          | ha-118173 | jenkins | v1.33.1 | 06 Aug 24 07:28 UTC | 06 Aug 24 07:28 UTC |
	|         | /home/docker/cp-test_ha-118173-m04_ha-118173-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-118173 cp ha-118173-m04:/home/docker/cp-test.txt                              | ha-118173 | jenkins | v1.33.1 | 06 Aug 24 07:28 UTC | 06 Aug 24 07:28 UTC |
	|         | ha-118173-m03:/home/docker/cp-test_ha-118173-m04_ha-118173-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-118173 ssh -n                                                                 | ha-118173 | jenkins | v1.33.1 | 06 Aug 24 07:28 UTC | 06 Aug 24 07:28 UTC |
	|         | ha-118173-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-118173 ssh -n ha-118173-m03 sudo cat                                          | ha-118173 | jenkins | v1.33.1 | 06 Aug 24 07:28 UTC | 06 Aug 24 07:28 UTC |
	|         | /home/docker/cp-test_ha-118173-m04_ha-118173-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-118173 node stop m02 -v=7                                                     | ha-118173 | jenkins | v1.33.1 | 06 Aug 24 07:28 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-118173 node start m02 -v=7                                                    | ha-118173 | jenkins | v1.33.1 | 06 Aug 24 07:30 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-118173 -v=7                                                           | ha-118173 | jenkins | v1.33.1 | 06 Aug 24 07:31 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-118173 -v=7                                                                | ha-118173 | jenkins | v1.33.1 | 06 Aug 24 07:31 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-118173 --wait=true -v=7                                                    | ha-118173 | jenkins | v1.33.1 | 06 Aug 24 07:33 UTC | 06 Aug 24 07:37 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-118173                                                                | ha-118173 | jenkins | v1.33.1 | 06 Aug 24 07:37 UTC |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/06 07:33:45
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0806 07:33:45.467346   38357 out.go:291] Setting OutFile to fd 1 ...
	I0806 07:33:45.467566   38357 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 07:33:45.467575   38357 out.go:304] Setting ErrFile to fd 2...
	I0806 07:33:45.467579   38357 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 07:33:45.467734   38357 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19370-13057/.minikube/bin
	I0806 07:33:45.468237   38357 out.go:298] Setting JSON to false
	I0806 07:33:45.469136   38357 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4571,"bootTime":1722925054,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0806 07:33:45.469198   38357 start.go:139] virtualization: kvm guest
	I0806 07:33:45.471400   38357 out.go:177] * [ha-118173] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0806 07:33:45.472796   38357 notify.go:220] Checking for updates...
	I0806 07:33:45.472826   38357 out.go:177]   - MINIKUBE_LOCATION=19370
	I0806 07:33:45.474217   38357 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0806 07:33:45.475400   38357 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19370-13057/kubeconfig
	I0806 07:33:45.476631   38357 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19370-13057/.minikube
	I0806 07:33:45.477837   38357 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0806 07:33:45.479074   38357 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0806 07:33:45.480834   38357 config.go:182] Loaded profile config "ha-118173": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0806 07:33:45.480913   38357 driver.go:392] Setting default libvirt URI to qemu:///system
	I0806 07:33:45.481280   38357 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:33:45.481338   38357 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:33:45.496263   38357 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32859
	I0806 07:33:45.496668   38357 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:33:45.497252   38357 main.go:141] libmachine: Using API Version  1
	I0806 07:33:45.497271   38357 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:33:45.497669   38357 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:33:45.497845   38357 main.go:141] libmachine: (ha-118173) Calling .DriverName
	I0806 07:33:45.533359   38357 out.go:177] * Using the kvm2 driver based on existing profile
	I0806 07:33:45.534521   38357 start.go:297] selected driver: kvm2
	I0806 07:33:45.534540   38357 start.go:901] validating driver "kvm2" against &{Name:ha-118173 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-118173 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.164 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.203 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.107 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 07:33:45.534770   38357 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0806 07:33:45.535137   38357 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 07:33:45.535205   38357 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19370-13057/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0806 07:33:45.549609   38357 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0806 07:33:45.550290   38357 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0806 07:33:45.550353   38357 cni.go:84] Creating CNI manager for ""
	I0806 07:33:45.550365   38357 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0806 07:33:45.550415   38357 start.go:340] cluster config:
	{Name:ha-118173 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-118173 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.164 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.203 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.107 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 07:33:45.550554   38357 iso.go:125] acquiring lock: {Name:mke58d71de28babe305c5eb0c31c274e9713bb3f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 07:33:45.552440   38357 out.go:177] * Starting "ha-118173" primary control-plane node in "ha-118173" cluster
	I0806 07:33:45.553602   38357 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0806 07:33:45.553654   38357 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19370-13057/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0806 07:33:45.553665   38357 cache.go:56] Caching tarball of preloaded images
	I0806 07:33:45.553764   38357 preload.go:172] Found /home/jenkins/minikube-integration/19370-13057/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0806 07:33:45.553775   38357 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0806 07:33:45.553901   38357 profile.go:143] Saving config to /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/config.json ...
	I0806 07:33:45.554132   38357 start.go:360] acquireMachinesLock for ha-118173: {Name:mkc602ac9c8cc3360dcc0fcb8104303837aaab61 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0806 07:33:45.554185   38357 start.go:364] duration metric: took 33.322µs to acquireMachinesLock for "ha-118173"
	I0806 07:33:45.554201   38357 start.go:96] Skipping create...Using existing machine configuration
	I0806 07:33:45.554212   38357 fix.go:54] fixHost starting: 
	I0806 07:33:45.554480   38357 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:33:45.554514   38357 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:33:45.568732   38357 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33065
	I0806 07:33:45.569141   38357 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:33:45.569593   38357 main.go:141] libmachine: Using API Version  1
	I0806 07:33:45.569619   38357 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:33:45.570002   38357 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:33:45.570255   38357 main.go:141] libmachine: (ha-118173) Calling .DriverName
	I0806 07:33:45.570417   38357 main.go:141] libmachine: (ha-118173) Calling .GetState
	I0806 07:33:45.572072   38357 fix.go:112] recreateIfNeeded on ha-118173: state=Running err=<nil>
	W0806 07:33:45.572095   38357 fix.go:138] unexpected machine state, will restart: <nil>
	I0806 07:33:45.574308   38357 out.go:177] * Updating the running kvm2 "ha-118173" VM ...
	I0806 07:33:45.575981   38357 machine.go:94] provisionDockerMachine start ...
	I0806 07:33:45.575998   38357 main.go:141] libmachine: (ha-118173) Calling .DriverName
	I0806 07:33:45.576190   38357 main.go:141] libmachine: (ha-118173) Calling .GetSSHHostname
	I0806 07:33:45.578945   38357 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:33:45.579402   38357 main.go:141] libmachine: (ha-118173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:df:f0", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:22:33 +0000 UTC Type:0 Mac:52:54:00:ef:df:f0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:ha-118173 Clientid:01:52:54:00:ef:df:f0}
	I0806 07:33:45.579447   38357 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined IP address 192.168.39.164 and MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:33:45.579601   38357 main.go:141] libmachine: (ha-118173) Calling .GetSSHPort
	I0806 07:33:45.579818   38357 main.go:141] libmachine: (ha-118173) Calling .GetSSHKeyPath
	I0806 07:33:45.579973   38357 main.go:141] libmachine: (ha-118173) Calling .GetSSHKeyPath
	I0806 07:33:45.580108   38357 main.go:141] libmachine: (ha-118173) Calling .GetSSHUsername
	I0806 07:33:45.580325   38357 main.go:141] libmachine: Using SSH client type: native
	I0806 07:33:45.580610   38357 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.164 22 <nil> <nil>}
	I0806 07:33:45.580627   38357 main.go:141] libmachine: About to run SSH command:
	hostname
	I0806 07:33:45.691858   38357 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-118173
	
	I0806 07:33:45.691896   38357 main.go:141] libmachine: (ha-118173) Calling .GetMachineName
	I0806 07:33:45.692164   38357 buildroot.go:166] provisioning hostname "ha-118173"
	I0806 07:33:45.692191   38357 main.go:141] libmachine: (ha-118173) Calling .GetMachineName
	I0806 07:33:45.692438   38357 main.go:141] libmachine: (ha-118173) Calling .GetSSHHostname
	I0806 07:33:45.695554   38357 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:33:45.696024   38357 main.go:141] libmachine: (ha-118173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:df:f0", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:22:33 +0000 UTC Type:0 Mac:52:54:00:ef:df:f0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:ha-118173 Clientid:01:52:54:00:ef:df:f0}
	I0806 07:33:45.696052   38357 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined IP address 192.168.39.164 and MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:33:45.696268   38357 main.go:141] libmachine: (ha-118173) Calling .GetSSHPort
	I0806 07:33:45.696454   38357 main.go:141] libmachine: (ha-118173) Calling .GetSSHKeyPath
	I0806 07:33:45.696610   38357 main.go:141] libmachine: (ha-118173) Calling .GetSSHKeyPath
	I0806 07:33:45.696748   38357 main.go:141] libmachine: (ha-118173) Calling .GetSSHUsername
	I0806 07:33:45.696925   38357 main.go:141] libmachine: Using SSH client type: native
	I0806 07:33:45.697127   38357 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.164 22 <nil> <nil>}
	I0806 07:33:45.697141   38357 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-118173 && echo "ha-118173" | sudo tee /etc/hostname
	I0806 07:33:45.818537   38357 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-118173
	
	I0806 07:33:45.818562   38357 main.go:141] libmachine: (ha-118173) Calling .GetSSHHostname
	I0806 07:33:45.821378   38357 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:33:45.821766   38357 main.go:141] libmachine: (ha-118173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:df:f0", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:22:33 +0000 UTC Type:0 Mac:52:54:00:ef:df:f0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:ha-118173 Clientid:01:52:54:00:ef:df:f0}
	I0806 07:33:45.821793   38357 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined IP address 192.168.39.164 and MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:33:45.822014   38357 main.go:141] libmachine: (ha-118173) Calling .GetSSHPort
	I0806 07:33:45.822255   38357 main.go:141] libmachine: (ha-118173) Calling .GetSSHKeyPath
	I0806 07:33:45.822410   38357 main.go:141] libmachine: (ha-118173) Calling .GetSSHKeyPath
	I0806 07:33:45.822553   38357 main.go:141] libmachine: (ha-118173) Calling .GetSSHUsername
	I0806 07:33:45.822760   38357 main.go:141] libmachine: Using SSH client type: native
	I0806 07:33:45.822951   38357 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.164 22 <nil> <nil>}
	I0806 07:33:45.822973   38357 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-118173' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-118173/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-118173' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0806 07:33:45.925623   38357 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0806 07:33:45.925656   38357 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19370-13057/.minikube CaCertPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19370-13057/.minikube}
	I0806 07:33:45.925701   38357 buildroot.go:174] setting up certificates
	I0806 07:33:45.925714   38357 provision.go:84] configureAuth start
	I0806 07:33:45.925730   38357 main.go:141] libmachine: (ha-118173) Calling .GetMachineName
	I0806 07:33:45.926059   38357 main.go:141] libmachine: (ha-118173) Calling .GetIP
	I0806 07:33:45.928816   38357 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:33:45.929242   38357 main.go:141] libmachine: (ha-118173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:df:f0", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:22:33 +0000 UTC Type:0 Mac:52:54:00:ef:df:f0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:ha-118173 Clientid:01:52:54:00:ef:df:f0}
	I0806 07:33:45.929264   38357 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined IP address 192.168.39.164 and MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:33:45.929436   38357 main.go:141] libmachine: (ha-118173) Calling .GetSSHHostname
	I0806 07:33:45.931566   38357 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:33:45.931968   38357 main.go:141] libmachine: (ha-118173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:df:f0", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:22:33 +0000 UTC Type:0 Mac:52:54:00:ef:df:f0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:ha-118173 Clientid:01:52:54:00:ef:df:f0}
	I0806 07:33:45.931995   38357 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined IP address 192.168.39.164 and MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:33:45.932134   38357 provision.go:143] copyHostCerts
	I0806 07:33:45.932161   38357 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19370-13057/.minikube/key.pem
	I0806 07:33:45.932203   38357 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-13057/.minikube/key.pem, removing ...
	I0806 07:33:45.932212   38357 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-13057/.minikube/key.pem
	I0806 07:33:45.932274   38357 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19370-13057/.minikube/key.pem (1675 bytes)
	I0806 07:33:45.932400   38357 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19370-13057/.minikube/ca.pem
	I0806 07:33:45.932426   38357 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-13057/.minikube/ca.pem, removing ...
	I0806 07:33:45.932436   38357 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-13057/.minikube/ca.pem
	I0806 07:33:45.932488   38357 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19370-13057/.minikube/ca.pem (1078 bytes)
	I0806 07:33:45.932557   38357 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19370-13057/.minikube/cert.pem
	I0806 07:33:45.932574   38357 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-13057/.minikube/cert.pem, removing ...
	I0806 07:33:45.932580   38357 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-13057/.minikube/cert.pem
	I0806 07:33:45.932615   38357 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19370-13057/.minikube/cert.pem (1123 bytes)
	I0806 07:33:45.932706   38357 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19370-13057/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca-key.pem org=jenkins.ha-118173 san=[127.0.0.1 192.168.39.164 ha-118173 localhost minikube]
	I0806 07:33:46.085845   38357 provision.go:177] copyRemoteCerts
	I0806 07:33:46.085898   38357 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0806 07:33:46.085922   38357 main.go:141] libmachine: (ha-118173) Calling .GetSSHHostname
	I0806 07:33:46.088620   38357 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:33:46.089052   38357 main.go:141] libmachine: (ha-118173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:df:f0", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:22:33 +0000 UTC Type:0 Mac:52:54:00:ef:df:f0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:ha-118173 Clientid:01:52:54:00:ef:df:f0}
	I0806 07:33:46.089076   38357 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined IP address 192.168.39.164 and MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:33:46.089267   38357 main.go:141] libmachine: (ha-118173) Calling .GetSSHPort
	I0806 07:33:46.089456   38357 main.go:141] libmachine: (ha-118173) Calling .GetSSHKeyPath
	I0806 07:33:46.089591   38357 main.go:141] libmachine: (ha-118173) Calling .GetSSHUsername
	I0806 07:33:46.089730   38357 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/ha-118173/id_rsa Username:docker}
	I0806 07:33:46.172141   38357 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0806 07:33:46.172220   38357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0806 07:33:46.204411   38357 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0806 07:33:46.204490   38357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0806 07:33:46.231446   38357 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0806 07:33:46.231531   38357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0806 07:33:46.258197   38357 provision.go:87] duration metric: took 332.468541ms to configureAuth
	I0806 07:33:46.258224   38357 buildroot.go:189] setting minikube options for container-runtime
	I0806 07:33:46.258437   38357 config.go:182] Loaded profile config "ha-118173": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0806 07:33:46.258517   38357 main.go:141] libmachine: (ha-118173) Calling .GetSSHHostname
	I0806 07:33:46.261313   38357 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:33:46.261784   38357 main.go:141] libmachine: (ha-118173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:df:f0", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:22:33 +0000 UTC Type:0 Mac:52:54:00:ef:df:f0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:ha-118173 Clientid:01:52:54:00:ef:df:f0}
	I0806 07:33:46.261810   38357 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined IP address 192.168.39.164 and MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:33:46.262068   38357 main.go:141] libmachine: (ha-118173) Calling .GetSSHPort
	I0806 07:33:46.262253   38357 main.go:141] libmachine: (ha-118173) Calling .GetSSHKeyPath
	I0806 07:33:46.262436   38357 main.go:141] libmachine: (ha-118173) Calling .GetSSHKeyPath
	I0806 07:33:46.262556   38357 main.go:141] libmachine: (ha-118173) Calling .GetSSHUsername
	I0806 07:33:46.262711   38357 main.go:141] libmachine: Using SSH client type: native
	I0806 07:33:46.262913   38357 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.164 22 <nil> <nil>}
	I0806 07:33:46.262928   38357 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0806 07:35:17.035660   38357 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0806 07:35:17.035757   38357 machine.go:97] duration metric: took 1m31.459757652s to provisionDockerMachine
	I0806 07:35:17.035775   38357 start.go:293] postStartSetup for "ha-118173" (driver="kvm2")
	I0806 07:35:17.035787   38357 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0806 07:35:17.035813   38357 main.go:141] libmachine: (ha-118173) Calling .DriverName
	I0806 07:35:17.036236   38357 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0806 07:35:17.036263   38357 main.go:141] libmachine: (ha-118173) Calling .GetSSHHostname
	I0806 07:35:17.039356   38357 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:35:17.039750   38357 main.go:141] libmachine: (ha-118173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:df:f0", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:22:33 +0000 UTC Type:0 Mac:52:54:00:ef:df:f0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:ha-118173 Clientid:01:52:54:00:ef:df:f0}
	I0806 07:35:17.039778   38357 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined IP address 192.168.39.164 and MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:35:17.039909   38357 main.go:141] libmachine: (ha-118173) Calling .GetSSHPort
	I0806 07:35:17.040119   38357 main.go:141] libmachine: (ha-118173) Calling .GetSSHKeyPath
	I0806 07:35:17.040291   38357 main.go:141] libmachine: (ha-118173) Calling .GetSSHUsername
	I0806 07:35:17.040471   38357 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/ha-118173/id_rsa Username:docker}
	I0806 07:35:17.125761   38357 ssh_runner.go:195] Run: cat /etc/os-release
	I0806 07:35:17.130478   38357 info.go:137] Remote host: Buildroot 2023.02.9
	I0806 07:35:17.130507   38357 filesync.go:126] Scanning /home/jenkins/minikube-integration/19370-13057/.minikube/addons for local assets ...
	I0806 07:35:17.130573   38357 filesync.go:126] Scanning /home/jenkins/minikube-integration/19370-13057/.minikube/files for local assets ...
	I0806 07:35:17.130666   38357 filesync.go:149] local asset: /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem -> 202462.pem in /etc/ssl/certs
	I0806 07:35:17.130679   38357 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem -> /etc/ssl/certs/202462.pem
	I0806 07:35:17.130806   38357 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0806 07:35:17.141477   38357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem --> /etc/ssl/certs/202462.pem (1708 bytes)
	I0806 07:35:17.167538   38357 start.go:296] duration metric: took 131.749794ms for postStartSetup
	I0806 07:35:17.167588   38357 main.go:141] libmachine: (ha-118173) Calling .DriverName
	I0806 07:35:17.167871   38357 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0806 07:35:17.167895   38357 main.go:141] libmachine: (ha-118173) Calling .GetSSHHostname
	I0806 07:35:17.170565   38357 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:35:17.170968   38357 main.go:141] libmachine: (ha-118173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:df:f0", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:22:33 +0000 UTC Type:0 Mac:52:54:00:ef:df:f0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:ha-118173 Clientid:01:52:54:00:ef:df:f0}
	I0806 07:35:17.170989   38357 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined IP address 192.168.39.164 and MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:35:17.171278   38357 main.go:141] libmachine: (ha-118173) Calling .GetSSHPort
	I0806 07:35:17.171499   38357 main.go:141] libmachine: (ha-118173) Calling .GetSSHKeyPath
	I0806 07:35:17.171671   38357 main.go:141] libmachine: (ha-118173) Calling .GetSSHUsername
	I0806 07:35:17.171839   38357 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/ha-118173/id_rsa Username:docker}
	W0806 07:35:17.251514   38357 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0806 07:35:17.251545   38357 fix.go:56] duration metric: took 1m31.697334599s for fixHost
	I0806 07:35:17.251568   38357 main.go:141] libmachine: (ha-118173) Calling .GetSSHHostname
	I0806 07:35:17.254278   38357 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:35:17.254864   38357 main.go:141] libmachine: (ha-118173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:df:f0", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:22:33 +0000 UTC Type:0 Mac:52:54:00:ef:df:f0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:ha-118173 Clientid:01:52:54:00:ef:df:f0}
	I0806 07:35:17.254894   38357 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined IP address 192.168.39.164 and MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:35:17.255059   38357 main.go:141] libmachine: (ha-118173) Calling .GetSSHPort
	I0806 07:35:17.255258   38357 main.go:141] libmachine: (ha-118173) Calling .GetSSHKeyPath
	I0806 07:35:17.255420   38357 main.go:141] libmachine: (ha-118173) Calling .GetSSHKeyPath
	I0806 07:35:17.255576   38357 main.go:141] libmachine: (ha-118173) Calling .GetSSHUsername
	I0806 07:35:17.255742   38357 main.go:141] libmachine: Using SSH client type: native
	I0806 07:35:17.255908   38357 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.164 22 <nil> <nil>}
	I0806 07:35:17.255919   38357 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0806 07:35:17.357344   38357 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722929717.331758082
	
	I0806 07:35:17.357370   38357 fix.go:216] guest clock: 1722929717.331758082
	I0806 07:35:17.357379   38357 fix.go:229] Guest: 2024-08-06 07:35:17.331758082 +0000 UTC Remote: 2024-08-06 07:35:17.251554131 +0000 UTC m=+91.818608040 (delta=80.203951ms)
	I0806 07:35:17.357407   38357 fix.go:200] guest clock delta is within tolerance: 80.203951ms
	I0806 07:35:17.357414   38357 start.go:83] releasing machines lock for "ha-118173", held for 1m31.803217825s
	I0806 07:35:17.357445   38357 main.go:141] libmachine: (ha-118173) Calling .DriverName
	I0806 07:35:17.357720   38357 main.go:141] libmachine: (ha-118173) Calling .GetIP
	I0806 07:35:17.360535   38357 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:35:17.361021   38357 main.go:141] libmachine: (ha-118173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:df:f0", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:22:33 +0000 UTC Type:0 Mac:52:54:00:ef:df:f0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:ha-118173 Clientid:01:52:54:00:ef:df:f0}
	I0806 07:35:17.361059   38357 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined IP address 192.168.39.164 and MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:35:17.361186   38357 main.go:141] libmachine: (ha-118173) Calling .DriverName
	I0806 07:35:17.361707   38357 main.go:141] libmachine: (ha-118173) Calling .DriverName
	I0806 07:35:17.361922   38357 main.go:141] libmachine: (ha-118173) Calling .DriverName
	I0806 07:35:17.362010   38357 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0806 07:35:17.362046   38357 main.go:141] libmachine: (ha-118173) Calling .GetSSHHostname
	I0806 07:35:17.362167   38357 ssh_runner.go:195] Run: cat /version.json
	I0806 07:35:17.362189   38357 main.go:141] libmachine: (ha-118173) Calling .GetSSHHostname
	I0806 07:35:17.364758   38357 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:35:17.365017   38357 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:35:17.365228   38357 main.go:141] libmachine: (ha-118173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:df:f0", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:22:33 +0000 UTC Type:0 Mac:52:54:00:ef:df:f0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:ha-118173 Clientid:01:52:54:00:ef:df:f0}
	I0806 07:35:17.365252   38357 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined IP address 192.168.39.164 and MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:35:17.365394   38357 main.go:141] libmachine: (ha-118173) Calling .GetSSHPort
	I0806 07:35:17.365584   38357 main.go:141] libmachine: (ha-118173) Calling .GetSSHKeyPath
	I0806 07:35:17.365610   38357 main.go:141] libmachine: (ha-118173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:df:f0", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:22:33 +0000 UTC Type:0 Mac:52:54:00:ef:df:f0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:ha-118173 Clientid:01:52:54:00:ef:df:f0}
	I0806 07:35:17.365630   38357 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined IP address 192.168.39.164 and MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:35:17.365759   38357 main.go:141] libmachine: (ha-118173) Calling .GetSSHUsername
	I0806 07:35:17.365777   38357 main.go:141] libmachine: (ha-118173) Calling .GetSSHPort
	I0806 07:35:17.365956   38357 main.go:141] libmachine: (ha-118173) Calling .GetSSHKeyPath
	I0806 07:35:17.365948   38357 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/ha-118173/id_rsa Username:docker}
	I0806 07:35:17.366101   38357 main.go:141] libmachine: (ha-118173) Calling .GetSSHUsername
	I0806 07:35:17.366265   38357 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/ha-118173/id_rsa Username:docker}
	I0806 07:35:17.462544   38357 ssh_runner.go:195] Run: systemctl --version
	I0806 07:35:17.469182   38357 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0806 07:35:17.633974   38357 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0806 07:35:17.639966   38357 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0806 07:35:17.640037   38357 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0806 07:35:17.650644   38357 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0806 07:35:17.650669   38357 start.go:495] detecting cgroup driver to use...
	I0806 07:35:17.650720   38357 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0806 07:35:17.668995   38357 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0806 07:35:17.683527   38357 docker.go:217] disabling cri-docker service (if available) ...
	I0806 07:35:17.683595   38357 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0806 07:35:17.697733   38357 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0806 07:35:17.711623   38357 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0806 07:35:17.864404   38357 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0806 07:35:18.019491   38357 docker.go:233] disabling docker service ...
	I0806 07:35:18.019569   38357 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0806 07:35:18.039798   38357 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0806 07:35:18.054343   38357 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0806 07:35:18.216040   38357 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0806 07:35:18.367823   38357 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0806 07:35:18.383521   38357 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0806 07:35:18.403241   38357 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0806 07:35:18.403306   38357 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 07:35:18.414585   38357 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0806 07:35:18.414650   38357 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 07:35:18.426318   38357 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 07:35:18.439585   38357 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 07:35:18.459522   38357 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0806 07:35:18.472016   38357 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 07:35:18.483076   38357 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 07:35:18.493817   38357 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 07:35:18.504710   38357 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0806 07:35:18.514491   38357 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0806 07:35:18.525143   38357 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 07:35:18.673520   38357 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0806 07:35:21.823024   38357 ssh_runner.go:235] Completed: sudo systemctl restart crio: (3.149467699s)
	I0806 07:35:21.823059   38357 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0806 07:35:21.823106   38357 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0806 07:35:21.828570   38357 start.go:563] Will wait 60s for crictl version
	I0806 07:35:21.828621   38357 ssh_runner.go:195] Run: which crictl
	I0806 07:35:21.832515   38357 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0806 07:35:21.873857   38357 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0806 07:35:21.873957   38357 ssh_runner.go:195] Run: crio --version
	I0806 07:35:21.903415   38357 ssh_runner.go:195] Run: crio --version
	I0806 07:35:21.935451   38357 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0806 07:35:21.936978   38357 main.go:141] libmachine: (ha-118173) Calling .GetIP
	I0806 07:35:21.939664   38357 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:35:21.940047   38357 main.go:141] libmachine: (ha-118173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:df:f0", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:22:33 +0000 UTC Type:0 Mac:52:54:00:ef:df:f0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:ha-118173 Clientid:01:52:54:00:ef:df:f0}
	I0806 07:35:21.940073   38357 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined IP address 192.168.39.164 and MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:35:21.940325   38357 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0806 07:35:21.945201   38357 kubeadm.go:883] updating cluster {Name:ha-118173 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-118173 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.164 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.203 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.107 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0806 07:35:21.945336   38357 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0806 07:35:21.945377   38357 ssh_runner.go:195] Run: sudo crictl images --output json
	I0806 07:35:21.991401   38357 crio.go:514] all images are preloaded for cri-o runtime.
	I0806 07:35:21.991423   38357 crio.go:433] Images already preloaded, skipping extraction
	I0806 07:35:21.991469   38357 ssh_runner.go:195] Run: sudo crictl images --output json
	I0806 07:35:22.026394   38357 crio.go:514] all images are preloaded for cri-o runtime.
	I0806 07:35:22.026416   38357 cache_images.go:84] Images are preloaded, skipping loading
	I0806 07:35:22.026429   38357 kubeadm.go:934] updating node { 192.168.39.164 8443 v1.30.3 crio true true} ...
	I0806 07:35:22.026524   38357 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-118173 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.164
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-118173 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0806 07:35:22.026584   38357 ssh_runner.go:195] Run: crio config
	I0806 07:35:22.080962   38357 cni.go:84] Creating CNI manager for ""
	I0806 07:35:22.080988   38357 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0806 07:35:22.081002   38357 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0806 07:35:22.081022   38357 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.164 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-118173 NodeName:ha-118173 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.164"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.164 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0806 07:35:22.081159   38357 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.164
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-118173"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.164
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.164"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0806 07:35:22.081179   38357 kube-vip.go:115] generating kube-vip config ...
	I0806 07:35:22.081217   38357 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0806 07:35:22.093063   38357 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0806 07:35:22.093192   38357 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0806 07:35:22.093251   38357 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0806 07:35:22.103362   38357 binaries.go:44] Found k8s binaries, skipping transfer
	I0806 07:35:22.103430   38357 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0806 07:35:22.113353   38357 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0806 07:35:22.131867   38357 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0806 07:35:22.150407   38357 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0806 07:35:22.168132   38357 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0806 07:35:22.187227   38357 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0806 07:35:22.191533   38357 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 07:35:22.337122   38357 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0806 07:35:22.352452   38357 certs.go:68] Setting up /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173 for IP: 192.168.39.164
	I0806 07:35:22.352477   38357 certs.go:194] generating shared ca certs ...
	I0806 07:35:22.352497   38357 certs.go:226] acquiring lock for ca certs: {Name:mkf5c5aed91f4108054d9f2c742cad66c86afbed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 07:35:22.352673   38357 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19370-13057/.minikube/ca.key
	I0806 07:35:22.352732   38357 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.key
	I0806 07:35:22.352744   38357 certs.go:256] generating profile certs ...
	I0806 07:35:22.352824   38357 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/client.key
	I0806 07:35:22.352856   38357 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/apiserver.key.823ff5c5
	I0806 07:35:22.352878   38357 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/apiserver.crt.823ff5c5 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.164 192.168.39.203 192.168.39.5 192.168.39.254]
	I0806 07:35:22.402836   38357 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/apiserver.crt.823ff5c5 ...
	I0806 07:35:22.402866   38357 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/apiserver.crt.823ff5c5: {Name:mk10452176194ba763aea9cd13c71a0060a5137e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 07:35:22.403059   38357 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/apiserver.key.823ff5c5 ...
	I0806 07:35:22.403074   38357 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/apiserver.key.823ff5c5: {Name:mka4751260bc100f0254a0c37919473fbb7620e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 07:35:22.403153   38357 certs.go:381] copying /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/apiserver.crt.823ff5c5 -> /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/apiserver.crt
	I0806 07:35:22.403317   38357 certs.go:385] copying /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/apiserver.key.823ff5c5 -> /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/apiserver.key
	I0806 07:35:22.403440   38357 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/proxy-client.key
	I0806 07:35:22.403453   38357 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0806 07:35:22.403464   38357 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0806 07:35:22.403477   38357 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0806 07:35:22.403490   38357 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0806 07:35:22.403502   38357 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0806 07:35:22.403514   38357 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0806 07:35:22.403526   38357 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0806 07:35:22.403545   38357 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0806 07:35:22.403592   38357 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/20246.pem (1338 bytes)
	W0806 07:35:22.403621   38357 certs.go:480] ignoring /home/jenkins/minikube-integration/19370-13057/.minikube/certs/20246_empty.pem, impossibly tiny 0 bytes
	I0806 07:35:22.403631   38357 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca-key.pem (1675 bytes)
	I0806 07:35:22.403653   38357 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem (1078 bytes)
	I0806 07:35:22.403674   38357 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/cert.pem (1123 bytes)
	I0806 07:35:22.403695   38357 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/key.pem (1675 bytes)
	I0806 07:35:22.403733   38357 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem (1708 bytes)
	I0806 07:35:22.403758   38357 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem -> /usr/share/ca-certificates/202462.pem
	I0806 07:35:22.403771   38357 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0806 07:35:22.403783   38357 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/20246.pem -> /usr/share/ca-certificates/20246.pem
	I0806 07:35:22.404445   38357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0806 07:35:22.430728   38357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0806 07:35:22.455555   38357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0806 07:35:22.488278   38357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0806 07:35:22.513565   38357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0806 07:35:22.537842   38357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0806 07:35:22.562306   38357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0806 07:35:22.587839   38357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0806 07:35:22.612558   38357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem --> /usr/share/ca-certificates/202462.pem (1708 bytes)
	I0806 07:35:22.636942   38357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0806 07:35:22.661625   38357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/certs/20246.pem --> /usr/share/ca-certificates/20246.pem (1338 bytes)
	I0806 07:35:22.687476   38357 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0806 07:35:22.704757   38357 ssh_runner.go:195] Run: openssl version
	I0806 07:35:22.711003   38357 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202462.pem && ln -fs /usr/share/ca-certificates/202462.pem /etc/ssl/certs/202462.pem"
	I0806 07:35:22.722027   38357 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202462.pem
	I0806 07:35:22.726876   38357 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  6 07:17 /usr/share/ca-certificates/202462.pem
	I0806 07:35:22.726945   38357 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202462.pem
	I0806 07:35:22.732736   38357 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202462.pem /etc/ssl/certs/3ec20f2e.0"
	I0806 07:35:22.742029   38357 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0806 07:35:22.753041   38357 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0806 07:35:22.757734   38357 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  6 07:07 /usr/share/ca-certificates/minikubeCA.pem
	I0806 07:35:22.757789   38357 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0806 07:35:22.763617   38357 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0806 07:35:22.773073   38357 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20246.pem && ln -fs /usr/share/ca-certificates/20246.pem /etc/ssl/certs/20246.pem"
	I0806 07:35:22.784531   38357 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20246.pem
	I0806 07:35:22.789149   38357 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  6 07:17 /usr/share/ca-certificates/20246.pem
	I0806 07:35:22.789210   38357 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20246.pem
	I0806 07:35:22.794992   38357 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20246.pem /etc/ssl/certs/51391683.0"
	I0806 07:35:22.804777   38357 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0806 07:35:22.809636   38357 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0806 07:35:22.815243   38357 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0806 07:35:22.821287   38357 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0806 07:35:22.827720   38357 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0806 07:35:22.833987   38357 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0806 07:35:22.839844   38357 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0806 07:35:22.845612   38357 kubeadm.go:392] StartCluster: {Name:ha-118173 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-118173 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.164 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.203 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.107 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 07:35:22.845718   38357 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0806 07:35:22.845783   38357 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0806 07:35:22.884064   38357 cri.go:89] found id: "eb1e79eb99228d6fe83fe651604e921a5222c01142de9b1d0ef720d196d2123e"
	I0806 07:35:22.884089   38357 cri.go:89] found id: "1bcbed1ca910c67d94f626f7cc508ef7083e003b7d773b1d56376460d37ff9f1"
	I0806 07:35:22.884094   38357 cri.go:89] found id: "86a0d083825b04f1714abda9141cd4a47e060c739d1e5b2e94159e8d4abcd7bd"
	I0806 07:35:22.884097   38357 cri.go:89] found id: "a69ec6c6d89d5362418d48ac0b094fbaf9c00e1489003b9b151de097f482da34"
	I0806 07:35:22.884100   38357 cri.go:89] found id: "c60313490ee322d54c5b3908e628bff77d0ca10249f7253eb9cbd44745a805af"
	I0806 07:35:22.884103   38357 cri.go:89] found id: "321504a9bbbfc57376fccab559a6ebc7388670dc0bf123b5f9fd41b1da99adaa"
	I0806 07:35:22.884105   38357 cri.go:89] found id: "b8f5ae0da505fc7f79225ba7115c83b9e909128bc28df767fce1fc3ba7a01392"
	I0806 07:35:22.884108   38357 cri.go:89] found id: "e5a5bdf0b2a62163f6aff6abe864eedc47b461e233989f23ef7770c226763325"
	I0806 07:35:22.884110   38357 cri.go:89] found id: "9ea977593bf7ff53f7419e821deaa1cf5813b4c116730030cd8d417d078be7fa"
	I0806 07:35:22.884116   38357 cri.go:89] found id: "a94703035c76b778779a89b7fc9176c7a956ebec52628aa1720d26bf2a1c31f5"
	I0806 07:35:22.884118   38357 cri.go:89] found id: "27f4fdf6fa77a51e1295c12b459cda687ed6ae68875218e24ba7959fb37814ed"
	I0806 07:35:22.884122   38357 cri.go:89] found id: "1ffee4f9f096a33fc08f1f6b266ea17ad2181a58a153f7fb1703ded978b23a46"
	I0806 07:35:22.884124   38357 cri.go:89] found id: "306b97b37955570f0a18d38ff1afab7bb09e01c47a3418dba2aeed0f00d6052d"
	I0806 07:35:22.884127   38357 cri.go:89] found id: ""
	I0806 07:35:22.884170   38357 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Aug 06 07:37:57 ha-118173 crio[3786]: time="2024-08-06 07:37:57.926621803Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722929877926526701,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154770,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c2e23077-0ec8-4a84-b7dd-393884b12a20 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 06 07:37:57 ha-118173 crio[3786]: time="2024-08-06 07:37:57.927285936Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5d33cafa-c463-4891-9f4d-06a7ee3ca319 name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 07:37:57 ha-118173 crio[3786]: time="2024-08-06 07:37:57.927363450Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5d33cafa-c463-4891-9f4d-06a7ee3ca319 name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 07:37:57 ha-118173 crio[3786]: time="2024-08-06 07:37:57.928234355Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:99f58b84fab60e2b606761b8ed13c48d86197a37ec557d278c6aeff4b215f6b9,PodSandboxId:9d298728c707a71c61f7e9c12d81847924d5702a23547701f7d70396538b9702,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722929782657022675,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8379466d-0241-40e4-a262-d762b3cda2eb,},Annotations:map[string]string{io.kubernetes.container.hash: d079fdad,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ebb3ef02d9872250e0c4614f3552d20ba04a677b81c274d9cf9061a86d7c9ce,PodSandboxId:e774276d1db20b1d0af6107ff83ecf4ba4a28e6ce7cd5d48f1425a9085296c5c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722929768628612088,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-118173,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f74055e61dcf5f2385b1934a1a8e3b73,},Annotations:map[string]string{io.kubernetes.container.hash: 27fe2ed3,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8e645fa59f15ec29730259bc834efd2b4740bd0cf9d98973d43ac2f3e60f8eb,PodSandboxId:3f467e52d9b47ff5ac03b4a454e0ce36266c74302644c05d81e42b312eca5b9b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722929765627015845,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-118173,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6346700b562b8d3f4c18c66b3813bc28,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb461c0c1a90ce934f5cc90096bf3ebadcbb030c865637c1b0d6f60c3134e0bb,PodSandboxId:c18a977f42d3413f04444d2f24aa487786b86d9e625780e6e1111c13f46b4c81,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722929761946284425,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-rml5d,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4a82c0ca-1596-4dfe-a2f9-3a34f22480e5,},Annotations:map[string]string{io.kubernetes.container.hash: 950719af,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d631bee7d098005cab603ace8514dea77a9fa74686bb0be29ea68d9a592c3e9c,PodSandboxId:3cf78ca258a5c291aeb00bdf810e1bbed94619b29783b0ec29e87984818cd2ff,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722929741194070005,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-118173,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e550e7b49e50c28915c957cd8c1df9d,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c038e3b4d8c535621dabf1ffa68a5bb717de411bccf8772e655b2c06282edf3,PodSandboxId:9d298728c707a71c61f7e9c12d81847924d5702a23547701f7d70396538b9702,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722929729209432391,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8379466d-0241-40e4-a262-d762b3cda2eb,},Annotations:map[string]string{io.kubernetes.container.hash: d079fdad,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcf0ac6e03c21e2b64b008ca3dcd8293a980e393216a3103acf1a60f8ef1dc3f,PodSandboxId:c1b179a473da688998355788cf02f16bea7187f30019e6786b025cc877ed72fa,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722929728492501519,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ksm6v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 594bc178-8319-4922-9132-69a9c6890b95,},Annotations:map[string]string{io.kubernetes.container.hash: 50a78a81,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:9529a894fdd4b09b00a809dc3ad326ff8ab57eba9ae9438200069348b4426b19,PodSandboxId:4a0b68a5db12d84fa3e54810b85090c13072ad782e230aa867e47298ad248fc6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722929728872421792,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zscz4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef588a32-ac32-4e38-926e-6ee7a662a399,},Annotations:map[string]string{io.kubernetes.container.hash: 38e96eed,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kub
ernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e67330d376d276df974f78a481179786f12a17135fd068411ad63be616869e1e,PodSandboxId:0e8087802602b09a2d0f605ce70c425e402b5a10e4f1497b2eb139342a335f5a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_RUNNING,CreatedAt:1722929728633687734,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-n7jmb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa496bfb-7bf0-4ebc-adbd-36d992d8e279,},Annotations:map[string]string{io.kubernetes.container.hash: db2b7a8c,io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8c7c569af258485ebf98698eba57ddc482e2819c27995c4da66803dc7a84de4,PodSandboxId:52b914be0c4a1b45fddcf2b0b959430fced7a213686ea1009a6c37f2a5a399fd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722929728711874905,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-118173,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 080589493a4905d0fb10f52a1a0e25a5,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.contain
er.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ea3f1672f91dabee2d9f8605538a8f3683ca4391c2c9deac554eacfddd243bb,PodSandboxId:6a8f5ef8d3fb5f340f9569eb21fa91020e3b44226daa521ebed7eafd2c5aac8c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722929728725790024,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-t24ww,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8effbe6c-3107-409e-9c94-dee4103d12b0,},Annotations:map[string]string{io.kubernetes.container.hash: 7604f64,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},
{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0280e428feb3f49d39e63996a569fe8099b785732ee02450df33eb25a8a39c88,PodSandboxId:3f467e52d9b47ff5ac03b4a454e0ce36266c74302644c05d81e42b312eca5b9b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722929728590957011,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-118173,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: 6346700b562b8d3f4c18c66b3813bc28,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2dd16c152a6e76f97a41f8091857757846cf00e2b45fbebefd70bc7191f71d6d,PodSandboxId:e774276d1db20b1d0af6107ff83ecf4ba4a28e6ce7cd5d48f1425a9085296c5c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722929728444127688,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-118173,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: f74055e61dcf5f2385b1934a1a8e3b73,},Annotations:map[string]string{io.kubernetes.container.hash: 27fe2ed3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eae811e3e34734c78cc8f46ad9c12e0bde5ad445281f50029620e65e67b92b11,PodSandboxId:46116e50c4c43a8f8143460932ea15d87c3499f60948ed31f758cc62c7baae81,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722929728393775474,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-118173,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65b7cf59bb7c42e61e37dcbebce4eec4,},Anno
tations:map[string]string{io.kubernetes.container.hash: fa683f08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10437bdde23713cae1b222edffa34be0e9583f43a83c01f10aae555f0dd7542e,PodSandboxId:3f725d083d59ecb8b65c8b09466e36c8b01dd5742fa8d97e108de4a49f47de04,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722929221668285640,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-rml5d,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4a82c0ca-1596-4dfe-a2f9-3a34f22480e5,},Annota
tions:map[string]string{io.kubernetes.container.hash: 950719af,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c60313490ee322d54c5b3908e628bff77d0ca10249f7253eb9cbd44745a805af,PodSandboxId:b221107c8052d9be54b23c0ee2e882f4adc57ade5813f27e7dfc4e4f71006663,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722929010372427230,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zscz4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef588a32-ac32-4e38-926e-6ee7a662a399,},Annotations:map[string]string{io.kuber
netes.container.hash: 38e96eed,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:321504a9bbbfc57376fccab559a6ebc7388670dc0bf123b5f9fd41b1da99adaa,PodSandboxId:4a9fe2c539265caf233fd5f3778ebc3e6092ce121ddf647e3f72622e31022e53,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722929010343638533,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-7db6d8ff4d-t24ww,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8effbe6c-3107-409e-9c94-dee4103d12b0,},Annotations:map[string]string{io.kubernetes.container.hash: 7604f64,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8f5ae0da505fc7f79225ba7115c83b9e909128bc28df767fce1fc3ba7a01392,PodSandboxId:5bbdb233d391e1127588924ac263113c9aa0148b24fc7113df3ca0891d7e0fdc,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3,Annotations:map[string]st
ring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_EXITED,CreatedAt:1722928998315769494,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-n7jmb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa496bfb-7bf0-4ebc-adbd-36d992d8e279,},Annotations:map[string]string{io.kubernetes.container.hash: db2b7a8c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5a5bdf0b2a62163f6aff6abe864eedc47b461e233989f23ef7770c226763325,PodSandboxId:2ed57fdbf65f1846c200a67b50a1043fe8c70dd79c90d1a1c67e5e0634c8088b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHa
ndler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722928994187880481,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ksm6v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 594bc178-8319-4922-9132-69a9c6890b95,},Annotations:map[string]string{io.kubernetes.container.hash: 50a78a81,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27f4fdf6fa77a51e1295c12b459cda687ed6ae68875218e24ba7959fb37814ed,PodSandboxId:5ebdb09ee65d6de5671cacfdefaaebdf7b5b00f25aa61461d39015ab5cb5578b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b767
22eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722928974549211585,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-118173,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 080589493a4905d0fb10f52a1a0e25a5,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:306b97b37955570f0a18d38ff1afab7bb09e01c47a3418dba2aeed0f00d6052d,PodSandboxId:5055641759bdb7b0e57568f6d5310c72587ea715571f3fff96ff5d200be59dc2,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0c
fd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722928974442938254,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-118173,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65b7cf59bb7c42e61e37dcbebce4eec4,},Annotations:map[string]string{io.kubernetes.container.hash: fa683f08,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5d33cafa-c463-4891-9f4d-06a7ee3ca319 name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 07:37:57 ha-118173 crio[3786]: time="2024-08-06 07:37:57.976246026Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=717a78f9-4c05-4231-b85b-0cf2f1ab6baf name=/runtime.v1.RuntimeService/Version
	Aug 06 07:37:57 ha-118173 crio[3786]: time="2024-08-06 07:37:57.976515863Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=717a78f9-4c05-4231-b85b-0cf2f1ab6baf name=/runtime.v1.RuntimeService/Version
	Aug 06 07:37:57 ha-118173 crio[3786]: time="2024-08-06 07:37:57.978044570Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b3eb7a37-c2b4-4b5b-94c6-26a2a7730809 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 06 07:37:57 ha-118173 crio[3786]: time="2024-08-06 07:37:57.979002831Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722929877978835905,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154770,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b3eb7a37-c2b4-4b5b-94c6-26a2a7730809 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 06 07:37:57 ha-118173 crio[3786]: time="2024-08-06 07:37:57.980312658Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8acf2da2-c502-46be-8e6d-6f88fdb5ed1d name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 07:37:57 ha-118173 crio[3786]: time="2024-08-06 07:37:57.980392535Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8acf2da2-c502-46be-8e6d-6f88fdb5ed1d name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 07:37:57 ha-118173 crio[3786]: time="2024-08-06 07:37:57.981088682Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:99f58b84fab60e2b606761b8ed13c48d86197a37ec557d278c6aeff4b215f6b9,PodSandboxId:9d298728c707a71c61f7e9c12d81847924d5702a23547701f7d70396538b9702,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722929782657022675,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8379466d-0241-40e4-a262-d762b3cda2eb,},Annotations:map[string]string{io.kubernetes.container.hash: d079fdad,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ebb3ef02d9872250e0c4614f3552d20ba04a677b81c274d9cf9061a86d7c9ce,PodSandboxId:e774276d1db20b1d0af6107ff83ecf4ba4a28e6ce7cd5d48f1425a9085296c5c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722929768628612088,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-118173,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f74055e61dcf5f2385b1934a1a8e3b73,},Annotations:map[string]string{io.kubernetes.container.hash: 27fe2ed3,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8e645fa59f15ec29730259bc834efd2b4740bd0cf9d98973d43ac2f3e60f8eb,PodSandboxId:3f467e52d9b47ff5ac03b4a454e0ce36266c74302644c05d81e42b312eca5b9b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722929765627015845,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-118173,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6346700b562b8d3f4c18c66b3813bc28,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb461c0c1a90ce934f5cc90096bf3ebadcbb030c865637c1b0d6f60c3134e0bb,PodSandboxId:c18a977f42d3413f04444d2f24aa487786b86d9e625780e6e1111c13f46b4c81,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722929761946284425,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-rml5d,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4a82c0ca-1596-4dfe-a2f9-3a34f22480e5,},Annotations:map[string]string{io.kubernetes.container.hash: 950719af,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d631bee7d098005cab603ace8514dea77a9fa74686bb0be29ea68d9a592c3e9c,PodSandboxId:3cf78ca258a5c291aeb00bdf810e1bbed94619b29783b0ec29e87984818cd2ff,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722929741194070005,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-118173,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e550e7b49e50c28915c957cd8c1df9d,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c038e3b4d8c535621dabf1ffa68a5bb717de411bccf8772e655b2c06282edf3,PodSandboxId:9d298728c707a71c61f7e9c12d81847924d5702a23547701f7d70396538b9702,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722929729209432391,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8379466d-0241-40e4-a262-d762b3cda2eb,},Annotations:map[string]string{io.kubernetes.container.hash: d079fdad,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcf0ac6e03c21e2b64b008ca3dcd8293a980e393216a3103acf1a60f8ef1dc3f,PodSandboxId:c1b179a473da688998355788cf02f16bea7187f30019e6786b025cc877ed72fa,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722929728492501519,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ksm6v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 594bc178-8319-4922-9132-69a9c6890b95,},Annotations:map[string]string{io.kubernetes.container.hash: 50a78a81,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:9529a894fdd4b09b00a809dc3ad326ff8ab57eba9ae9438200069348b4426b19,PodSandboxId:4a0b68a5db12d84fa3e54810b85090c13072ad782e230aa867e47298ad248fc6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722929728872421792,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zscz4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef588a32-ac32-4e38-926e-6ee7a662a399,},Annotations:map[string]string{io.kubernetes.container.hash: 38e96eed,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kub
ernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e67330d376d276df974f78a481179786f12a17135fd068411ad63be616869e1e,PodSandboxId:0e8087802602b09a2d0f605ce70c425e402b5a10e4f1497b2eb139342a335f5a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_RUNNING,CreatedAt:1722929728633687734,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-n7jmb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa496bfb-7bf0-4ebc-adbd-36d992d8e279,},Annotations:map[string]string{io.kubernetes.container.hash: db2b7a8c,io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8c7c569af258485ebf98698eba57ddc482e2819c27995c4da66803dc7a84de4,PodSandboxId:52b914be0c4a1b45fddcf2b0b959430fced7a213686ea1009a6c37f2a5a399fd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722929728711874905,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-118173,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 080589493a4905d0fb10f52a1a0e25a5,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.contain
er.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ea3f1672f91dabee2d9f8605538a8f3683ca4391c2c9deac554eacfddd243bb,PodSandboxId:6a8f5ef8d3fb5f340f9569eb21fa91020e3b44226daa521ebed7eafd2c5aac8c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722929728725790024,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-t24ww,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8effbe6c-3107-409e-9c94-dee4103d12b0,},Annotations:map[string]string{io.kubernetes.container.hash: 7604f64,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},
{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0280e428feb3f49d39e63996a569fe8099b785732ee02450df33eb25a8a39c88,PodSandboxId:3f467e52d9b47ff5ac03b4a454e0ce36266c74302644c05d81e42b312eca5b9b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722929728590957011,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-118173,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: 6346700b562b8d3f4c18c66b3813bc28,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2dd16c152a6e76f97a41f8091857757846cf00e2b45fbebefd70bc7191f71d6d,PodSandboxId:e774276d1db20b1d0af6107ff83ecf4ba4a28e6ce7cd5d48f1425a9085296c5c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722929728444127688,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-118173,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: f74055e61dcf5f2385b1934a1a8e3b73,},Annotations:map[string]string{io.kubernetes.container.hash: 27fe2ed3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eae811e3e34734c78cc8f46ad9c12e0bde5ad445281f50029620e65e67b92b11,PodSandboxId:46116e50c4c43a8f8143460932ea15d87c3499f60948ed31f758cc62c7baae81,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722929728393775474,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-118173,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65b7cf59bb7c42e61e37dcbebce4eec4,},Anno
tations:map[string]string{io.kubernetes.container.hash: fa683f08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10437bdde23713cae1b222edffa34be0e9583f43a83c01f10aae555f0dd7542e,PodSandboxId:3f725d083d59ecb8b65c8b09466e36c8b01dd5742fa8d97e108de4a49f47de04,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722929221668285640,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-rml5d,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4a82c0ca-1596-4dfe-a2f9-3a34f22480e5,},Annota
tions:map[string]string{io.kubernetes.container.hash: 950719af,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c60313490ee322d54c5b3908e628bff77d0ca10249f7253eb9cbd44745a805af,PodSandboxId:b221107c8052d9be54b23c0ee2e882f4adc57ade5813f27e7dfc4e4f71006663,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722929010372427230,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zscz4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef588a32-ac32-4e38-926e-6ee7a662a399,},Annotations:map[string]string{io.kuber
netes.container.hash: 38e96eed,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:321504a9bbbfc57376fccab559a6ebc7388670dc0bf123b5f9fd41b1da99adaa,PodSandboxId:4a9fe2c539265caf233fd5f3778ebc3e6092ce121ddf647e3f72622e31022e53,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722929010343638533,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-7db6d8ff4d-t24ww,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8effbe6c-3107-409e-9c94-dee4103d12b0,},Annotations:map[string]string{io.kubernetes.container.hash: 7604f64,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8f5ae0da505fc7f79225ba7115c83b9e909128bc28df767fce1fc3ba7a01392,PodSandboxId:5bbdb233d391e1127588924ac263113c9aa0148b24fc7113df3ca0891d7e0fdc,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3,Annotations:map[string]st
ring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_EXITED,CreatedAt:1722928998315769494,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-n7jmb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa496bfb-7bf0-4ebc-adbd-36d992d8e279,},Annotations:map[string]string{io.kubernetes.container.hash: db2b7a8c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5a5bdf0b2a62163f6aff6abe864eedc47b461e233989f23ef7770c226763325,PodSandboxId:2ed57fdbf65f1846c200a67b50a1043fe8c70dd79c90d1a1c67e5e0634c8088b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHa
ndler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722928994187880481,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ksm6v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 594bc178-8319-4922-9132-69a9c6890b95,},Annotations:map[string]string{io.kubernetes.container.hash: 50a78a81,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27f4fdf6fa77a51e1295c12b459cda687ed6ae68875218e24ba7959fb37814ed,PodSandboxId:5ebdb09ee65d6de5671cacfdefaaebdf7b5b00f25aa61461d39015ab5cb5578b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b767
22eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722928974549211585,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-118173,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 080589493a4905d0fb10f52a1a0e25a5,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:306b97b37955570f0a18d38ff1afab7bb09e01c47a3418dba2aeed0f00d6052d,PodSandboxId:5055641759bdb7b0e57568f6d5310c72587ea715571f3fff96ff5d200be59dc2,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0c
fd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722928974442938254,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-118173,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65b7cf59bb7c42e61e37dcbebce4eec4,},Annotations:map[string]string{io.kubernetes.container.hash: fa683f08,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8acf2da2-c502-46be-8e6d-6f88fdb5ed1d name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 07:37:58 ha-118173 crio[3786]: time="2024-08-06 07:37:58.030862901Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=259fa9d5-3dd5-429f-8093-5b3288e97a53 name=/runtime.v1.RuntimeService/Version
	Aug 06 07:37:58 ha-118173 crio[3786]: time="2024-08-06 07:37:58.031200979Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=259fa9d5-3dd5-429f-8093-5b3288e97a53 name=/runtime.v1.RuntimeService/Version
	Aug 06 07:37:58 ha-118173 crio[3786]: time="2024-08-06 07:37:58.033193339Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ea814c77-5573-4375-8766-a760e4acae2a name=/runtime.v1.ImageService/ImageFsInfo
	Aug 06 07:37:58 ha-118173 crio[3786]: time="2024-08-06 07:37:58.033697424Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722929878033674991,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154770,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ea814c77-5573-4375-8766-a760e4acae2a name=/runtime.v1.ImageService/ImageFsInfo
	Aug 06 07:37:58 ha-118173 crio[3786]: time="2024-08-06 07:37:58.034662797Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3fe85acb-36ad-4d09-b1fa-4c3128067db8 name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 07:37:58 ha-118173 crio[3786]: time="2024-08-06 07:37:58.034721327Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3fe85acb-36ad-4d09-b1fa-4c3128067db8 name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 07:37:58 ha-118173 crio[3786]: time="2024-08-06 07:37:58.035161806Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:99f58b84fab60e2b606761b8ed13c48d86197a37ec557d278c6aeff4b215f6b9,PodSandboxId:9d298728c707a71c61f7e9c12d81847924d5702a23547701f7d70396538b9702,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722929782657022675,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8379466d-0241-40e4-a262-d762b3cda2eb,},Annotations:map[string]string{io.kubernetes.container.hash: d079fdad,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ebb3ef02d9872250e0c4614f3552d20ba04a677b81c274d9cf9061a86d7c9ce,PodSandboxId:e774276d1db20b1d0af6107ff83ecf4ba4a28e6ce7cd5d48f1425a9085296c5c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722929768628612088,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-118173,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f74055e61dcf5f2385b1934a1a8e3b73,},Annotations:map[string]string{io.kubernetes.container.hash: 27fe2ed3,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8e645fa59f15ec29730259bc834efd2b4740bd0cf9d98973d43ac2f3e60f8eb,PodSandboxId:3f467e52d9b47ff5ac03b4a454e0ce36266c74302644c05d81e42b312eca5b9b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722929765627015845,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-118173,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6346700b562b8d3f4c18c66b3813bc28,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb461c0c1a90ce934f5cc90096bf3ebadcbb030c865637c1b0d6f60c3134e0bb,PodSandboxId:c18a977f42d3413f04444d2f24aa487786b86d9e625780e6e1111c13f46b4c81,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722929761946284425,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-rml5d,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4a82c0ca-1596-4dfe-a2f9-3a34f22480e5,},Annotations:map[string]string{io.kubernetes.container.hash: 950719af,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d631bee7d098005cab603ace8514dea77a9fa74686bb0be29ea68d9a592c3e9c,PodSandboxId:3cf78ca258a5c291aeb00bdf810e1bbed94619b29783b0ec29e87984818cd2ff,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722929741194070005,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-118173,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e550e7b49e50c28915c957cd8c1df9d,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c038e3b4d8c535621dabf1ffa68a5bb717de411bccf8772e655b2c06282edf3,PodSandboxId:9d298728c707a71c61f7e9c12d81847924d5702a23547701f7d70396538b9702,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722929729209432391,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8379466d-0241-40e4-a262-d762b3cda2eb,},Annotations:map[string]string{io.kubernetes.container.hash: d079fdad,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcf0ac6e03c21e2b64b008ca3dcd8293a980e393216a3103acf1a60f8ef1dc3f,PodSandboxId:c1b179a473da688998355788cf02f16bea7187f30019e6786b025cc877ed72fa,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722929728492501519,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ksm6v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 594bc178-8319-4922-9132-69a9c6890b95,},Annotations:map[string]string{io.kubernetes.container.hash: 50a78a81,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:9529a894fdd4b09b00a809dc3ad326ff8ab57eba9ae9438200069348b4426b19,PodSandboxId:4a0b68a5db12d84fa3e54810b85090c13072ad782e230aa867e47298ad248fc6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722929728872421792,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zscz4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef588a32-ac32-4e38-926e-6ee7a662a399,},Annotations:map[string]string{io.kubernetes.container.hash: 38e96eed,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kub
ernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e67330d376d276df974f78a481179786f12a17135fd068411ad63be616869e1e,PodSandboxId:0e8087802602b09a2d0f605ce70c425e402b5a10e4f1497b2eb139342a335f5a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_RUNNING,CreatedAt:1722929728633687734,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-n7jmb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa496bfb-7bf0-4ebc-adbd-36d992d8e279,},Annotations:map[string]string{io.kubernetes.container.hash: db2b7a8c,io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8c7c569af258485ebf98698eba57ddc482e2819c27995c4da66803dc7a84de4,PodSandboxId:52b914be0c4a1b45fddcf2b0b959430fced7a213686ea1009a6c37f2a5a399fd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722929728711874905,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-118173,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 080589493a4905d0fb10f52a1a0e25a5,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.contain
er.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ea3f1672f91dabee2d9f8605538a8f3683ca4391c2c9deac554eacfddd243bb,PodSandboxId:6a8f5ef8d3fb5f340f9569eb21fa91020e3b44226daa521ebed7eafd2c5aac8c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722929728725790024,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-t24ww,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8effbe6c-3107-409e-9c94-dee4103d12b0,},Annotations:map[string]string{io.kubernetes.container.hash: 7604f64,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},
{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0280e428feb3f49d39e63996a569fe8099b785732ee02450df33eb25a8a39c88,PodSandboxId:3f467e52d9b47ff5ac03b4a454e0ce36266c74302644c05d81e42b312eca5b9b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722929728590957011,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-118173,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: 6346700b562b8d3f4c18c66b3813bc28,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2dd16c152a6e76f97a41f8091857757846cf00e2b45fbebefd70bc7191f71d6d,PodSandboxId:e774276d1db20b1d0af6107ff83ecf4ba4a28e6ce7cd5d48f1425a9085296c5c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722929728444127688,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-118173,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: f74055e61dcf5f2385b1934a1a8e3b73,},Annotations:map[string]string{io.kubernetes.container.hash: 27fe2ed3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eae811e3e34734c78cc8f46ad9c12e0bde5ad445281f50029620e65e67b92b11,PodSandboxId:46116e50c4c43a8f8143460932ea15d87c3499f60948ed31f758cc62c7baae81,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722929728393775474,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-118173,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65b7cf59bb7c42e61e37dcbebce4eec4,},Anno
tations:map[string]string{io.kubernetes.container.hash: fa683f08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10437bdde23713cae1b222edffa34be0e9583f43a83c01f10aae555f0dd7542e,PodSandboxId:3f725d083d59ecb8b65c8b09466e36c8b01dd5742fa8d97e108de4a49f47de04,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722929221668285640,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-rml5d,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4a82c0ca-1596-4dfe-a2f9-3a34f22480e5,},Annota
tions:map[string]string{io.kubernetes.container.hash: 950719af,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c60313490ee322d54c5b3908e628bff77d0ca10249f7253eb9cbd44745a805af,PodSandboxId:b221107c8052d9be54b23c0ee2e882f4adc57ade5813f27e7dfc4e4f71006663,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722929010372427230,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zscz4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef588a32-ac32-4e38-926e-6ee7a662a399,},Annotations:map[string]string{io.kuber
netes.container.hash: 38e96eed,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:321504a9bbbfc57376fccab559a6ebc7388670dc0bf123b5f9fd41b1da99adaa,PodSandboxId:4a9fe2c539265caf233fd5f3778ebc3e6092ce121ddf647e3f72622e31022e53,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722929010343638533,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-7db6d8ff4d-t24ww,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8effbe6c-3107-409e-9c94-dee4103d12b0,},Annotations:map[string]string{io.kubernetes.container.hash: 7604f64,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8f5ae0da505fc7f79225ba7115c83b9e909128bc28df767fce1fc3ba7a01392,PodSandboxId:5bbdb233d391e1127588924ac263113c9aa0148b24fc7113df3ca0891d7e0fdc,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3,Annotations:map[string]st
ring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_EXITED,CreatedAt:1722928998315769494,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-n7jmb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa496bfb-7bf0-4ebc-adbd-36d992d8e279,},Annotations:map[string]string{io.kubernetes.container.hash: db2b7a8c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5a5bdf0b2a62163f6aff6abe864eedc47b461e233989f23ef7770c226763325,PodSandboxId:2ed57fdbf65f1846c200a67b50a1043fe8c70dd79c90d1a1c67e5e0634c8088b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHa
ndler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722928994187880481,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ksm6v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 594bc178-8319-4922-9132-69a9c6890b95,},Annotations:map[string]string{io.kubernetes.container.hash: 50a78a81,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27f4fdf6fa77a51e1295c12b459cda687ed6ae68875218e24ba7959fb37814ed,PodSandboxId:5ebdb09ee65d6de5671cacfdefaaebdf7b5b00f25aa61461d39015ab5cb5578b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b767
22eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722928974549211585,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-118173,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 080589493a4905d0fb10f52a1a0e25a5,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:306b97b37955570f0a18d38ff1afab7bb09e01c47a3418dba2aeed0f00d6052d,PodSandboxId:5055641759bdb7b0e57568f6d5310c72587ea715571f3fff96ff5d200be59dc2,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0c
fd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722928974442938254,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-118173,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65b7cf59bb7c42e61e37dcbebce4eec4,},Annotations:map[string]string{io.kubernetes.container.hash: fa683f08,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3fe85acb-36ad-4d09-b1fa-4c3128067db8 name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 07:37:58 ha-118173 crio[3786]: time="2024-08-06 07:37:58.083626104Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=637bdb7d-6cfd-4ebc-be3e-0aa6e14a4798 name=/runtime.v1.RuntimeService/Version
	Aug 06 07:37:58 ha-118173 crio[3786]: time="2024-08-06 07:37:58.083834630Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=637bdb7d-6cfd-4ebc-be3e-0aa6e14a4798 name=/runtime.v1.RuntimeService/Version
	Aug 06 07:37:58 ha-118173 crio[3786]: time="2024-08-06 07:37:58.086506167Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9261fd14-af4b-45b1-8402-1e515bceeb29 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 06 07:37:58 ha-118173 crio[3786]: time="2024-08-06 07:37:58.087058902Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722929878086997782,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154770,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9261fd14-af4b-45b1-8402-1e515bceeb29 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 06 07:37:58 ha-118173 crio[3786]: time="2024-08-06 07:37:58.088017631Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7edbe0b5-943e-4465-879f-d817bfabc92e name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 07:37:58 ha-118173 crio[3786]: time="2024-08-06 07:37:58.088096687Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7edbe0b5-943e-4465-879f-d817bfabc92e name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 07:37:58 ha-118173 crio[3786]: time="2024-08-06 07:37:58.088531989Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:99f58b84fab60e2b606761b8ed13c48d86197a37ec557d278c6aeff4b215f6b9,PodSandboxId:9d298728c707a71c61f7e9c12d81847924d5702a23547701f7d70396538b9702,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722929782657022675,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8379466d-0241-40e4-a262-d762b3cda2eb,},Annotations:map[string]string{io.kubernetes.container.hash: d079fdad,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ebb3ef02d9872250e0c4614f3552d20ba04a677b81c274d9cf9061a86d7c9ce,PodSandboxId:e774276d1db20b1d0af6107ff83ecf4ba4a28e6ce7cd5d48f1425a9085296c5c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722929768628612088,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-118173,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f74055e61dcf5f2385b1934a1a8e3b73,},Annotations:map[string]string{io.kubernetes.container.hash: 27fe2ed3,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8e645fa59f15ec29730259bc834efd2b4740bd0cf9d98973d43ac2f3e60f8eb,PodSandboxId:3f467e52d9b47ff5ac03b4a454e0ce36266c74302644c05d81e42b312eca5b9b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722929765627015845,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-118173,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6346700b562b8d3f4c18c66b3813bc28,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb461c0c1a90ce934f5cc90096bf3ebadcbb030c865637c1b0d6f60c3134e0bb,PodSandboxId:c18a977f42d3413f04444d2f24aa487786b86d9e625780e6e1111c13f46b4c81,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722929761946284425,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-rml5d,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4a82c0ca-1596-4dfe-a2f9-3a34f22480e5,},Annotations:map[string]string{io.kubernetes.container.hash: 950719af,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d631bee7d098005cab603ace8514dea77a9fa74686bb0be29ea68d9a592c3e9c,PodSandboxId:3cf78ca258a5c291aeb00bdf810e1bbed94619b29783b0ec29e87984818cd2ff,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722929741194070005,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-118173,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e550e7b49e50c28915c957cd8c1df9d,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c038e3b4d8c535621dabf1ffa68a5bb717de411bccf8772e655b2c06282edf3,PodSandboxId:9d298728c707a71c61f7e9c12d81847924d5702a23547701f7d70396538b9702,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722929729209432391,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8379466d-0241-40e4-a262-d762b3cda2eb,},Annotations:map[string]string{io.kubernetes.container.hash: d079fdad,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcf0ac6e03c21e2b64b008ca3dcd8293a980e393216a3103acf1a60f8ef1dc3f,PodSandboxId:c1b179a473da688998355788cf02f16bea7187f30019e6786b025cc877ed72fa,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722929728492501519,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ksm6v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 594bc178-8319-4922-9132-69a9c6890b95,},Annotations:map[string]string{io.kubernetes.container.hash: 50a78a81,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:9529a894fdd4b09b00a809dc3ad326ff8ab57eba9ae9438200069348b4426b19,PodSandboxId:4a0b68a5db12d84fa3e54810b85090c13072ad782e230aa867e47298ad248fc6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722929728872421792,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zscz4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef588a32-ac32-4e38-926e-6ee7a662a399,},Annotations:map[string]string{io.kubernetes.container.hash: 38e96eed,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kub
ernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e67330d376d276df974f78a481179786f12a17135fd068411ad63be616869e1e,PodSandboxId:0e8087802602b09a2d0f605ce70c425e402b5a10e4f1497b2eb139342a335f5a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_RUNNING,CreatedAt:1722929728633687734,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-n7jmb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa496bfb-7bf0-4ebc-adbd-36d992d8e279,},Annotations:map[string]string{io.kubernetes.container.hash: db2b7a8c,io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8c7c569af258485ebf98698eba57ddc482e2819c27995c4da66803dc7a84de4,PodSandboxId:52b914be0c4a1b45fddcf2b0b959430fced7a213686ea1009a6c37f2a5a399fd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722929728711874905,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-118173,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 080589493a4905d0fb10f52a1a0e25a5,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.contain
er.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ea3f1672f91dabee2d9f8605538a8f3683ca4391c2c9deac554eacfddd243bb,PodSandboxId:6a8f5ef8d3fb5f340f9569eb21fa91020e3b44226daa521ebed7eafd2c5aac8c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722929728725790024,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-t24ww,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8effbe6c-3107-409e-9c94-dee4103d12b0,},Annotations:map[string]string{io.kubernetes.container.hash: 7604f64,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},
{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0280e428feb3f49d39e63996a569fe8099b785732ee02450df33eb25a8a39c88,PodSandboxId:3f467e52d9b47ff5ac03b4a454e0ce36266c74302644c05d81e42b312eca5b9b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722929728590957011,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-118173,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: 6346700b562b8d3f4c18c66b3813bc28,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2dd16c152a6e76f97a41f8091857757846cf00e2b45fbebefd70bc7191f71d6d,PodSandboxId:e774276d1db20b1d0af6107ff83ecf4ba4a28e6ce7cd5d48f1425a9085296c5c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722929728444127688,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-118173,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: f74055e61dcf5f2385b1934a1a8e3b73,},Annotations:map[string]string{io.kubernetes.container.hash: 27fe2ed3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eae811e3e34734c78cc8f46ad9c12e0bde5ad445281f50029620e65e67b92b11,PodSandboxId:46116e50c4c43a8f8143460932ea15d87c3499f60948ed31f758cc62c7baae81,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722929728393775474,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-118173,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65b7cf59bb7c42e61e37dcbebce4eec4,},Anno
tations:map[string]string{io.kubernetes.container.hash: fa683f08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10437bdde23713cae1b222edffa34be0e9583f43a83c01f10aae555f0dd7542e,PodSandboxId:3f725d083d59ecb8b65c8b09466e36c8b01dd5742fa8d97e108de4a49f47de04,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722929221668285640,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-rml5d,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4a82c0ca-1596-4dfe-a2f9-3a34f22480e5,},Annota
tions:map[string]string{io.kubernetes.container.hash: 950719af,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c60313490ee322d54c5b3908e628bff77d0ca10249f7253eb9cbd44745a805af,PodSandboxId:b221107c8052d9be54b23c0ee2e882f4adc57ade5813f27e7dfc4e4f71006663,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722929010372427230,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zscz4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef588a32-ac32-4e38-926e-6ee7a662a399,},Annotations:map[string]string{io.kuber
netes.container.hash: 38e96eed,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:321504a9bbbfc57376fccab559a6ebc7388670dc0bf123b5f9fd41b1da99adaa,PodSandboxId:4a9fe2c539265caf233fd5f3778ebc3e6092ce121ddf647e3f72622e31022e53,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722929010343638533,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-7db6d8ff4d-t24ww,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8effbe6c-3107-409e-9c94-dee4103d12b0,},Annotations:map[string]string{io.kubernetes.container.hash: 7604f64,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8f5ae0da505fc7f79225ba7115c83b9e909128bc28df767fce1fc3ba7a01392,PodSandboxId:5bbdb233d391e1127588924ac263113c9aa0148b24fc7113df3ca0891d7e0fdc,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3,Annotations:map[string]st
ring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_EXITED,CreatedAt:1722928998315769494,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-n7jmb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa496bfb-7bf0-4ebc-adbd-36d992d8e279,},Annotations:map[string]string{io.kubernetes.container.hash: db2b7a8c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5a5bdf0b2a62163f6aff6abe864eedc47b461e233989f23ef7770c226763325,PodSandboxId:2ed57fdbf65f1846c200a67b50a1043fe8c70dd79c90d1a1c67e5e0634c8088b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHa
ndler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722928994187880481,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ksm6v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 594bc178-8319-4922-9132-69a9c6890b95,},Annotations:map[string]string{io.kubernetes.container.hash: 50a78a81,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27f4fdf6fa77a51e1295c12b459cda687ed6ae68875218e24ba7959fb37814ed,PodSandboxId:5ebdb09ee65d6de5671cacfdefaaebdf7b5b00f25aa61461d39015ab5cb5578b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b767
22eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722928974549211585,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-118173,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 080589493a4905d0fb10f52a1a0e25a5,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:306b97b37955570f0a18d38ff1afab7bb09e01c47a3418dba2aeed0f00d6052d,PodSandboxId:5055641759bdb7b0e57568f6d5310c72587ea715571f3fff96ff5d200be59dc2,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0c
fd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722928974442938254,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-118173,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65b7cf59bb7c42e61e37dcbebce4eec4,},Annotations:map[string]string{io.kubernetes.container.hash: fa683f08,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7edbe0b5-943e-4465-879f-d817bfabc92e name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	99f58b84fab60       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       4                   9d298728c707a       storage-provisioner
	7ebb3ef02d987       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      About a minute ago   Running             kube-apiserver            3                   e774276d1db20       kube-apiserver-ha-118173
	d8e645fa59f15       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      About a minute ago   Running             kube-controller-manager   2                   3f467e52d9b47       kube-controller-manager-ha-118173
	bb461c0c1a90c       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      About a minute ago   Running             busybox                   1                   c18a977f42d34       busybox-fc5497c4f-rml5d
	d631bee7d0980       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      2 minutes ago        Running             kube-vip                  0                   3cf78ca258a5c       kube-vip-ha-118173
	4c038e3b4d8c5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      2 minutes ago        Exited              storage-provisioner       3                   9d298728c707a       storage-provisioner
	9529a894fdd4b       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      2 minutes ago        Running             coredns                   1                   4a0b68a5db12d       coredns-7db6d8ff4d-zscz4
	6ea3f1672f91d       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      2 minutes ago        Running             coredns                   1                   6a8f5ef8d3fb5       coredns-7db6d8ff4d-t24ww
	f8c7c569af258       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      2 minutes ago        Running             kube-scheduler            1                   52b914be0c4a1       kube-scheduler-ha-118173
	e67330d376d27       917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557                                      2 minutes ago        Running             kindnet-cni               1                   0e8087802602b       kindnet-n7jmb
	0280e428feb3f       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      2 minutes ago        Exited              kube-controller-manager   1                   3f467e52d9b47       kube-controller-manager-ha-118173
	bcf0ac6e03c21       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      2 minutes ago        Running             kube-proxy                1                   c1b179a473da6       kube-proxy-ksm6v
	2dd16c152a6e7       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      2 minutes ago        Exited              kube-apiserver            2                   e774276d1db20       kube-apiserver-ha-118173
	eae811e3e3473       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      2 minutes ago        Running             etcd                      1                   46116e50c4c43       etcd-ha-118173
	10437bdde2371       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   10 minutes ago       Exited              busybox                   0                   3f725d083d59e       busybox-fc5497c4f-rml5d
	c60313490ee32       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      14 minutes ago       Exited              coredns                   0                   b221107c8052d       coredns-7db6d8ff4d-zscz4
	321504a9bbbfc       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      14 minutes ago       Exited              coredns                   0                   4a9fe2c539265       coredns-7db6d8ff4d-t24ww
	b8f5ae0da505f       docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3    14 minutes ago       Exited              kindnet-cni               0                   5bbdb233d391e       kindnet-n7jmb
	e5a5bdf0b2a62       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      14 minutes ago       Exited              kube-proxy                0                   2ed57fdbf65f1       kube-proxy-ksm6v
	27f4fdf6fa77a       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      15 minutes ago       Exited              kube-scheduler            0                   5ebdb09ee65d6       kube-scheduler-ha-118173
	306b97b379555       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      15 minutes ago       Exited              etcd                      0                   5055641759bdb       etcd-ha-118173
	
	
	==> coredns [321504a9bbbfc57376fccab559a6ebc7388670dc0bf123b5f9fd41b1da99adaa] <==
	[INFO] 10.244.1.2:50320 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.000621303s
	[INFO] 10.244.1.2:50307 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001920609s
	[INFO] 10.244.0.4:35382 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.001483476s
	[INFO] 10.244.2.2:42914 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003426884s
	[INFO] 10.244.2.2:54544 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002889491s
	[INFO] 10.244.2.2:56241 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000153019s
	[INFO] 10.244.1.2:42127 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000287649s
	[INFO] 10.244.1.2:57726 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001778651s
	[INFO] 10.244.1.2:48193 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000186214s
	[INFO] 10.244.1.2:39745 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001882s
	[INFO] 10.244.0.4:43115 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00007628s
	[INFO] 10.244.0.4:43987 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001861604s
	[INFO] 10.244.0.4:36070 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000175723s
	[INFO] 10.244.0.4:57521 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000070761s
	[INFO] 10.244.2.2:54569 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000133483s
	[INFO] 10.244.2.2:45084 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000078772s
	[INFO] 10.244.0.4:53171 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000115977s
	[INFO] 10.244.0.4:46499 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000151186s
	[INFO] 10.244.1.2:46474 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000192738s
	[INFO] 10.244.1.2:42611 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000163773s
	[INFO] 10.244.0.4:56252 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000104237s
	[INFO] 10.244.0.4:53725 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000119201s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=2003&timeout=8m1s&timeoutSeconds=481&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: unexpected EOF
	
	
	==> coredns [6ea3f1672f91dabee2d9f8605538a8f3683ca4391c2c9deac554eacfddd243bb] <==
	[INFO] plugin/kubernetes: Trace[2007680759]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (06-Aug-2024 07:35:35.106) (total time: 10001ms):
	Trace[2007680759]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (07:35:45.107)
	Trace[2007680759]: [10.001373038s] [10.001373038s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:40116->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[987296960]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (06-Aug-2024 07:35:43.676) (total time: 10206ms):
	Trace[987296960]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:40116->10.96.0.1:443: read: connection reset by peer 10206ms (07:35:53.882)
	Trace[987296960]: [10.206694555s] [10.206694555s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:40116->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [9529a894fdd4b09b00a809dc3ad326ff8ab57eba9ae9438200069348b4426b19] <==
	Trace[2091369959]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10000ms (07:35:43.988)
	Trace[2091369959]: [10.000899279s] [10.000899279s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:42182->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:42182->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:39732->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:39732->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:42184->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[558389846]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (06-Aug-2024 07:35:43.774) (total time: 10109ms):
	Trace[558389846]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:42184->10.96.0.1:443: read: connection reset by peer 10109ms (07:35:53.883)
	Trace[558389846]: [10.109294055s] [10.109294055s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:42184->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [c60313490ee322d54c5b3908e628bff77d0ca10249f7253eb9cbd44745a805af] <==
	[INFO] 10.244.0.4:40510 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000107856s
	[INFO] 10.244.0.4:53202 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001964559s
	[INFO] 10.244.0.4:37382 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000120987s
	[INFO] 10.244.0.4:55432 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000068014s
	[INFO] 10.244.2.2:55791 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000121334s
	[INFO] 10.244.2.2:41409 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000170119s
	[INFO] 10.244.1.2:38348 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000159643s
	[INFO] 10.244.1.2:59261 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000120304s
	[INFO] 10.244.1.2:34766 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001388s
	[INFO] 10.244.1.2:51650 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00006659s
	[INFO] 10.244.0.4:60377 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000100376s
	[INFO] 10.244.0.4:36616 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000046708s
	[INFO] 10.244.2.2:40205 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000126264s
	[INFO] 10.244.2.2:43822 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000154718s
	[INFO] 10.244.2.2:55803 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000127422s
	[INFO] 10.244.2.2:36078 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000107543s
	[INFO] 10.244.1.2:57979 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000156818s
	[INFO] 10.244.1.2:46537 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000172879s
	[INFO] 10.244.0.4:53503 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000186753s
	[INFO] 10.244.0.4:59777 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000063489s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	
	
	==> describe nodes <==
	Name:               ha-118173
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-118173
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e92cb06692f5ea1ba801d10d148e5e92e807f9c8
	                    minikube.k8s.io/name=ha-118173
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_06T07_23_01_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 06 Aug 2024 07:22:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-118173
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 06 Aug 2024 07:37:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 06 Aug 2024 07:36:11 +0000   Tue, 06 Aug 2024 07:22:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 06 Aug 2024 07:36:11 +0000   Tue, 06 Aug 2024 07:22:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 06 Aug 2024 07:36:11 +0000   Tue, 06 Aug 2024 07:22:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 06 Aug 2024 07:36:11 +0000   Tue, 06 Aug 2024 07:23:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.164
	  Hostname:    ha-118173
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 bc4f3dc5ed054dcb94a91cb696ade310
	  System UUID:                bc4f3dc5-ed05-4dcb-94a9-1cb696ade310
	  Boot ID:                    34e88bd3-feb2-44fb-9f4d-def7e685eb93
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-rml5d              0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-7db6d8ff4d-t24ww             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 coredns-7db6d8ff4d-zscz4             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 etcd-ha-118173                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kindnet-n7jmb                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      14m
	  kube-system                 kube-apiserver-ha-118173             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-ha-118173    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-ksm6v                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-ha-118173             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-vip-ha-118173                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         73s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 106s                   kube-proxy       
	  Normal   Starting                 14m                    kube-proxy       
	  Normal   NodeHasSufficientMemory  15m                    kubelet          Node ha-118173 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  14m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 14m                    kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  14m                    kubelet          Node ha-118173 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    14m                    kubelet          Node ha-118173 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     14m                    kubelet          Node ha-118173 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           14m                    node-controller  Node ha-118173 event: Registered Node ha-118173 in Controller
	  Normal   NodeReady                14m                    kubelet          Node ha-118173 status is now: NodeReady
	  Normal   RegisteredNode           12m                    node-controller  Node ha-118173 event: Registered Node ha-118173 in Controller
	  Normal   RegisteredNode           11m                    node-controller  Node ha-118173 event: Registered Node ha-118173 in Controller
	  Warning  ContainerGCFailed        2m58s (x2 over 3m58s)  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           100s                   node-controller  Node ha-118173 event: Registered Node ha-118173 in Controller
	  Normal   RegisteredNode           96s                    node-controller  Node ha-118173 event: Registered Node ha-118173 in Controller
	  Normal   RegisteredNode           30s                    node-controller  Node ha-118173 event: Registered Node ha-118173 in Controller
	
	
	Name:               ha-118173-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-118173-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e92cb06692f5ea1ba801d10d148e5e92e807f9c8
	                    minikube.k8s.io/name=ha-118173
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_06T07_25_16_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 06 Aug 2024 07:25:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-118173-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 06 Aug 2024 07:37:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 06 Aug 2024 07:36:55 +0000   Tue, 06 Aug 2024 07:36:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 06 Aug 2024 07:36:55 +0000   Tue, 06 Aug 2024 07:36:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 06 Aug 2024 07:36:55 +0000   Tue, 06 Aug 2024 07:36:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 06 Aug 2024 07:36:55 +0000   Tue, 06 Aug 2024 07:36:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.203
	  Hostname:    ha-118173-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 1d472203b4d8448095b6313de6675a72
	  System UUID:                1d472203-b4d8-4480-95b6-313de6675a72
	  Boot ID:                    66643318-fea9-4928-bcf3-6d8a09896d20
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-j9d92                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ha-118173-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         12m
	  kube-system                 kindnet-z9w6l                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-apiserver-ha-118173-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-118173-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-k8sq6                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-118173-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-118173-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 101s                   kube-proxy       
	  Normal  Starting                 12m                    kube-proxy       
	  Normal  NodeAllocatableEnforced  12m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)      kubelet          Node ha-118173-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)      kubelet          Node ha-118173-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x7 over 12m)      kubelet          Node ha-118173-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           12m                    node-controller  Node ha-118173-m02 event: Registered Node ha-118173-m02 in Controller
	  Normal  RegisteredNode           12m                    node-controller  Node ha-118173-m02 event: Registered Node ha-118173-m02 in Controller
	  Normal  RegisteredNode           11m                    node-controller  Node ha-118173-m02 event: Registered Node ha-118173-m02 in Controller
	  Normal  NodeNotReady             9m1s                   node-controller  Node ha-118173-m02 status is now: NodeNotReady
	  Normal  Starting                 2m12s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m12s (x8 over 2m12s)  kubelet          Node ha-118173-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m12s (x8 over 2m12s)  kubelet          Node ha-118173-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m12s (x7 over 2m12s)  kubelet          Node ha-118173-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m12s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           100s                   node-controller  Node ha-118173-m02 event: Registered Node ha-118173-m02 in Controller
	  Normal  RegisteredNode           96s                    node-controller  Node ha-118173-m02 event: Registered Node ha-118173-m02 in Controller
	  Normal  RegisteredNode           30s                    node-controller  Node ha-118173-m02 event: Registered Node ha-118173-m02 in Controller
	
	
	Name:               ha-118173-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-118173-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e92cb06692f5ea1ba801d10d148e5e92e807f9c8
	                    minikube.k8s.io/name=ha-118173
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_06T07_26_30_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 06 Aug 2024 07:26:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-118173-m03
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 06 Aug 2024 07:37:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 06 Aug 2024 07:37:33 +0000   Tue, 06 Aug 2024 07:37:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 06 Aug 2024 07:37:33 +0000   Tue, 06 Aug 2024 07:37:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 06 Aug 2024 07:37:33 +0000   Tue, 06 Aug 2024 07:37:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 06 Aug 2024 07:37:33 +0000   Tue, 06 Aug 2024 07:37:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.5
	  Hostname:    ha-118173-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 323fad8ca5474069b5e10e092f6ed68a
	  System UUID:                323fad8c-a547-4069-b5e1-0e092f6ed68a
	  Boot ID:                    ccb9ef14-2cb4-4254-be1a-08343e04e010
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-852mp                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ha-118173-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         11m
	  kube-system                 kindnet-2bmzh                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      11m
	  kube-system                 kube-apiserver-ha-118173-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-118173-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-sw4nj                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-118173-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-118173-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 11m                kube-proxy       
	  Normal   Starting                 38s                kube-proxy       
	  Normal   RegisteredNode           11m                node-controller  Node ha-118173-m03 event: Registered Node ha-118173-m03 in Controller
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node ha-118173-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node ha-118173-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node ha-118173-m03 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           11m                node-controller  Node ha-118173-m03 event: Registered Node ha-118173-m03 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-118173-m03 event: Registered Node ha-118173-m03 in Controller
	  Normal   RegisteredNode           100s               node-controller  Node ha-118173-m03 event: Registered Node ha-118173-m03 in Controller
	  Normal   RegisteredNode           96s                node-controller  Node ha-118173-m03 event: Registered Node ha-118173-m03 in Controller
	  Normal   NodeNotReady             60s                node-controller  Node ha-118173-m03 status is now: NodeNotReady
	  Normal   Starting                 55s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  55s                kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 55s                kubelet          Node ha-118173-m03 has been rebooted, boot id: ccb9ef14-2cb4-4254-be1a-08343e04e010
	  Normal   NodeHasSufficientMemory  55s (x2 over 55s)  kubelet          Node ha-118173-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    55s (x2 over 55s)  kubelet          Node ha-118173-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     55s (x2 over 55s)  kubelet          Node ha-118173-m03 status is now: NodeHasSufficientPID
	  Normal   NodeReady                55s                kubelet          Node ha-118173-m03 status is now: NodeReady
	  Normal   RegisteredNode           30s                node-controller  Node ha-118173-m03 event: Registered Node ha-118173-m03 in Controller
	
	
	Name:               ha-118173-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-118173-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e92cb06692f5ea1ba801d10d148e5e92e807f9c8
	                    minikube.k8s.io/name=ha-118173
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_06T07_27_39_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 06 Aug 2024 07:27:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-118173-m04
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 06 Aug 2024 07:37:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 06 Aug 2024 07:37:50 +0000   Tue, 06 Aug 2024 07:37:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 06 Aug 2024 07:37:50 +0000   Tue, 06 Aug 2024 07:37:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 06 Aug 2024 07:37:50 +0000   Tue, 06 Aug 2024 07:37:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 06 Aug 2024 07:37:50 +0000   Tue, 06 Aug 2024 07:37:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.107
	  Hostname:    ha-118173-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a0510d6c5dda47be93ffda6ba4fd5419
	  System UUID:                a0510d6c-5dda-47be-93ff-da6ba4fd5419
	  Boot ID:                    c2e1f570-28d9-485d-a15a-e0d3f781139a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-97v8n       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-proxy-q4x6p    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 4s                 kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   NodeHasSufficientMemory  10m (x2 over 10m)  kubelet          Node ha-118173-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x2 over 10m)  kubelet          Node ha-118173-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x2 over 10m)  kubelet          Node ha-118173-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           10m                node-controller  Node ha-118173-m04 event: Registered Node ha-118173-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-118173-m04 event: Registered Node ha-118173-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-118173-m04 event: Registered Node ha-118173-m04 in Controller
	  Normal   NodeReady                10m                kubelet          Node ha-118173-m04 status is now: NodeReady
	  Normal   RegisteredNode           100s               node-controller  Node ha-118173-m04 event: Registered Node ha-118173-m04 in Controller
	  Normal   RegisteredNode           96s                node-controller  Node ha-118173-m04 event: Registered Node ha-118173-m04 in Controller
	  Normal   NodeNotReady             60s                node-controller  Node ha-118173-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           30s                node-controller  Node ha-118173-m04 event: Registered Node ha-118173-m04 in Controller
	  Normal   Starting                 8s                 kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  8s                 kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  8s (x2 over 8s)    kubelet          Node ha-118173-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8s (x2 over 8s)    kubelet          Node ha-118173-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8s (x2 over 8s)    kubelet          Node ha-118173-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 8s                 kubelet          Node ha-118173-m04 has been rebooted, boot id: c2e1f570-28d9-485d-a15a-e0d3f781139a
	  Normal   NodeReady                8s                 kubelet          Node ha-118173-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[  +9.212519] systemd-fstab-generator[593]: Ignoring "noauto" option for root device
	[  +0.057280] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.055993] systemd-fstab-generator[605]: Ignoring "noauto" option for root device
	[  +0.205541] systemd-fstab-generator[619]: Ignoring "noauto" option for root device
	[  +0.119779] systemd-fstab-generator[631]: Ignoring "noauto" option for root device
	[  +0.271137] systemd-fstab-generator[661]: Ignoring "noauto" option for root device
	[  +4.288191] systemd-fstab-generator[759]: Ignoring "noauto" option for root device
	[  +0.057342] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.007696] systemd-fstab-generator[935]: Ignoring "noauto" option for root device
	[  +1.004774] kauditd_printk_skb: 62 callbacks suppressed
	[  +5.981141] systemd-fstab-generator[1356]: Ignoring "noauto" option for root device
	[  +0.087170] kauditd_printk_skb: 35 callbacks suppressed
	[Aug 6 07:23] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.080054] kauditd_printk_skb: 35 callbacks suppressed
	[Aug 6 07:25] kauditd_printk_skb: 24 callbacks suppressed
	[Aug 6 07:32] kauditd_printk_skb: 1 callbacks suppressed
	[Aug 6 07:35] systemd-fstab-generator[3702]: Ignoring "noauto" option for root device
	[  +0.148178] systemd-fstab-generator[3714]: Ignoring "noauto" option for root device
	[  +0.200114] systemd-fstab-generator[3728]: Ignoring "noauto" option for root device
	[  +0.158270] systemd-fstab-generator[3740]: Ignoring "noauto" option for root device
	[  +0.303387] systemd-fstab-generator[3768]: Ignoring "noauto" option for root device
	[  +3.658124] systemd-fstab-generator[3873]: Ignoring "noauto" option for root device
	[  +5.822187] kauditd_printk_skb: 122 callbacks suppressed
	[ +13.107609] kauditd_printk_skb: 86 callbacks suppressed
	[Aug 6 07:36] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> etcd [306b97b37955570f0a18d38ff1afab7bb09e01c47a3418dba2aeed0f00d6052d] <==
	2024/08/06 07:33:46 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/08/06 07:33:46 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/08/06 07:33:46 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/08/06 07:33:46 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-08-06T07:33:46.466511Z","caller":"etcdserver/v3_server.go:897","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":6999316365728003832,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-08-06T07:33:46.530482Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.164:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-06T07:33:46.530523Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.164:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-06T07:33:46.530683Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"6a00153c0a3e6122","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-08-06T07:33:46.530894Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"9fed855c7863c2a"}
	{"level":"info","ts":"2024-08-06T07:33:46.530918Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"9fed855c7863c2a"}
	{"level":"info","ts":"2024-08-06T07:33:46.530944Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"9fed855c7863c2a"}
	{"level":"info","ts":"2024-08-06T07:33:46.531049Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"6a00153c0a3e6122","remote-peer-id":"9fed855c7863c2a"}
	{"level":"info","ts":"2024-08-06T07:33:46.531103Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"6a00153c0a3e6122","remote-peer-id":"9fed855c7863c2a"}
	{"level":"info","ts":"2024-08-06T07:33:46.531135Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"6a00153c0a3e6122","remote-peer-id":"9fed855c7863c2a"}
	{"level":"info","ts":"2024-08-06T07:33:46.531146Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"9fed855c7863c2a"}
	{"level":"info","ts":"2024-08-06T07:33:46.531152Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"7c092d3c19282747"}
	{"level":"info","ts":"2024-08-06T07:33:46.53116Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"7c092d3c19282747"}
	{"level":"info","ts":"2024-08-06T07:33:46.531209Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"7c092d3c19282747"}
	{"level":"info","ts":"2024-08-06T07:33:46.531319Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"6a00153c0a3e6122","remote-peer-id":"7c092d3c19282747"}
	{"level":"info","ts":"2024-08-06T07:33:46.531412Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"6a00153c0a3e6122","remote-peer-id":"7c092d3c19282747"}
	{"level":"info","ts":"2024-08-06T07:33:46.53149Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"6a00153c0a3e6122","remote-peer-id":"7c092d3c19282747"}
	{"level":"info","ts":"2024-08-06T07:33:46.531532Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"7c092d3c19282747"}
	{"level":"info","ts":"2024-08-06T07:33:46.535372Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.164:2380"}
	{"level":"info","ts":"2024-08-06T07:33:46.535602Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.164:2380"}
	{"level":"info","ts":"2024-08-06T07:33:46.535667Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-118173","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.164:2380"],"advertise-client-urls":["https://192.168.39.164:2379"]}
	
	
	==> etcd [eae811e3e34734c78cc8f46ad9c12e0bde5ad445281f50029620e65e67b92b11] <==
	{"level":"warn","ts":"2024-08-06T07:36:58.28399Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6a00153c0a3e6122","from":"6a00153c0a3e6122","remote-peer-id":"7c092d3c19282747","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-06T07:36:58.383717Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6a00153c0a3e6122","from":"6a00153c0a3e6122","remote-peer-id":"7c092d3c19282747","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-06T07:36:58.39003Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6a00153c0a3e6122","from":"6a00153c0a3e6122","remote-peer-id":"7c092d3c19282747","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-06T07:36:59.412794Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"7c092d3c19282747","rtt":"0s","error":"dial tcp 192.168.39.5:2380: connect: no route to host"}
	{"level":"warn","ts":"2024-08-06T07:36:59.413076Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"7c092d3c19282747","rtt":"0s","error":"dial tcp 192.168.39.5:2380: connect: no route to host"}
	{"level":"warn","ts":"2024-08-06T07:37:02.132624Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.5:2380/version","remote-member-id":"7c092d3c19282747","error":"Get \"https://192.168.39.5:2380/version\": dial tcp 192.168.39.5:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-06T07:37:02.132689Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"7c092d3c19282747","error":"Get \"https://192.168.39.5:2380/version\": dial tcp 192.168.39.5:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-06T07:37:04.413398Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"7c092d3c19282747","rtt":"0s","error":"dial tcp 192.168.39.5:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-06T07:37:04.413528Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"7c092d3c19282747","rtt":"0s","error":"dial tcp 192.168.39.5:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-06T07:37:06.135644Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.5:2380/version","remote-member-id":"7c092d3c19282747","error":"Get \"https://192.168.39.5:2380/version\": dial tcp 192.168.39.5:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-06T07:37:06.13571Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"7c092d3c19282747","error":"Get \"https://192.168.39.5:2380/version\": dial tcp 192.168.39.5:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-06T07:37:09.414413Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"7c092d3c19282747","rtt":"0s","error":"dial tcp 192.168.39.5:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-06T07:37:09.414643Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"7c092d3c19282747","rtt":"0s","error":"dial tcp 192.168.39.5:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-06T07:37:10.138246Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.5:2380/version","remote-member-id":"7c092d3c19282747","error":"Get \"https://192.168.39.5:2380/version\": dial tcp 192.168.39.5:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-06T07:37:10.138356Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"7c092d3c19282747","error":"Get \"https://192.168.39.5:2380/version\": dial tcp 192.168.39.5:2380: connect: connection refused"}
	{"level":"info","ts":"2024-08-06T07:37:10.171955Z","caller":"traceutil/trace.go:171","msg":"trace[548804337] transaction","detail":"{read_only:false; response_revision:2522; number_of_response:1; }","duration":"113.141959ms","start":"2024-08-06T07:37:10.058791Z","end":"2024-08-06T07:37:10.171933Z","steps":["trace[548804337] 'process raft request'  (duration: 113.046471ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-06T07:37:10.724791Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"6a00153c0a3e6122","to":"7c092d3c19282747","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-08-06T07:37:10.724946Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"7c092d3c19282747"}
	{"level":"info","ts":"2024-08-06T07:37:10.725005Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"6a00153c0a3e6122","remote-peer-id":"7c092d3c19282747"}
	{"level":"info","ts":"2024-08-06T07:37:10.730872Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"6a00153c0a3e6122","remote-peer-id":"7c092d3c19282747"}
	{"level":"info","ts":"2024-08-06T07:37:10.736248Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"6a00153c0a3e6122","remote-peer-id":"7c092d3c19282747"}
	{"level":"info","ts":"2024-08-06T07:37:10.738053Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"6a00153c0a3e6122","to":"7c092d3c19282747","stream-type":"stream Message"}
	{"level":"info","ts":"2024-08-06T07:37:10.738182Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"6a00153c0a3e6122","remote-peer-id":"7c092d3c19282747"}
	{"level":"info","ts":"2024-08-06T07:37:11.732994Z","caller":"traceutil/trace.go:171","msg":"trace[491905090] transaction","detail":"{read_only:false; response_revision:2530; number_of_response:1; }","duration":"115.088451ms","start":"2024-08-06T07:37:11.617855Z","end":"2024-08-06T07:37:11.732943Z","steps":["trace[491905090] 'process raft request'  (duration: 114.9643ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-06T07:37:14.672959Z","caller":"traceutil/trace.go:171","msg":"trace[459309595] transaction","detail":"{read_only:false; response_revision:2542; number_of_response:1; }","duration":"147.393069ms","start":"2024-08-06T07:37:14.525535Z","end":"2024-08-06T07:37:14.672928Z","steps":["trace[459309595] 'process raft request'  (duration: 142.306514ms)"],"step_count":1}
	
	
	==> kernel <==
	 07:37:58 up 15 min,  0 users,  load average: 0.27, 0.51, 0.34
	Linux ha-118173 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [b8f5ae0da505fc7f79225ba7115c83b9e909128bc28df767fce1fc3ba7a01392] <==
	I0806 07:33:19.272783       1 main.go:295] Handling node with IPs: map[192.168.39.164:{}]
	I0806 07:33:19.272985       1 main.go:299] handling current node
	I0806 07:33:19.273051       1 main.go:295] Handling node with IPs: map[192.168.39.203:{}]
	I0806 07:33:19.273071       1 main.go:322] Node ha-118173-m02 has CIDR [10.244.1.0/24] 
	I0806 07:33:19.273255       1 main.go:295] Handling node with IPs: map[192.168.39.5:{}]
	I0806 07:33:19.273279       1 main.go:322] Node ha-118173-m03 has CIDR [10.244.2.0/24] 
	I0806 07:33:19.273350       1 main.go:295] Handling node with IPs: map[192.168.39.107:{}]
	I0806 07:33:19.273369       1 main.go:322] Node ha-118173-m04 has CIDR [10.244.3.0/24] 
	I0806 07:33:29.275367       1 main.go:295] Handling node with IPs: map[192.168.39.164:{}]
	I0806 07:33:29.275414       1 main.go:299] handling current node
	I0806 07:33:29.275429       1 main.go:295] Handling node with IPs: map[192.168.39.203:{}]
	I0806 07:33:29.275434       1 main.go:322] Node ha-118173-m02 has CIDR [10.244.1.0/24] 
	I0806 07:33:29.275664       1 main.go:295] Handling node with IPs: map[192.168.39.5:{}]
	I0806 07:33:29.275695       1 main.go:322] Node ha-118173-m03 has CIDR [10.244.2.0/24] 
	I0806 07:33:29.275759       1 main.go:295] Handling node with IPs: map[192.168.39.107:{}]
	I0806 07:33:29.275765       1 main.go:322] Node ha-118173-m04 has CIDR [10.244.3.0/24] 
	I0806 07:33:39.271928       1 main.go:295] Handling node with IPs: map[192.168.39.5:{}]
	I0806 07:33:39.272031       1 main.go:322] Node ha-118173-m03 has CIDR [10.244.2.0/24] 
	I0806 07:33:39.272297       1 main.go:295] Handling node with IPs: map[192.168.39.107:{}]
	I0806 07:33:39.272348       1 main.go:322] Node ha-118173-m04 has CIDR [10.244.3.0/24] 
	I0806 07:33:39.272420       1 main.go:295] Handling node with IPs: map[192.168.39.164:{}]
	I0806 07:33:39.272439       1 main.go:299] handling current node
	I0806 07:33:39.272461       1 main.go:295] Handling node with IPs: map[192.168.39.203:{}]
	I0806 07:33:39.272479       1 main.go:322] Node ha-118173-m02 has CIDR [10.244.1.0/24] 
	E0806 07:33:39.674227       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=2011&timeout=5m45s&timeoutSeconds=345&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=7, ErrCode=NO_ERROR, debug=""
	
	
	==> kindnet [e67330d376d276df974f78a481179786f12a17135fd068411ad63be616869e1e] <==
	I0806 07:37:19.899737       1 main.go:322] Node ha-118173-m03 has CIDR [10.244.2.0/24] 
	I0806 07:37:29.898693       1 main.go:295] Handling node with IPs: map[192.168.39.107:{}]
	I0806 07:37:29.898741       1 main.go:322] Node ha-118173-m04 has CIDR [10.244.3.0/24] 
	I0806 07:37:29.898869       1 main.go:295] Handling node with IPs: map[192.168.39.164:{}]
	I0806 07:37:29.898876       1 main.go:299] handling current node
	I0806 07:37:29.898887       1 main.go:295] Handling node with IPs: map[192.168.39.203:{}]
	I0806 07:37:29.898891       1 main.go:322] Node ha-118173-m02 has CIDR [10.244.1.0/24] 
	I0806 07:37:29.898976       1 main.go:295] Handling node with IPs: map[192.168.39.5:{}]
	I0806 07:37:29.899000       1 main.go:322] Node ha-118173-m03 has CIDR [10.244.2.0/24] 
	I0806 07:37:39.901987       1 main.go:295] Handling node with IPs: map[192.168.39.203:{}]
	I0806 07:37:39.902039       1 main.go:322] Node ha-118173-m02 has CIDR [10.244.1.0/24] 
	I0806 07:37:39.902173       1 main.go:295] Handling node with IPs: map[192.168.39.5:{}]
	I0806 07:37:39.902194       1 main.go:322] Node ha-118173-m03 has CIDR [10.244.2.0/24] 
	I0806 07:37:39.902246       1 main.go:295] Handling node with IPs: map[192.168.39.107:{}]
	I0806 07:37:39.902267       1 main.go:322] Node ha-118173-m04 has CIDR [10.244.3.0/24] 
	I0806 07:37:39.902378       1 main.go:295] Handling node with IPs: map[192.168.39.164:{}]
	I0806 07:37:39.902402       1 main.go:299] handling current node
	I0806 07:37:49.906971       1 main.go:295] Handling node with IPs: map[192.168.39.107:{}]
	I0806 07:37:49.907299       1 main.go:322] Node ha-118173-m04 has CIDR [10.244.3.0/24] 
	I0806 07:37:49.907791       1 main.go:295] Handling node with IPs: map[192.168.39.164:{}]
	I0806 07:37:49.907874       1 main.go:299] handling current node
	I0806 07:37:49.907905       1 main.go:295] Handling node with IPs: map[192.168.39.203:{}]
	I0806 07:37:49.907930       1 main.go:322] Node ha-118173-m02 has CIDR [10.244.1.0/24] 
	I0806 07:37:49.908195       1 main.go:295] Handling node with IPs: map[192.168.39.5:{}]
	I0806 07:37:49.908280       1 main.go:322] Node ha-118173-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [2dd16c152a6e76f97a41f8091857757846cf00e2b45fbebefd70bc7191f71d6d] <==
	I0806 07:35:29.440717       1 options.go:221] external host was not specified, using 192.168.39.164
	I0806 07:35:29.445460       1 server.go:148] Version: v1.30.3
	I0806 07:35:29.445612       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0806 07:35:30.090974       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0806 07:35:30.106678       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0806 07:35:30.114996       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0806 07:35:30.115034       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0806 07:35:30.115299       1 instance.go:299] Using reconciler: lease
	W0806 07:35:50.088335       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0806 07:35:50.090452       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	W0806 07:35:50.118761       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F0806 07:35:50.118921       1 instance.go:292] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-apiserver [7ebb3ef02d9872250e0c4614f3552d20ba04a677b81c274d9cf9061a86d7c9ce] <==
	I0806 07:36:10.315347       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0806 07:36:10.342879       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0806 07:36:10.342915       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0806 07:36:10.389931       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0806 07:36:10.395898       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0806 07:36:10.395943       1 policy_source.go:224] refreshing policies
	I0806 07:36:10.410176       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0806 07:36:10.415443       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0806 07:36:10.416772       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0806 07:36:10.418085       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0806 07:36:10.416791       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0806 07:36:10.416951       1 shared_informer.go:320] Caches are synced for configmaps
	I0806 07:36:10.424294       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0806 07:36:10.431393       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	W0806 07:36:10.435092       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.203 192.168.39.5]
	I0806 07:36:10.436780       1 controller.go:615] quota admission added evaluator for: endpoints
	I0806 07:36:10.443986       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0806 07:36:10.444232       1 aggregator.go:165] initial CRD sync complete...
	I0806 07:36:10.444331       1 autoregister_controller.go:141] Starting autoregister controller
	I0806 07:36:10.444473       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0806 07:36:10.444624       1 cache.go:39] Caches are synced for autoregister controller
	I0806 07:36:10.454487       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0806 07:36:10.458176       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0806 07:36:11.321080       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0806 07:36:11.675689       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.164 192.168.39.203 192.168.39.5]
	
	
	==> kube-controller-manager [0280e428feb3f49d39e63996a569fe8099b785732ee02450df33eb25a8a39c88] <==
	I0806 07:35:30.038688       1 serving.go:380] Generated self-signed cert in-memory
	I0806 07:35:30.388338       1 controllermanager.go:189] "Starting" version="v1.30.3"
	I0806 07:35:30.388438       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0806 07:35:30.392013       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0806 07:35:30.392884       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0806 07:35:30.393097       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0806 07:35:30.393211       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E0806 07:35:51.125837       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.164:8443/healthz\": dial tcp 192.168.39.164:8443: connect: connection refused"
	
	
	==> kube-controller-manager [d8e645fa59f15ec29730259bc834efd2b4740bd0cf9d98973d43ac2f3e60f8eb] <==
	I0806 07:36:22.954857       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0806 07:36:22.955911       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0806 07:36:22.957621       1 shared_informer.go:320] Caches are synced for endpoint
	I0806 07:36:22.971749       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0806 07:36:23.009827       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0806 07:36:23.112704       1 shared_informer.go:320] Caches are synced for resource quota
	I0806 07:36:23.127649       1 shared_informer.go:320] Caches are synced for HPA
	I0806 07:36:23.134059       1 shared_informer.go:320] Caches are synced for resource quota
	I0806 07:36:23.566096       1 shared_informer.go:320] Caches are synced for garbage collector
	I0806 07:36:23.592884       1 shared_informer.go:320] Caches are synced for garbage collector
	I0806 07:36:23.592935       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0806 07:36:30.132030       1 endpointslice_controller.go:311] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-rh4wq EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-rh4wq\": the object has been modified; please apply your changes to the latest version and try again"
	I0806 07:36:30.132440       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"c5ebbc7e-2d4a-4e5b-b078-8a3d2e10d031", APIVersion:"v1", ResourceVersion:"254", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-rh4wq EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-rh4wq": the object has been modified; please apply your changes to the latest version and try again
	I0806 07:36:30.153475       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="64.0005ms"
	I0806 07:36:30.153675       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="57.533µs"
	I0806 07:36:40.105024       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="18.111719ms"
	I0806 07:36:40.106180       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="144.122µs"
	I0806 07:36:40.110999       1 endpointslice_controller.go:311] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-rh4wq EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-rh4wq\": the object has been modified; please apply your changes to the latest version and try again"
	I0806 07:36:40.111258       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"c5ebbc7e-2d4a-4e5b-b078-8a3d2e10d031", APIVersion:"v1", ResourceVersion:"254", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-rh4wq EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-rh4wq": the object has been modified; please apply your changes to the latest version and try again
	I0806 07:36:58.452971       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="17.504109ms"
	I0806 07:36:58.454153       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="151.4µs"
	I0806 07:37:04.391972       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="93.373µs"
	I0806 07:37:22.947749       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="35.774087ms"
	I0806 07:37:22.948313       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="227.74µs"
	I0806 07:37:50.634307       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-118173-m04"
	
	
	==> kube-proxy [bcf0ac6e03c21e2b64b008ca3dcd8293a980e393216a3103acf1a60f8ef1dc3f] <==
	I0806 07:35:30.437901       1 server_linux.go:69] "Using iptables proxy"
	E0806 07:35:31.867482       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-118173\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0806 07:35:34.938013       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-118173\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0806 07:35:38.010294       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-118173\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0806 07:35:44.153963       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-118173\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0806 07:35:53.371962       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-118173\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0806 07:36:11.802035       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-118173\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0806 07:36:11.802120       1 server.go:1032] "Can't determine this node's IP, assuming loopback; if this is incorrect, please set the --bind-address flag"
	I0806 07:36:11.889144       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0806 07:36:11.889275       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0806 07:36:11.889311       1 server_linux.go:165] "Using iptables Proxier"
	I0806 07:36:11.901455       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0806 07:36:11.901878       1 server.go:872] "Version info" version="v1.30.3"
	I0806 07:36:11.902148       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0806 07:36:11.903775       1 config.go:192] "Starting service config controller"
	I0806 07:36:11.903832       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0806 07:36:11.903872       1 config.go:101] "Starting endpoint slice config controller"
	I0806 07:36:11.903889       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0806 07:36:11.904788       1 config.go:319] "Starting node config controller"
	I0806 07:36:11.904831       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0806 07:36:12.003982       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0806 07:36:12.004063       1 shared_informer.go:320] Caches are synced for service config
	I0806 07:36:12.008108       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [e5a5bdf0b2a62163f6aff6abe864eedc47b461e233989f23ef7770c226763325] <==
	W0806 07:32:24.090367       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-118173&resourceVersion=2011": dial tcp 192.168.39.254:8443: connect: no route to host
	E0806 07:32:24.090626       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-118173&resourceVersion=2011": dial tcp 192.168.39.254:8443: connect: no route to host
	E0806 07:32:24.090752       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2003": dial tcp 192.168.39.254:8443: connect: no route to host
	W0806 07:32:30.489974       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2003": dial tcp 192.168.39.254:8443: connect: no route to host
	E0806 07:32:30.490040       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2003": dial tcp 192.168.39.254:8443: connect: no route to host
	W0806 07:32:30.490112       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-118173&resourceVersion=2011": dial tcp 192.168.39.254:8443: connect: no route to host
	E0806 07:32:30.490159       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-118173&resourceVersion=2011": dial tcp 192.168.39.254:8443: connect: no route to host
	W0806 07:32:30.490314       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1976": dial tcp 192.168.39.254:8443: connect: no route to host
	E0806 07:32:30.490511       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1976": dial tcp 192.168.39.254:8443: connect: no route to host
	W0806 07:32:42.906164       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-118173&resourceVersion=2011": dial tcp 192.168.39.254:8443: connect: no route to host
	W0806 07:32:42.907236       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2003": dial tcp 192.168.39.254:8443: connect: no route to host
	E0806 07:32:42.907331       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2003": dial tcp 192.168.39.254:8443: connect: no route to host
	W0806 07:32:42.907493       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1976": dial tcp 192.168.39.254:8443: connect: no route to host
	E0806 07:32:42.907542       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1976": dial tcp 192.168.39.254:8443: connect: no route to host
	E0806 07:32:42.907603       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-118173&resourceVersion=2011": dial tcp 192.168.39.254:8443: connect: no route to host
	W0806 07:32:58.267364       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2003": dial tcp 192.168.39.254:8443: connect: no route to host
	E0806 07:32:58.267537       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2003": dial tcp 192.168.39.254:8443: connect: no route to host
	W0806 07:33:01.339818       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-118173&resourceVersion=2011": dial tcp 192.168.39.254:8443: connect: no route to host
	E0806 07:33:01.339916       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-118173&resourceVersion=2011": dial tcp 192.168.39.254:8443: connect: no route to host
	W0806 07:33:04.411257       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1976": dial tcp 192.168.39.254:8443: connect: no route to host
	E0806 07:33:04.411326       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1976": dial tcp 192.168.39.254:8443: connect: no route to host
	W0806 07:33:32.059472       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-118173&resourceVersion=2011": dial tcp 192.168.39.254:8443: connect: no route to host
	E0806 07:33:32.060366       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-118173&resourceVersion=2011": dial tcp 192.168.39.254:8443: connect: no route to host
	W0806 07:33:44.347617       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1976": dial tcp 192.168.39.254:8443: connect: no route to host
	E0806 07:33:44.347737       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1976": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-scheduler [27f4fdf6fa77a51e1295c12b459cda687ed6ae68875218e24ba7959fb37814ed] <==
	W0806 07:33:42.235539       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0806 07:33:42.235630       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0806 07:33:42.240813       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0806 07:33:42.240915       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0806 07:33:42.414364       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0806 07:33:42.414453       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0806 07:33:42.605282       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0806 07:33:42.605374       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0806 07:33:42.708045       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0806 07:33:42.708181       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0806 07:33:42.786890       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0806 07:33:42.787018       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0806 07:33:42.800242       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0806 07:33:42.800407       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0806 07:33:42.830828       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0806 07:33:42.830934       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0806 07:33:42.890127       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0806 07:33:42.890234       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0806 07:33:42.955486       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0806 07:33:42.955595       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0806 07:33:43.488770       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0806 07:33:43.488816       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0806 07:33:46.179357       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0806 07:33:46.179397       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0806 07:33:46.374456       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [f8c7c569af258485ebf98698eba57ddc482e2819c27995c4da66803dc7a84de4] <==
	W0806 07:36:01.117086       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.164:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.164:8443: connect: connection refused
	E0806 07:36:01.117242       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.39.164:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.164:8443: connect: connection refused
	W0806 07:36:05.651441       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.168.39.164:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.164:8443: connect: connection refused
	E0806 07:36:05.651479       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.39.164:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.164:8443: connect: connection refused
	W0806 07:36:06.849476       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.164:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.164:8443: connect: connection refused
	E0806 07:36:06.849543       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.39.164:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.164:8443: connect: connection refused
	W0806 07:36:07.196986       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.39.164:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.164:8443: connect: connection refused
	E0806 07:36:07.197093       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.39.164:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.164:8443: connect: connection refused
	W0806 07:36:07.859339       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.164:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.164:8443: connect: connection refused
	E0806 07:36:07.859393       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.39.164:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.164:8443: connect: connection refused
	W0806 07:36:07.993368       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.39.164:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.164:8443: connect: connection refused
	E0806 07:36:07.993495       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.39.164:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.164:8443: connect: connection refused
	W0806 07:36:08.234202       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.164:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.164:8443: connect: connection refused
	E0806 07:36:08.234254       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.164:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.164:8443: connect: connection refused
	W0806 07:36:08.503693       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.164:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.164:8443: connect: connection refused
	E0806 07:36:08.503763       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.39.164:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.164:8443: connect: connection refused
	W0806 07:36:10.347323       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0806 07:36:10.350096       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0806 07:36:10.349956       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0806 07:36:10.350210       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0806 07:36:10.350012       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0806 07:36:10.350285       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0806 07:36:10.350060       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0806 07:36:10.350332       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0806 07:36:22.132414       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 06 07:36:02 ha-118173 kubelet[1363]: I0806 07:36:02.586080    1363 status_manager.go:853] "Failed to get status for pod" podUID="8effbe6c-3107-409e-9c94-dee4103d12b0" pod="kube-system/coredns-7db6d8ff4d-t24ww" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-t24ww\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Aug 06 07:36:02 ha-118173 kubelet[1363]: E0806 07:36:02.586048    1363 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/events/kube-apiserver-ha-118173.17e91342f8bc4c23\": dial tcp 192.168.39.254:8443: connect: no route to host" event="&Event{ObjectMeta:{kube-apiserver-ha-118173.17e91342f8bc4c23  kube-system    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ha-118173,UID:f74055e61dcf5f2385b1934a1a8e3b73,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ha-118173,},FirstTimestamp:2024-08-06 07:31:50.807784483 +0000 UTC m=+530.333360600,LastTimestamp:2024-08-06 07:31:54.840853543 +0000 UTC m=+534.366429672,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-118173,}"
	Aug 06 07:36:05 ha-118173 kubelet[1363]: I0806 07:36:05.616387    1363 scope.go:117] "RemoveContainer" containerID="0280e428feb3f49d39e63996a569fe8099b785732ee02450df33eb25a8a39c88"
	Aug 06 07:36:05 ha-118173 kubelet[1363]: W0806 07:36:05.659043    1363 reflector.go:547] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)kube-root-ca.crt&resourceVersion=2038": dial tcp 192.168.39.254:8443: connect: no route to host
	Aug 06 07:36:05 ha-118173 kubelet[1363]: E0806 07:36:05.659151    1363 reflector.go:150] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)kube-root-ca.crt&resourceVersion=2038": dial tcp 192.168.39.254:8443: connect: no route to host
	Aug 06 07:36:05 ha-118173 kubelet[1363]: I0806 07:36:05.659248    1363 status_manager.go:853] "Failed to get status for pod" podUID="8379466d-0241-40e4-a262-d762b3cda2eb" pod="kube-system/storage-provisioner" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/storage-provisioner\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Aug 06 07:36:08 ha-118173 kubelet[1363]: I0806 07:36:08.617112    1363 scope.go:117] "RemoveContainer" containerID="2dd16c152a6e76f97a41f8091857757846cf00e2b45fbebefd70bc7191f71d6d"
	Aug 06 07:36:08 ha-118173 kubelet[1363]: E0806 07:36:08.729974    1363 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"ha-118173\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-118173?resourceVersion=0&timeout=10s\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Aug 06 07:36:08 ha-118173 kubelet[1363]: I0806 07:36:08.729992    1363 status_manager.go:853] "Failed to get status for pod" podUID="4a82c0ca-1596-4dfe-a2f9-3a34f22480e5" pod="default/busybox-fc5497c4f-rml5d" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/pods/busybox-fc5497c4f-rml5d\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Aug 06 07:36:11 ha-118173 kubelet[1363]: I0806 07:36:11.617054    1363 scope.go:117] "RemoveContainer" containerID="4c038e3b4d8c535621dabf1ffa68a5bb717de411bccf8772e655b2c06282edf3"
	Aug 06 07:36:11 ha-118173 kubelet[1363]: E0806 07:36:11.617269    1363 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(8379466d-0241-40e4-a262-d762b3cda2eb)\"" pod="kube-system/storage-provisioner" podUID="8379466d-0241-40e4-a262-d762b3cda2eb"
	Aug 06 07:36:11 ha-118173 kubelet[1363]: E0806 07:36:11.801937    1363 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-118173?timeout=10s\": dial tcp 192.168.39.254:8443: connect: no route to host" interval="7s"
	Aug 06 07:36:11 ha-118173 kubelet[1363]: I0806 07:36:11.802024    1363 status_manager.go:853] "Failed to get status for pod" podUID="4e550e7b49e50c28915c957cd8c1df9d" pod="kube-system/kube-vip-ha-118173" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-vip-ha-118173\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Aug 06 07:36:11 ha-118173 kubelet[1363]: E0806 07:36:11.803020    1363 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"ha-118173\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-118173?timeout=10s\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Aug 06 07:36:11 ha-118173 kubelet[1363]: W0806 07:36:11.803741    1363 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?resourceVersion=1952": dial tcp 192.168.39.254:8443: connect: no route to host
	Aug 06 07:36:11 ha-118173 kubelet[1363]: E0806 07:36:11.803802    1363 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?resourceVersion=1952": dial tcp 192.168.39.254:8443: connect: no route to host
	Aug 06 07:36:16 ha-118173 kubelet[1363]: I0806 07:36:16.680890    1363 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox-fc5497c4f-rml5d" podStartSLOduration=557.933328795 podStartE2EDuration="9m20.680857278s" podCreationTimestamp="2024-08-06 07:26:56 +0000 UTC" firstStartedPulling="2024-08-06 07:26:58.899899769 +0000 UTC m=+238.425475882" lastFinishedPulling="2024-08-06 07:27:01.647428245 +0000 UTC m=+241.173004365" observedRunningTime="2024-08-06 07:27:02.645036339 +0000 UTC m=+242.170612471" watchObservedRunningTime="2024-08-06 07:36:16.680857278 +0000 UTC m=+796.206433411"
	Aug 06 07:36:22 ha-118173 kubelet[1363]: I0806 07:36:22.616883    1363 scope.go:117] "RemoveContainer" containerID="4c038e3b4d8c535621dabf1ffa68a5bb717de411bccf8772e655b2c06282edf3"
	Aug 06 07:36:45 ha-118173 kubelet[1363]: I0806 07:36:45.616744    1363 kubelet.go:1917] "Trying to delete pod" pod="kube-system/kube-vip-ha-118173" podUID="594d77d2-83e6-420d-8f0a-f54e9fd704f3"
	Aug 06 07:36:45 ha-118173 kubelet[1363]: I0806 07:36:45.634831    1363 kubelet.go:1922] "Deleted mirror pod because it is outdated" pod="kube-system/kube-vip-ha-118173"
	Aug 06 07:37:00 ha-118173 kubelet[1363]: E0806 07:37:00.649180    1363 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 06 07:37:00 ha-118173 kubelet[1363]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 06 07:37:00 ha-118173 kubelet[1363]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 06 07:37:00 ha-118173 kubelet[1363]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 06 07:37:00 ha-118173 kubelet[1363]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0806 07:37:57.587444   39693 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19370-13057/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
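Note on the stderr above: "bufio.Scanner: token too long" is the error Go's bufio.Scanner returns when a single line exceeds its buffer limit (bufio.MaxScanTokenSize, 64 KiB by default), so lastStart.txt evidently contains a line longer than that. The sketch below is only an illustration of that failure mode and the usual workaround (raising the scanner's buffer cap); the file path and buffer sizes are illustrative, not taken from the minikube source.

```go
package main

import (
	"bufio"
	"fmt"
	"os"
)

func main() {
	f, err := os.Open("lastStart.txt") // illustrative path, not the real log location
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	defer f.Close()

	scanner := bufio.NewScanner(f)
	// The default per-token limit is bufio.MaxScanTokenSize (64 KiB); any single
	// line longer than that stops Scan() with bufio.ErrTooLong, which prints as
	// "bufio.Scanner: token too long" -- the error seen in the stderr above.
	// Giving the scanner a larger maximum avoids it.
	scanner.Buffer(make([]byte, 0, 64*1024), 10*1024*1024) // allow lines up to 10 MiB

	for scanner.Scan() {
		_ = scanner.Text() // process the line
	}
	if err := scanner.Err(); err != nil {
		fmt.Fprintln(os.Stderr, "scan failed:", err)
	}
}
```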
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-118173 -n ha-118173
helpers_test.go:261: (dbg) Run:  kubectl --context ha-118173 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (376.43s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (141.63s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-118173 stop -v=7 --alsologtostderr
E0806 07:39:13.203525   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/addons-505457/client.crt: no such file or directory
ha_test.go:531: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-118173 stop -v=7 --alsologtostderr: exit status 82 (2m0.467934315s)
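The stop command above gives up after roughly two minutes: the stderr below shows the driver polling "Waiting for machine to stop N/120" about once per second and exhausting all 120 attempts, which matches the 2m0.4s elapsed before the non-zero exit. A minimal sketch of that kind of bounded stop-and-poll loop follows, assuming the driver layer simply re-checks VM state on a fixed interval; the vm interface, function names, and stuckVM stand-in are hypothetical, only the 120 x 1s budget is taken from the log.

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// vm is a stand-in for the machine driver; only the two calls the log
// exercises (a stop request and a state check) are modeled here.
type vm interface {
	Stop() error
	Running() (bool, error)
}

// stopAndWait issues a stop request and then polls once per second, up to
// maxAttempts times, mirroring the "Waiting for machine to stop N/120"
// lines in the output below.
func stopAndWait(m vm, maxAttempts int) error {
	if err := m.Stop(); err != nil {
		return err
	}
	for i := 0; i < maxAttempts; i++ {
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, maxAttempts)
		if running, err := m.Running(); err != nil {
			return err
		} else if !running {
			return nil
		}
		time.Sleep(time.Second)
	}
	// 120 one-second attempts is roughly the two minutes this test waited
	// before the stop command returned a non-zero exit.
	return errors.New("machine did not stop within the allotted time")
}

// stuckVM never reports itself as stopped, reproducing the behaviour of
// ha-118173-m04 in this run.
type stuckVM struct{}

func (stuckVM) Stop() error            { return nil }
func (stuckVM) Running() (bool, error) { return true, nil }

func main() {
	if err := stopAndWait(stuckVM{}, 120); err != nil {
		fmt.Println("stop failed:", err)
	}
}
```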

                                                
                                                
-- stdout --
	* Stopping node "ha-118173-m04"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0806 07:38:17.534707   40098 out.go:291] Setting OutFile to fd 1 ...
	I0806 07:38:17.534941   40098 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 07:38:17.534954   40098 out.go:304] Setting ErrFile to fd 2...
	I0806 07:38:17.534961   40098 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 07:38:17.535161   40098 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19370-13057/.minikube/bin
	I0806 07:38:17.535421   40098 out.go:298] Setting JSON to false
	I0806 07:38:17.535495   40098 mustload.go:65] Loading cluster: ha-118173
	I0806 07:38:17.535839   40098 config.go:182] Loaded profile config "ha-118173": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0806 07:38:17.535917   40098 profile.go:143] Saving config to /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/config.json ...
	I0806 07:38:17.536086   40098 mustload.go:65] Loading cluster: ha-118173
	I0806 07:38:17.536217   40098 config.go:182] Loaded profile config "ha-118173": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0806 07:38:17.536245   40098 stop.go:39] StopHost: ha-118173-m04
	I0806 07:38:17.536635   40098 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:38:17.536686   40098 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:38:17.551838   40098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40355
	I0806 07:38:17.552457   40098 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:38:17.553003   40098 main.go:141] libmachine: Using API Version  1
	I0806 07:38:17.553028   40098 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:38:17.553379   40098 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:38:17.555649   40098 out.go:177] * Stopping node "ha-118173-m04"  ...
	I0806 07:38:17.556993   40098 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0806 07:38:17.557032   40098 main.go:141] libmachine: (ha-118173-m04) Calling .DriverName
	I0806 07:38:17.557271   40098 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0806 07:38:17.557295   40098 main.go:141] libmachine: (ha-118173-m04) Calling .GetSSHHostname
	I0806 07:38:17.559843   40098 main.go:141] libmachine: (ha-118173-m04) DBG | domain ha-118173-m04 has defined MAC address 52:54:00:8f:be:12 in network mk-ha-118173
	I0806 07:38:17.560200   40098 main.go:141] libmachine: (ha-118173-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:be:12", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:37:44 +0000 UTC Type:0 Mac:52:54:00:8f:be:12 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:ha-118173-m04 Clientid:01:52:54:00:8f:be:12}
	I0806 07:38:17.560230   40098 main.go:141] libmachine: (ha-118173-m04) DBG | domain ha-118173-m04 has defined IP address 192.168.39.107 and MAC address 52:54:00:8f:be:12 in network mk-ha-118173
	I0806 07:38:17.560346   40098 main.go:141] libmachine: (ha-118173-m04) Calling .GetSSHPort
	I0806 07:38:17.560525   40098 main.go:141] libmachine: (ha-118173-m04) Calling .GetSSHKeyPath
	I0806 07:38:17.560688   40098 main.go:141] libmachine: (ha-118173-m04) Calling .GetSSHUsername
	I0806 07:38:17.560866   40098 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/ha-118173-m04/id_rsa Username:docker}
	I0806 07:38:17.644213   40098 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0806 07:38:17.698790   40098 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0806 07:38:17.751775   40098 main.go:141] libmachine: Stopping "ha-118173-m04"...
	I0806 07:38:17.751806   40098 main.go:141] libmachine: (ha-118173-m04) Calling .GetState
	I0806 07:38:17.753318   40098 main.go:141] libmachine: (ha-118173-m04) Calling .Stop
	I0806 07:38:17.756885   40098 main.go:141] libmachine: (ha-118173-m04) Waiting for machine to stop 0/120
	I0806 07:38:18.759039   40098 main.go:141] libmachine: (ha-118173-m04) Waiting for machine to stop 1/120
	I0806 07:38:19.760666   40098 main.go:141] libmachine: (ha-118173-m04) Waiting for machine to stop 2/120
	I0806 07:38:20.762009   40098 main.go:141] libmachine: (ha-118173-m04) Waiting for machine to stop 3/120
	I0806 07:38:21.763653   40098 main.go:141] libmachine: (ha-118173-m04) Waiting for machine to stop 4/120
	I0806 07:38:22.765540   40098 main.go:141] libmachine: (ha-118173-m04) Waiting for machine to stop 5/120
	I0806 07:38:23.766879   40098 main.go:141] libmachine: (ha-118173-m04) Waiting for machine to stop 6/120
	I0806 07:38:24.768254   40098 main.go:141] libmachine: (ha-118173-m04) Waiting for machine to stop 7/120
	I0806 07:38:25.769731   40098 main.go:141] libmachine: (ha-118173-m04) Waiting for machine to stop 8/120
	I0806 07:38:26.771202   40098 main.go:141] libmachine: (ha-118173-m04) Waiting for machine to stop 9/120
	I0806 07:38:27.772962   40098 main.go:141] libmachine: (ha-118173-m04) Waiting for machine to stop 10/120
	I0806 07:38:28.774482   40098 main.go:141] libmachine: (ha-118173-m04) Waiting for machine to stop 11/120
	I0806 07:38:29.775872   40098 main.go:141] libmachine: (ha-118173-m04) Waiting for machine to stop 12/120
	I0806 07:38:30.777517   40098 main.go:141] libmachine: (ha-118173-m04) Waiting for machine to stop 13/120
	I0806 07:38:31.779531   40098 main.go:141] libmachine: (ha-118173-m04) Waiting for machine to stop 14/120
	I0806 07:38:32.781006   40098 main.go:141] libmachine: (ha-118173-m04) Waiting for machine to stop 15/120
	I0806 07:38:33.782870   40098 main.go:141] libmachine: (ha-118173-m04) Waiting for machine to stop 16/120
	I0806 07:38:34.784232   40098 main.go:141] libmachine: (ha-118173-m04) Waiting for machine to stop 17/120
	I0806 07:38:35.786520   40098 main.go:141] libmachine: (ha-118173-m04) Waiting for machine to stop 18/120
	I0806 07:38:36.788061   40098 main.go:141] libmachine: (ha-118173-m04) Waiting for machine to stop 19/120
	I0806 07:38:37.790047   40098 main.go:141] libmachine: (ha-118173-m04) Waiting for machine to stop 20/120
	I0806 07:38:38.791608   40098 main.go:141] libmachine: (ha-118173-m04) Waiting for machine to stop 21/120
	I0806 07:38:39.793065   40098 main.go:141] libmachine: (ha-118173-m04) Waiting for machine to stop 22/120
	I0806 07:38:40.794618   40098 main.go:141] libmachine: (ha-118173-m04) Waiting for machine to stop 23/120
	I0806 07:38:41.796017   40098 main.go:141] libmachine: (ha-118173-m04) Waiting for machine to stop 24/120
	I0806 07:38:42.797884   40098 main.go:141] libmachine: (ha-118173-m04) Waiting for machine to stop 25/120
	I0806 07:38:43.799257   40098 main.go:141] libmachine: (ha-118173-m04) Waiting for machine to stop 26/120
	I0806 07:38:44.800567   40098 main.go:141] libmachine: (ha-118173-m04) Waiting for machine to stop 27/120
	I0806 07:38:45.801947   40098 main.go:141] libmachine: (ha-118173-m04) Waiting for machine to stop 28/120
	I0806 07:38:46.804688   40098 main.go:141] libmachine: (ha-118173-m04) Waiting for machine to stop 29/120
	I0806 07:38:47.807101   40098 main.go:141] libmachine: (ha-118173-m04) Waiting for machine to stop 30/120
	I0806 07:38:48.808356   40098 main.go:141] libmachine: (ha-118173-m04) Waiting for machine to stop 31/120
	I0806 07:38:49.809878   40098 main.go:141] libmachine: (ha-118173-m04) Waiting for machine to stop 32/120
	I0806 07:38:50.811693   40098 main.go:141] libmachine: (ha-118173-m04) Waiting for machine to stop 33/120
	I0806 07:38:51.813577   40098 main.go:141] libmachine: (ha-118173-m04) Waiting for machine to stop 34/120
	I0806 07:38:52.815404   40098 main.go:141] libmachine: (ha-118173-m04) Waiting for machine to stop 35/120
	I0806 07:38:53.816754   40098 main.go:141] libmachine: (ha-118173-m04) Waiting for machine to stop 36/120
	I0806 07:38:54.818145   40098 main.go:141] libmachine: (ha-118173-m04) Waiting for machine to stop 37/120
	I0806 07:38:55.819512   40098 main.go:141] libmachine: (ha-118173-m04) Waiting for machine to stop 38/120
	I0806 07:38:56.820804   40098 main.go:141] libmachine: (ha-118173-m04) Waiting for machine to stop 39/120
	I0806 07:38:57.822321   40098 main.go:141] libmachine: (ha-118173-m04) Waiting for machine to stop 40/120
	I0806 07:38:58.823617   40098 main.go:141] libmachine: (ha-118173-m04) Waiting for machine to stop 41/120
	I0806 07:38:59.825628   40098 main.go:141] libmachine: (ha-118173-m04) Waiting for machine to stop 42/120
	I0806 07:39:00.827040   40098 main.go:141] libmachine: (ha-118173-m04) Waiting for machine to stop 43/120
	I0806 07:39:01.828424   40098 main.go:141] libmachine: (ha-118173-m04) Waiting for machine to stop 44/120
	I0806 07:39:02.829882   40098 main.go:141] libmachine: (ha-118173-m04) Waiting for machine to stop 45/120
	I0806 07:39:03.831326   40098 main.go:141] libmachine: (ha-118173-m04) Waiting for machine to stop 46/120
	I0806 07:39:04.832667   40098 main.go:141] libmachine: (ha-118173-m04) Waiting for machine to stop 47/120
	I0806 07:39:05.835012   40098 main.go:141] libmachine: (ha-118173-m04) Waiting for machine to stop 48/120
	I0806 07:39:06.836322   40098 main.go:141] libmachine: (ha-118173-m04) Waiting for machine to stop 49/120
	I0806 07:39:07.838619   40098 main.go:141] libmachine: (ha-118173-m04) Waiting for machine to stop 50/120
	I0806 07:39:08.840268   40098 main.go:141] libmachine: (ha-118173-m04) Waiting for machine to stop 51/120
	I0806 07:39:09.841845   40098 main.go:141] libmachine: (ha-118173-m04) Waiting for machine to stop 52/120
	I0806 07:39:10.843329   40098 main.go:141] libmachine: (ha-118173-m04) Waiting for machine to stop 53/120
	I0806 07:39:11.844652   40098 main.go:141] libmachine: (ha-118173-m04) Waiting for machine to stop 54/120
	I0806 07:39:12.846780   40098 main.go:141] libmachine: (ha-118173-m04) Waiting for machine to stop 55/120
	I0806 07:39:13.848071   40098 main.go:141] libmachine: (ha-118173-m04) Waiting for machine to stop 56/120
	I0806 07:39:14.849872   40098 main.go:141] libmachine: (ha-118173-m04) Waiting for machine to stop 57/120
	I0806 07:39:15.851425   40098 main.go:141] libmachine: (ha-118173-m04) Waiting for machine to stop 58/120
	I0806 07:39:16.853727   40098 main.go:141] libmachine: (ha-118173-m04) Waiting for machine to stop 59/120
	I0806 07:39:17.855925   40098 main.go:141] libmachine: (ha-118173-m04) Waiting for machine to stop 60/120
	I0806 07:39:18.857420   40098 main.go:141] libmachine: (ha-118173-m04) Waiting for machine to stop 61/120
	I0806 07:39:19.858942   40098 main.go:141] libmachine: (ha-118173-m04) Waiting for machine to stop 62/120
	I0806 07:39:20.860578   40098 main.go:141] libmachine: (ha-118173-m04) Waiting for machine to stop 63/120
	I0806 07:39:21.862276   40098 main.go:141] libmachine: (ha-118173-m04) Waiting for machine to stop 64/120
	I0806 07:39:22.864284   40098 main.go:141] libmachine: (ha-118173-m04) Waiting for machine to stop 65/120
	I0806 07:39:23.866189   40098 main.go:141] libmachine: (ha-118173-m04) Waiting for machine to stop 66/120
	I0806 07:39:24.867663   40098 main.go:141] libmachine: (ha-118173-m04) Waiting for machine to stop 67/120
	I0806 07:39:25.869069   40098 main.go:141] libmachine: (ha-118173-m04) Waiting for machine to stop 68/120
	I0806 07:39:26.870609   40098 main.go:141] libmachine: (ha-118173-m04) Waiting for machine to stop 69/120
	I0806 07:39:27.872732   40098 main.go:141] libmachine: (ha-118173-m04) Waiting for machine to stop 70/120
	I0806 07:39:28.874189   40098 main.go:141] libmachine: (ha-118173-m04) Waiting for machine to stop 71/120
	I0806 07:39:29.875409   40098 main.go:141] libmachine: (ha-118173-m04) Waiting for machine to stop 72/120
	I0806 07:39:30.877248   40098 main.go:141] libmachine: (ha-118173-m04) Waiting for machine to stop 73/120
	I0806 07:39:31.879139   40098 main.go:141] libmachine: (ha-118173-m04) Waiting for machine to stop 74/120
	I0806 07:39:32.881142   40098 main.go:141] libmachine: (ha-118173-m04) Waiting for machine to stop 75/120
	I0806 07:39:33.882565   40098 main.go:141] libmachine: (ha-118173-m04) Waiting for machine to stop 76/120
	I0806 07:39:34.884280   40098 main.go:141] libmachine: (ha-118173-m04) Waiting for machine to stop 77/120
	I0806 07:39:35.885717   40098 main.go:141] libmachine: (ha-118173-m04) Waiting for machine to stop 78/120
	I0806 07:39:36.886944   40098 main.go:141] libmachine: (ha-118173-m04) Waiting for machine to stop 79/120
	I0806 07:39:37.889154   40098 main.go:141] libmachine: (ha-118173-m04) Waiting for machine to stop 80/120
	I0806 07:39:38.891290   40098 main.go:141] libmachine: (ha-118173-m04) Waiting for machine to stop 81/120
	I0806 07:39:39.892734   40098 main.go:141] libmachine: (ha-118173-m04) Waiting for machine to stop 82/120
	I0806 07:39:40.894828   40098 main.go:141] libmachine: (ha-118173-m04) Waiting for machine to stop 83/120
	I0806 07:39:41.896049   40098 main.go:141] libmachine: (ha-118173-m04) Waiting for machine to stop 84/120
	I0806 07:39:42.897973   40098 main.go:141] libmachine: (ha-118173-m04) Waiting for machine to stop 85/120
	I0806 07:39:43.899612   40098 main.go:141] libmachine: (ha-118173-m04) Waiting for machine to stop 86/120
	I0806 07:39:44.901338   40098 main.go:141] libmachine: (ha-118173-m04) Waiting for machine to stop 87/120
	I0806 07:39:45.903013   40098 main.go:141] libmachine: (ha-118173-m04) Waiting for machine to stop 88/120
	I0806 07:39:46.904900   40098 main.go:141] libmachine: (ha-118173-m04) Waiting for machine to stop 89/120
	I0806 07:39:47.906946   40098 main.go:141] libmachine: (ha-118173-m04) Waiting for machine to stop 90/120
	I0806 07:39:48.908192   40098 main.go:141] libmachine: (ha-118173-m04) Waiting for machine to stop 91/120
	I0806 07:39:49.909536   40098 main.go:141] libmachine: (ha-118173-m04) Waiting for machine to stop 92/120
	I0806 07:39:50.910875   40098 main.go:141] libmachine: (ha-118173-m04) Waiting for machine to stop 93/120
	I0806 07:39:51.912073   40098 main.go:141] libmachine: (ha-118173-m04) Waiting for machine to stop 94/120
	I0806 07:39:52.914182   40098 main.go:141] libmachine: (ha-118173-m04) Waiting for machine to stop 95/120
	I0806 07:39:53.915664   40098 main.go:141] libmachine: (ha-118173-m04) Waiting for machine to stop 96/120
	I0806 07:39:54.917111   40098 main.go:141] libmachine: (ha-118173-m04) Waiting for machine to stop 97/120
	I0806 07:39:55.918926   40098 main.go:141] libmachine: (ha-118173-m04) Waiting for machine to stop 98/120
	I0806 07:39:56.920257   40098 main.go:141] libmachine: (ha-118173-m04) Waiting for machine to stop 99/120
	I0806 07:39:57.922033   40098 main.go:141] libmachine: (ha-118173-m04) Waiting for machine to stop 100/120
	I0806 07:39:58.923397   40098 main.go:141] libmachine: (ha-118173-m04) Waiting for machine to stop 101/120
	I0806 07:39:59.924687   40098 main.go:141] libmachine: (ha-118173-m04) Waiting for machine to stop 102/120
	I0806 07:40:00.926299   40098 main.go:141] libmachine: (ha-118173-m04) Waiting for machine to stop 103/120
	I0806 07:40:01.928603   40098 main.go:141] libmachine: (ha-118173-m04) Waiting for machine to stop 104/120
	I0806 07:40:02.930380   40098 main.go:141] libmachine: (ha-118173-m04) Waiting for machine to stop 105/120
	I0806 07:40:03.931728   40098 main.go:141] libmachine: (ha-118173-m04) Waiting for machine to stop 106/120
	I0806 07:40:04.932972   40098 main.go:141] libmachine: (ha-118173-m04) Waiting for machine to stop 107/120
	I0806 07:40:05.935053   40098 main.go:141] libmachine: (ha-118173-m04) Waiting for machine to stop 108/120
	I0806 07:40:06.936496   40098 main.go:141] libmachine: (ha-118173-m04) Waiting for machine to stop 109/120
	I0806 07:40:07.938558   40098 main.go:141] libmachine: (ha-118173-m04) Waiting for machine to stop 110/120
	I0806 07:40:08.940177   40098 main.go:141] libmachine: (ha-118173-m04) Waiting for machine to stop 111/120
	I0806 07:40:09.941510   40098 main.go:141] libmachine: (ha-118173-m04) Waiting for machine to stop 112/120
	I0806 07:40:10.942986   40098 main.go:141] libmachine: (ha-118173-m04) Waiting for machine to stop 113/120
	I0806 07:40:11.944496   40098 main.go:141] libmachine: (ha-118173-m04) Waiting for machine to stop 114/120
	I0806 07:40:12.946552   40098 main.go:141] libmachine: (ha-118173-m04) Waiting for machine to stop 115/120
	I0806 07:40:13.947895   40098 main.go:141] libmachine: (ha-118173-m04) Waiting for machine to stop 116/120
	I0806 07:40:14.949196   40098 main.go:141] libmachine: (ha-118173-m04) Waiting for machine to stop 117/120
	I0806 07:40:15.950571   40098 main.go:141] libmachine: (ha-118173-m04) Waiting for machine to stop 118/120
	I0806 07:40:16.952031   40098 main.go:141] libmachine: (ha-118173-m04) Waiting for machine to stop 119/120
	I0806 07:40:17.952802   40098 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0806 07:40:17.952874   40098 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0806 07:40:17.954593   40098 out.go:177] 
	W0806 07:40:17.956148   40098 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0806 07:40:17.956168   40098 out.go:239] * 
	W0806 07:40:17.958666   40098 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0806 07:40:17.960065   40098 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:533: failed to stop cluster. args "out/minikube-linux-amd64 -p ha-118173 stop -v=7 --alsologtostderr": exit status 82
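For context on the loop above: the stop path issues a shutdown request and then polls the machine state once per second against a fixed budget (120 attempts in this run), giving up with a GUEST_STOP_TIMEOUT-style error when the guest never leaves "Running". The Go sketch below only illustrates that pattern under stated assumptions; it is not minikube's actual implementation, and stopVM/isRunning are hypothetical stand-ins for the libmachine driver calls.

package main

import (
	"errors"
	"fmt"
	"time"
)

// stopWithTimeout issues a stop request and then polls the machine state once
// per second, up to `attempts` times, mirroring the "Waiting for machine to
// stop N/120" lines in the log above.
func stopWithTimeout(stopVM func() error, isRunning func() (bool, error), attempts int) error {
	if err := stopVM(); err != nil {
		return fmt.Errorf("issuing stop: %w", err)
	}
	for i := 0; i < attempts; i++ {
		running, err := isRunning()
		if err != nil {
			return fmt.Errorf("checking state: %w", err)
		}
		if !running {
			return nil // machine reached the stopped state in time
		}
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, attempts)
		time.Sleep(time.Second)
	}
	// The failure mode seen above: the VM never left "Running", so the caller
	// reports a timeout error instead of a clean shutdown (exit status 82 here).
	return errors.New(`unable to stop vm, current state "Running"`)
}

func main() {
	// Simulate a guest that ignores the shutdown request entirely.
	stop := func() error { return nil }
	stuck := func() (bool, error) { return true, nil }
	if err := stopWithTimeout(stop, stuck, 120); err != nil {
		fmt.Println("stop err:", err)
	}
}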
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-118173 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-118173 status -v=7 --alsologtostderr: exit status 3 (18.846842593s)

                                                
                                                
-- stdout --
	ha-118173
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-118173-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-118173-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0806 07:40:18.007893   40532 out.go:291] Setting OutFile to fd 1 ...
	I0806 07:40:18.008021   40532 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 07:40:18.008032   40532 out.go:304] Setting ErrFile to fd 2...
	I0806 07:40:18.008037   40532 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 07:40:18.008212   40532 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19370-13057/.minikube/bin
	I0806 07:40:18.008413   40532 out.go:298] Setting JSON to false
	I0806 07:40:18.008436   40532 mustload.go:65] Loading cluster: ha-118173
	I0806 07:40:18.008473   40532 notify.go:220] Checking for updates...
	I0806 07:40:18.008887   40532 config.go:182] Loaded profile config "ha-118173": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0806 07:40:18.008908   40532 status.go:255] checking status of ha-118173 ...
	I0806 07:40:18.009373   40532 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:40:18.009426   40532 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:40:18.033277   40532 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38687
	I0806 07:40:18.033774   40532 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:40:18.034391   40532 main.go:141] libmachine: Using API Version  1
	I0806 07:40:18.034417   40532 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:40:18.034787   40532 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:40:18.035268   40532 main.go:141] libmachine: (ha-118173) Calling .GetState
	I0806 07:40:18.037091   40532 status.go:330] ha-118173 host status = "Running" (err=<nil>)
	I0806 07:40:18.037112   40532 host.go:66] Checking if "ha-118173" exists ...
	I0806 07:40:18.037390   40532 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:40:18.037433   40532 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:40:18.052210   40532 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42059
	I0806 07:40:18.052677   40532 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:40:18.053105   40532 main.go:141] libmachine: Using API Version  1
	I0806 07:40:18.053123   40532 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:40:18.053442   40532 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:40:18.053623   40532 main.go:141] libmachine: (ha-118173) Calling .GetIP
	I0806 07:40:18.056413   40532 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:40:18.056788   40532 main.go:141] libmachine: (ha-118173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:df:f0", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:22:33 +0000 UTC Type:0 Mac:52:54:00:ef:df:f0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:ha-118173 Clientid:01:52:54:00:ef:df:f0}
	I0806 07:40:18.056818   40532 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined IP address 192.168.39.164 and MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:40:18.056985   40532 host.go:66] Checking if "ha-118173" exists ...
	I0806 07:40:18.057362   40532 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:40:18.057406   40532 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:40:18.072677   40532 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37551
	I0806 07:40:18.073096   40532 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:40:18.073580   40532 main.go:141] libmachine: Using API Version  1
	I0806 07:40:18.073602   40532 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:40:18.073937   40532 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:40:18.074146   40532 main.go:141] libmachine: (ha-118173) Calling .DriverName
	I0806 07:40:18.074337   40532 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0806 07:40:18.074366   40532 main.go:141] libmachine: (ha-118173) Calling .GetSSHHostname
	I0806 07:40:18.077201   40532 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:40:18.077617   40532 main.go:141] libmachine: (ha-118173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:df:f0", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:22:33 +0000 UTC Type:0 Mac:52:54:00:ef:df:f0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:ha-118173 Clientid:01:52:54:00:ef:df:f0}
	I0806 07:40:18.077653   40532 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined IP address 192.168.39.164 and MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:40:18.077800   40532 main.go:141] libmachine: (ha-118173) Calling .GetSSHPort
	I0806 07:40:18.077982   40532 main.go:141] libmachine: (ha-118173) Calling .GetSSHKeyPath
	I0806 07:40:18.078130   40532 main.go:141] libmachine: (ha-118173) Calling .GetSSHUsername
	I0806 07:40:18.078284   40532 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/ha-118173/id_rsa Username:docker}
	I0806 07:40:18.161973   40532 ssh_runner.go:195] Run: systemctl --version
	I0806 07:40:18.169377   40532 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0806 07:40:18.187740   40532 kubeconfig.go:125] found "ha-118173" server: "https://192.168.39.254:8443"
	I0806 07:40:18.187768   40532 api_server.go:166] Checking apiserver status ...
	I0806 07:40:18.187799   40532 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 07:40:18.206007   40532 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/5079/cgroup
	W0806 07:40:18.218698   40532 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/5079/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0806 07:40:18.218755   40532 ssh_runner.go:195] Run: ls
	I0806 07:40:18.223503   40532 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0806 07:40:18.228069   40532 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0806 07:40:18.228095   40532 status.go:422] ha-118173 apiserver status = Running (err=<nil>)
	I0806 07:40:18.228105   40532 status.go:257] ha-118173 status: &{Name:ha-118173 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0806 07:40:18.228120   40532 status.go:255] checking status of ha-118173-m02 ...
	I0806 07:40:18.228485   40532 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:40:18.228524   40532 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:40:18.243023   40532 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43111
	I0806 07:40:18.243475   40532 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:40:18.243931   40532 main.go:141] libmachine: Using API Version  1
	I0806 07:40:18.243959   40532 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:40:18.244261   40532 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:40:18.244468   40532 main.go:141] libmachine: (ha-118173-m02) Calling .GetState
	I0806 07:40:18.246088   40532 status.go:330] ha-118173-m02 host status = "Running" (err=<nil>)
	I0806 07:40:18.246117   40532 host.go:66] Checking if "ha-118173-m02" exists ...
	I0806 07:40:18.246408   40532 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:40:18.246442   40532 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:40:18.262594   40532 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33005
	I0806 07:40:18.263044   40532 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:40:18.263539   40532 main.go:141] libmachine: Using API Version  1
	I0806 07:40:18.263566   40532 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:40:18.263948   40532 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:40:18.264166   40532 main.go:141] libmachine: (ha-118173-m02) Calling .GetIP
	I0806 07:40:18.266991   40532 main.go:141] libmachine: (ha-118173-m02) DBG | domain ha-118173-m02 has defined MAC address 52:54:00:b4:e9:38 in network mk-ha-118173
	I0806 07:40:18.267411   40532 main.go:141] libmachine: (ha-118173-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:e9:38", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:35:34 +0000 UTC Type:0 Mac:52:54:00:b4:e9:38 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-118173-m02 Clientid:01:52:54:00:b4:e9:38}
	I0806 07:40:18.267444   40532 main.go:141] libmachine: (ha-118173-m02) DBG | domain ha-118173-m02 has defined IP address 192.168.39.203 and MAC address 52:54:00:b4:e9:38 in network mk-ha-118173
	I0806 07:40:18.267567   40532 host.go:66] Checking if "ha-118173-m02" exists ...
	I0806 07:40:18.268024   40532 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:40:18.268067   40532 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:40:18.282665   40532 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41719
	I0806 07:40:18.283061   40532 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:40:18.283453   40532 main.go:141] libmachine: Using API Version  1
	I0806 07:40:18.283474   40532 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:40:18.283751   40532 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:40:18.283988   40532 main.go:141] libmachine: (ha-118173-m02) Calling .DriverName
	I0806 07:40:18.284160   40532 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0806 07:40:18.284183   40532 main.go:141] libmachine: (ha-118173-m02) Calling .GetSSHHostname
	I0806 07:40:18.287139   40532 main.go:141] libmachine: (ha-118173-m02) DBG | domain ha-118173-m02 has defined MAC address 52:54:00:b4:e9:38 in network mk-ha-118173
	I0806 07:40:18.287580   40532 main.go:141] libmachine: (ha-118173-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:e9:38", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:35:34 +0000 UTC Type:0 Mac:52:54:00:b4:e9:38 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-118173-m02 Clientid:01:52:54:00:b4:e9:38}
	I0806 07:40:18.287610   40532 main.go:141] libmachine: (ha-118173-m02) DBG | domain ha-118173-m02 has defined IP address 192.168.39.203 and MAC address 52:54:00:b4:e9:38 in network mk-ha-118173
	I0806 07:40:18.287757   40532 main.go:141] libmachine: (ha-118173-m02) Calling .GetSSHPort
	I0806 07:40:18.287934   40532 main.go:141] libmachine: (ha-118173-m02) Calling .GetSSHKeyPath
	I0806 07:40:18.288161   40532 main.go:141] libmachine: (ha-118173-m02) Calling .GetSSHUsername
	I0806 07:40:18.288310   40532 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/ha-118173-m02/id_rsa Username:docker}
	I0806 07:40:18.369285   40532 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0806 07:40:18.389755   40532 kubeconfig.go:125] found "ha-118173" server: "https://192.168.39.254:8443"
	I0806 07:40:18.389802   40532 api_server.go:166] Checking apiserver status ...
	I0806 07:40:18.389863   40532 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 07:40:18.407277   40532 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1533/cgroup
	W0806 07:40:18.418409   40532 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1533/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0806 07:40:18.418459   40532 ssh_runner.go:195] Run: ls
	I0806 07:40:18.423121   40532 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0806 07:40:18.427405   40532 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0806 07:40:18.427430   40532 status.go:422] ha-118173-m02 apiserver status = Running (err=<nil>)
	I0806 07:40:18.427438   40532 status.go:257] ha-118173-m02 status: &{Name:ha-118173-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0806 07:40:18.427453   40532 status.go:255] checking status of ha-118173-m04 ...
	I0806 07:40:18.427789   40532 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:40:18.427838   40532 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:40:18.442466   40532 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46655
	I0806 07:40:18.442864   40532 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:40:18.443299   40532 main.go:141] libmachine: Using API Version  1
	I0806 07:40:18.443322   40532 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:40:18.443628   40532 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:40:18.443783   40532 main.go:141] libmachine: (ha-118173-m04) Calling .GetState
	I0806 07:40:18.445143   40532 status.go:330] ha-118173-m04 host status = "Running" (err=<nil>)
	I0806 07:40:18.445159   40532 host.go:66] Checking if "ha-118173-m04" exists ...
	I0806 07:40:18.445539   40532 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:40:18.445579   40532 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:40:18.460328   40532 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46135
	I0806 07:40:18.460842   40532 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:40:18.461385   40532 main.go:141] libmachine: Using API Version  1
	I0806 07:40:18.461409   40532 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:40:18.461737   40532 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:40:18.461957   40532 main.go:141] libmachine: (ha-118173-m04) Calling .GetIP
	I0806 07:40:18.465101   40532 main.go:141] libmachine: (ha-118173-m04) DBG | domain ha-118173-m04 has defined MAC address 52:54:00:8f:be:12 in network mk-ha-118173
	I0806 07:40:18.465619   40532 main.go:141] libmachine: (ha-118173-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:be:12", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:37:44 +0000 UTC Type:0 Mac:52:54:00:8f:be:12 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:ha-118173-m04 Clientid:01:52:54:00:8f:be:12}
	I0806 07:40:18.465644   40532 main.go:141] libmachine: (ha-118173-m04) DBG | domain ha-118173-m04 has defined IP address 192.168.39.107 and MAC address 52:54:00:8f:be:12 in network mk-ha-118173
	I0806 07:40:18.465896   40532 host.go:66] Checking if "ha-118173-m04" exists ...
	I0806 07:40:18.466291   40532 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:40:18.466354   40532 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:40:18.482076   40532 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34203
	I0806 07:40:18.482479   40532 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:40:18.482903   40532 main.go:141] libmachine: Using API Version  1
	I0806 07:40:18.482923   40532 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:40:18.483228   40532 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:40:18.483404   40532 main.go:141] libmachine: (ha-118173-m04) Calling .DriverName
	I0806 07:40:18.483558   40532 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0806 07:40:18.483573   40532 main.go:141] libmachine: (ha-118173-m04) Calling .GetSSHHostname
	I0806 07:40:18.486336   40532 main.go:141] libmachine: (ha-118173-m04) DBG | domain ha-118173-m04 has defined MAC address 52:54:00:8f:be:12 in network mk-ha-118173
	I0806 07:40:18.486748   40532 main.go:141] libmachine: (ha-118173-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:be:12", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:37:44 +0000 UTC Type:0 Mac:52:54:00:8f:be:12 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:ha-118173-m04 Clientid:01:52:54:00:8f:be:12}
	I0806 07:40:18.486781   40532 main.go:141] libmachine: (ha-118173-m04) DBG | domain ha-118173-m04 has defined IP address 192.168.39.107 and MAC address 52:54:00:8f:be:12 in network mk-ha-118173
	I0806 07:40:18.486924   40532 main.go:141] libmachine: (ha-118173-m04) Calling .GetSSHPort
	I0806 07:40:18.487093   40532 main.go:141] libmachine: (ha-118173-m04) Calling .GetSSHKeyPath
	I0806 07:40:18.487237   40532 main.go:141] libmachine: (ha-118173-m04) Calling .GetSSHUsername
	I0806 07:40:18.487414   40532 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/ha-118173-m04/id_rsa Username:docker}
	W0806 07:40:36.808564   40532 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.107:22: connect: no route to host
	W0806 07:40:36.808676   40532 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.107:22: connect: no route to host
	E0806 07:40:36.808699   40532 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.107:22: connect: no route to host
	I0806 07:40:36.808707   40532 status.go:257] ha-118173-m04 status: &{Name:ha-118173-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0806 07:40:36.808722   40532 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.107:22: connect: no route to host

                                                
                                                
** /stderr **
ha_test.go:540: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-118173 status -v=7 --alsologtostderr" : exit status 3
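For context on the status output above: the status check probes each node over SSH (port 22) and, for reachable nodes, queries the apiserver /healthz endpoint; ha-118173-m04 is reported as Host:Error because the SSH dial fails with "no route to host". The sketch below is a rough, self-contained approximation of those two probes, not the real status code; the addresses are taken from this run, and TLS verification is skipped only to keep the example short.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net"
	"net/http"
	"time"
)

// checkHealthz probes an apiserver /healthz endpoint, as in the log line
// "Checking apiserver healthz at https://192.168.39.254:8443/healthz".
// A real client would trust the cluster CA instead of skipping verification.
func checkHealthz(url string) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
	}
	return nil // "returned 200: ok"
}

// dialSSH mirrors the worker check that failed above: if TCP port 22 is not
// reachable ("no route to host"), the node is reported as Host:Error.
func dialSSH(addr string) error {
	conn, err := net.DialTimeout("tcp", addr, 10*time.Second)
	if err != nil {
		return err
	}
	return conn.Close()
}

func main() {
	if err := dialSSH("192.168.39.107:22"); err != nil {
		fmt.Println("worker unreachable:", err)
	}
	if err := checkHealthz("https://192.168.39.254:8443/healthz"); err != nil {
		fmt.Println("apiserver unhealthy:", err)
	}
}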
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-118173 -n ha-118173
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-118173 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-118173 logs -n 25: (1.692938371s)
helpers_test.go:252: TestMultiControlPlane/serial/StopCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-118173 ssh -n ha-118173-m02 sudo cat                                          | ha-118173 | jenkins | v1.33.1 | 06 Aug 24 07:28 UTC | 06 Aug 24 07:28 UTC |
	|         | /home/docker/cp-test_ha-118173-m03_ha-118173-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-118173 cp ha-118173-m03:/home/docker/cp-test.txt                              | ha-118173 | jenkins | v1.33.1 | 06 Aug 24 07:28 UTC | 06 Aug 24 07:28 UTC |
	|         | ha-118173-m04:/home/docker/cp-test_ha-118173-m03_ha-118173-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-118173 ssh -n                                                                 | ha-118173 | jenkins | v1.33.1 | 06 Aug 24 07:28 UTC | 06 Aug 24 07:28 UTC |
	|         | ha-118173-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-118173 ssh -n ha-118173-m04 sudo cat                                          | ha-118173 | jenkins | v1.33.1 | 06 Aug 24 07:28 UTC | 06 Aug 24 07:28 UTC |
	|         | /home/docker/cp-test_ha-118173-m03_ha-118173-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-118173 cp testdata/cp-test.txt                                                | ha-118173 | jenkins | v1.33.1 | 06 Aug 24 07:28 UTC | 06 Aug 24 07:28 UTC |
	|         | ha-118173-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-118173 ssh -n                                                                 | ha-118173 | jenkins | v1.33.1 | 06 Aug 24 07:28 UTC | 06 Aug 24 07:28 UTC |
	|         | ha-118173-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-118173 cp ha-118173-m04:/home/docker/cp-test.txt                              | ha-118173 | jenkins | v1.33.1 | 06 Aug 24 07:28 UTC | 06 Aug 24 07:28 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile4204706009/001/cp-test_ha-118173-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-118173 ssh -n                                                                 | ha-118173 | jenkins | v1.33.1 | 06 Aug 24 07:28 UTC | 06 Aug 24 07:28 UTC |
	|         | ha-118173-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-118173 cp ha-118173-m04:/home/docker/cp-test.txt                              | ha-118173 | jenkins | v1.33.1 | 06 Aug 24 07:28 UTC | 06 Aug 24 07:28 UTC |
	|         | ha-118173:/home/docker/cp-test_ha-118173-m04_ha-118173.txt                       |           |         |         |                     |                     |
	| ssh     | ha-118173 ssh -n                                                                 | ha-118173 | jenkins | v1.33.1 | 06 Aug 24 07:28 UTC | 06 Aug 24 07:28 UTC |
	|         | ha-118173-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-118173 ssh -n ha-118173 sudo cat                                              | ha-118173 | jenkins | v1.33.1 | 06 Aug 24 07:28 UTC | 06 Aug 24 07:28 UTC |
	|         | /home/docker/cp-test_ha-118173-m04_ha-118173.txt                                 |           |         |         |                     |                     |
	| cp      | ha-118173 cp ha-118173-m04:/home/docker/cp-test.txt                              | ha-118173 | jenkins | v1.33.1 | 06 Aug 24 07:28 UTC | 06 Aug 24 07:28 UTC |
	|         | ha-118173-m02:/home/docker/cp-test_ha-118173-m04_ha-118173-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-118173 ssh -n                                                                 | ha-118173 | jenkins | v1.33.1 | 06 Aug 24 07:28 UTC | 06 Aug 24 07:28 UTC |
	|         | ha-118173-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-118173 ssh -n ha-118173-m02 sudo cat                                          | ha-118173 | jenkins | v1.33.1 | 06 Aug 24 07:28 UTC | 06 Aug 24 07:28 UTC |
	|         | /home/docker/cp-test_ha-118173-m04_ha-118173-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-118173 cp ha-118173-m04:/home/docker/cp-test.txt                              | ha-118173 | jenkins | v1.33.1 | 06 Aug 24 07:28 UTC | 06 Aug 24 07:28 UTC |
	|         | ha-118173-m03:/home/docker/cp-test_ha-118173-m04_ha-118173-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-118173 ssh -n                                                                 | ha-118173 | jenkins | v1.33.1 | 06 Aug 24 07:28 UTC | 06 Aug 24 07:28 UTC |
	|         | ha-118173-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-118173 ssh -n ha-118173-m03 sudo cat                                          | ha-118173 | jenkins | v1.33.1 | 06 Aug 24 07:28 UTC | 06 Aug 24 07:28 UTC |
	|         | /home/docker/cp-test_ha-118173-m04_ha-118173-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-118173 node stop m02 -v=7                                                     | ha-118173 | jenkins | v1.33.1 | 06 Aug 24 07:28 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-118173 node start m02 -v=7                                                    | ha-118173 | jenkins | v1.33.1 | 06 Aug 24 07:30 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-118173 -v=7                                                           | ha-118173 | jenkins | v1.33.1 | 06 Aug 24 07:31 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-118173 -v=7                                                                | ha-118173 | jenkins | v1.33.1 | 06 Aug 24 07:31 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-118173 --wait=true -v=7                                                    | ha-118173 | jenkins | v1.33.1 | 06 Aug 24 07:33 UTC | 06 Aug 24 07:37 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-118173                                                                | ha-118173 | jenkins | v1.33.1 | 06 Aug 24 07:37 UTC |                     |
	| node    | ha-118173 node delete m03 -v=7                                                   | ha-118173 | jenkins | v1.33.1 | 06 Aug 24 07:37 UTC | 06 Aug 24 07:38 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-118173 stop -v=7                                                              | ha-118173 | jenkins | v1.33.1 | 06 Aug 24 07:38 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/06 07:33:45
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0806 07:33:45.467346   38357 out.go:291] Setting OutFile to fd 1 ...
	I0806 07:33:45.467566   38357 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 07:33:45.467575   38357 out.go:304] Setting ErrFile to fd 2...
	I0806 07:33:45.467579   38357 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 07:33:45.467734   38357 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19370-13057/.minikube/bin
	I0806 07:33:45.468237   38357 out.go:298] Setting JSON to false
	I0806 07:33:45.469136   38357 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4571,"bootTime":1722925054,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0806 07:33:45.469198   38357 start.go:139] virtualization: kvm guest
	I0806 07:33:45.471400   38357 out.go:177] * [ha-118173] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0806 07:33:45.472796   38357 notify.go:220] Checking for updates...
	I0806 07:33:45.472826   38357 out.go:177]   - MINIKUBE_LOCATION=19370
	I0806 07:33:45.474217   38357 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0806 07:33:45.475400   38357 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19370-13057/kubeconfig
	I0806 07:33:45.476631   38357 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19370-13057/.minikube
	I0806 07:33:45.477837   38357 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0806 07:33:45.479074   38357 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0806 07:33:45.480834   38357 config.go:182] Loaded profile config "ha-118173": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0806 07:33:45.480913   38357 driver.go:392] Setting default libvirt URI to qemu:///system
	I0806 07:33:45.481280   38357 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:33:45.481338   38357 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:33:45.496263   38357 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32859
	I0806 07:33:45.496668   38357 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:33:45.497252   38357 main.go:141] libmachine: Using API Version  1
	I0806 07:33:45.497271   38357 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:33:45.497669   38357 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:33:45.497845   38357 main.go:141] libmachine: (ha-118173) Calling .DriverName
	I0806 07:33:45.533359   38357 out.go:177] * Using the kvm2 driver based on existing profile
	I0806 07:33:45.534521   38357 start.go:297] selected driver: kvm2
	I0806 07:33:45.534540   38357 start.go:901] validating driver "kvm2" against &{Name:ha-118173 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.30.3 ClusterName:ha-118173 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.164 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.203 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.107 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk
:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p
2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 07:33:45.534770   38357 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0806 07:33:45.535137   38357 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 07:33:45.535205   38357 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19370-13057/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0806 07:33:45.549609   38357 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0806 07:33:45.550290   38357 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0806 07:33:45.550353   38357 cni.go:84] Creating CNI manager for ""
	I0806 07:33:45.550365   38357 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0806 07:33:45.550415   38357 start.go:340] cluster config:
	{Name:ha-118173 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-118173 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.164 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.203 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.107 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-till
er:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPor
t:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 07:33:45.550554   38357 iso.go:125] acquiring lock: {Name:mke58d71de28babe305c5eb0c31c274e9713bb3f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 07:33:45.552440   38357 out.go:177] * Starting "ha-118173" primary control-plane node in "ha-118173" cluster
	I0806 07:33:45.553602   38357 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0806 07:33:45.553654   38357 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19370-13057/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0806 07:33:45.553665   38357 cache.go:56] Caching tarball of preloaded images
	I0806 07:33:45.553764   38357 preload.go:172] Found /home/jenkins/minikube-integration/19370-13057/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0806 07:33:45.553775   38357 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0806 07:33:45.553901   38357 profile.go:143] Saving config to /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/config.json ...
	I0806 07:33:45.554132   38357 start.go:360] acquireMachinesLock for ha-118173: {Name:mkc602ac9c8cc3360dcc0fcb8104303837aaab61 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0806 07:33:45.554185   38357 start.go:364] duration metric: took 33.322µs to acquireMachinesLock for "ha-118173"
	I0806 07:33:45.554201   38357 start.go:96] Skipping create...Using existing machine configuration
	I0806 07:33:45.554212   38357 fix.go:54] fixHost starting: 
	I0806 07:33:45.554480   38357 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:33:45.554514   38357 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:33:45.568732   38357 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33065
	I0806 07:33:45.569141   38357 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:33:45.569593   38357 main.go:141] libmachine: Using API Version  1
	I0806 07:33:45.569619   38357 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:33:45.570002   38357 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:33:45.570255   38357 main.go:141] libmachine: (ha-118173) Calling .DriverName
	I0806 07:33:45.570417   38357 main.go:141] libmachine: (ha-118173) Calling .GetState
	I0806 07:33:45.572072   38357 fix.go:112] recreateIfNeeded on ha-118173: state=Running err=<nil>
	W0806 07:33:45.572095   38357 fix.go:138] unexpected machine state, will restart: <nil>
	I0806 07:33:45.574308   38357 out.go:177] * Updating the running kvm2 "ha-118173" VM ...
	I0806 07:33:45.575981   38357 machine.go:94] provisionDockerMachine start ...
	I0806 07:33:45.575998   38357 main.go:141] libmachine: (ha-118173) Calling .DriverName
	I0806 07:33:45.576190   38357 main.go:141] libmachine: (ha-118173) Calling .GetSSHHostname
	I0806 07:33:45.578945   38357 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:33:45.579402   38357 main.go:141] libmachine: (ha-118173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:df:f0", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:22:33 +0000 UTC Type:0 Mac:52:54:00:ef:df:f0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:ha-118173 Clientid:01:52:54:00:ef:df:f0}
	I0806 07:33:45.579447   38357 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined IP address 192.168.39.164 and MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:33:45.579601   38357 main.go:141] libmachine: (ha-118173) Calling .GetSSHPort
	I0806 07:33:45.579818   38357 main.go:141] libmachine: (ha-118173) Calling .GetSSHKeyPath
	I0806 07:33:45.579973   38357 main.go:141] libmachine: (ha-118173) Calling .GetSSHKeyPath
	I0806 07:33:45.580108   38357 main.go:141] libmachine: (ha-118173) Calling .GetSSHUsername
	I0806 07:33:45.580325   38357 main.go:141] libmachine: Using SSH client type: native
	I0806 07:33:45.580610   38357 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.164 22 <nil> <nil>}
	I0806 07:33:45.580627   38357 main.go:141] libmachine: About to run SSH command:
	hostname
	I0806 07:33:45.691858   38357 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-118173
	
	I0806 07:33:45.691896   38357 main.go:141] libmachine: (ha-118173) Calling .GetMachineName
	I0806 07:33:45.692164   38357 buildroot.go:166] provisioning hostname "ha-118173"
	I0806 07:33:45.692191   38357 main.go:141] libmachine: (ha-118173) Calling .GetMachineName
	I0806 07:33:45.692438   38357 main.go:141] libmachine: (ha-118173) Calling .GetSSHHostname
	I0806 07:33:45.695554   38357 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:33:45.696024   38357 main.go:141] libmachine: (ha-118173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:df:f0", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:22:33 +0000 UTC Type:0 Mac:52:54:00:ef:df:f0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:ha-118173 Clientid:01:52:54:00:ef:df:f0}
	I0806 07:33:45.696052   38357 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined IP address 192.168.39.164 and MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:33:45.696268   38357 main.go:141] libmachine: (ha-118173) Calling .GetSSHPort
	I0806 07:33:45.696454   38357 main.go:141] libmachine: (ha-118173) Calling .GetSSHKeyPath
	I0806 07:33:45.696610   38357 main.go:141] libmachine: (ha-118173) Calling .GetSSHKeyPath
	I0806 07:33:45.696748   38357 main.go:141] libmachine: (ha-118173) Calling .GetSSHUsername
	I0806 07:33:45.696925   38357 main.go:141] libmachine: Using SSH client type: native
	I0806 07:33:45.697127   38357 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.164 22 <nil> <nil>}
	I0806 07:33:45.697141   38357 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-118173 && echo "ha-118173" | sudo tee /etc/hostname
	I0806 07:33:45.818537   38357 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-118173
	
	I0806 07:33:45.818562   38357 main.go:141] libmachine: (ha-118173) Calling .GetSSHHostname
	I0806 07:33:45.821378   38357 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:33:45.821766   38357 main.go:141] libmachine: (ha-118173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:df:f0", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:22:33 +0000 UTC Type:0 Mac:52:54:00:ef:df:f0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:ha-118173 Clientid:01:52:54:00:ef:df:f0}
	I0806 07:33:45.821793   38357 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined IP address 192.168.39.164 and MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:33:45.822014   38357 main.go:141] libmachine: (ha-118173) Calling .GetSSHPort
	I0806 07:33:45.822255   38357 main.go:141] libmachine: (ha-118173) Calling .GetSSHKeyPath
	I0806 07:33:45.822410   38357 main.go:141] libmachine: (ha-118173) Calling .GetSSHKeyPath
	I0806 07:33:45.822553   38357 main.go:141] libmachine: (ha-118173) Calling .GetSSHUsername
	I0806 07:33:45.822760   38357 main.go:141] libmachine: Using SSH client type: native
	I0806 07:33:45.822951   38357 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.164 22 <nil> <nil>}
	I0806 07:33:45.822973   38357 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-118173' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-118173/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-118173' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0806 07:33:45.925623   38357 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0806 07:33:45.925656   38357 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19370-13057/.minikube CaCertPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19370-13057/.minikube}
	I0806 07:33:45.925701   38357 buildroot.go:174] setting up certificates
	I0806 07:33:45.925714   38357 provision.go:84] configureAuth start
	I0806 07:33:45.925730   38357 main.go:141] libmachine: (ha-118173) Calling .GetMachineName
	I0806 07:33:45.926059   38357 main.go:141] libmachine: (ha-118173) Calling .GetIP
	I0806 07:33:45.928816   38357 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:33:45.929242   38357 main.go:141] libmachine: (ha-118173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:df:f0", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:22:33 +0000 UTC Type:0 Mac:52:54:00:ef:df:f0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:ha-118173 Clientid:01:52:54:00:ef:df:f0}
	I0806 07:33:45.929264   38357 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined IP address 192.168.39.164 and MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:33:45.929436   38357 main.go:141] libmachine: (ha-118173) Calling .GetSSHHostname
	I0806 07:33:45.931566   38357 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:33:45.931968   38357 main.go:141] libmachine: (ha-118173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:df:f0", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:22:33 +0000 UTC Type:0 Mac:52:54:00:ef:df:f0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:ha-118173 Clientid:01:52:54:00:ef:df:f0}
	I0806 07:33:45.931995   38357 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined IP address 192.168.39.164 and MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:33:45.932134   38357 provision.go:143] copyHostCerts
	I0806 07:33:45.932161   38357 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19370-13057/.minikube/key.pem
	I0806 07:33:45.932203   38357 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-13057/.minikube/key.pem, removing ...
	I0806 07:33:45.932212   38357 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-13057/.minikube/key.pem
	I0806 07:33:45.932274   38357 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19370-13057/.minikube/key.pem (1675 bytes)
	I0806 07:33:45.932400   38357 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19370-13057/.minikube/ca.pem
	I0806 07:33:45.932426   38357 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-13057/.minikube/ca.pem, removing ...
	I0806 07:33:45.932436   38357 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-13057/.minikube/ca.pem
	I0806 07:33:45.932488   38357 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19370-13057/.minikube/ca.pem (1078 bytes)
	I0806 07:33:45.932557   38357 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19370-13057/.minikube/cert.pem
	I0806 07:33:45.932574   38357 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-13057/.minikube/cert.pem, removing ...
	I0806 07:33:45.932580   38357 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-13057/.minikube/cert.pem
	I0806 07:33:45.932615   38357 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19370-13057/.minikube/cert.pem (1123 bytes)
	I0806 07:33:45.932706   38357 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19370-13057/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca-key.pem org=jenkins.ha-118173 san=[127.0.0.1 192.168.39.164 ha-118173 localhost minikube]
	I0806 07:33:46.085845   38357 provision.go:177] copyRemoteCerts
	I0806 07:33:46.085898   38357 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0806 07:33:46.085922   38357 main.go:141] libmachine: (ha-118173) Calling .GetSSHHostname
	I0806 07:33:46.088620   38357 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:33:46.089052   38357 main.go:141] libmachine: (ha-118173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:df:f0", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:22:33 +0000 UTC Type:0 Mac:52:54:00:ef:df:f0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:ha-118173 Clientid:01:52:54:00:ef:df:f0}
	I0806 07:33:46.089076   38357 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined IP address 192.168.39.164 and MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:33:46.089267   38357 main.go:141] libmachine: (ha-118173) Calling .GetSSHPort
	I0806 07:33:46.089456   38357 main.go:141] libmachine: (ha-118173) Calling .GetSSHKeyPath
	I0806 07:33:46.089591   38357 main.go:141] libmachine: (ha-118173) Calling .GetSSHUsername
	I0806 07:33:46.089730   38357 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/ha-118173/id_rsa Username:docker}
	I0806 07:33:46.172141   38357 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0806 07:33:46.172220   38357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0806 07:33:46.204411   38357 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0806 07:33:46.204490   38357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0806 07:33:46.231446   38357 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0806 07:33:46.231531   38357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0806 07:33:46.258197   38357 provision.go:87] duration metric: took 332.468541ms to configureAuth
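	
	Note: the configureAuth step above regenerates the machine server certificate (SANs 127.0.0.1, 192.168.39.164, ha-118173, localhost, minikube) and copies server.pem, server-key.pem and ca.pem to /etc/docker on the guest. A minimal verification sketch, not part of the captured log; the `minikube -p ha-118173 ssh` invocation and the openssl probes are assumptions added for illustration:
	
	# check that the provisioned server cert chains to the copied CA and matches its private key
	minikube -p ha-118173 ssh -- sudo openssl verify -CAfile /etc/docker/ca.pem /etc/docker/server.pem
	minikube -p ha-118173 ssh -- sudo openssl x509 -noout -modulus -in /etc/docker/server.pem | sha256sum
	minikube -p ha-118173 ssh -- sudo openssl rsa -noout -modulus -in /etc/docker/server-key.pem | sha256sum
	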
	I0806 07:33:46.258224   38357 buildroot.go:189] setting minikube options for container-runtime
	I0806 07:33:46.258437   38357 config.go:182] Loaded profile config "ha-118173": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0806 07:33:46.258517   38357 main.go:141] libmachine: (ha-118173) Calling .GetSSHHostname
	I0806 07:33:46.261313   38357 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:33:46.261784   38357 main.go:141] libmachine: (ha-118173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:df:f0", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:22:33 +0000 UTC Type:0 Mac:52:54:00:ef:df:f0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:ha-118173 Clientid:01:52:54:00:ef:df:f0}
	I0806 07:33:46.261810   38357 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined IP address 192.168.39.164 and MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:33:46.262068   38357 main.go:141] libmachine: (ha-118173) Calling .GetSSHPort
	I0806 07:33:46.262253   38357 main.go:141] libmachine: (ha-118173) Calling .GetSSHKeyPath
	I0806 07:33:46.262436   38357 main.go:141] libmachine: (ha-118173) Calling .GetSSHKeyPath
	I0806 07:33:46.262556   38357 main.go:141] libmachine: (ha-118173) Calling .GetSSHUsername
	I0806 07:33:46.262711   38357 main.go:141] libmachine: Using SSH client type: native
	I0806 07:33:46.262913   38357 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.164 22 <nil> <nil>}
	I0806 07:33:46.262928   38357 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0806 07:35:17.035660   38357 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0806 07:35:17.035757   38357 machine.go:97] duration metric: took 1m31.459757652s to provisionDockerMachine
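	
	Note: almost all of the 1m31s reported for provisionDockerMachine is spent in the single SSH command above, between 07:33:46 and 07:35:17, i.e. writing /etc/sysconfig/crio.minikube and running `sudo systemctl restart crio`. A minimal sketch for reproducing and timing that step by hand, not part of the captured log; the journalctl inspection is an added suggestion:
	
	# recreate the drop-in written by the provisioner, then time the CRI-O restart
	sudo mkdir -p /etc/sysconfig
	printf "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n" | sudo tee /etc/sysconfig/crio.minikube
	time sudo systemctl restart crio
	# look for slow shutdown or storage messages around the restart
	sudo journalctl -u crio --no-pager | tail -n 50
	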
	I0806 07:35:17.035775   38357 start.go:293] postStartSetup for "ha-118173" (driver="kvm2")
	I0806 07:35:17.035787   38357 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0806 07:35:17.035813   38357 main.go:141] libmachine: (ha-118173) Calling .DriverName
	I0806 07:35:17.036236   38357 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0806 07:35:17.036263   38357 main.go:141] libmachine: (ha-118173) Calling .GetSSHHostname
	I0806 07:35:17.039356   38357 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:35:17.039750   38357 main.go:141] libmachine: (ha-118173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:df:f0", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:22:33 +0000 UTC Type:0 Mac:52:54:00:ef:df:f0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:ha-118173 Clientid:01:52:54:00:ef:df:f0}
	I0806 07:35:17.039778   38357 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined IP address 192.168.39.164 and MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:35:17.039909   38357 main.go:141] libmachine: (ha-118173) Calling .GetSSHPort
	I0806 07:35:17.040119   38357 main.go:141] libmachine: (ha-118173) Calling .GetSSHKeyPath
	I0806 07:35:17.040291   38357 main.go:141] libmachine: (ha-118173) Calling .GetSSHUsername
	I0806 07:35:17.040471   38357 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/ha-118173/id_rsa Username:docker}
	I0806 07:35:17.125761   38357 ssh_runner.go:195] Run: cat /etc/os-release
	I0806 07:35:17.130478   38357 info.go:137] Remote host: Buildroot 2023.02.9
	I0806 07:35:17.130507   38357 filesync.go:126] Scanning /home/jenkins/minikube-integration/19370-13057/.minikube/addons for local assets ...
	I0806 07:35:17.130573   38357 filesync.go:126] Scanning /home/jenkins/minikube-integration/19370-13057/.minikube/files for local assets ...
	I0806 07:35:17.130666   38357 filesync.go:149] local asset: /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem -> 202462.pem in /etc/ssl/certs
	I0806 07:35:17.130679   38357 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem -> /etc/ssl/certs/202462.pem
	I0806 07:35:17.130806   38357 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0806 07:35:17.141477   38357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem --> /etc/ssl/certs/202462.pem (1708 bytes)
	I0806 07:35:17.167538   38357 start.go:296] duration metric: took 131.749794ms for postStartSetup
	I0806 07:35:17.167588   38357 main.go:141] libmachine: (ha-118173) Calling .DriverName
	I0806 07:35:17.167871   38357 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0806 07:35:17.167895   38357 main.go:141] libmachine: (ha-118173) Calling .GetSSHHostname
	I0806 07:35:17.170565   38357 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:35:17.170968   38357 main.go:141] libmachine: (ha-118173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:df:f0", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:22:33 +0000 UTC Type:0 Mac:52:54:00:ef:df:f0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:ha-118173 Clientid:01:52:54:00:ef:df:f0}
	I0806 07:35:17.170989   38357 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined IP address 192.168.39.164 and MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:35:17.171278   38357 main.go:141] libmachine: (ha-118173) Calling .GetSSHPort
	I0806 07:35:17.171499   38357 main.go:141] libmachine: (ha-118173) Calling .GetSSHKeyPath
	I0806 07:35:17.171671   38357 main.go:141] libmachine: (ha-118173) Calling .GetSSHUsername
	I0806 07:35:17.171839   38357 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/ha-118173/id_rsa Username:docker}
	W0806 07:35:17.251514   38357 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0806 07:35:17.251545   38357 fix.go:56] duration metric: took 1m31.697334599s for fixHost
	I0806 07:35:17.251568   38357 main.go:141] libmachine: (ha-118173) Calling .GetSSHHostname
	I0806 07:35:17.254278   38357 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:35:17.254864   38357 main.go:141] libmachine: (ha-118173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:df:f0", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:22:33 +0000 UTC Type:0 Mac:52:54:00:ef:df:f0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:ha-118173 Clientid:01:52:54:00:ef:df:f0}
	I0806 07:35:17.254894   38357 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined IP address 192.168.39.164 and MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:35:17.255059   38357 main.go:141] libmachine: (ha-118173) Calling .GetSSHPort
	I0806 07:35:17.255258   38357 main.go:141] libmachine: (ha-118173) Calling .GetSSHKeyPath
	I0806 07:35:17.255420   38357 main.go:141] libmachine: (ha-118173) Calling .GetSSHKeyPath
	I0806 07:35:17.255576   38357 main.go:141] libmachine: (ha-118173) Calling .GetSSHUsername
	I0806 07:35:17.255742   38357 main.go:141] libmachine: Using SSH client type: native
	I0806 07:35:17.255908   38357 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.164 22 <nil> <nil>}
	I0806 07:35:17.255919   38357 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0806 07:35:17.357344   38357 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722929717.331758082
	
	I0806 07:35:17.357370   38357 fix.go:216] guest clock: 1722929717.331758082
	I0806 07:35:17.357379   38357 fix.go:229] Guest: 2024-08-06 07:35:17.331758082 +0000 UTC Remote: 2024-08-06 07:35:17.251554131 +0000 UTC m=+91.818608040 (delta=80.203951ms)
	I0806 07:35:17.357407   38357 fix.go:200] guest clock delta is within tolerance: 80.203951ms
	I0806 07:35:17.357414   38357 start.go:83] releasing machines lock for "ha-118173", held for 1m31.803217825s
	I0806 07:35:17.357445   38357 main.go:141] libmachine: (ha-118173) Calling .DriverName
	I0806 07:35:17.357720   38357 main.go:141] libmachine: (ha-118173) Calling .GetIP
	I0806 07:35:17.360535   38357 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:35:17.361021   38357 main.go:141] libmachine: (ha-118173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:df:f0", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:22:33 +0000 UTC Type:0 Mac:52:54:00:ef:df:f0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:ha-118173 Clientid:01:52:54:00:ef:df:f0}
	I0806 07:35:17.361059   38357 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined IP address 192.168.39.164 and MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:35:17.361186   38357 main.go:141] libmachine: (ha-118173) Calling .DriverName
	I0806 07:35:17.361707   38357 main.go:141] libmachine: (ha-118173) Calling .DriverName
	I0806 07:35:17.361922   38357 main.go:141] libmachine: (ha-118173) Calling .DriverName
	I0806 07:35:17.362010   38357 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0806 07:35:17.362046   38357 main.go:141] libmachine: (ha-118173) Calling .GetSSHHostname
	I0806 07:35:17.362167   38357 ssh_runner.go:195] Run: cat /version.json
	I0806 07:35:17.362189   38357 main.go:141] libmachine: (ha-118173) Calling .GetSSHHostname
	I0806 07:35:17.364758   38357 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:35:17.365017   38357 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:35:17.365228   38357 main.go:141] libmachine: (ha-118173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:df:f0", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:22:33 +0000 UTC Type:0 Mac:52:54:00:ef:df:f0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:ha-118173 Clientid:01:52:54:00:ef:df:f0}
	I0806 07:35:17.365252   38357 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined IP address 192.168.39.164 and MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:35:17.365394   38357 main.go:141] libmachine: (ha-118173) Calling .GetSSHPort
	I0806 07:35:17.365584   38357 main.go:141] libmachine: (ha-118173) Calling .GetSSHKeyPath
	I0806 07:35:17.365610   38357 main.go:141] libmachine: (ha-118173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:df:f0", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:22:33 +0000 UTC Type:0 Mac:52:54:00:ef:df:f0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:ha-118173 Clientid:01:52:54:00:ef:df:f0}
	I0806 07:35:17.365630   38357 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined IP address 192.168.39.164 and MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:35:17.365759   38357 main.go:141] libmachine: (ha-118173) Calling .GetSSHUsername
	I0806 07:35:17.365777   38357 main.go:141] libmachine: (ha-118173) Calling .GetSSHPort
	I0806 07:35:17.365956   38357 main.go:141] libmachine: (ha-118173) Calling .GetSSHKeyPath
	I0806 07:35:17.365948   38357 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/ha-118173/id_rsa Username:docker}
	I0806 07:35:17.366101   38357 main.go:141] libmachine: (ha-118173) Calling .GetSSHUsername
	I0806 07:35:17.366265   38357 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/ha-118173/id_rsa Username:docker}
	I0806 07:35:17.462544   38357 ssh_runner.go:195] Run: systemctl --version
	I0806 07:35:17.469182   38357 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0806 07:35:17.633974   38357 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0806 07:35:17.639966   38357 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0806 07:35:17.640037   38357 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0806 07:35:17.650644   38357 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0806 07:35:17.650669   38357 start.go:495] detecting cgroup driver to use...
	I0806 07:35:17.650720   38357 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0806 07:35:17.668995   38357 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0806 07:35:17.683527   38357 docker.go:217] disabling cri-docker service (if available) ...
	I0806 07:35:17.683595   38357 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0806 07:35:17.697733   38357 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0806 07:35:17.711623   38357 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0806 07:35:17.864404   38357 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0806 07:35:18.019491   38357 docker.go:233] disabling docker service ...
	I0806 07:35:18.019569   38357 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0806 07:35:18.039798   38357 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0806 07:35:18.054343   38357 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0806 07:35:18.216040   38357 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0806 07:35:18.367823   38357 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0806 07:35:18.383521   38357 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0806 07:35:18.403241   38357 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0806 07:35:18.403306   38357 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 07:35:18.414585   38357 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0806 07:35:18.414650   38357 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 07:35:18.426318   38357 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 07:35:18.439585   38357 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 07:35:18.459522   38357 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0806 07:35:18.472016   38357 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 07:35:18.483076   38357 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 07:35:18.493817   38357 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 07:35:18.504710   38357 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0806 07:35:18.514491   38357 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0806 07:35:18.525143   38357 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 07:35:18.673520   38357 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0806 07:35:21.823024   38357 ssh_runner.go:235] Completed: sudo systemctl restart crio: (3.149467699s)
	I0806 07:35:21.823059   38357 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0806 07:35:21.823106   38357 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0806 07:35:21.828570   38357 start.go:563] Will wait 60s for crictl version
	I0806 07:35:21.828621   38357 ssh_runner.go:195] Run: which crictl
	I0806 07:35:21.832515   38357 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0806 07:35:21.873857   38357 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0806 07:35:21.873957   38357 ssh_runner.go:195] Run: crio --version
	I0806 07:35:21.903415   38357 ssh_runner.go:195] Run: crio --version
	I0806 07:35:21.935451   38357 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
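	
	Note: the sed edits above pin the pause image to registry.k8s.io/pause:3.9, set cgroup_manager to "cgroupfs", move conmon into the "pod" cgroup and open net.ipv4.ip_unprivileged_port_start, all in /etc/crio/crio.conf.d/02-crio.conf. A minimal sketch for confirming the resulting runtime state, not part of the captured log; only the grep patterns are added — the file path, the values and the `crio config` and `crictl version` calls all appear in the log itself:
	
	# confirm the drop-in carries the intended values
	grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	# confirm the running daemon picked them up and reports the expected version
	sudo crio config 2>/dev/null | grep -E 'cgroup_manager|pause_image'
	sudo crictl version    # expect RuntimeVersion 1.29.1, RuntimeApiVersion v1
	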
	I0806 07:35:21.936978   38357 main.go:141] libmachine: (ha-118173) Calling .GetIP
	I0806 07:35:21.939664   38357 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:35:21.940047   38357 main.go:141] libmachine: (ha-118173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:df:f0", ip: ""} in network mk-ha-118173: {Iface:virbr1 ExpiryTime:2024-08-06 08:22:33 +0000 UTC Type:0 Mac:52:54:00:ef:df:f0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:ha-118173 Clientid:01:52:54:00:ef:df:f0}
	I0806 07:35:21.940073   38357 main.go:141] libmachine: (ha-118173) DBG | domain ha-118173 has defined IP address 192.168.39.164 and MAC address 52:54:00:ef:df:f0 in network mk-ha-118173
	I0806 07:35:21.940325   38357 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0806 07:35:21.945201   38357 kubeadm.go:883] updating cluster {Name:ha-118173 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cl
usterName:ha-118173 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.164 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.203 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.107 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fres
hpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Moun
tGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0806 07:35:21.945336   38357 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0806 07:35:21.945377   38357 ssh_runner.go:195] Run: sudo crictl images --output json
	I0806 07:35:21.991401   38357 crio.go:514] all images are preloaded for cri-o runtime.
	I0806 07:35:21.991423   38357 crio.go:433] Images already preloaded, skipping extraction
	I0806 07:35:21.991469   38357 ssh_runner.go:195] Run: sudo crictl images --output json
	I0806 07:35:22.026394   38357 crio.go:514] all images are preloaded for cri-o runtime.
	I0806 07:35:22.026416   38357 cache_images.go:84] Images are preloaded, skipping loading
	I0806 07:35:22.026429   38357 kubeadm.go:934] updating node { 192.168.39.164 8443 v1.30.3 crio true true} ...
	I0806 07:35:22.026524   38357 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-118173 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.164
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-118173 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0806 07:35:22.026584   38357 ssh_runner.go:195] Run: crio config
	I0806 07:35:22.080962   38357 cni.go:84] Creating CNI manager for ""
	I0806 07:35:22.080988   38357 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0806 07:35:22.081002   38357 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0806 07:35:22.081022   38357 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.164 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-118173 NodeName:ha-118173 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.164"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.164 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0806 07:35:22.081159   38357 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.164
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-118173"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.164
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.164"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0806 07:35:22.081179   38357 kube-vip.go:115] generating kube-vip config ...
	I0806 07:35:22.081217   38357 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0806 07:35:22.093063   38357 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0806 07:35:22.093192   38357 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
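	
	Note: the manifest above is written to /etc/kubernetes/manifests/kube-vip.yaml (see the scp step below), so kubelet runs kube-vip as a static pod that holds the HA virtual IP 192.168.39.254 on eth0, with leader election (lease plndr-cp-lock) and control-plane load balancing on port 8443. A minimal sketch for checking the VIP from a node, not part of the captured log; the ip/curl probes are added for illustration:
	
	# the current kube-vip leader should hold the VIP on eth0
	ip -4 addr show eth0 | grep 192.168.39.254
	# the API server should answer on the VIP; only the HTTP status code matters for this check
	curl -k -sS -o /dev/null -w '%{http_code}\n' https://192.168.39.254:8443/healthz
	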
	I0806 07:35:22.093251   38357 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0806 07:35:22.103362   38357 binaries.go:44] Found k8s binaries, skipping transfer
	I0806 07:35:22.103430   38357 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0806 07:35:22.113353   38357 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0806 07:35:22.131867   38357 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0806 07:35:22.150407   38357 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0806 07:35:22.168132   38357 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0806 07:35:22.187227   38357 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0806 07:35:22.191533   38357 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 07:35:22.337122   38357 ssh_runner.go:195] Run: sudo systemctl start kubelet
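	
	Note: at this point the kubelet drop-in (10-kubeadm.conf), the kubelet unit, the kubeadm config (kubeadm.yaml.new) and the kube-vip manifest have all been staged and kubelet has been started. A minimal sketch for sanity-checking the staged files and the kubelet, not part of the captured log; the systemctl/journalctl checks are added suggestions and the file paths are the ones shown in the scp lines above:
	
	ls -l /etc/systemd/system/kubelet.service.d/10-kubeadm.conf \
	      /lib/systemd/system/kubelet.service \
	      /var/tmp/minikube/kubeadm.yaml.new \
	      /etc/kubernetes/manifests/kube-vip.yaml
	systemctl is-active kubelet || sudo journalctl -u kubelet --no-pager | tail -n 20
	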
	I0806 07:35:22.352452   38357 certs.go:68] Setting up /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173 for IP: 192.168.39.164
	I0806 07:35:22.352477   38357 certs.go:194] generating shared ca certs ...
	I0806 07:35:22.352497   38357 certs.go:226] acquiring lock for ca certs: {Name:mkf5c5aed91f4108054d9f2c742cad66c86afbed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 07:35:22.352673   38357 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19370-13057/.minikube/ca.key
	I0806 07:35:22.352732   38357 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.key
	I0806 07:35:22.352744   38357 certs.go:256] generating profile certs ...
	I0806 07:35:22.352824   38357 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/client.key
	I0806 07:35:22.352856   38357 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/apiserver.key.823ff5c5
	I0806 07:35:22.352878   38357 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/apiserver.crt.823ff5c5 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.164 192.168.39.203 192.168.39.5 192.168.39.254]
	I0806 07:35:22.402836   38357 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/apiserver.crt.823ff5c5 ...
	I0806 07:35:22.402866   38357 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/apiserver.crt.823ff5c5: {Name:mk10452176194ba763aea9cd13c71a0060a5137e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 07:35:22.403059   38357 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/apiserver.key.823ff5c5 ...
	I0806 07:35:22.403074   38357 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/apiserver.key.823ff5c5: {Name:mka4751260bc100f0254a0c37919473fbb7620e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 07:35:22.403153   38357 certs.go:381] copying /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/apiserver.crt.823ff5c5 -> /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/apiserver.crt
	I0806 07:35:22.403317   38357 certs.go:385] copying /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/apiserver.key.823ff5c5 -> /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/apiserver.key
	I0806 07:35:22.403440   38357 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/proxy-client.key
	I0806 07:35:22.403453   38357 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0806 07:35:22.403464   38357 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0806 07:35:22.403477   38357 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0806 07:35:22.403490   38357 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0806 07:35:22.403502   38357 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0806 07:35:22.403514   38357 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0806 07:35:22.403526   38357 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0806 07:35:22.403545   38357 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0806 07:35:22.403592   38357 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/20246.pem (1338 bytes)
	W0806 07:35:22.403621   38357 certs.go:480] ignoring /home/jenkins/minikube-integration/19370-13057/.minikube/certs/20246_empty.pem, impossibly tiny 0 bytes
	I0806 07:35:22.403631   38357 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca-key.pem (1675 bytes)
	I0806 07:35:22.403653   38357 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem (1078 bytes)
	I0806 07:35:22.403674   38357 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/cert.pem (1123 bytes)
	I0806 07:35:22.403695   38357 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/key.pem (1675 bytes)
	I0806 07:35:22.403733   38357 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem (1708 bytes)
	I0806 07:35:22.403758   38357 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem -> /usr/share/ca-certificates/202462.pem
	I0806 07:35:22.403771   38357 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0806 07:35:22.403783   38357 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/20246.pem -> /usr/share/ca-certificates/20246.pem
	I0806 07:35:22.404445   38357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0806 07:35:22.430728   38357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0806 07:35:22.455555   38357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0806 07:35:22.488278   38357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0806 07:35:22.513565   38357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0806 07:35:22.537842   38357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0806 07:35:22.562306   38357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0806 07:35:22.587839   38357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/ha-118173/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0806 07:35:22.612558   38357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem --> /usr/share/ca-certificates/202462.pem (1708 bytes)
	I0806 07:35:22.636942   38357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0806 07:35:22.661625   38357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/certs/20246.pem --> /usr/share/ca-certificates/20246.pem (1338 bytes)
	I0806 07:35:22.687476   38357 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0806 07:35:22.704757   38357 ssh_runner.go:195] Run: openssl version
	I0806 07:35:22.711003   38357 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202462.pem && ln -fs /usr/share/ca-certificates/202462.pem /etc/ssl/certs/202462.pem"
	I0806 07:35:22.722027   38357 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202462.pem
	I0806 07:35:22.726876   38357 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  6 07:17 /usr/share/ca-certificates/202462.pem
	I0806 07:35:22.726945   38357 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202462.pem
	I0806 07:35:22.732736   38357 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202462.pem /etc/ssl/certs/3ec20f2e.0"
	I0806 07:35:22.742029   38357 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0806 07:35:22.753041   38357 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0806 07:35:22.757734   38357 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  6 07:07 /usr/share/ca-certificates/minikubeCA.pem
	I0806 07:35:22.757789   38357 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0806 07:35:22.763617   38357 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0806 07:35:22.773073   38357 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20246.pem && ln -fs /usr/share/ca-certificates/20246.pem /etc/ssl/certs/20246.pem"
	I0806 07:35:22.784531   38357 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20246.pem
	I0806 07:35:22.789149   38357 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  6 07:17 /usr/share/ca-certificates/20246.pem
	I0806 07:35:22.789210   38357 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20246.pem
	I0806 07:35:22.794992   38357 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20246.pem /etc/ssl/certs/51391683.0"
	I0806 07:35:22.804777   38357 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0806 07:35:22.809636   38357 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0806 07:35:22.815243   38357 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0806 07:35:22.821287   38357 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0806 07:35:22.827720   38357 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0806 07:35:22.833987   38357 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0806 07:35:22.839844   38357 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
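	
	Note: each `openssl x509 -checkend 86400` call above exits non-zero if the certificate would expire within the next 24 hours; all six probes pass silently here, so the existing control-plane certificates are kept. A minimal sketch that repeats the same probes in one loop, not part of the captured log; the loop is added, while the certificate paths and the 86400-second window are taken from the log:
	
	for c in /var/lib/minikube/certs/apiserver-etcd-client.crt \
	         /var/lib/minikube/certs/apiserver-kubelet-client.crt \
	         /var/lib/minikube/certs/etcd/server.crt \
	         /var/lib/minikube/certs/etcd/healthcheck-client.crt \
	         /var/lib/minikube/certs/etcd/peer.crt \
	         /var/lib/minikube/certs/front-proxy-client.crt; do
	  sudo openssl x509 -noout -in "$c" -checkend 86400 || echo "EXPIRES SOON: $c"
	done
	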
	I0806 07:35:22.845612   38357 kubeadm.go:392] StartCluster: {Name:ha-118173 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Clust
erName:ha-118173 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.164 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.203 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.107 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpo
d:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGI
D:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 07:35:22.845718   38357 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0806 07:35:22.845783   38357 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0806 07:35:22.884064   38357 cri.go:89] found id: "eb1e79eb99228d6fe83fe651604e921a5222c01142de9b1d0ef720d196d2123e"
	I0806 07:35:22.884089   38357 cri.go:89] found id: "1bcbed1ca910c67d94f626f7cc508ef7083e003b7d773b1d56376460d37ff9f1"
	I0806 07:35:22.884094   38357 cri.go:89] found id: "86a0d083825b04f1714abda9141cd4a47e060c739d1e5b2e94159e8d4abcd7bd"
	I0806 07:35:22.884097   38357 cri.go:89] found id: "a69ec6c6d89d5362418d48ac0b094fbaf9c00e1489003b9b151de097f482da34"
	I0806 07:35:22.884100   38357 cri.go:89] found id: "c60313490ee322d54c5b3908e628bff77d0ca10249f7253eb9cbd44745a805af"
	I0806 07:35:22.884103   38357 cri.go:89] found id: "321504a9bbbfc57376fccab559a6ebc7388670dc0bf123b5f9fd41b1da99adaa"
	I0806 07:35:22.884105   38357 cri.go:89] found id: "b8f5ae0da505fc7f79225ba7115c83b9e909128bc28df767fce1fc3ba7a01392"
	I0806 07:35:22.884108   38357 cri.go:89] found id: "e5a5bdf0b2a62163f6aff6abe864eedc47b461e233989f23ef7770c226763325"
	I0806 07:35:22.884110   38357 cri.go:89] found id: "9ea977593bf7ff53f7419e821deaa1cf5813b4c116730030cd8d417d078be7fa"
	I0806 07:35:22.884116   38357 cri.go:89] found id: "a94703035c76b778779a89b7fc9176c7a956ebec52628aa1720d26bf2a1c31f5"
	I0806 07:35:22.884118   38357 cri.go:89] found id: "27f4fdf6fa77a51e1295c12b459cda687ed6ae68875218e24ba7959fb37814ed"
	I0806 07:35:22.884122   38357 cri.go:89] found id: "1ffee4f9f096a33fc08f1f6b266ea17ad2181a58a153f7fb1703ded978b23a46"
	I0806 07:35:22.884124   38357 cri.go:89] found id: "306b97b37955570f0a18d38ff1afab7bb09e01c47a3418dba2aeed0f00d6052d"
	I0806 07:35:22.884127   38357 cri.go:89] found id: ""
	I0806 07:35:22.884170   38357 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Aug 06 07:40:37 ha-118173 crio[3786]: time="2024-08-06 07:40:37.430281201Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=06d9b44d-68cd-4054-bd60-4225bb29af6f name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 07:40:37 ha-118173 crio[3786]: time="2024-08-06 07:40:37.430891459Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:99f58b84fab60e2b606761b8ed13c48d86197a37ec557d278c6aeff4b215f6b9,PodSandboxId:9d298728c707a71c61f7e9c12d81847924d5702a23547701f7d70396538b9702,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722929782657022675,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8379466d-0241-40e4-a262-d762b3cda2eb,},Annotations:map[string]string{io.kubernetes.container.hash: d079fdad,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ebb3ef02d9872250e0c4614f3552d20ba04a677b81c274d9cf9061a86d7c9ce,PodSandboxId:e774276d1db20b1d0af6107ff83ecf4ba4a28e6ce7cd5d48f1425a9085296c5c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722929768628612088,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-118173,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f74055e61dcf5f2385b1934a1a8e3b73,},Annotations:map[string]string{io.kubernetes.container.hash: 27fe2ed3,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8e645fa59f15ec29730259bc834efd2b4740bd0cf9d98973d43ac2f3e60f8eb,PodSandboxId:3f467e52d9b47ff5ac03b4a454e0ce36266c74302644c05d81e42b312eca5b9b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722929765627015845,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-118173,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6346700b562b8d3f4c18c66b3813bc28,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb461c0c1a90ce934f5cc90096bf3ebadcbb030c865637c1b0d6f60c3134e0bb,PodSandboxId:c18a977f42d3413f04444d2f24aa487786b86d9e625780e6e1111c13f46b4c81,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722929761946284425,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-rml5d,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4a82c0ca-1596-4dfe-a2f9-3a34f22480e5,},Annotations:map[string]string{io.kubernetes.container.hash: 950719af,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d631bee7d098005cab603ace8514dea77a9fa74686bb0be29ea68d9a592c3e9c,PodSandboxId:3cf78ca258a5c291aeb00bdf810e1bbed94619b29783b0ec29e87984818cd2ff,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722929741194070005,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-118173,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e550e7b49e50c28915c957cd8c1df9d,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c038e3b4d8c535621dabf1ffa68a5bb717de411bccf8772e655b2c06282edf3,PodSandboxId:9d298728c707a71c61f7e9c12d81847924d5702a23547701f7d70396538b9702,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722929729209432391,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8379466d-0241-40e4-a262-d762b3cda2eb,},Annotations:map[string]string{io.kubernetes.container.hash: d079fdad,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcf0ac6e03c21e2b64b008ca3dcd8293a980e393216a3103acf1a60f8ef1dc3f,PodSandboxId:c1b179a473da688998355788cf02f16bea7187f30019e6786b025cc877ed72fa,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722929728492501519,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ksm6v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 594bc178-8319-4922-9132-69a9c6890b95,},Annotations:map[string]string{io.kubernetes.container.hash: 50a78a81,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:9529a894fdd4b09b00a809dc3ad326ff8ab57eba9ae9438200069348b4426b19,PodSandboxId:4a0b68a5db12d84fa3e54810b85090c13072ad782e230aa867e47298ad248fc6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722929728872421792,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zscz4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef588a32-ac32-4e38-926e-6ee7a662a399,},Annotations:map[string]string{io.kubernetes.container.hash: 38e96eed,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kub
ernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e67330d376d276df974f78a481179786f12a17135fd068411ad63be616869e1e,PodSandboxId:0e8087802602b09a2d0f605ce70c425e402b5a10e4f1497b2eb139342a335f5a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_RUNNING,CreatedAt:1722929728633687734,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-n7jmb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa496bfb-7bf0-4ebc-adbd-36d992d8e279,},Annotations:map[string]string{io.kubernetes.container.hash: db2b7a8c,io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8c7c569af258485ebf98698eba57ddc482e2819c27995c4da66803dc7a84de4,PodSandboxId:52b914be0c4a1b45fddcf2b0b959430fced7a213686ea1009a6c37f2a5a399fd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722929728711874905,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-118173,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 080589493a4905d0fb10f52a1a0e25a5,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.contain
er.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ea3f1672f91dabee2d9f8605538a8f3683ca4391c2c9deac554eacfddd243bb,PodSandboxId:6a8f5ef8d3fb5f340f9569eb21fa91020e3b44226daa521ebed7eafd2c5aac8c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722929728725790024,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-t24ww,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8effbe6c-3107-409e-9c94-dee4103d12b0,},Annotations:map[string]string{io.kubernetes.container.hash: 7604f64,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},
{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0280e428feb3f49d39e63996a569fe8099b785732ee02450df33eb25a8a39c88,PodSandboxId:3f467e52d9b47ff5ac03b4a454e0ce36266c74302644c05d81e42b312eca5b9b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722929728590957011,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-118173,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: 6346700b562b8d3f4c18c66b3813bc28,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2dd16c152a6e76f97a41f8091857757846cf00e2b45fbebefd70bc7191f71d6d,PodSandboxId:e774276d1db20b1d0af6107ff83ecf4ba4a28e6ce7cd5d48f1425a9085296c5c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722929728444127688,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-118173,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: f74055e61dcf5f2385b1934a1a8e3b73,},Annotations:map[string]string{io.kubernetes.container.hash: 27fe2ed3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eae811e3e34734c78cc8f46ad9c12e0bde5ad445281f50029620e65e67b92b11,PodSandboxId:46116e50c4c43a8f8143460932ea15d87c3499f60948ed31f758cc62c7baae81,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722929728393775474,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-118173,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65b7cf59bb7c42e61e37dcbebce4eec4,},Anno
tations:map[string]string{io.kubernetes.container.hash: fa683f08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10437bdde23713cae1b222edffa34be0e9583f43a83c01f10aae555f0dd7542e,PodSandboxId:3f725d083d59ecb8b65c8b09466e36c8b01dd5742fa8d97e108de4a49f47de04,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722929221668285640,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-rml5d,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4a82c0ca-1596-4dfe-a2f9-3a34f22480e5,},Annota
tions:map[string]string{io.kubernetes.container.hash: 950719af,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c60313490ee322d54c5b3908e628bff77d0ca10249f7253eb9cbd44745a805af,PodSandboxId:b221107c8052d9be54b23c0ee2e882f4adc57ade5813f27e7dfc4e4f71006663,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722929010372427230,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zscz4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef588a32-ac32-4e38-926e-6ee7a662a399,},Annotations:map[string]string{io.kuber
netes.container.hash: 38e96eed,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:321504a9bbbfc57376fccab559a6ebc7388670dc0bf123b5f9fd41b1da99adaa,PodSandboxId:4a9fe2c539265caf233fd5f3778ebc3e6092ce121ddf647e3f72622e31022e53,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722929010343638533,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-7db6d8ff4d-t24ww,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8effbe6c-3107-409e-9c94-dee4103d12b0,},Annotations:map[string]string{io.kubernetes.container.hash: 7604f64,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8f5ae0da505fc7f79225ba7115c83b9e909128bc28df767fce1fc3ba7a01392,PodSandboxId:5bbdb233d391e1127588924ac263113c9aa0148b24fc7113df3ca0891d7e0fdc,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3,Annotations:map[string]st
ring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_EXITED,CreatedAt:1722928998315769494,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-n7jmb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa496bfb-7bf0-4ebc-adbd-36d992d8e279,},Annotations:map[string]string{io.kubernetes.container.hash: db2b7a8c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5a5bdf0b2a62163f6aff6abe864eedc47b461e233989f23ef7770c226763325,PodSandboxId:2ed57fdbf65f1846c200a67b50a1043fe8c70dd79c90d1a1c67e5e0634c8088b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHa
ndler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722928994187880481,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ksm6v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 594bc178-8319-4922-9132-69a9c6890b95,},Annotations:map[string]string{io.kubernetes.container.hash: 50a78a81,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27f4fdf6fa77a51e1295c12b459cda687ed6ae68875218e24ba7959fb37814ed,PodSandboxId:5ebdb09ee65d6de5671cacfdefaaebdf7b5b00f25aa61461d39015ab5cb5578b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b767
22eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722928974549211585,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-118173,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 080589493a4905d0fb10f52a1a0e25a5,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:306b97b37955570f0a18d38ff1afab7bb09e01c47a3418dba2aeed0f00d6052d,PodSandboxId:5055641759bdb7b0e57568f6d5310c72587ea715571f3fff96ff5d200be59dc2,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0c
fd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722928974442938254,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-118173,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65b7cf59bb7c42e61e37dcbebce4eec4,},Annotations:map[string]string{io.kubernetes.container.hash: fa683f08,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=06d9b44d-68cd-4054-bd60-4225bb29af6f name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 07:40:37 ha-118173 crio[3786]: time="2024-08-06 07:40:37.518695992Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=097b6557-7b0d-44a6-acf6-0ca659a0ded1 name=/runtime.v1.RuntimeService/Version
	Aug 06 07:40:37 ha-118173 crio[3786]: time="2024-08-06 07:40:37.518782087Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=097b6557-7b0d-44a6-acf6-0ca659a0ded1 name=/runtime.v1.RuntimeService/Version
	Aug 06 07:40:37 ha-118173 crio[3786]: time="2024-08-06 07:40:37.520469010Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=54c0601b-8b91-4e11-96ad-250e300175b4 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 06 07:40:37 ha-118173 crio[3786]: time="2024-08-06 07:40:37.521034169Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722930037521008154,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154770,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=54c0601b-8b91-4e11-96ad-250e300175b4 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 06 07:40:37 ha-118173 crio[3786]: time="2024-08-06 07:40:37.521762754Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bf5cb660-4b57-40cb-8aa4-bba9ce3ded1e name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 07:40:37 ha-118173 crio[3786]: time="2024-08-06 07:40:37.521819672Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bf5cb660-4b57-40cb-8aa4-bba9ce3ded1e name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 07:40:37 ha-118173 crio[3786]: time="2024-08-06 07:40:37.522201657Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:99f58b84fab60e2b606761b8ed13c48d86197a37ec557d278c6aeff4b215f6b9,PodSandboxId:9d298728c707a71c61f7e9c12d81847924d5702a23547701f7d70396538b9702,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722929782657022675,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8379466d-0241-40e4-a262-d762b3cda2eb,},Annotations:map[string]string{io.kubernetes.container.hash: d079fdad,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ebb3ef02d9872250e0c4614f3552d20ba04a677b81c274d9cf9061a86d7c9ce,PodSandboxId:e774276d1db20b1d0af6107ff83ecf4ba4a28e6ce7cd5d48f1425a9085296c5c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722929768628612088,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-118173,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f74055e61dcf5f2385b1934a1a8e3b73,},Annotations:map[string]string{io.kubernetes.container.hash: 27fe2ed3,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8e645fa59f15ec29730259bc834efd2b4740bd0cf9d98973d43ac2f3e60f8eb,PodSandboxId:3f467e52d9b47ff5ac03b4a454e0ce36266c74302644c05d81e42b312eca5b9b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722929765627015845,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-118173,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6346700b562b8d3f4c18c66b3813bc28,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb461c0c1a90ce934f5cc90096bf3ebadcbb030c865637c1b0d6f60c3134e0bb,PodSandboxId:c18a977f42d3413f04444d2f24aa487786b86d9e625780e6e1111c13f46b4c81,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722929761946284425,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-rml5d,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4a82c0ca-1596-4dfe-a2f9-3a34f22480e5,},Annotations:map[string]string{io.kubernetes.container.hash: 950719af,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d631bee7d098005cab603ace8514dea77a9fa74686bb0be29ea68d9a592c3e9c,PodSandboxId:3cf78ca258a5c291aeb00bdf810e1bbed94619b29783b0ec29e87984818cd2ff,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722929741194070005,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-118173,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e550e7b49e50c28915c957cd8c1df9d,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c038e3b4d8c535621dabf1ffa68a5bb717de411bccf8772e655b2c06282edf3,PodSandboxId:9d298728c707a71c61f7e9c12d81847924d5702a23547701f7d70396538b9702,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722929729209432391,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8379466d-0241-40e4-a262-d762b3cda2eb,},Annotations:map[string]string{io.kubernetes.container.hash: d079fdad,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcf0ac6e03c21e2b64b008ca3dcd8293a980e393216a3103acf1a60f8ef1dc3f,PodSandboxId:c1b179a473da688998355788cf02f16bea7187f30019e6786b025cc877ed72fa,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722929728492501519,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ksm6v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 594bc178-8319-4922-9132-69a9c6890b95,},Annotations:map[string]string{io.kubernetes.container.hash: 50a78a81,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:9529a894fdd4b09b00a809dc3ad326ff8ab57eba9ae9438200069348b4426b19,PodSandboxId:4a0b68a5db12d84fa3e54810b85090c13072ad782e230aa867e47298ad248fc6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722929728872421792,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zscz4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef588a32-ac32-4e38-926e-6ee7a662a399,},Annotations:map[string]string{io.kubernetes.container.hash: 38e96eed,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kub
ernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e67330d376d276df974f78a481179786f12a17135fd068411ad63be616869e1e,PodSandboxId:0e8087802602b09a2d0f605ce70c425e402b5a10e4f1497b2eb139342a335f5a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_RUNNING,CreatedAt:1722929728633687734,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-n7jmb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa496bfb-7bf0-4ebc-adbd-36d992d8e279,},Annotations:map[string]string{io.kubernetes.container.hash: db2b7a8c,io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8c7c569af258485ebf98698eba57ddc482e2819c27995c4da66803dc7a84de4,PodSandboxId:52b914be0c4a1b45fddcf2b0b959430fced7a213686ea1009a6c37f2a5a399fd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722929728711874905,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-118173,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 080589493a4905d0fb10f52a1a0e25a5,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.contain
er.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ea3f1672f91dabee2d9f8605538a8f3683ca4391c2c9deac554eacfddd243bb,PodSandboxId:6a8f5ef8d3fb5f340f9569eb21fa91020e3b44226daa521ebed7eafd2c5aac8c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722929728725790024,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-t24ww,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8effbe6c-3107-409e-9c94-dee4103d12b0,},Annotations:map[string]string{io.kubernetes.container.hash: 7604f64,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},
{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0280e428feb3f49d39e63996a569fe8099b785732ee02450df33eb25a8a39c88,PodSandboxId:3f467e52d9b47ff5ac03b4a454e0ce36266c74302644c05d81e42b312eca5b9b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722929728590957011,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-118173,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: 6346700b562b8d3f4c18c66b3813bc28,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2dd16c152a6e76f97a41f8091857757846cf00e2b45fbebefd70bc7191f71d6d,PodSandboxId:e774276d1db20b1d0af6107ff83ecf4ba4a28e6ce7cd5d48f1425a9085296c5c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722929728444127688,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-118173,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: f74055e61dcf5f2385b1934a1a8e3b73,},Annotations:map[string]string{io.kubernetes.container.hash: 27fe2ed3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eae811e3e34734c78cc8f46ad9c12e0bde5ad445281f50029620e65e67b92b11,PodSandboxId:46116e50c4c43a8f8143460932ea15d87c3499f60948ed31f758cc62c7baae81,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722929728393775474,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-118173,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65b7cf59bb7c42e61e37dcbebce4eec4,},Anno
tations:map[string]string{io.kubernetes.container.hash: fa683f08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10437bdde23713cae1b222edffa34be0e9583f43a83c01f10aae555f0dd7542e,PodSandboxId:3f725d083d59ecb8b65c8b09466e36c8b01dd5742fa8d97e108de4a49f47de04,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722929221668285640,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-rml5d,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4a82c0ca-1596-4dfe-a2f9-3a34f22480e5,},Annota
tions:map[string]string{io.kubernetes.container.hash: 950719af,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c60313490ee322d54c5b3908e628bff77d0ca10249f7253eb9cbd44745a805af,PodSandboxId:b221107c8052d9be54b23c0ee2e882f4adc57ade5813f27e7dfc4e4f71006663,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722929010372427230,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zscz4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef588a32-ac32-4e38-926e-6ee7a662a399,},Annotations:map[string]string{io.kuber
netes.container.hash: 38e96eed,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:321504a9bbbfc57376fccab559a6ebc7388670dc0bf123b5f9fd41b1da99adaa,PodSandboxId:4a9fe2c539265caf233fd5f3778ebc3e6092ce121ddf647e3f72622e31022e53,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722929010343638533,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-7db6d8ff4d-t24ww,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8effbe6c-3107-409e-9c94-dee4103d12b0,},Annotations:map[string]string{io.kubernetes.container.hash: 7604f64,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8f5ae0da505fc7f79225ba7115c83b9e909128bc28df767fce1fc3ba7a01392,PodSandboxId:5bbdb233d391e1127588924ac263113c9aa0148b24fc7113df3ca0891d7e0fdc,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3,Annotations:map[string]st
ring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_EXITED,CreatedAt:1722928998315769494,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-n7jmb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa496bfb-7bf0-4ebc-adbd-36d992d8e279,},Annotations:map[string]string{io.kubernetes.container.hash: db2b7a8c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5a5bdf0b2a62163f6aff6abe864eedc47b461e233989f23ef7770c226763325,PodSandboxId:2ed57fdbf65f1846c200a67b50a1043fe8c70dd79c90d1a1c67e5e0634c8088b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHa
ndler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722928994187880481,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ksm6v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 594bc178-8319-4922-9132-69a9c6890b95,},Annotations:map[string]string{io.kubernetes.container.hash: 50a78a81,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27f4fdf6fa77a51e1295c12b459cda687ed6ae68875218e24ba7959fb37814ed,PodSandboxId:5ebdb09ee65d6de5671cacfdefaaebdf7b5b00f25aa61461d39015ab5cb5578b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b767
22eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722928974549211585,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-118173,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 080589493a4905d0fb10f52a1a0e25a5,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:306b97b37955570f0a18d38ff1afab7bb09e01c47a3418dba2aeed0f00d6052d,PodSandboxId:5055641759bdb7b0e57568f6d5310c72587ea715571f3fff96ff5d200be59dc2,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0c
fd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722928974442938254,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-118173,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65b7cf59bb7c42e61e37dcbebce4eec4,},Annotations:map[string]string{io.kubernetes.container.hash: fa683f08,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bf5cb660-4b57-40cb-8aa4-bba9ce3ded1e name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 07:40:37 ha-118173 crio[3786]: time="2024-08-06 07:40:37.567683638Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a9793eab-b414-49bb-82d2-7117acf1bf47 name=/runtime.v1.RuntimeService/Version
	Aug 06 07:40:37 ha-118173 crio[3786]: time="2024-08-06 07:40:37.567815735Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a9793eab-b414-49bb-82d2-7117acf1bf47 name=/runtime.v1.RuntimeService/Version
	Aug 06 07:40:37 ha-118173 crio[3786]: time="2024-08-06 07:40:37.569484245Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=dda34600-a2b1-4c3e-aa6b-f7ff211effb8 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 06 07:40:37 ha-118173 crio[3786]: time="2024-08-06 07:40:37.569987887Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722930037569963969,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154770,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=dda34600-a2b1-4c3e-aa6b-f7ff211effb8 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 06 07:40:37 ha-118173 crio[3786]: time="2024-08-06 07:40:37.570502584Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2b9a43c5-96bb-4e0c-9119-2bd1b3e001ac name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 07:40:37 ha-118173 crio[3786]: time="2024-08-06 07:40:37.570606622Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2b9a43c5-96bb-4e0c-9119-2bd1b3e001ac name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 07:40:37 ha-118173 crio[3786]: time="2024-08-06 07:40:37.571016329Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:99f58b84fab60e2b606761b8ed13c48d86197a37ec557d278c6aeff4b215f6b9,PodSandboxId:9d298728c707a71c61f7e9c12d81847924d5702a23547701f7d70396538b9702,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722929782657022675,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8379466d-0241-40e4-a262-d762b3cda2eb,},Annotations:map[string]string{io.kubernetes.container.hash: d079fdad,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ebb3ef02d9872250e0c4614f3552d20ba04a677b81c274d9cf9061a86d7c9ce,PodSandboxId:e774276d1db20b1d0af6107ff83ecf4ba4a28e6ce7cd5d48f1425a9085296c5c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722929768628612088,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-118173,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f74055e61dcf5f2385b1934a1a8e3b73,},Annotations:map[string]string{io.kubernetes.container.hash: 27fe2ed3,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8e645fa59f15ec29730259bc834efd2b4740bd0cf9d98973d43ac2f3e60f8eb,PodSandboxId:3f467e52d9b47ff5ac03b4a454e0ce36266c74302644c05d81e42b312eca5b9b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722929765627015845,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-118173,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6346700b562b8d3f4c18c66b3813bc28,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb461c0c1a90ce934f5cc90096bf3ebadcbb030c865637c1b0d6f60c3134e0bb,PodSandboxId:c18a977f42d3413f04444d2f24aa487786b86d9e625780e6e1111c13f46b4c81,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722929761946284425,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-rml5d,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4a82c0ca-1596-4dfe-a2f9-3a34f22480e5,},Annotations:map[string]string{io.kubernetes.container.hash: 950719af,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d631bee7d098005cab603ace8514dea77a9fa74686bb0be29ea68d9a592c3e9c,PodSandboxId:3cf78ca258a5c291aeb00bdf810e1bbed94619b29783b0ec29e87984818cd2ff,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722929741194070005,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-118173,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e550e7b49e50c28915c957cd8c1df9d,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c038e3b4d8c535621dabf1ffa68a5bb717de411bccf8772e655b2c06282edf3,PodSandboxId:9d298728c707a71c61f7e9c12d81847924d5702a23547701f7d70396538b9702,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722929729209432391,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8379466d-0241-40e4-a262-d762b3cda2eb,},Annotations:map[string]string{io.kubernetes.container.hash: d079fdad,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcf0ac6e03c21e2b64b008ca3dcd8293a980e393216a3103acf1a60f8ef1dc3f,PodSandboxId:c1b179a473da688998355788cf02f16bea7187f30019e6786b025cc877ed72fa,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722929728492501519,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ksm6v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 594bc178-8319-4922-9132-69a9c6890b95,},Annotations:map[string]string{io.kubernetes.container.hash: 50a78a81,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:9529a894fdd4b09b00a809dc3ad326ff8ab57eba9ae9438200069348b4426b19,PodSandboxId:4a0b68a5db12d84fa3e54810b85090c13072ad782e230aa867e47298ad248fc6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722929728872421792,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zscz4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef588a32-ac32-4e38-926e-6ee7a662a399,},Annotations:map[string]string{io.kubernetes.container.hash: 38e96eed,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kub
ernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e67330d376d276df974f78a481179786f12a17135fd068411ad63be616869e1e,PodSandboxId:0e8087802602b09a2d0f605ce70c425e402b5a10e4f1497b2eb139342a335f5a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_RUNNING,CreatedAt:1722929728633687734,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-n7jmb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa496bfb-7bf0-4ebc-adbd-36d992d8e279,},Annotations:map[string]string{io.kubernetes.container.hash: db2b7a8c,io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8c7c569af258485ebf98698eba57ddc482e2819c27995c4da66803dc7a84de4,PodSandboxId:52b914be0c4a1b45fddcf2b0b959430fced7a213686ea1009a6c37f2a5a399fd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722929728711874905,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-118173,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 080589493a4905d0fb10f52a1a0e25a5,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.contain
er.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ea3f1672f91dabee2d9f8605538a8f3683ca4391c2c9deac554eacfddd243bb,PodSandboxId:6a8f5ef8d3fb5f340f9569eb21fa91020e3b44226daa521ebed7eafd2c5aac8c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722929728725790024,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-t24ww,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8effbe6c-3107-409e-9c94-dee4103d12b0,},Annotations:map[string]string{io.kubernetes.container.hash: 7604f64,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},
{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0280e428feb3f49d39e63996a569fe8099b785732ee02450df33eb25a8a39c88,PodSandboxId:3f467e52d9b47ff5ac03b4a454e0ce36266c74302644c05d81e42b312eca5b9b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722929728590957011,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-118173,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: 6346700b562b8d3f4c18c66b3813bc28,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2dd16c152a6e76f97a41f8091857757846cf00e2b45fbebefd70bc7191f71d6d,PodSandboxId:e774276d1db20b1d0af6107ff83ecf4ba4a28e6ce7cd5d48f1425a9085296c5c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722929728444127688,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-118173,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: f74055e61dcf5f2385b1934a1a8e3b73,},Annotations:map[string]string{io.kubernetes.container.hash: 27fe2ed3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eae811e3e34734c78cc8f46ad9c12e0bde5ad445281f50029620e65e67b92b11,PodSandboxId:46116e50c4c43a8f8143460932ea15d87c3499f60948ed31f758cc62c7baae81,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722929728393775474,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-118173,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65b7cf59bb7c42e61e37dcbebce4eec4,},Anno
tations:map[string]string{io.kubernetes.container.hash: fa683f08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10437bdde23713cae1b222edffa34be0e9583f43a83c01f10aae555f0dd7542e,PodSandboxId:3f725d083d59ecb8b65c8b09466e36c8b01dd5742fa8d97e108de4a49f47de04,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722929221668285640,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-rml5d,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4a82c0ca-1596-4dfe-a2f9-3a34f22480e5,},Annota
tions:map[string]string{io.kubernetes.container.hash: 950719af,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c60313490ee322d54c5b3908e628bff77d0ca10249f7253eb9cbd44745a805af,PodSandboxId:b221107c8052d9be54b23c0ee2e882f4adc57ade5813f27e7dfc4e4f71006663,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722929010372427230,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zscz4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef588a32-ac32-4e38-926e-6ee7a662a399,},Annotations:map[string]string{io.kuber
netes.container.hash: 38e96eed,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:321504a9bbbfc57376fccab559a6ebc7388670dc0bf123b5f9fd41b1da99adaa,PodSandboxId:4a9fe2c539265caf233fd5f3778ebc3e6092ce121ddf647e3f72622e31022e53,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722929010343638533,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-7db6d8ff4d-t24ww,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8effbe6c-3107-409e-9c94-dee4103d12b0,},Annotations:map[string]string{io.kubernetes.container.hash: 7604f64,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8f5ae0da505fc7f79225ba7115c83b9e909128bc28df767fce1fc3ba7a01392,PodSandboxId:5bbdb233d391e1127588924ac263113c9aa0148b24fc7113df3ca0891d7e0fdc,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3,Annotations:map[string]st
ring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_EXITED,CreatedAt:1722928998315769494,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-n7jmb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa496bfb-7bf0-4ebc-adbd-36d992d8e279,},Annotations:map[string]string{io.kubernetes.container.hash: db2b7a8c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5a5bdf0b2a62163f6aff6abe864eedc47b461e233989f23ef7770c226763325,PodSandboxId:2ed57fdbf65f1846c200a67b50a1043fe8c70dd79c90d1a1c67e5e0634c8088b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHa
ndler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722928994187880481,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ksm6v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 594bc178-8319-4922-9132-69a9c6890b95,},Annotations:map[string]string{io.kubernetes.container.hash: 50a78a81,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27f4fdf6fa77a51e1295c12b459cda687ed6ae68875218e24ba7959fb37814ed,PodSandboxId:5ebdb09ee65d6de5671cacfdefaaebdf7b5b00f25aa61461d39015ab5cb5578b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b767
22eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722928974549211585,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-118173,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 080589493a4905d0fb10f52a1a0e25a5,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:306b97b37955570f0a18d38ff1afab7bb09e01c47a3418dba2aeed0f00d6052d,PodSandboxId:5055641759bdb7b0e57568f6d5310c72587ea715571f3fff96ff5d200be59dc2,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0c
fd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722928974442938254,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-118173,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65b7cf59bb7c42e61e37dcbebce4eec4,},Annotations:map[string]string{io.kubernetes.container.hash: fa683f08,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2b9a43c5-96bb-4e0c-9119-2bd1b3e001ac name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 07:40:37 ha-118173 crio[3786]: time="2024-08-06 07:40:37.595493972Z" level=debug msg="Request: &VersionRequest{Version:0.1.0,}" file="otel-collector/interceptors.go:62" id=32b464c9-71b4-4dfa-8054-be60756429d8 name=/runtime.v1.RuntimeService/Version
	Aug 06 07:40:37 ha-118173 crio[3786]: time="2024-08-06 07:40:37.595615269Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=32b464c9-71b4-4dfa-8054-be60756429d8 name=/runtime.v1.RuntimeService/Version
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	99f58b84fab60       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Running             storage-provisioner       4                   9d298728c707a       storage-provisioner
	7ebb3ef02d987       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      4 minutes ago       Running             kube-apiserver            3                   e774276d1db20       kube-apiserver-ha-118173
	d8e645fa59f15       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      4 minutes ago       Running             kube-controller-manager   2                   3f467e52d9b47       kube-controller-manager-ha-118173
	bb461c0c1a90c       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      4 minutes ago       Running             busybox                   1                   c18a977f42d34       busybox-fc5497c4f-rml5d
	d631bee7d0980       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      4 minutes ago       Running             kube-vip                  0                   3cf78ca258a5c       kube-vip-ha-118173
	4c038e3b4d8c5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Exited              storage-provisioner       3                   9d298728c707a       storage-provisioner
	9529a894fdd4b       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   1                   4a0b68a5db12d       coredns-7db6d8ff4d-zscz4
	6ea3f1672f91d       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   1                   6a8f5ef8d3fb5       coredns-7db6d8ff4d-t24ww
	f8c7c569af258       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      5 minutes ago       Running             kube-scheduler            1                   52b914be0c4a1       kube-scheduler-ha-118173
	e67330d376d27       917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557                                      5 minutes ago       Running             kindnet-cni               1                   0e8087802602b       kindnet-n7jmb
	0280e428feb3f       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      5 minutes ago       Exited              kube-controller-manager   1                   3f467e52d9b47       kube-controller-manager-ha-118173
	bcf0ac6e03c21       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      5 minutes ago       Running             kube-proxy                1                   c1b179a473da6       kube-proxy-ksm6v
	2dd16c152a6e7       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      5 minutes ago       Exited              kube-apiserver            2                   e774276d1db20       kube-apiserver-ha-118173
	eae811e3e3473       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      5 minutes ago       Running             etcd                      1                   46116e50c4c43       etcd-ha-118173
	10437bdde2371       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   13 minutes ago      Exited              busybox                   0                   3f725d083d59e       busybox-fc5497c4f-rml5d
	c60313490ee32       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      17 minutes ago      Exited              coredns                   0                   b221107c8052d       coredns-7db6d8ff4d-zscz4
	321504a9bbbfc       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      17 minutes ago      Exited              coredns                   0                   4a9fe2c539265       coredns-7db6d8ff4d-t24ww
	b8f5ae0da505f       docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3    17 minutes ago      Exited              kindnet-cni               0                   5bbdb233d391e       kindnet-n7jmb
	e5a5bdf0b2a62       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      17 minutes ago      Exited              kube-proxy                0                   2ed57fdbf65f1       kube-proxy-ksm6v
	27f4fdf6fa77a       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      17 minutes ago      Exited              kube-scheduler            0                   5ebdb09ee65d6       kube-scheduler-ha-118173
	306b97b379555       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      17 minutes ago      Exited              etcd                      0                   5055641759bdb       etcd-ha-118173
	
	
	==> coredns [321504a9bbbfc57376fccab559a6ebc7388670dc0bf123b5f9fd41b1da99adaa] <==
	[INFO] 10.244.1.2:50320 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.000621303s
	[INFO] 10.244.1.2:50307 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001920609s
	[INFO] 10.244.0.4:35382 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.001483476s
	[INFO] 10.244.2.2:42914 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003426884s
	[INFO] 10.244.2.2:54544 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002889491s
	[INFO] 10.244.2.2:56241 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000153019s
	[INFO] 10.244.1.2:42127 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000287649s
	[INFO] 10.244.1.2:57726 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001778651s
	[INFO] 10.244.1.2:48193 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000186214s
	[INFO] 10.244.1.2:39745 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001882s
	[INFO] 10.244.0.4:43115 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00007628s
	[INFO] 10.244.0.4:43987 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001861604s
	[INFO] 10.244.0.4:36070 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000175723s
	[INFO] 10.244.0.4:57521 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000070761s
	[INFO] 10.244.2.2:54569 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000133483s
	[INFO] 10.244.2.2:45084 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000078772s
	[INFO] 10.244.0.4:53171 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000115977s
	[INFO] 10.244.0.4:46499 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000151186s
	[INFO] 10.244.1.2:46474 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000192738s
	[INFO] 10.244.1.2:42611 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000163773s
	[INFO] 10.244.0.4:56252 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000104237s
	[INFO] 10.244.0.4:53725 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000119201s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=2003&timeout=8m1s&timeoutSeconds=481&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: unexpected EOF
	
	
	==> coredns [6ea3f1672f91dabee2d9f8605538a8f3683ca4391c2c9deac554eacfddd243bb] <==
	[INFO] plugin/kubernetes: Trace[2007680759]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (06-Aug-2024 07:35:35.106) (total time: 10001ms):
	Trace[2007680759]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (07:35:45.107)
	Trace[2007680759]: [10.001373038s] [10.001373038s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:40116->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[987296960]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (06-Aug-2024 07:35:43.676) (total time: 10206ms):
	Trace[987296960]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:40116->10.96.0.1:443: read: connection reset by peer 10206ms (07:35:53.882)
	Trace[987296960]: [10.206694555s] [10.206694555s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:40116->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [9529a894fdd4b09b00a809dc3ad326ff8ab57eba9ae9438200069348b4426b19] <==
	Trace[2091369959]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10000ms (07:35:43.988)
	Trace[2091369959]: [10.000899279s] [10.000899279s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:42182->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:42182->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:39732->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:39732->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:42184->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[558389846]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (06-Aug-2024 07:35:43.774) (total time: 10109ms):
	Trace[558389846]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:42184->10.96.0.1:443: read: connection reset by peer 10109ms (07:35:53.883)
	Trace[558389846]: [10.109294055s] [10.109294055s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:42184->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [c60313490ee322d54c5b3908e628bff77d0ca10249f7253eb9cbd44745a805af] <==
	[INFO] 10.244.0.4:40510 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000107856s
	[INFO] 10.244.0.4:53202 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001964559s
	[INFO] 10.244.0.4:37382 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000120987s
	[INFO] 10.244.0.4:55432 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000068014s
	[INFO] 10.244.2.2:55791 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000121334s
	[INFO] 10.244.2.2:41409 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000170119s
	[INFO] 10.244.1.2:38348 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000159643s
	[INFO] 10.244.1.2:59261 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000120304s
	[INFO] 10.244.1.2:34766 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001388s
	[INFO] 10.244.1.2:51650 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00006659s
	[INFO] 10.244.0.4:60377 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000100376s
	[INFO] 10.244.0.4:36616 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000046708s
	[INFO] 10.244.2.2:40205 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000126264s
	[INFO] 10.244.2.2:43822 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000154718s
	[INFO] 10.244.2.2:55803 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000127422s
	[INFO] 10.244.2.2:36078 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000107543s
	[INFO] 10.244.1.2:57979 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000156818s
	[INFO] 10.244.1.2:46537 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000172879s
	[INFO] 10.244.0.4:53503 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000186753s
	[INFO] 10.244.0.4:59777 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000063489s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	
	
	==> describe nodes <==
	Name:               ha-118173
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-118173
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e92cb06692f5ea1ba801d10d148e5e92e807f9c8
	                    minikube.k8s.io/name=ha-118173
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_06T07_23_01_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 06 Aug 2024 07:22:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-118173
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 06 Aug 2024 07:40:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 06 Aug 2024 07:36:11 +0000   Tue, 06 Aug 2024 07:22:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 06 Aug 2024 07:36:11 +0000   Tue, 06 Aug 2024 07:22:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 06 Aug 2024 07:36:11 +0000   Tue, 06 Aug 2024 07:22:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 06 Aug 2024 07:36:11 +0000   Tue, 06 Aug 2024 07:23:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.164
	  Hostname:    ha-118173
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 bc4f3dc5ed054dcb94a91cb696ade310
	  System UUID:                bc4f3dc5-ed05-4dcb-94a9-1cb696ade310
	  Boot ID:                    34e88bd3-feb2-44fb-9f4d-def7e685eb93
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-rml5d              0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 coredns-7db6d8ff4d-t24ww             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     17m
	  kube-system                 coredns-7db6d8ff4d-zscz4             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     17m
	  kube-system                 etcd-ha-118173                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         17m
	  kube-system                 kindnet-n7jmb                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      17m
	  kube-system                 kube-apiserver-ha-118173             250m (12%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-controller-manager-ha-118173    200m (10%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-proxy-ksm6v                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-scheduler-ha-118173             100m (5%)     0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-vip-ha-118173                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m53s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 4m26s                  kube-proxy       
	  Normal   Starting                 17m                    kube-proxy       
	  Normal   NodeHasSufficientMemory  17m                    kubelet          Node ha-118173 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  17m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 17m                    kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  17m                    kubelet          Node ha-118173 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    17m                    kubelet          Node ha-118173 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     17m                    kubelet          Node ha-118173 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           17m                    node-controller  Node ha-118173 event: Registered Node ha-118173 in Controller
	  Normal   NodeReady                17m                    kubelet          Node ha-118173 status is now: NodeReady
	  Normal   RegisteredNode           15m                    node-controller  Node ha-118173 event: Registered Node ha-118173 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-118173 event: Registered Node ha-118173 in Controller
	  Warning  ContainerGCFailed        5m38s (x2 over 6m38s)  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           4m20s                  node-controller  Node ha-118173 event: Registered Node ha-118173 in Controller
	  Normal   RegisteredNode           4m16s                  node-controller  Node ha-118173 event: Registered Node ha-118173 in Controller
	  Normal   RegisteredNode           3m10s                  node-controller  Node ha-118173 event: Registered Node ha-118173 in Controller
	
	
	Name:               ha-118173-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-118173-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e92cb06692f5ea1ba801d10d148e5e92e807f9c8
	                    minikube.k8s.io/name=ha-118173
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_06T07_25_16_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 06 Aug 2024 07:25:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-118173-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 06 Aug 2024 07:40:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 06 Aug 2024 07:39:13 +0000   Tue, 06 Aug 2024 07:39:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 06 Aug 2024 07:39:13 +0000   Tue, 06 Aug 2024 07:39:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 06 Aug 2024 07:39:13 +0000   Tue, 06 Aug 2024 07:39:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 06 Aug 2024 07:39:13 +0000   Tue, 06 Aug 2024 07:39:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.203
	  Hostname:    ha-118173-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 1d472203b4d8448095b6313de6675a72
	  System UUID:                1d472203-b4d8-4480-95b6-313de6675a72
	  Boot ID:                    66643318-fea9-4928-bcf3-6d8a09896d20
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-j9d92                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 etcd-ha-118173-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kindnet-z9w6l                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      15m
	  kube-system                 kube-apiserver-ha-118173-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-ha-118173-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-k8sq6                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-118173-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-vip-ha-118173-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 15m                    kube-proxy       
	  Normal  Starting                 4m20s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  15m (x8 over 15m)      kubelet          Node ha-118173-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     15m (x7 over 15m)      kubelet          Node ha-118173-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  15m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    15m (x8 over 15m)      kubelet          Node ha-118173-m02 status is now: NodeHasNoDiskPressure
	  Normal  RegisteredNode           15m                    node-controller  Node ha-118173-m02 event: Registered Node ha-118173-m02 in Controller
	  Normal  RegisteredNode           15m                    node-controller  Node ha-118173-m02 event: Registered Node ha-118173-m02 in Controller
	  Normal  RegisteredNode           13m                    node-controller  Node ha-118173-m02 event: Registered Node ha-118173-m02 in Controller
	  Normal  NodeNotReady             11m                    node-controller  Node ha-118173-m02 status is now: NodeNotReady
	  Normal  NodeHasNoDiskPressure    4m52s (x8 over 4m52s)  kubelet          Node ha-118173-m02 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 4m52s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m52s (x8 over 4m52s)  kubelet          Node ha-118173-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     4m52s (x7 over 4m52s)  kubelet          Node ha-118173-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m52s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m20s                  node-controller  Node ha-118173-m02 event: Registered Node ha-118173-m02 in Controller
	  Normal  RegisteredNode           4m16s                  node-controller  Node ha-118173-m02 event: Registered Node ha-118173-m02 in Controller
	  Normal  RegisteredNode           3m10s                  node-controller  Node ha-118173-m02 event: Registered Node ha-118173-m02 in Controller
	  Normal  NodeNotReady             110s                   node-controller  Node ha-118173-m02 status is now: NodeNotReady
	
	
	Name:               ha-118173-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-118173-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e92cb06692f5ea1ba801d10d148e5e92e807f9c8
	                    minikube.k8s.io/name=ha-118173
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_06T07_27_39_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 06 Aug 2024 07:27:38 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-118173-m04
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 06 Aug 2024 07:38:10 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Tue, 06 Aug 2024 07:37:50 +0000   Tue, 06 Aug 2024 07:38:53 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Tue, 06 Aug 2024 07:37:50 +0000   Tue, 06 Aug 2024 07:38:53 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Tue, 06 Aug 2024 07:37:50 +0000   Tue, 06 Aug 2024 07:38:53 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Tue, 06 Aug 2024 07:37:50 +0000   Tue, 06 Aug 2024 07:38:53 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.107
	  Hostname:    ha-118173-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a0510d6c5dda47be93ffda6ba4fd5419
	  System UUID:                a0510d6c-5dda-47be-93ff-da6ba4fd5419
	  Boot ID:                    c2e1f570-28d9-485d-a15a-e0d3f781139a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-xwdms    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m38s
	  kube-system                 kindnet-97v8n              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-proxy-q4x6p           0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 12m                    kube-proxy       
	  Normal   Starting                 2m43s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  13m (x2 over 13m)      kubelet          Node ha-118173-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m (x2 over 13m)      kubelet          Node ha-118173-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13m (x2 over 13m)      kubelet          Node ha-118173-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  13m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           12m                    node-controller  Node ha-118173-m04 event: Registered Node ha-118173-m04 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-118173-m04 event: Registered Node ha-118173-m04 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-118173-m04 event: Registered Node ha-118173-m04 in Controller
	  Normal   NodeReady                12m                    kubelet          Node ha-118173-m04 status is now: NodeReady
	  Normal   RegisteredNode           4m20s                  node-controller  Node ha-118173-m04 event: Registered Node ha-118173-m04 in Controller
	  Normal   RegisteredNode           4m16s                  node-controller  Node ha-118173-m04 event: Registered Node ha-118173-m04 in Controller
	  Normal   NodeNotReady             3m40s                  node-controller  Node ha-118173-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           3m10s                  node-controller  Node ha-118173-m04 event: Registered Node ha-118173-m04 in Controller
	  Normal   Starting                 2m48s                  kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  2m48s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  2m48s (x2 over 2m48s)  kubelet          Node ha-118173-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m48s (x2 over 2m48s)  kubelet          Node ha-118173-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m48s (x2 over 2m48s)  kubelet          Node ha-118173-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 2m48s                  kubelet          Node ha-118173-m04 has been rebooted, boot id: c2e1f570-28d9-485d-a15a-e0d3f781139a
	  Normal   NodeReady                2m48s                  kubelet          Node ha-118173-m04 status is now: NodeReady
	  Normal   NodeNotReady             105s                   node-controller  Node ha-118173-m04 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +9.212519] systemd-fstab-generator[593]: Ignoring "noauto" option for root device
	[  +0.057280] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.055993] systemd-fstab-generator[605]: Ignoring "noauto" option for root device
	[  +0.205541] systemd-fstab-generator[619]: Ignoring "noauto" option for root device
	[  +0.119779] systemd-fstab-generator[631]: Ignoring "noauto" option for root device
	[  +0.271137] systemd-fstab-generator[661]: Ignoring "noauto" option for root device
	[  +4.288191] systemd-fstab-generator[759]: Ignoring "noauto" option for root device
	[  +0.057342] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.007696] systemd-fstab-generator[935]: Ignoring "noauto" option for root device
	[  +1.004774] kauditd_printk_skb: 62 callbacks suppressed
	[  +5.981141] systemd-fstab-generator[1356]: Ignoring "noauto" option for root device
	[  +0.087170] kauditd_printk_skb: 35 callbacks suppressed
	[Aug 6 07:23] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.080054] kauditd_printk_skb: 35 callbacks suppressed
	[Aug 6 07:25] kauditd_printk_skb: 24 callbacks suppressed
	[Aug 6 07:32] kauditd_printk_skb: 1 callbacks suppressed
	[Aug 6 07:35] systemd-fstab-generator[3702]: Ignoring "noauto" option for root device
	[  +0.148178] systemd-fstab-generator[3714]: Ignoring "noauto" option for root device
	[  +0.200114] systemd-fstab-generator[3728]: Ignoring "noauto" option for root device
	[  +0.158270] systemd-fstab-generator[3740]: Ignoring "noauto" option for root device
	[  +0.303387] systemd-fstab-generator[3768]: Ignoring "noauto" option for root device
	[  +3.658124] systemd-fstab-generator[3873]: Ignoring "noauto" option for root device
	[  +5.822187] kauditd_printk_skb: 122 callbacks suppressed
	[ +13.107609] kauditd_printk_skb: 86 callbacks suppressed
	[Aug 6 07:36] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> etcd [306b97b37955570f0a18d38ff1afab7bb09e01c47a3418dba2aeed0f00d6052d] <==
	2024/08/06 07:33:46 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/08/06 07:33:46 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/08/06 07:33:46 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/08/06 07:33:46 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-08-06T07:33:46.466511Z","caller":"etcdserver/v3_server.go:897","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":6999316365728003832,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-08-06T07:33:46.530482Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.164:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-06T07:33:46.530523Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.164:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-06T07:33:46.530683Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"6a00153c0a3e6122","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-08-06T07:33:46.530894Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"9fed855c7863c2a"}
	{"level":"info","ts":"2024-08-06T07:33:46.530918Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"9fed855c7863c2a"}
	{"level":"info","ts":"2024-08-06T07:33:46.530944Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"9fed855c7863c2a"}
	{"level":"info","ts":"2024-08-06T07:33:46.531049Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"6a00153c0a3e6122","remote-peer-id":"9fed855c7863c2a"}
	{"level":"info","ts":"2024-08-06T07:33:46.531103Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"6a00153c0a3e6122","remote-peer-id":"9fed855c7863c2a"}
	{"level":"info","ts":"2024-08-06T07:33:46.531135Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"6a00153c0a3e6122","remote-peer-id":"9fed855c7863c2a"}
	{"level":"info","ts":"2024-08-06T07:33:46.531146Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"9fed855c7863c2a"}
	{"level":"info","ts":"2024-08-06T07:33:46.531152Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"7c092d3c19282747"}
	{"level":"info","ts":"2024-08-06T07:33:46.53116Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"7c092d3c19282747"}
	{"level":"info","ts":"2024-08-06T07:33:46.531209Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"7c092d3c19282747"}
	{"level":"info","ts":"2024-08-06T07:33:46.531319Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"6a00153c0a3e6122","remote-peer-id":"7c092d3c19282747"}
	{"level":"info","ts":"2024-08-06T07:33:46.531412Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"6a00153c0a3e6122","remote-peer-id":"7c092d3c19282747"}
	{"level":"info","ts":"2024-08-06T07:33:46.53149Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"6a00153c0a3e6122","remote-peer-id":"7c092d3c19282747"}
	{"level":"info","ts":"2024-08-06T07:33:46.531532Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"7c092d3c19282747"}
	{"level":"info","ts":"2024-08-06T07:33:46.535372Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.164:2380"}
	{"level":"info","ts":"2024-08-06T07:33:46.535602Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.164:2380"}
	{"level":"info","ts":"2024-08-06T07:33:46.535667Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-118173","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.164:2380"],"advertise-client-urls":["https://192.168.39.164:2379"]}
	
	
	==> etcd [eae811e3e34734c78cc8f46ad9c12e0bde5ad445281f50029620e65e67b92b11] <==
	{"level":"info","ts":"2024-08-06T07:37:10.730872Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"6a00153c0a3e6122","remote-peer-id":"7c092d3c19282747"}
	{"level":"info","ts":"2024-08-06T07:37:10.736248Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"6a00153c0a3e6122","remote-peer-id":"7c092d3c19282747"}
	{"level":"info","ts":"2024-08-06T07:37:10.738053Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"6a00153c0a3e6122","to":"7c092d3c19282747","stream-type":"stream Message"}
	{"level":"info","ts":"2024-08-06T07:37:10.738182Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"6a00153c0a3e6122","remote-peer-id":"7c092d3c19282747"}
	{"level":"info","ts":"2024-08-06T07:37:11.732994Z","caller":"traceutil/trace.go:171","msg":"trace[491905090] transaction","detail":"{read_only:false; response_revision:2530; number_of_response:1; }","duration":"115.088451ms","start":"2024-08-06T07:37:11.617855Z","end":"2024-08-06T07:37:11.732943Z","steps":["trace[491905090] 'process raft request'  (duration: 114.9643ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-06T07:37:14.672959Z","caller":"traceutil/trace.go:171","msg":"trace[459309595] transaction","detail":"{read_only:false; response_revision:2542; number_of_response:1; }","duration":"147.393069ms","start":"2024-08-06T07:37:14.525535Z","end":"2024-08-06T07:37:14.672928Z","steps":["trace[459309595] 'process raft request'  (duration: 142.306514ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-06T07:38:03.886192Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"192.168.39.5:47570","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2024-08-06T07:38:03.915058Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"192.168.39.5:47584","server-name":"","error":"EOF"}
	{"level":"info","ts":"2024-08-06T07:38:03.931233Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6a00153c0a3e6122 switched to configuration voters=(720250853357141034 7638128315634442530)"}
	{"level":"info","ts":"2024-08-06T07:38:03.93408Z","caller":"membership/cluster.go:472","msg":"removed member","cluster-id":"ae46f2aa0c35daf3","local-member-id":"6a00153c0a3e6122","removed-remote-peer-id":"7c092d3c19282747","removed-remote-peer-urls":["https://192.168.39.5:2380"]}
	{"level":"info","ts":"2024-08-06T07:38:03.934167Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"7c092d3c19282747"}
	{"level":"warn","ts":"2024-08-06T07:38:03.93446Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"7c092d3c19282747"}
	{"level":"info","ts":"2024-08-06T07:38:03.934626Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"7c092d3c19282747"}
	{"level":"warn","ts":"2024-08-06T07:38:03.935411Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"7c092d3c19282747"}
	{"level":"info","ts":"2024-08-06T07:38:03.935464Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"7c092d3c19282747"}
	{"level":"info","ts":"2024-08-06T07:38:03.935528Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"6a00153c0a3e6122","remote-peer-id":"7c092d3c19282747"}
	{"level":"warn","ts":"2024-08-06T07:38:03.936074Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"6a00153c0a3e6122","remote-peer-id":"7c092d3c19282747","error":"context canceled"}
	{"level":"warn","ts":"2024-08-06T07:38:03.936144Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"7c092d3c19282747","error":"failed to read 7c092d3c19282747 on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2024-08-06T07:38:03.936184Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"6a00153c0a3e6122","remote-peer-id":"7c092d3c19282747"}
	{"level":"warn","ts":"2024-08-06T07:38:03.936408Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"6a00153c0a3e6122","remote-peer-id":"7c092d3c19282747","error":"context canceled"}
	{"level":"info","ts":"2024-08-06T07:38:03.936437Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"6a00153c0a3e6122","remote-peer-id":"7c092d3c19282747"}
	{"level":"info","ts":"2024-08-06T07:38:03.93646Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"7c092d3c19282747"}
	{"level":"info","ts":"2024-08-06T07:38:03.936476Z","caller":"rafthttp/transport.go:355","msg":"removed remote peer","local-member-id":"6a00153c0a3e6122","removed-remote-peer-id":"7c092d3c19282747"}
	{"level":"warn","ts":"2024-08-06T07:38:03.950129Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"6a00153c0a3e6122","remote-peer-id-stream-handler":"6a00153c0a3e6122","remote-peer-id-from":"7c092d3c19282747"}
	{"level":"warn","ts":"2024-08-06T07:38:03.956644Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"6a00153c0a3e6122","remote-peer-id-stream-handler":"6a00153c0a3e6122","remote-peer-id-from":"7c092d3c19282747"}
	
	
	==> kernel <==
	 07:40:38 up 18 min,  0 users,  load average: 0.29, 0.45, 0.35
	Linux ha-118173 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [b8f5ae0da505fc7f79225ba7115c83b9e909128bc28df767fce1fc3ba7a01392] <==
	I0806 07:33:19.272783       1 main.go:295] Handling node with IPs: map[192.168.39.164:{}]
	I0806 07:33:19.272985       1 main.go:299] handling current node
	I0806 07:33:19.273051       1 main.go:295] Handling node with IPs: map[192.168.39.203:{}]
	I0806 07:33:19.273071       1 main.go:322] Node ha-118173-m02 has CIDR [10.244.1.0/24] 
	I0806 07:33:19.273255       1 main.go:295] Handling node with IPs: map[192.168.39.5:{}]
	I0806 07:33:19.273279       1 main.go:322] Node ha-118173-m03 has CIDR [10.244.2.0/24] 
	I0806 07:33:19.273350       1 main.go:295] Handling node with IPs: map[192.168.39.107:{}]
	I0806 07:33:19.273369       1 main.go:322] Node ha-118173-m04 has CIDR [10.244.3.0/24] 
	I0806 07:33:29.275367       1 main.go:295] Handling node with IPs: map[192.168.39.164:{}]
	I0806 07:33:29.275414       1 main.go:299] handling current node
	I0806 07:33:29.275429       1 main.go:295] Handling node with IPs: map[192.168.39.203:{}]
	I0806 07:33:29.275434       1 main.go:322] Node ha-118173-m02 has CIDR [10.244.1.0/24] 
	I0806 07:33:29.275664       1 main.go:295] Handling node with IPs: map[192.168.39.5:{}]
	I0806 07:33:29.275695       1 main.go:322] Node ha-118173-m03 has CIDR [10.244.2.0/24] 
	I0806 07:33:29.275759       1 main.go:295] Handling node with IPs: map[192.168.39.107:{}]
	I0806 07:33:29.275765       1 main.go:322] Node ha-118173-m04 has CIDR [10.244.3.0/24] 
	I0806 07:33:39.271928       1 main.go:295] Handling node with IPs: map[192.168.39.5:{}]
	I0806 07:33:39.272031       1 main.go:322] Node ha-118173-m03 has CIDR [10.244.2.0/24] 
	I0806 07:33:39.272297       1 main.go:295] Handling node with IPs: map[192.168.39.107:{}]
	I0806 07:33:39.272348       1 main.go:322] Node ha-118173-m04 has CIDR [10.244.3.0/24] 
	I0806 07:33:39.272420       1 main.go:295] Handling node with IPs: map[192.168.39.164:{}]
	I0806 07:33:39.272439       1 main.go:299] handling current node
	I0806 07:33:39.272461       1 main.go:295] Handling node with IPs: map[192.168.39.203:{}]
	I0806 07:33:39.272479       1 main.go:322] Node ha-118173-m02 has CIDR [10.244.1.0/24] 
	E0806 07:33:39.674227       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=2011&timeout=5m45s&timeoutSeconds=345&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=7, ErrCode=NO_ERROR, debug=""
	
	
	==> kindnet [e67330d376d276df974f78a481179786f12a17135fd068411ad63be616869e1e] <==
	I0806 07:39:49.898848       1 main.go:322] Node ha-118173-m04 has CIDR [10.244.3.0/24] 
	I0806 07:39:59.906202       1 main.go:295] Handling node with IPs: map[192.168.39.164:{}]
	I0806 07:39:59.906310       1 main.go:299] handling current node
	I0806 07:39:59.906338       1 main.go:295] Handling node with IPs: map[192.168.39.203:{}]
	I0806 07:39:59.906356       1 main.go:322] Node ha-118173-m02 has CIDR [10.244.1.0/24] 
	I0806 07:39:59.906519       1 main.go:295] Handling node with IPs: map[192.168.39.107:{}]
	I0806 07:39:59.906541       1 main.go:322] Node ha-118173-m04 has CIDR [10.244.3.0/24] 
	I0806 07:40:09.905759       1 main.go:295] Handling node with IPs: map[192.168.39.164:{}]
	I0806 07:40:09.905813       1 main.go:299] handling current node
	I0806 07:40:09.905834       1 main.go:295] Handling node with IPs: map[192.168.39.203:{}]
	I0806 07:40:09.905840       1 main.go:322] Node ha-118173-m02 has CIDR [10.244.1.0/24] 
	I0806 07:40:09.906007       1 main.go:295] Handling node with IPs: map[192.168.39.107:{}]
	I0806 07:40:09.906028       1 main.go:322] Node ha-118173-m04 has CIDR [10.244.3.0/24] 
	I0806 07:40:19.898313       1 main.go:295] Handling node with IPs: map[192.168.39.107:{}]
	I0806 07:40:19.898417       1 main.go:322] Node ha-118173-m04 has CIDR [10.244.3.0/24] 
	I0806 07:40:19.898644       1 main.go:295] Handling node with IPs: map[192.168.39.164:{}]
	I0806 07:40:19.898696       1 main.go:299] handling current node
	I0806 07:40:19.898728       1 main.go:295] Handling node with IPs: map[192.168.39.203:{}]
	I0806 07:40:19.898746       1 main.go:322] Node ha-118173-m02 has CIDR [10.244.1.0/24] 
	I0806 07:40:29.898716       1 main.go:295] Handling node with IPs: map[192.168.39.164:{}]
	I0806 07:40:29.898878       1 main.go:299] handling current node
	I0806 07:40:29.898969       1 main.go:295] Handling node with IPs: map[192.168.39.203:{}]
	I0806 07:40:29.899017       1 main.go:322] Node ha-118173-m02 has CIDR [10.244.1.0/24] 
	I0806 07:40:29.899224       1 main.go:295] Handling node with IPs: map[192.168.39.107:{}]
	I0806 07:40:29.899277       1 main.go:322] Node ha-118173-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [2dd16c152a6e76f97a41f8091857757846cf00e2b45fbebefd70bc7191f71d6d] <==
	I0806 07:35:29.440717       1 options.go:221] external host was not specified, using 192.168.39.164
	I0806 07:35:29.445460       1 server.go:148] Version: v1.30.3
	I0806 07:35:29.445612       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0806 07:35:30.090974       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0806 07:35:30.106678       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0806 07:35:30.114996       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0806 07:35:30.115034       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0806 07:35:30.115299       1 instance.go:299] Using reconciler: lease
	W0806 07:35:50.088335       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0806 07:35:50.090452       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	W0806 07:35:50.118761       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F0806 07:35:50.118921       1 instance.go:292] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-apiserver [7ebb3ef02d9872250e0c4614f3552d20ba04a677b81c274d9cf9061a86d7c9ce] <==
	I0806 07:36:10.315347       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0806 07:36:10.342879       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0806 07:36:10.342915       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0806 07:36:10.389931       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0806 07:36:10.395898       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0806 07:36:10.395943       1 policy_source.go:224] refreshing policies
	I0806 07:36:10.410176       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0806 07:36:10.415443       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0806 07:36:10.416772       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0806 07:36:10.418085       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0806 07:36:10.416791       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0806 07:36:10.416951       1 shared_informer.go:320] Caches are synced for configmaps
	I0806 07:36:10.424294       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0806 07:36:10.431393       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	W0806 07:36:10.435092       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.203 192.168.39.5]
	I0806 07:36:10.436780       1 controller.go:615] quota admission added evaluator for: endpoints
	I0806 07:36:10.443986       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0806 07:36:10.444232       1 aggregator.go:165] initial CRD sync complete...
	I0806 07:36:10.444331       1 autoregister_controller.go:141] Starting autoregister controller
	I0806 07:36:10.444473       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0806 07:36:10.444624       1 cache.go:39] Caches are synced for autoregister controller
	I0806 07:36:10.454487       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0806 07:36:10.458176       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0806 07:36:11.321080       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0806 07:36:11.675689       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.164 192.168.39.203 192.168.39.5]
	
	
	==> kube-controller-manager [0280e428feb3f49d39e63996a569fe8099b785732ee02450df33eb25a8a39c88] <==
	I0806 07:35:30.038688       1 serving.go:380] Generated self-signed cert in-memory
	I0806 07:35:30.388338       1 controllermanager.go:189] "Starting" version="v1.30.3"
	I0806 07:35:30.388438       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0806 07:35:30.392013       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0806 07:35:30.392884       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0806 07:35:30.393097       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0806 07:35:30.393211       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E0806 07:35:51.125837       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.164:8443/healthz\": dial tcp 192.168.39.164:8443: connect: connection refused"
	
	
	==> kube-controller-manager [d8e645fa59f15ec29730259bc834efd2b4740bd0cf9d98973d43ac2f3e60f8eb] <==
	I0806 07:38:02.034827       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="102.127µs"
	I0806 07:38:02.732614       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="64.416µs"
	I0806 07:38:03.082545       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="122.605µs"
	I0806 07:38:03.106937       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="43.119µs"
	I0806 07:38:03.117056       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="108.766µs"
	I0806 07:38:06.014601       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.612346ms"
	I0806 07:38:06.015787       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="62.876µs"
	I0806 07:38:15.464651       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-118173-m04"
	E0806 07:38:22.911522       1 gc_controller.go:153] "Failed to get node" err="node \"ha-118173-m03\" not found" logger="pod-garbage-collector-controller" node="ha-118173-m03"
	E0806 07:38:22.911613       1 gc_controller.go:153] "Failed to get node" err="node \"ha-118173-m03\" not found" logger="pod-garbage-collector-controller" node="ha-118173-m03"
	E0806 07:38:22.911620       1 gc_controller.go:153] "Failed to get node" err="node \"ha-118173-m03\" not found" logger="pod-garbage-collector-controller" node="ha-118173-m03"
	E0806 07:38:22.911626       1 gc_controller.go:153] "Failed to get node" err="node \"ha-118173-m03\" not found" logger="pod-garbage-collector-controller" node="ha-118173-m03"
	E0806 07:38:22.911630       1 gc_controller.go:153] "Failed to get node" err="node \"ha-118173-m03\" not found" logger="pod-garbage-collector-controller" node="ha-118173-m03"
	E0806 07:38:42.912245       1 gc_controller.go:153] "Failed to get node" err="node \"ha-118173-m03\" not found" logger="pod-garbage-collector-controller" node="ha-118173-m03"
	E0806 07:38:42.912293       1 gc_controller.go:153] "Failed to get node" err="node \"ha-118173-m03\" not found" logger="pod-garbage-collector-controller" node="ha-118173-m03"
	E0806 07:38:42.912301       1 gc_controller.go:153] "Failed to get node" err="node \"ha-118173-m03\" not found" logger="pod-garbage-collector-controller" node="ha-118173-m03"
	E0806 07:38:42.912317       1 gc_controller.go:153] "Failed to get node" err="node \"ha-118173-m03\" not found" logger="pod-garbage-collector-controller" node="ha-118173-m03"
	E0806 07:38:42.912323       1 gc_controller.go:153] "Failed to get node" err="node \"ha-118173-m03\" not found" logger="pod-garbage-collector-controller" node="ha-118173-m03"
	I0806 07:38:48.016907       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-118173-m04"
	I0806 07:38:48.153234       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="62.254321ms"
	I0806 07:38:48.153484       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="55.649µs"
	I0806 07:38:53.306123       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="70.340447ms"
	I0806 07:38:53.306406       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="100.59µs"
	I0806 07:39:06.658200       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="14.203668ms"
	I0806 07:39:06.660953       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="69.615µs"
	
	
	==> kube-proxy [bcf0ac6e03c21e2b64b008ca3dcd8293a980e393216a3103acf1a60f8ef1dc3f] <==
	E0806 07:35:31.867482       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-118173\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0806 07:35:34.938013       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-118173\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0806 07:35:38.010294       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-118173\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0806 07:35:44.153963       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-118173\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0806 07:35:53.371962       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-118173\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0806 07:36:11.802035       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-118173\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0806 07:36:11.802120       1 server.go:1032] "Can't determine this node's IP, assuming loopback; if this is incorrect, please set the --bind-address flag"
	I0806 07:36:11.889144       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0806 07:36:11.889275       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0806 07:36:11.889311       1 server_linux.go:165] "Using iptables Proxier"
	I0806 07:36:11.901455       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0806 07:36:11.901878       1 server.go:872] "Version info" version="v1.30.3"
	I0806 07:36:11.902148       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0806 07:36:11.903775       1 config.go:192] "Starting service config controller"
	I0806 07:36:11.903832       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0806 07:36:11.903872       1 config.go:101] "Starting endpoint slice config controller"
	I0806 07:36:11.903889       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0806 07:36:11.904788       1 config.go:319] "Starting node config controller"
	I0806 07:36:11.904831       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0806 07:36:12.003982       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0806 07:36:12.004063       1 shared_informer.go:320] Caches are synced for service config
	I0806 07:36:12.008108       1 shared_informer.go:320] Caches are synced for node config
	W0806 07:39:01.944748       1 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.Node ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0806 07:39:01.944897       1 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0806 07:39:01.944946       1 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.EndpointSlice ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	
	
	==> kube-proxy [e5a5bdf0b2a62163f6aff6abe864eedc47b461e233989f23ef7770c226763325] <==
	W0806 07:32:24.090367       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-118173&resourceVersion=2011": dial tcp 192.168.39.254:8443: connect: no route to host
	E0806 07:32:24.090626       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-118173&resourceVersion=2011": dial tcp 192.168.39.254:8443: connect: no route to host
	E0806 07:32:24.090752       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2003": dial tcp 192.168.39.254:8443: connect: no route to host
	W0806 07:32:30.489974       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2003": dial tcp 192.168.39.254:8443: connect: no route to host
	E0806 07:32:30.490040       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2003": dial tcp 192.168.39.254:8443: connect: no route to host
	W0806 07:32:30.490112       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-118173&resourceVersion=2011": dial tcp 192.168.39.254:8443: connect: no route to host
	E0806 07:32:30.490159       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-118173&resourceVersion=2011": dial tcp 192.168.39.254:8443: connect: no route to host
	W0806 07:32:30.490314       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1976": dial tcp 192.168.39.254:8443: connect: no route to host
	E0806 07:32:30.490511       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1976": dial tcp 192.168.39.254:8443: connect: no route to host
	W0806 07:32:42.906164       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-118173&resourceVersion=2011": dial tcp 192.168.39.254:8443: connect: no route to host
	W0806 07:32:42.907236       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2003": dial tcp 192.168.39.254:8443: connect: no route to host
	E0806 07:32:42.907331       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2003": dial tcp 192.168.39.254:8443: connect: no route to host
	W0806 07:32:42.907493       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1976": dial tcp 192.168.39.254:8443: connect: no route to host
	E0806 07:32:42.907542       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1976": dial tcp 192.168.39.254:8443: connect: no route to host
	E0806 07:32:42.907603       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-118173&resourceVersion=2011": dial tcp 192.168.39.254:8443: connect: no route to host
	W0806 07:32:58.267364       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2003": dial tcp 192.168.39.254:8443: connect: no route to host
	E0806 07:32:58.267537       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2003": dial tcp 192.168.39.254:8443: connect: no route to host
	W0806 07:33:01.339818       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-118173&resourceVersion=2011": dial tcp 192.168.39.254:8443: connect: no route to host
	E0806 07:33:01.339916       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-118173&resourceVersion=2011": dial tcp 192.168.39.254:8443: connect: no route to host
	W0806 07:33:04.411257       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1976": dial tcp 192.168.39.254:8443: connect: no route to host
	E0806 07:33:04.411326       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1976": dial tcp 192.168.39.254:8443: connect: no route to host
	W0806 07:33:32.059472       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-118173&resourceVersion=2011": dial tcp 192.168.39.254:8443: connect: no route to host
	E0806 07:33:32.060366       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-118173&resourceVersion=2011": dial tcp 192.168.39.254:8443: connect: no route to host
	W0806 07:33:44.347617       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1976": dial tcp 192.168.39.254:8443: connect: no route to host
	E0806 07:33:44.347737       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1976": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-scheduler [27f4fdf6fa77a51e1295c12b459cda687ed6ae68875218e24ba7959fb37814ed] <==
	W0806 07:33:42.235539       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0806 07:33:42.235630       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0806 07:33:42.240813       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0806 07:33:42.240915       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0806 07:33:42.414364       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0806 07:33:42.414453       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0806 07:33:42.605282       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0806 07:33:42.605374       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0806 07:33:42.708045       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0806 07:33:42.708181       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0806 07:33:42.786890       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0806 07:33:42.787018       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0806 07:33:42.800242       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0806 07:33:42.800407       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0806 07:33:42.830828       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0806 07:33:42.830934       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0806 07:33:42.890127       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0806 07:33:42.890234       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0806 07:33:42.955486       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0806 07:33:42.955595       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0806 07:33:43.488770       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0806 07:33:43.488816       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0806 07:33:46.179357       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0806 07:33:46.179397       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0806 07:33:46.374456       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [f8c7c569af258485ebf98698eba57ddc482e2819c27995c4da66803dc7a84de4] <==
	W0806 07:36:06.849476       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.164:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.164:8443: connect: connection refused
	E0806 07:36:06.849543       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.39.164:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.164:8443: connect: connection refused
	W0806 07:36:07.196986       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.39.164:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.164:8443: connect: connection refused
	E0806 07:36:07.197093       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.39.164:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.164:8443: connect: connection refused
	W0806 07:36:07.859339       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.164:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.164:8443: connect: connection refused
	E0806 07:36:07.859393       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.39.164:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.164:8443: connect: connection refused
	W0806 07:36:07.993368       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.39.164:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.164:8443: connect: connection refused
	E0806 07:36:07.993495       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.39.164:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.164:8443: connect: connection refused
	W0806 07:36:08.234202       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.164:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.164:8443: connect: connection refused
	E0806 07:36:08.234254       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.164:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.164:8443: connect: connection refused
	W0806 07:36:08.503693       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.164:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.164:8443: connect: connection refused
	E0806 07:36:08.503763       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.39.164:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.164:8443: connect: connection refused
	W0806 07:36:10.347323       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0806 07:36:10.350096       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0806 07:36:10.349956       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0806 07:36:10.350210       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0806 07:36:10.350012       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0806 07:36:10.350285       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0806 07:36:10.350060       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0806 07:36:10.350332       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0806 07:36:22.132414       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0806 07:38:01.992919       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-xwdms\": pod busybox-fc5497c4f-xwdms is already assigned to node \"ha-118173-m04\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-xwdms" node="ha-118173-m04"
	E0806 07:38:01.993101       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 94504aeb-6ad3-4036-905f-a38c98fb8b9d(default/busybox-fc5497c4f-xwdms) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-xwdms"
	E0806 07:38:01.993134       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-xwdms\": pod busybox-fc5497c4f-xwdms is already assigned to node \"ha-118173-m04\"" pod="default/busybox-fc5497c4f-xwdms"
	I0806 07:38:01.993164       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-xwdms" node="ha-118173-m04"
	
	
	==> kubelet <==
	Aug 06 07:36:11 ha-118173 kubelet[1363]: E0806 07:36:11.803802    1363 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?resourceVersion=1952": dial tcp 192.168.39.254:8443: connect: no route to host
	Aug 06 07:36:16 ha-118173 kubelet[1363]: I0806 07:36:16.680890    1363 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox-fc5497c4f-rml5d" podStartSLOduration=557.933328795 podStartE2EDuration="9m20.680857278s" podCreationTimestamp="2024-08-06 07:26:56 +0000 UTC" firstStartedPulling="2024-08-06 07:26:58.899899769 +0000 UTC m=+238.425475882" lastFinishedPulling="2024-08-06 07:27:01.647428245 +0000 UTC m=+241.173004365" observedRunningTime="2024-08-06 07:27:02.645036339 +0000 UTC m=+242.170612471" watchObservedRunningTime="2024-08-06 07:36:16.680857278 +0000 UTC m=+796.206433411"
	Aug 06 07:36:22 ha-118173 kubelet[1363]: I0806 07:36:22.616883    1363 scope.go:117] "RemoveContainer" containerID="4c038e3b4d8c535621dabf1ffa68a5bb717de411bccf8772e655b2c06282edf3"
	Aug 06 07:36:45 ha-118173 kubelet[1363]: I0806 07:36:45.616744    1363 kubelet.go:1917] "Trying to delete pod" pod="kube-system/kube-vip-ha-118173" podUID="594d77d2-83e6-420d-8f0a-f54e9fd704f3"
	Aug 06 07:36:45 ha-118173 kubelet[1363]: I0806 07:36:45.634831    1363 kubelet.go:1922] "Deleted mirror pod because it is outdated" pod="kube-system/kube-vip-ha-118173"
	Aug 06 07:37:00 ha-118173 kubelet[1363]: E0806 07:37:00.649180    1363 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 06 07:37:00 ha-118173 kubelet[1363]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 06 07:37:00 ha-118173 kubelet[1363]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 06 07:37:00 ha-118173 kubelet[1363]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 06 07:37:00 ha-118173 kubelet[1363]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 06 07:38:00 ha-118173 kubelet[1363]: E0806 07:38:00.663542    1363 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 06 07:38:00 ha-118173 kubelet[1363]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 06 07:38:00 ha-118173 kubelet[1363]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 06 07:38:00 ha-118173 kubelet[1363]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 06 07:38:00 ha-118173 kubelet[1363]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 06 07:39:00 ha-118173 kubelet[1363]: E0806 07:39:00.651498    1363 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 06 07:39:00 ha-118173 kubelet[1363]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 06 07:39:00 ha-118173 kubelet[1363]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 06 07:39:00 ha-118173 kubelet[1363]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 06 07:39:00 ha-118173 kubelet[1363]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 06 07:40:00 ha-118173 kubelet[1363]: E0806 07:40:00.648399    1363 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 06 07:40:00 ha-118173 kubelet[1363]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 06 07:40:00 ha-118173 kubelet[1363]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 06 07:40:00 ha-118173 kubelet[1363]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 06 07:40:00 ha-118173 kubelet[1363]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0806 07:40:37.120018   40675 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19370-13057/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-118173 -n ha-118173
helpers_test.go:261: (dbg) Run:  kubectl --context ha-118173 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopCluster (141.63s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (327.81s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-551471
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-551471
E0806 07:56:21.825708   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/functional-811691/client.crt: no such file or directory
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-551471: exit status 82 (2m1.826715243s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-551471-m03"  ...
	* Stopping node "multinode-551471-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:323: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-551471" : exit status 82
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-551471 --wait=true -v=8 --alsologtostderr
E0806 07:59:13.202858   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/addons-505457/client.crt: no such file or directory
E0806 07:59:24.870211   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/functional-811691/client.crt: no such file or directory
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-551471 --wait=true -v=8 --alsologtostderr: (3m23.650197205s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-551471
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-551471 -n multinode-551471
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-551471 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-551471 logs -n 25: (1.563867983s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-551471 ssh -n                                                                 | multinode-551471 | jenkins | v1.33.1 | 06 Aug 24 07:53 UTC | 06 Aug 24 07:53 UTC |
	|         | multinode-551471-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-551471 cp multinode-551471-m02:/home/docker/cp-test.txt                       | multinode-551471 | jenkins | v1.33.1 | 06 Aug 24 07:53 UTC | 06 Aug 24 07:53 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile2280163446/001/cp-test_multinode-551471-m02.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-551471 ssh -n                                                                 | multinode-551471 | jenkins | v1.33.1 | 06 Aug 24 07:53 UTC | 06 Aug 24 07:53 UTC |
	|         | multinode-551471-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-551471 cp multinode-551471-m02:/home/docker/cp-test.txt                       | multinode-551471 | jenkins | v1.33.1 | 06 Aug 24 07:53 UTC | 06 Aug 24 07:53 UTC |
	|         | multinode-551471:/home/docker/cp-test_multinode-551471-m02_multinode-551471.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-551471 ssh -n                                                                 | multinode-551471 | jenkins | v1.33.1 | 06 Aug 24 07:53 UTC | 06 Aug 24 07:53 UTC |
	|         | multinode-551471-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-551471 ssh -n multinode-551471 sudo cat                                       | multinode-551471 | jenkins | v1.33.1 | 06 Aug 24 07:53 UTC | 06 Aug 24 07:53 UTC |
	|         | /home/docker/cp-test_multinode-551471-m02_multinode-551471.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-551471 cp multinode-551471-m02:/home/docker/cp-test.txt                       | multinode-551471 | jenkins | v1.33.1 | 06 Aug 24 07:53 UTC | 06 Aug 24 07:53 UTC |
	|         | multinode-551471-m03:/home/docker/cp-test_multinode-551471-m02_multinode-551471-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-551471 ssh -n                                                                 | multinode-551471 | jenkins | v1.33.1 | 06 Aug 24 07:53 UTC | 06 Aug 24 07:53 UTC |
	|         | multinode-551471-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-551471 ssh -n multinode-551471-m03 sudo cat                                   | multinode-551471 | jenkins | v1.33.1 | 06 Aug 24 07:53 UTC | 06 Aug 24 07:53 UTC |
	|         | /home/docker/cp-test_multinode-551471-m02_multinode-551471-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-551471 cp testdata/cp-test.txt                                                | multinode-551471 | jenkins | v1.33.1 | 06 Aug 24 07:53 UTC | 06 Aug 24 07:53 UTC |
	|         | multinode-551471-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-551471 ssh -n                                                                 | multinode-551471 | jenkins | v1.33.1 | 06 Aug 24 07:53 UTC | 06 Aug 24 07:53 UTC |
	|         | multinode-551471-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-551471 cp multinode-551471-m03:/home/docker/cp-test.txt                       | multinode-551471 | jenkins | v1.33.1 | 06 Aug 24 07:53 UTC | 06 Aug 24 07:53 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile2280163446/001/cp-test_multinode-551471-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-551471 ssh -n                                                                 | multinode-551471 | jenkins | v1.33.1 | 06 Aug 24 07:53 UTC | 06 Aug 24 07:53 UTC |
	|         | multinode-551471-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-551471 cp multinode-551471-m03:/home/docker/cp-test.txt                       | multinode-551471 | jenkins | v1.33.1 | 06 Aug 24 07:53 UTC | 06 Aug 24 07:53 UTC |
	|         | multinode-551471:/home/docker/cp-test_multinode-551471-m03_multinode-551471.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-551471 ssh -n                                                                 | multinode-551471 | jenkins | v1.33.1 | 06 Aug 24 07:53 UTC | 06 Aug 24 07:53 UTC |
	|         | multinode-551471-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-551471 ssh -n multinode-551471 sudo cat                                       | multinode-551471 | jenkins | v1.33.1 | 06 Aug 24 07:53 UTC | 06 Aug 24 07:53 UTC |
	|         | /home/docker/cp-test_multinode-551471-m03_multinode-551471.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-551471 cp multinode-551471-m03:/home/docker/cp-test.txt                       | multinode-551471 | jenkins | v1.33.1 | 06 Aug 24 07:53 UTC | 06 Aug 24 07:53 UTC |
	|         | multinode-551471-m02:/home/docker/cp-test_multinode-551471-m03_multinode-551471-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-551471 ssh -n                                                                 | multinode-551471 | jenkins | v1.33.1 | 06 Aug 24 07:53 UTC | 06 Aug 24 07:53 UTC |
	|         | multinode-551471-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-551471 ssh -n multinode-551471-m02 sudo cat                                   | multinode-551471 | jenkins | v1.33.1 | 06 Aug 24 07:53 UTC | 06 Aug 24 07:53 UTC |
	|         | /home/docker/cp-test_multinode-551471-m03_multinode-551471-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-551471 node stop m03                                                          | multinode-551471 | jenkins | v1.33.1 | 06 Aug 24 07:53 UTC | 06 Aug 24 07:53 UTC |
	| node    | multinode-551471 node start                                                             | multinode-551471 | jenkins | v1.33.1 | 06 Aug 24 07:53 UTC | 06 Aug 24 07:54 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-551471                                                                | multinode-551471 | jenkins | v1.33.1 | 06 Aug 24 07:54 UTC |                     |
	| stop    | -p multinode-551471                                                                     | multinode-551471 | jenkins | v1.33.1 | 06 Aug 24 07:54 UTC |                     |
	| start   | -p multinode-551471                                                                     | multinode-551471 | jenkins | v1.33.1 | 06 Aug 24 07:56 UTC | 06 Aug 24 08:00 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-551471                                                                | multinode-551471 | jenkins | v1.33.1 | 06 Aug 24 08:00 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/06 07:56:41
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0806 07:56:41.682964   49751 out.go:291] Setting OutFile to fd 1 ...
	I0806 07:56:41.683290   49751 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 07:56:41.683302   49751 out.go:304] Setting ErrFile to fd 2...
	I0806 07:56:41.683307   49751 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 07:56:41.683475   49751 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19370-13057/.minikube/bin
	I0806 07:56:41.684020   49751 out.go:298] Setting JSON to false
	I0806 07:56:41.684974   49751 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5948,"bootTime":1722925054,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0806 07:56:41.685034   49751 start.go:139] virtualization: kvm guest
	I0806 07:56:41.687438   49751 out.go:177] * [multinode-551471] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0806 07:56:41.688902   49751 notify.go:220] Checking for updates...
	I0806 07:56:41.688933   49751 out.go:177]   - MINIKUBE_LOCATION=19370
	I0806 07:56:41.690656   49751 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0806 07:56:41.692250   49751 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19370-13057/kubeconfig
	I0806 07:56:41.693871   49751 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19370-13057/.minikube
	I0806 07:56:41.695379   49751 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0806 07:56:41.696899   49751 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0806 07:56:41.698647   49751 config.go:182] Loaded profile config "multinode-551471": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0806 07:56:41.698780   49751 driver.go:392] Setting default libvirt URI to qemu:///system
	I0806 07:56:41.699223   49751 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:56:41.699279   49751 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:56:41.715418   49751 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45661
	I0806 07:56:41.715800   49751 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:56:41.716348   49751 main.go:141] libmachine: Using API Version  1
	I0806 07:56:41.716395   49751 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:56:41.716905   49751 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:56:41.717151   49751 main.go:141] libmachine: (multinode-551471) Calling .DriverName
	I0806 07:56:41.754160   49751 out.go:177] * Using the kvm2 driver based on existing profile
	I0806 07:56:41.755474   49751 start.go:297] selected driver: kvm2
	I0806 07:56:41.755488   49751 start.go:901] validating driver "kvm2" against &{Name:multinode-551471 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-551471 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.186 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.134 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.189 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 07:56:41.755711   49751 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0806 07:56:41.756162   49751 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 07:56:41.756246   49751 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19370-13057/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0806 07:56:41.772290   49751 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0806 07:56:41.773027   49751 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0806 07:56:41.773115   49751 cni.go:84] Creating CNI manager for ""
	I0806 07:56:41.773143   49751 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0806 07:56:41.773220   49751 start.go:340] cluster config:
	{Name:multinode-551471 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-551471 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.186 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.134 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.189 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 07:56:41.773363   49751 iso.go:125] acquiring lock: {Name:mke58d71de28babe305c5eb0c31c274e9713bb3f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 07:56:41.775244   49751 out.go:177] * Starting "multinode-551471" primary control-plane node in "multinode-551471" cluster
	I0806 07:56:41.776470   49751 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0806 07:56:41.776513   49751 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19370-13057/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0806 07:56:41.776526   49751 cache.go:56] Caching tarball of preloaded images
	I0806 07:56:41.776628   49751 preload.go:172] Found /home/jenkins/minikube-integration/19370-13057/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0806 07:56:41.776641   49751 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0806 07:56:41.776774   49751 profile.go:143] Saving config to /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/multinode-551471/config.json ...
	I0806 07:56:41.776994   49751 start.go:360] acquireMachinesLock for multinode-551471: {Name:mkc602ac9c8cc3360dcc0fcb8104303837aaab61 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0806 07:56:41.777044   49751 start.go:364] duration metric: took 30.41µs to acquireMachinesLock for "multinode-551471"
	I0806 07:56:41.777062   49751 start.go:96] Skipping create...Using existing machine configuration
	I0806 07:56:41.777072   49751 fix.go:54] fixHost starting: 
	I0806 07:56:41.777370   49751 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:56:41.777416   49751 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:56:41.792768   49751 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36409
	I0806 07:56:41.793216   49751 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:56:41.793675   49751 main.go:141] libmachine: Using API Version  1
	I0806 07:56:41.793697   49751 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:56:41.794088   49751 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:56:41.794328   49751 main.go:141] libmachine: (multinode-551471) Calling .DriverName
	I0806 07:56:41.794585   49751 main.go:141] libmachine: (multinode-551471) Calling .GetState
	I0806 07:56:41.796564   49751 fix.go:112] recreateIfNeeded on multinode-551471: state=Running err=<nil>
	W0806 07:56:41.796599   49751 fix.go:138] unexpected machine state, will restart: <nil>
	I0806 07:56:41.798536   49751 out.go:177] * Updating the running kvm2 "multinode-551471" VM ...
	I0806 07:56:41.799758   49751 machine.go:94] provisionDockerMachine start ...
	I0806 07:56:41.799785   49751 main.go:141] libmachine: (multinode-551471) Calling .DriverName
	I0806 07:56:41.800027   49751 main.go:141] libmachine: (multinode-551471) Calling .GetSSHHostname
	I0806 07:56:41.803172   49751 main.go:141] libmachine: (multinode-551471) DBG | domain multinode-551471 has defined MAC address 52:54:00:e9:96:6b in network mk-multinode-551471
	I0806 07:56:41.803608   49751 main.go:141] libmachine: (multinode-551471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:96:6b", ip: ""} in network mk-multinode-551471: {Iface:virbr1 ExpiryTime:2024-08-06 08:51:03 +0000 UTC Type:0 Mac:52:54:00:e9:96:6b Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:multinode-551471 Clientid:01:52:54:00:e9:96:6b}
	I0806 07:56:41.803634   49751 main.go:141] libmachine: (multinode-551471) DBG | domain multinode-551471 has defined IP address 192.168.39.186 and MAC address 52:54:00:e9:96:6b in network mk-multinode-551471
	I0806 07:56:41.803811   49751 main.go:141] libmachine: (multinode-551471) Calling .GetSSHPort
	I0806 07:56:41.803966   49751 main.go:141] libmachine: (multinode-551471) Calling .GetSSHKeyPath
	I0806 07:56:41.804108   49751 main.go:141] libmachine: (multinode-551471) Calling .GetSSHKeyPath
	I0806 07:56:41.804271   49751 main.go:141] libmachine: (multinode-551471) Calling .GetSSHUsername
	I0806 07:56:41.804459   49751 main.go:141] libmachine: Using SSH client type: native
	I0806 07:56:41.804652   49751 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.186 22 <nil> <nil>}
	I0806 07:56:41.804673   49751 main.go:141] libmachine: About to run SSH command:
	hostname
	I0806 07:56:41.914685   49751 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-551471
	
	I0806 07:56:41.914710   49751 main.go:141] libmachine: (multinode-551471) Calling .GetMachineName
	I0806 07:56:41.914978   49751 buildroot.go:166] provisioning hostname "multinode-551471"
	I0806 07:56:41.915002   49751 main.go:141] libmachine: (multinode-551471) Calling .GetMachineName
	I0806 07:56:41.915202   49751 main.go:141] libmachine: (multinode-551471) Calling .GetSSHHostname
	I0806 07:56:41.918165   49751 main.go:141] libmachine: (multinode-551471) DBG | domain multinode-551471 has defined MAC address 52:54:00:e9:96:6b in network mk-multinode-551471
	I0806 07:56:41.918464   49751 main.go:141] libmachine: (multinode-551471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:96:6b", ip: ""} in network mk-multinode-551471: {Iface:virbr1 ExpiryTime:2024-08-06 08:51:03 +0000 UTC Type:0 Mac:52:54:00:e9:96:6b Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:multinode-551471 Clientid:01:52:54:00:e9:96:6b}
	I0806 07:56:41.918508   49751 main.go:141] libmachine: (multinode-551471) DBG | domain multinode-551471 has defined IP address 192.168.39.186 and MAC address 52:54:00:e9:96:6b in network mk-multinode-551471
	I0806 07:56:41.918598   49751 main.go:141] libmachine: (multinode-551471) Calling .GetSSHPort
	I0806 07:56:41.918778   49751 main.go:141] libmachine: (multinode-551471) Calling .GetSSHKeyPath
	I0806 07:56:41.918946   49751 main.go:141] libmachine: (multinode-551471) Calling .GetSSHKeyPath
	I0806 07:56:41.919116   49751 main.go:141] libmachine: (multinode-551471) Calling .GetSSHUsername
	I0806 07:56:41.919273   49751 main.go:141] libmachine: Using SSH client type: native
	I0806 07:56:41.919441   49751 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.186 22 <nil> <nil>}
	I0806 07:56:41.919452   49751 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-551471 && echo "multinode-551471" | sudo tee /etc/hostname
	I0806 07:56:42.037455   49751 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-551471
	
	I0806 07:56:42.037483   49751 main.go:141] libmachine: (multinode-551471) Calling .GetSSHHostname
	I0806 07:56:42.040203   49751 main.go:141] libmachine: (multinode-551471) DBG | domain multinode-551471 has defined MAC address 52:54:00:e9:96:6b in network mk-multinode-551471
	I0806 07:56:42.040630   49751 main.go:141] libmachine: (multinode-551471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:96:6b", ip: ""} in network mk-multinode-551471: {Iface:virbr1 ExpiryTime:2024-08-06 08:51:03 +0000 UTC Type:0 Mac:52:54:00:e9:96:6b Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:multinode-551471 Clientid:01:52:54:00:e9:96:6b}
	I0806 07:56:42.040675   49751 main.go:141] libmachine: (multinode-551471) DBG | domain multinode-551471 has defined IP address 192.168.39.186 and MAC address 52:54:00:e9:96:6b in network mk-multinode-551471
	I0806 07:56:42.040934   49751 main.go:141] libmachine: (multinode-551471) Calling .GetSSHPort
	I0806 07:56:42.041146   49751 main.go:141] libmachine: (multinode-551471) Calling .GetSSHKeyPath
	I0806 07:56:42.041335   49751 main.go:141] libmachine: (multinode-551471) Calling .GetSSHKeyPath
	I0806 07:56:42.041515   49751 main.go:141] libmachine: (multinode-551471) Calling .GetSSHUsername
	I0806 07:56:42.041713   49751 main.go:141] libmachine: Using SSH client type: native
	I0806 07:56:42.041933   49751 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.186 22 <nil> <nil>}
	I0806 07:56:42.041962   49751 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-551471' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-551471/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-551471' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0806 07:56:42.145956   49751 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0806 07:56:42.145991   49751 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19370-13057/.minikube CaCertPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19370-13057/.minikube}
	I0806 07:56:42.146009   49751 buildroot.go:174] setting up certificates
	I0806 07:56:42.146017   49751 provision.go:84] configureAuth start
	I0806 07:56:42.146025   49751 main.go:141] libmachine: (multinode-551471) Calling .GetMachineName
	I0806 07:56:42.146379   49751 main.go:141] libmachine: (multinode-551471) Calling .GetIP
	I0806 07:56:42.148959   49751 main.go:141] libmachine: (multinode-551471) DBG | domain multinode-551471 has defined MAC address 52:54:00:e9:96:6b in network mk-multinode-551471
	I0806 07:56:42.149347   49751 main.go:141] libmachine: (multinode-551471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:96:6b", ip: ""} in network mk-multinode-551471: {Iface:virbr1 ExpiryTime:2024-08-06 08:51:03 +0000 UTC Type:0 Mac:52:54:00:e9:96:6b Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:multinode-551471 Clientid:01:52:54:00:e9:96:6b}
	I0806 07:56:42.149368   49751 main.go:141] libmachine: (multinode-551471) DBG | domain multinode-551471 has defined IP address 192.168.39.186 and MAC address 52:54:00:e9:96:6b in network mk-multinode-551471
	I0806 07:56:42.149526   49751 main.go:141] libmachine: (multinode-551471) Calling .GetSSHHostname
	I0806 07:56:42.151457   49751 main.go:141] libmachine: (multinode-551471) DBG | domain multinode-551471 has defined MAC address 52:54:00:e9:96:6b in network mk-multinode-551471
	I0806 07:56:42.151750   49751 main.go:141] libmachine: (multinode-551471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:96:6b", ip: ""} in network mk-multinode-551471: {Iface:virbr1 ExpiryTime:2024-08-06 08:51:03 +0000 UTC Type:0 Mac:52:54:00:e9:96:6b Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:multinode-551471 Clientid:01:52:54:00:e9:96:6b}
	I0806 07:56:42.151779   49751 main.go:141] libmachine: (multinode-551471) DBG | domain multinode-551471 has defined IP address 192.168.39.186 and MAC address 52:54:00:e9:96:6b in network mk-multinode-551471
	I0806 07:56:42.151919   49751 provision.go:143] copyHostCerts
	I0806 07:56:42.151964   49751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19370-13057/.minikube/ca.pem
	I0806 07:56:42.152014   49751 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-13057/.minikube/ca.pem, removing ...
	I0806 07:56:42.152026   49751 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-13057/.minikube/ca.pem
	I0806 07:56:42.152108   49751 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19370-13057/.minikube/ca.pem (1078 bytes)
	I0806 07:56:42.152200   49751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19370-13057/.minikube/cert.pem
	I0806 07:56:42.152225   49751 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-13057/.minikube/cert.pem, removing ...
	I0806 07:56:42.152232   49751 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-13057/.minikube/cert.pem
	I0806 07:56:42.152274   49751 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19370-13057/.minikube/cert.pem (1123 bytes)
	I0806 07:56:42.152339   49751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19370-13057/.minikube/key.pem
	I0806 07:56:42.152377   49751 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-13057/.minikube/key.pem, removing ...
	I0806 07:56:42.152386   49751 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-13057/.minikube/key.pem
	I0806 07:56:42.152421   49751 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19370-13057/.minikube/key.pem (1675 bytes)
	I0806 07:56:42.152530   49751 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19370-13057/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca-key.pem org=jenkins.multinode-551471 san=[127.0.0.1 192.168.39.186 localhost minikube multinode-551471]
	I0806 07:56:42.349045   49751 provision.go:177] copyRemoteCerts
	I0806 07:56:42.349114   49751 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0806 07:56:42.349144   49751 main.go:141] libmachine: (multinode-551471) Calling .GetSSHHostname
	I0806 07:56:42.352050   49751 main.go:141] libmachine: (multinode-551471) DBG | domain multinode-551471 has defined MAC address 52:54:00:e9:96:6b in network mk-multinode-551471
	I0806 07:56:42.352525   49751 main.go:141] libmachine: (multinode-551471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:96:6b", ip: ""} in network mk-multinode-551471: {Iface:virbr1 ExpiryTime:2024-08-06 08:51:03 +0000 UTC Type:0 Mac:52:54:00:e9:96:6b Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:multinode-551471 Clientid:01:52:54:00:e9:96:6b}
	I0806 07:56:42.352552   49751 main.go:141] libmachine: (multinode-551471) DBG | domain multinode-551471 has defined IP address 192.168.39.186 and MAC address 52:54:00:e9:96:6b in network mk-multinode-551471
	I0806 07:56:42.352748   49751 main.go:141] libmachine: (multinode-551471) Calling .GetSSHPort
	I0806 07:56:42.352993   49751 main.go:141] libmachine: (multinode-551471) Calling .GetSSHKeyPath
	I0806 07:56:42.353169   49751 main.go:141] libmachine: (multinode-551471) Calling .GetSSHUsername
	I0806 07:56:42.353275   49751 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/multinode-551471/id_rsa Username:docker}
	I0806 07:56:42.434787   49751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0806 07:56:42.434873   49751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0806 07:56:42.461022   49751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0806 07:56:42.461085   49751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0806 07:56:42.488595   49751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0806 07:56:42.488692   49751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0806 07:56:42.514567   49751 provision.go:87] duration metric: took 368.540738ms to configureAuth
	I0806 07:56:42.514595   49751 buildroot.go:189] setting minikube options for container-runtime
	I0806 07:56:42.514820   49751 config.go:182] Loaded profile config "multinode-551471": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0806 07:56:42.514901   49751 main.go:141] libmachine: (multinode-551471) Calling .GetSSHHostname
	I0806 07:56:42.518220   49751 main.go:141] libmachine: (multinode-551471) DBG | domain multinode-551471 has defined MAC address 52:54:00:e9:96:6b in network mk-multinode-551471
	I0806 07:56:42.518664   49751 main.go:141] libmachine: (multinode-551471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:96:6b", ip: ""} in network mk-multinode-551471: {Iface:virbr1 ExpiryTime:2024-08-06 08:51:03 +0000 UTC Type:0 Mac:52:54:00:e9:96:6b Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:multinode-551471 Clientid:01:52:54:00:e9:96:6b}
	I0806 07:56:42.518691   49751 main.go:141] libmachine: (multinode-551471) DBG | domain multinode-551471 has defined IP address 192.168.39.186 and MAC address 52:54:00:e9:96:6b in network mk-multinode-551471
	I0806 07:56:42.518888   49751 main.go:141] libmachine: (multinode-551471) Calling .GetSSHPort
	I0806 07:56:42.519098   49751 main.go:141] libmachine: (multinode-551471) Calling .GetSSHKeyPath
	I0806 07:56:42.519258   49751 main.go:141] libmachine: (multinode-551471) Calling .GetSSHKeyPath
	I0806 07:56:42.519444   49751 main.go:141] libmachine: (multinode-551471) Calling .GetSSHUsername
	I0806 07:56:42.519654   49751 main.go:141] libmachine: Using SSH client type: native
	I0806 07:56:42.519871   49751 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.186 22 <nil> <nil>}
	I0806 07:56:42.519892   49751 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0806 07:58:13.440454   49751 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0806 07:58:13.440502   49751 machine.go:97] duration metric: took 1m31.640725322s to provisionDockerMachine
	I0806 07:58:13.440520   49751 start.go:293] postStartSetup for "multinode-551471" (driver="kvm2")
	I0806 07:58:13.440540   49751 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0806 07:58:13.440565   49751 main.go:141] libmachine: (multinode-551471) Calling .DriverName
	I0806 07:58:13.440987   49751 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0806 07:58:13.441020   49751 main.go:141] libmachine: (multinode-551471) Calling .GetSSHHostname
	I0806 07:58:13.444453   49751 main.go:141] libmachine: (multinode-551471) DBG | domain multinode-551471 has defined MAC address 52:54:00:e9:96:6b in network mk-multinode-551471
	I0806 07:58:13.444884   49751 main.go:141] libmachine: (multinode-551471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:96:6b", ip: ""} in network mk-multinode-551471: {Iface:virbr1 ExpiryTime:2024-08-06 08:51:03 +0000 UTC Type:0 Mac:52:54:00:e9:96:6b Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:multinode-551471 Clientid:01:52:54:00:e9:96:6b}
	I0806 07:58:13.444915   49751 main.go:141] libmachine: (multinode-551471) DBG | domain multinode-551471 has defined IP address 192.168.39.186 and MAC address 52:54:00:e9:96:6b in network mk-multinode-551471
	I0806 07:58:13.445111   49751 main.go:141] libmachine: (multinode-551471) Calling .GetSSHPort
	I0806 07:58:13.445362   49751 main.go:141] libmachine: (multinode-551471) Calling .GetSSHKeyPath
	I0806 07:58:13.445567   49751 main.go:141] libmachine: (multinode-551471) Calling .GetSSHUsername
	I0806 07:58:13.445757   49751 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/multinode-551471/id_rsa Username:docker}
	I0806 07:58:13.528642   49751 ssh_runner.go:195] Run: cat /etc/os-release
	I0806 07:58:13.532693   49751 command_runner.go:130] > NAME=Buildroot
	I0806 07:58:13.532718   49751 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0806 07:58:13.532724   49751 command_runner.go:130] > ID=buildroot
	I0806 07:58:13.532740   49751 command_runner.go:130] > VERSION_ID=2023.02.9
	I0806 07:58:13.532748   49751 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0806 07:58:13.532786   49751 info.go:137] Remote host: Buildroot 2023.02.9
	I0806 07:58:13.532802   49751 filesync.go:126] Scanning /home/jenkins/minikube-integration/19370-13057/.minikube/addons for local assets ...
	I0806 07:58:13.532893   49751 filesync.go:126] Scanning /home/jenkins/minikube-integration/19370-13057/.minikube/files for local assets ...
	I0806 07:58:13.532970   49751 filesync.go:149] local asset: /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem -> 202462.pem in /etc/ssl/certs
	I0806 07:58:13.532979   49751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem -> /etc/ssl/certs/202462.pem
	I0806 07:58:13.533062   49751 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0806 07:58:13.542678   49751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem --> /etc/ssl/certs/202462.pem (1708 bytes)
	I0806 07:58:13.568542   49751 start.go:296] duration metric: took 128.007794ms for postStartSetup
	I0806 07:58:13.568609   49751 fix.go:56] duration metric: took 1m31.791538245s for fixHost
	I0806 07:58:13.568638   49751 main.go:141] libmachine: (multinode-551471) Calling .GetSSHHostname
	I0806 07:58:13.571373   49751 main.go:141] libmachine: (multinode-551471) DBG | domain multinode-551471 has defined MAC address 52:54:00:e9:96:6b in network mk-multinode-551471
	I0806 07:58:13.571775   49751 main.go:141] libmachine: (multinode-551471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:96:6b", ip: ""} in network mk-multinode-551471: {Iface:virbr1 ExpiryTime:2024-08-06 08:51:03 +0000 UTC Type:0 Mac:52:54:00:e9:96:6b Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:multinode-551471 Clientid:01:52:54:00:e9:96:6b}
	I0806 07:58:13.571800   49751 main.go:141] libmachine: (multinode-551471) DBG | domain multinode-551471 has defined IP address 192.168.39.186 and MAC address 52:54:00:e9:96:6b in network mk-multinode-551471
	I0806 07:58:13.572073   49751 main.go:141] libmachine: (multinode-551471) Calling .GetSSHPort
	I0806 07:58:13.572320   49751 main.go:141] libmachine: (multinode-551471) Calling .GetSSHKeyPath
	I0806 07:58:13.572587   49751 main.go:141] libmachine: (multinode-551471) Calling .GetSSHKeyPath
	I0806 07:58:13.572736   49751 main.go:141] libmachine: (multinode-551471) Calling .GetSSHUsername
	I0806 07:58:13.573057   49751 main.go:141] libmachine: Using SSH client type: native
	I0806 07:58:13.573236   49751 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.186 22 <nil> <nil>}
	I0806 07:58:13.573251   49751 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0806 07:58:13.677647   49751 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722931093.655562722
	
	I0806 07:58:13.677676   49751 fix.go:216] guest clock: 1722931093.655562722
	I0806 07:58:13.677693   49751 fix.go:229] Guest: 2024-08-06 07:58:13.655562722 +0000 UTC Remote: 2024-08-06 07:58:13.568617668 +0000 UTC m=+91.922074622 (delta=86.945054ms)
	I0806 07:58:13.677750   49751 fix.go:200] guest clock delta is within tolerance: 86.945054ms
	I0806 07:58:13.677762   49751 start.go:83] releasing machines lock for "multinode-551471", held for 1m31.900708135s
	I0806 07:58:13.677803   49751 main.go:141] libmachine: (multinode-551471) Calling .DriverName
	I0806 07:58:13.678083   49751 main.go:141] libmachine: (multinode-551471) Calling .GetIP
	I0806 07:58:13.681138   49751 main.go:141] libmachine: (multinode-551471) DBG | domain multinode-551471 has defined MAC address 52:54:00:e9:96:6b in network mk-multinode-551471
	I0806 07:58:13.681546   49751 main.go:141] libmachine: (multinode-551471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:96:6b", ip: ""} in network mk-multinode-551471: {Iface:virbr1 ExpiryTime:2024-08-06 08:51:03 +0000 UTC Type:0 Mac:52:54:00:e9:96:6b Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:multinode-551471 Clientid:01:52:54:00:e9:96:6b}
	I0806 07:58:13.681574   49751 main.go:141] libmachine: (multinode-551471) DBG | domain multinode-551471 has defined IP address 192.168.39.186 and MAC address 52:54:00:e9:96:6b in network mk-multinode-551471
	I0806 07:58:13.681701   49751 main.go:141] libmachine: (multinode-551471) Calling .DriverName
	I0806 07:58:13.682288   49751 main.go:141] libmachine: (multinode-551471) Calling .DriverName
	I0806 07:58:13.682472   49751 main.go:141] libmachine: (multinode-551471) Calling .DriverName
	I0806 07:58:13.682570   49751 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0806 07:58:13.682615   49751 main.go:141] libmachine: (multinode-551471) Calling .GetSSHHostname
	I0806 07:58:13.682725   49751 ssh_runner.go:195] Run: cat /version.json
	I0806 07:58:13.682758   49751 main.go:141] libmachine: (multinode-551471) Calling .GetSSHHostname
	I0806 07:58:13.685385   49751 main.go:141] libmachine: (multinode-551471) DBG | domain multinode-551471 has defined MAC address 52:54:00:e9:96:6b in network mk-multinode-551471
	I0806 07:58:13.685464   49751 main.go:141] libmachine: (multinode-551471) DBG | domain multinode-551471 has defined MAC address 52:54:00:e9:96:6b in network mk-multinode-551471
	I0806 07:58:13.685707   49751 main.go:141] libmachine: (multinode-551471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:96:6b", ip: ""} in network mk-multinode-551471: {Iface:virbr1 ExpiryTime:2024-08-06 08:51:03 +0000 UTC Type:0 Mac:52:54:00:e9:96:6b Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:multinode-551471 Clientid:01:52:54:00:e9:96:6b}
	I0806 07:58:13.685731   49751 main.go:141] libmachine: (multinode-551471) DBG | domain multinode-551471 has defined IP address 192.168.39.186 and MAC address 52:54:00:e9:96:6b in network mk-multinode-551471
	I0806 07:58:13.685879   49751 main.go:141] libmachine: (multinode-551471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:96:6b", ip: ""} in network mk-multinode-551471: {Iface:virbr1 ExpiryTime:2024-08-06 08:51:03 +0000 UTC Type:0 Mac:52:54:00:e9:96:6b Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:multinode-551471 Clientid:01:52:54:00:e9:96:6b}
	I0806 07:58:13.685890   49751 main.go:141] libmachine: (multinode-551471) Calling .GetSSHPort
	I0806 07:58:13.685909   49751 main.go:141] libmachine: (multinode-551471) DBG | domain multinode-551471 has defined IP address 192.168.39.186 and MAC address 52:54:00:e9:96:6b in network mk-multinode-551471
	I0806 07:58:13.686030   49751 main.go:141] libmachine: (multinode-551471) Calling .GetSSHPort
	I0806 07:58:13.686099   49751 main.go:141] libmachine: (multinode-551471) Calling .GetSSHKeyPath
	I0806 07:58:13.686273   49751 main.go:141] libmachine: (multinode-551471) Calling .GetSSHKeyPath
	I0806 07:58:13.686452   49751 main.go:141] libmachine: (multinode-551471) Calling .GetSSHUsername
	I0806 07:58:13.686463   49751 main.go:141] libmachine: (multinode-551471) Calling .GetSSHUsername
	I0806 07:58:13.686606   49751 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/multinode-551471/id_rsa Username:docker}
	I0806 07:58:13.686664   49751 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/multinode-551471/id_rsa Username:docker}
	I0806 07:58:13.762247   49751 command_runner.go:130] > {"iso_version": "v1.33.1-1722248113-19339", "kicbase_version": "v0.0.44-1721902582-19326", "minikube_version": "v1.33.1", "commit": "b8389556a97747a5bbaa1906d238251ad536d76e"}
	I0806 07:58:13.762474   49751 ssh_runner.go:195] Run: systemctl --version
	I0806 07:58:13.784898   49751 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0806 07:58:13.785667   49751 command_runner.go:130] > systemd 252 (252)
	I0806 07:58:13.785724   49751 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0806 07:58:13.785801   49751 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0806 07:58:13.947440   49751 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0806 07:58:13.955455   49751 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0806 07:58:13.955736   49751 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0806 07:58:13.955815   49751 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0806 07:58:13.968521   49751 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0806 07:58:13.968553   49751 start.go:495] detecting cgroup driver to use...
	I0806 07:58:13.968613   49751 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0806 07:58:13.990927   49751 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0806 07:58:14.006987   49751 docker.go:217] disabling cri-docker service (if available) ...
	I0806 07:58:14.007049   49751 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0806 07:58:14.024014   49751 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0806 07:58:14.040416   49751 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0806 07:58:14.195553   49751 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0806 07:58:14.337404   49751 docker.go:233] disabling docker service ...
	I0806 07:58:14.337486   49751 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0806 07:58:14.356268   49751 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0806 07:58:14.374416   49751 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0806 07:58:14.519097   49751 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0806 07:58:14.669629   49751 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0806 07:58:14.685116   49751 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0806 07:58:14.704986   49751 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0806 07:58:14.705050   49751 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0806 07:58:14.705116   49751 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 07:58:14.717029   49751 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0806 07:58:14.717127   49751 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 07:58:14.729142   49751 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 07:58:14.740593   49751 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 07:58:14.752524   49751 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0806 07:58:14.764629   49751 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 07:58:14.776538   49751 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 07:58:14.788228   49751 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 07:58:14.799904   49751 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0806 07:58:14.810436   49751 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0806 07:58:14.810520   49751 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0806 07:58:14.820939   49751 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 07:58:14.964414   49751 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0806 07:58:21.103667   49751 ssh_runner.go:235] Completed: sudo systemctl restart crio: (6.139212478s)
	I0806 07:58:21.103698   49751 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0806 07:58:21.103753   49751 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0806 07:58:21.109024   49751 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0806 07:58:21.109050   49751 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0806 07:58:21.109059   49751 command_runner.go:130] > Device: 0,22	Inode: 1340        Links: 1
	I0806 07:58:21.109069   49751 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0806 07:58:21.109077   49751 command_runner.go:130] > Access: 2024-08-06 07:58:20.971324906 +0000
	I0806 07:58:21.109086   49751 command_runner.go:130] > Modify: 2024-08-06 07:58:20.971324906 +0000
	I0806 07:58:21.109106   49751 command_runner.go:130] > Change: 2024-08-06 07:58:20.971324906 +0000
	I0806 07:58:21.109118   49751 command_runner.go:130] >  Birth: -
	I0806 07:58:21.109135   49751 start.go:563] Will wait 60s for crictl version
	I0806 07:58:21.109184   49751 ssh_runner.go:195] Run: which crictl
	I0806 07:58:21.113153   49751 command_runner.go:130] > /usr/bin/crictl
	I0806 07:58:21.113232   49751 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0806 07:58:21.156643   49751 command_runner.go:130] > Version:  0.1.0
	I0806 07:58:21.156667   49751 command_runner.go:130] > RuntimeName:  cri-o
	I0806 07:58:21.156675   49751 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0806 07:58:21.156683   49751 command_runner.go:130] > RuntimeApiVersion:  v1
	I0806 07:58:21.156708   49751 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0806 07:58:21.156787   49751 ssh_runner.go:195] Run: crio --version
	I0806 07:58:21.186189   49751 command_runner.go:130] > crio version 1.29.1
	I0806 07:58:21.186216   49751 command_runner.go:130] > Version:        1.29.1
	I0806 07:58:21.186224   49751 command_runner.go:130] > GitCommit:      unknown
	I0806 07:58:21.186232   49751 command_runner.go:130] > GitCommitDate:  unknown
	I0806 07:58:21.186239   49751 command_runner.go:130] > GitTreeState:   clean
	I0806 07:58:21.186249   49751 command_runner.go:130] > BuildDate:      2024-07-29T16:04:01Z
	I0806 07:58:21.186256   49751 command_runner.go:130] > GoVersion:      go1.21.6
	I0806 07:58:21.186264   49751 command_runner.go:130] > Compiler:       gc
	I0806 07:58:21.186271   49751 command_runner.go:130] > Platform:       linux/amd64
	I0806 07:58:21.186275   49751 command_runner.go:130] > Linkmode:       dynamic
	I0806 07:58:21.186279   49751 command_runner.go:130] > BuildTags:      
	I0806 07:58:21.186283   49751 command_runner.go:130] >   containers_image_ostree_stub
	I0806 07:58:21.186288   49751 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0806 07:58:21.186295   49751 command_runner.go:130] >   btrfs_noversion
	I0806 07:58:21.186300   49751 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0806 07:58:21.186305   49751 command_runner.go:130] >   libdm_no_deferred_remove
	I0806 07:58:21.186308   49751 command_runner.go:130] >   seccomp
	I0806 07:58:21.186312   49751 command_runner.go:130] > LDFlags:          unknown
	I0806 07:58:21.186324   49751 command_runner.go:130] > SeccompEnabled:   true
	I0806 07:58:21.186332   49751 command_runner.go:130] > AppArmorEnabled:  false
	I0806 07:58:21.187464   49751 ssh_runner.go:195] Run: crio --version
	I0806 07:58:21.215814   49751 command_runner.go:130] > crio version 1.29.1
	I0806 07:58:21.215838   49751 command_runner.go:130] > Version:        1.29.1
	I0806 07:58:21.215859   49751 command_runner.go:130] > GitCommit:      unknown
	I0806 07:58:21.215865   49751 command_runner.go:130] > GitCommitDate:  unknown
	I0806 07:58:21.215877   49751 command_runner.go:130] > GitTreeState:   clean
	I0806 07:58:21.215886   49751 command_runner.go:130] > BuildDate:      2024-07-29T16:04:01Z
	I0806 07:58:21.215893   49751 command_runner.go:130] > GoVersion:      go1.21.6
	I0806 07:58:21.215899   49751 command_runner.go:130] > Compiler:       gc
	I0806 07:58:21.215905   49751 command_runner.go:130] > Platform:       linux/amd64
	I0806 07:58:21.215913   49751 command_runner.go:130] > Linkmode:       dynamic
	I0806 07:58:21.215917   49751 command_runner.go:130] > BuildTags:      
	I0806 07:58:21.215922   49751 command_runner.go:130] >   containers_image_ostree_stub
	I0806 07:58:21.215927   49751 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0806 07:58:21.215931   49751 command_runner.go:130] >   btrfs_noversion
	I0806 07:58:21.215935   49751 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0806 07:58:21.215940   49751 command_runner.go:130] >   libdm_no_deferred_remove
	I0806 07:58:21.215943   49751 command_runner.go:130] >   seccomp
	I0806 07:58:21.215947   49751 command_runner.go:130] > LDFlags:          unknown
	I0806 07:58:21.215954   49751 command_runner.go:130] > SeccompEnabled:   true
	I0806 07:58:21.215960   49751 command_runner.go:130] > AppArmorEnabled:  false
	I0806 07:58:21.219946   49751 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0806 07:58:21.221262   49751 main.go:141] libmachine: (multinode-551471) Calling .GetIP
	I0806 07:58:21.224168   49751 main.go:141] libmachine: (multinode-551471) DBG | domain multinode-551471 has defined MAC address 52:54:00:e9:96:6b in network mk-multinode-551471
	I0806 07:58:21.224729   49751 main.go:141] libmachine: (multinode-551471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:96:6b", ip: ""} in network mk-multinode-551471: {Iface:virbr1 ExpiryTime:2024-08-06 08:51:03 +0000 UTC Type:0 Mac:52:54:00:e9:96:6b Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:multinode-551471 Clientid:01:52:54:00:e9:96:6b}
	I0806 07:58:21.224763   49751 main.go:141] libmachine: (multinode-551471) DBG | domain multinode-551471 has defined IP address 192.168.39.186 and MAC address 52:54:00:e9:96:6b in network mk-multinode-551471
	I0806 07:58:21.224970   49751 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0806 07:58:21.229462   49751 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0806 07:58:21.229565   49751 kubeadm.go:883] updating cluster {Name:multinode-551471 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-551471 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.186 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.134 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.189 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0806 07:58:21.229691   49751 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0806 07:58:21.229739   49751 ssh_runner.go:195] Run: sudo crictl images --output json
	I0806 07:58:21.271124   49751 command_runner.go:130] > {
	I0806 07:58:21.271145   49751 command_runner.go:130] >   "images": [
	I0806 07:58:21.271149   49751 command_runner.go:130] >     {
	I0806 07:58:21.271157   49751 command_runner.go:130] >       "id": "5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f",
	I0806 07:58:21.271162   49751 command_runner.go:130] >       "repoTags": [
	I0806 07:58:21.271170   49751 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240715-585640e9"
	I0806 07:58:21.271173   49751 command_runner.go:130] >       ],
	I0806 07:58:21.271177   49751 command_runner.go:130] >       "repoDigests": [
	I0806 07:58:21.271184   49751 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115",
	I0806 07:58:21.271191   49751 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"
	I0806 07:58:21.271194   49751 command_runner.go:130] >       ],
	I0806 07:58:21.271199   49751 command_runner.go:130] >       "size": "87165492",
	I0806 07:58:21.271202   49751 command_runner.go:130] >       "uid": null,
	I0806 07:58:21.271207   49751 command_runner.go:130] >       "username": "",
	I0806 07:58:21.271217   49751 command_runner.go:130] >       "spec": null,
	I0806 07:58:21.271221   49751 command_runner.go:130] >       "pinned": false
	I0806 07:58:21.271224   49751 command_runner.go:130] >     },
	I0806 07:58:21.271234   49751 command_runner.go:130] >     {
	I0806 07:58:21.271240   49751 command_runner.go:130] >       "id": "917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557",
	I0806 07:58:21.271244   49751 command_runner.go:130] >       "repoTags": [
	I0806 07:58:21.271249   49751 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240730-75a5af0c"
	I0806 07:58:21.271255   49751 command_runner.go:130] >       ],
	I0806 07:58:21.271260   49751 command_runner.go:130] >       "repoDigests": [
	I0806 07:58:21.271267   49751 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3",
	I0806 07:58:21.271274   49751 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a"
	I0806 07:58:21.271278   49751 command_runner.go:130] >       ],
	I0806 07:58:21.271285   49751 command_runner.go:130] >       "size": "87165492",
	I0806 07:58:21.271288   49751 command_runner.go:130] >       "uid": null,
	I0806 07:58:21.271296   49751 command_runner.go:130] >       "username": "",
	I0806 07:58:21.271302   49751 command_runner.go:130] >       "spec": null,
	I0806 07:58:21.271306   49751 command_runner.go:130] >       "pinned": false
	I0806 07:58:21.271309   49751 command_runner.go:130] >     },
	I0806 07:58:21.271312   49751 command_runner.go:130] >     {
	I0806 07:58:21.271318   49751 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0806 07:58:21.271324   49751 command_runner.go:130] >       "repoTags": [
	I0806 07:58:21.271329   49751 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0806 07:58:21.271333   49751 command_runner.go:130] >       ],
	I0806 07:58:21.271337   49751 command_runner.go:130] >       "repoDigests": [
	I0806 07:58:21.271344   49751 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0806 07:58:21.271351   49751 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0806 07:58:21.271355   49751 command_runner.go:130] >       ],
	I0806 07:58:21.271360   49751 command_runner.go:130] >       "size": "1363676",
	I0806 07:58:21.271364   49751 command_runner.go:130] >       "uid": null,
	I0806 07:58:21.271368   49751 command_runner.go:130] >       "username": "",
	I0806 07:58:21.271373   49751 command_runner.go:130] >       "spec": null,
	I0806 07:58:21.271377   49751 command_runner.go:130] >       "pinned": false
	I0806 07:58:21.271381   49751 command_runner.go:130] >     },
	I0806 07:58:21.271385   49751 command_runner.go:130] >     {
	I0806 07:58:21.271391   49751 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0806 07:58:21.271395   49751 command_runner.go:130] >       "repoTags": [
	I0806 07:58:21.271400   49751 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0806 07:58:21.271404   49751 command_runner.go:130] >       ],
	I0806 07:58:21.271410   49751 command_runner.go:130] >       "repoDigests": [
	I0806 07:58:21.271422   49751 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0806 07:58:21.271436   49751 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0806 07:58:21.271442   49751 command_runner.go:130] >       ],
	I0806 07:58:21.271446   49751 command_runner.go:130] >       "size": "31470524",
	I0806 07:58:21.271450   49751 command_runner.go:130] >       "uid": null,
	I0806 07:58:21.271456   49751 command_runner.go:130] >       "username": "",
	I0806 07:58:21.271460   49751 command_runner.go:130] >       "spec": null,
	I0806 07:58:21.271463   49751 command_runner.go:130] >       "pinned": false
	I0806 07:58:21.271469   49751 command_runner.go:130] >     },
	I0806 07:58:21.271472   49751 command_runner.go:130] >     {
	I0806 07:58:21.271478   49751 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0806 07:58:21.271484   49751 command_runner.go:130] >       "repoTags": [
	I0806 07:58:21.271490   49751 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0806 07:58:21.271493   49751 command_runner.go:130] >       ],
	I0806 07:58:21.271497   49751 command_runner.go:130] >       "repoDigests": [
	I0806 07:58:21.271503   49751 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0806 07:58:21.271511   49751 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0806 07:58:21.271516   49751 command_runner.go:130] >       ],
	I0806 07:58:21.271520   49751 command_runner.go:130] >       "size": "61245718",
	I0806 07:58:21.271524   49751 command_runner.go:130] >       "uid": null,
	I0806 07:58:21.271528   49751 command_runner.go:130] >       "username": "nonroot",
	I0806 07:58:21.271532   49751 command_runner.go:130] >       "spec": null,
	I0806 07:58:21.271535   49751 command_runner.go:130] >       "pinned": false
	I0806 07:58:21.271539   49751 command_runner.go:130] >     },
	I0806 07:58:21.271542   49751 command_runner.go:130] >     {
	I0806 07:58:21.271548   49751 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0806 07:58:21.271554   49751 command_runner.go:130] >       "repoTags": [
	I0806 07:58:21.271558   49751 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0806 07:58:21.271561   49751 command_runner.go:130] >       ],
	I0806 07:58:21.271565   49751 command_runner.go:130] >       "repoDigests": [
	I0806 07:58:21.271571   49751 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0806 07:58:21.271581   49751 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0806 07:58:21.271584   49751 command_runner.go:130] >       ],
	I0806 07:58:21.271592   49751 command_runner.go:130] >       "size": "150779692",
	I0806 07:58:21.271595   49751 command_runner.go:130] >       "uid": {
	I0806 07:58:21.271601   49751 command_runner.go:130] >         "value": "0"
	I0806 07:58:21.271609   49751 command_runner.go:130] >       },
	I0806 07:58:21.271615   49751 command_runner.go:130] >       "username": "",
	I0806 07:58:21.271619   49751 command_runner.go:130] >       "spec": null,
	I0806 07:58:21.271627   49751 command_runner.go:130] >       "pinned": false
	I0806 07:58:21.271630   49751 command_runner.go:130] >     },
	I0806 07:58:21.271633   49751 command_runner.go:130] >     {
	I0806 07:58:21.271639   49751 command_runner.go:130] >       "id": "1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d",
	I0806 07:58:21.271645   49751 command_runner.go:130] >       "repoTags": [
	I0806 07:58:21.271649   49751 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.3"
	I0806 07:58:21.271652   49751 command_runner.go:130] >       ],
	I0806 07:58:21.271656   49751 command_runner.go:130] >       "repoDigests": [
	I0806 07:58:21.271663   49751 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c",
	I0806 07:58:21.271672   49751 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"
	I0806 07:58:21.271675   49751 command_runner.go:130] >       ],
	I0806 07:58:21.271679   49751 command_runner.go:130] >       "size": "117609954",
	I0806 07:58:21.271684   49751 command_runner.go:130] >       "uid": {
	I0806 07:58:21.271688   49751 command_runner.go:130] >         "value": "0"
	I0806 07:58:21.271694   49751 command_runner.go:130] >       },
	I0806 07:58:21.271697   49751 command_runner.go:130] >       "username": "",
	I0806 07:58:21.271701   49751 command_runner.go:130] >       "spec": null,
	I0806 07:58:21.271705   49751 command_runner.go:130] >       "pinned": false
	I0806 07:58:21.271709   49751 command_runner.go:130] >     },
	I0806 07:58:21.271712   49751 command_runner.go:130] >     {
	I0806 07:58:21.271718   49751 command_runner.go:130] >       "id": "76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e",
	I0806 07:58:21.271723   49751 command_runner.go:130] >       "repoTags": [
	I0806 07:58:21.271728   49751 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.3"
	I0806 07:58:21.271734   49751 command_runner.go:130] >       ],
	I0806 07:58:21.271738   49751 command_runner.go:130] >       "repoDigests": [
	I0806 07:58:21.271758   49751 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7",
	I0806 07:58:21.271768   49751 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"
	I0806 07:58:21.271772   49751 command_runner.go:130] >       ],
	I0806 07:58:21.271776   49751 command_runner.go:130] >       "size": "112198984",
	I0806 07:58:21.271779   49751 command_runner.go:130] >       "uid": {
	I0806 07:58:21.271783   49751 command_runner.go:130] >         "value": "0"
	I0806 07:58:21.271787   49751 command_runner.go:130] >       },
	I0806 07:58:21.271790   49751 command_runner.go:130] >       "username": "",
	I0806 07:58:21.271798   49751 command_runner.go:130] >       "spec": null,
	I0806 07:58:21.271802   49751 command_runner.go:130] >       "pinned": false
	I0806 07:58:21.271805   49751 command_runner.go:130] >     },
	I0806 07:58:21.271808   49751 command_runner.go:130] >     {
	I0806 07:58:21.271813   49751 command_runner.go:130] >       "id": "55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1",
	I0806 07:58:21.271817   49751 command_runner.go:130] >       "repoTags": [
	I0806 07:58:21.271821   49751 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.3"
	I0806 07:58:21.271824   49751 command_runner.go:130] >       ],
	I0806 07:58:21.271827   49751 command_runner.go:130] >       "repoDigests": [
	I0806 07:58:21.271834   49751 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80",
	I0806 07:58:21.271842   49751 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"
	I0806 07:58:21.271846   49751 command_runner.go:130] >       ],
	I0806 07:58:21.271850   49751 command_runner.go:130] >       "size": "85953945",
	I0806 07:58:21.271854   49751 command_runner.go:130] >       "uid": null,
	I0806 07:58:21.271858   49751 command_runner.go:130] >       "username": "",
	I0806 07:58:21.271861   49751 command_runner.go:130] >       "spec": null,
	I0806 07:58:21.271865   49751 command_runner.go:130] >       "pinned": false
	I0806 07:58:21.271868   49751 command_runner.go:130] >     },
	I0806 07:58:21.271872   49751 command_runner.go:130] >     {
	I0806 07:58:21.271877   49751 command_runner.go:130] >       "id": "3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2",
	I0806 07:58:21.271884   49751 command_runner.go:130] >       "repoTags": [
	I0806 07:58:21.271888   49751 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.3"
	I0806 07:58:21.271892   49751 command_runner.go:130] >       ],
	I0806 07:58:21.271895   49751 command_runner.go:130] >       "repoDigests": [
	I0806 07:58:21.271903   49751 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266",
	I0806 07:58:21.271912   49751 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"
	I0806 07:58:21.271916   49751 command_runner.go:130] >       ],
	I0806 07:58:21.271920   49751 command_runner.go:130] >       "size": "63051080",
	I0806 07:58:21.271923   49751 command_runner.go:130] >       "uid": {
	I0806 07:58:21.271927   49751 command_runner.go:130] >         "value": "0"
	I0806 07:58:21.271931   49751 command_runner.go:130] >       },
	I0806 07:58:21.271935   49751 command_runner.go:130] >       "username": "",
	I0806 07:58:21.271940   49751 command_runner.go:130] >       "spec": null,
	I0806 07:58:21.271944   49751 command_runner.go:130] >       "pinned": false
	I0806 07:58:21.271947   49751 command_runner.go:130] >     },
	I0806 07:58:21.271951   49751 command_runner.go:130] >     {
	I0806 07:58:21.271961   49751 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0806 07:58:21.271967   49751 command_runner.go:130] >       "repoTags": [
	I0806 07:58:21.271971   49751 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0806 07:58:21.271974   49751 command_runner.go:130] >       ],
	I0806 07:58:21.271981   49751 command_runner.go:130] >       "repoDigests": [
	I0806 07:58:21.271989   49751 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0806 07:58:21.271996   49751 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0806 07:58:21.272001   49751 command_runner.go:130] >       ],
	I0806 07:58:21.272005   49751 command_runner.go:130] >       "size": "750414",
	I0806 07:58:21.272008   49751 command_runner.go:130] >       "uid": {
	I0806 07:58:21.272012   49751 command_runner.go:130] >         "value": "65535"
	I0806 07:58:21.272015   49751 command_runner.go:130] >       },
	I0806 07:58:21.272019   49751 command_runner.go:130] >       "username": "",
	I0806 07:58:21.272024   49751 command_runner.go:130] >       "spec": null,
	I0806 07:58:21.272029   49751 command_runner.go:130] >       "pinned": true
	I0806 07:58:21.272033   49751 command_runner.go:130] >     }
	I0806 07:58:21.272036   49751 command_runner.go:130] >   ]
	I0806 07:58:21.272039   49751 command_runner.go:130] > }
	I0806 07:58:21.273129   49751 crio.go:514] all images are preloaded for cri-o runtime.
	I0806 07:58:21.273147   49751 crio.go:433] Images already preloaded, skipping extraction
	I0806 07:58:21.273203   49751 ssh_runner.go:195] Run: sudo crictl images --output json
	I0806 07:58:21.310771   49751 command_runner.go:130] > {
	I0806 07:58:21.310799   49751 command_runner.go:130] >   "images": [
	I0806 07:58:21.310805   49751 command_runner.go:130] >     {
	I0806 07:58:21.310818   49751 command_runner.go:130] >       "id": "5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f",
	I0806 07:58:21.310827   49751 command_runner.go:130] >       "repoTags": [
	I0806 07:58:21.310836   49751 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240715-585640e9"
	I0806 07:58:21.310841   49751 command_runner.go:130] >       ],
	I0806 07:58:21.310865   49751 command_runner.go:130] >       "repoDigests": [
	I0806 07:58:21.310879   49751 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115",
	I0806 07:58:21.310890   49751 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"
	I0806 07:58:21.310897   49751 command_runner.go:130] >       ],
	I0806 07:58:21.310905   49751 command_runner.go:130] >       "size": "87165492",
	I0806 07:58:21.310915   49751 command_runner.go:130] >       "uid": null,
	I0806 07:58:21.310922   49751 command_runner.go:130] >       "username": "",
	I0806 07:58:21.310933   49751 command_runner.go:130] >       "spec": null,
	I0806 07:58:21.310940   49751 command_runner.go:130] >       "pinned": false
	I0806 07:58:21.310948   49751 command_runner.go:130] >     },
	I0806 07:58:21.310954   49751 command_runner.go:130] >     {
	I0806 07:58:21.310966   49751 command_runner.go:130] >       "id": "917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557",
	I0806 07:58:21.310976   49751 command_runner.go:130] >       "repoTags": [
	I0806 07:58:21.310985   49751 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240730-75a5af0c"
	I0806 07:58:21.310995   49751 command_runner.go:130] >       ],
	I0806 07:58:21.311005   49751 command_runner.go:130] >       "repoDigests": [
	I0806 07:58:21.311019   49751 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3",
	I0806 07:58:21.311036   49751 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a"
	I0806 07:58:21.311045   49751 command_runner.go:130] >       ],
	I0806 07:58:21.311055   49751 command_runner.go:130] >       "size": "87165492",
	I0806 07:58:21.311064   49751 command_runner.go:130] >       "uid": null,
	I0806 07:58:21.311073   49751 command_runner.go:130] >       "username": "",
	I0806 07:58:21.311082   49751 command_runner.go:130] >       "spec": null,
	I0806 07:58:21.311090   49751 command_runner.go:130] >       "pinned": false
	I0806 07:58:21.311099   49751 command_runner.go:130] >     },
	I0806 07:58:21.311104   49751 command_runner.go:130] >     {
	I0806 07:58:21.311116   49751 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0806 07:58:21.311127   49751 command_runner.go:130] >       "repoTags": [
	I0806 07:58:21.311137   49751 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0806 07:58:21.311145   49751 command_runner.go:130] >       ],
	I0806 07:58:21.311153   49751 command_runner.go:130] >       "repoDigests": [
	I0806 07:58:21.311167   49751 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0806 07:58:21.311180   49751 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0806 07:58:21.311188   49751 command_runner.go:130] >       ],
	I0806 07:58:21.311197   49751 command_runner.go:130] >       "size": "1363676",
	I0806 07:58:21.311205   49751 command_runner.go:130] >       "uid": null,
	I0806 07:58:21.311213   49751 command_runner.go:130] >       "username": "",
	I0806 07:58:21.311222   49751 command_runner.go:130] >       "spec": null,
	I0806 07:58:21.311231   49751 command_runner.go:130] >       "pinned": false
	I0806 07:58:21.311244   49751 command_runner.go:130] >     },
	I0806 07:58:21.311252   49751 command_runner.go:130] >     {
	I0806 07:58:21.311261   49751 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0806 07:58:21.311269   49751 command_runner.go:130] >       "repoTags": [
	I0806 07:58:21.311278   49751 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0806 07:58:21.311286   49751 command_runner.go:130] >       ],
	I0806 07:58:21.311293   49751 command_runner.go:130] >       "repoDigests": [
	I0806 07:58:21.311321   49751 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0806 07:58:21.311341   49751 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0806 07:58:21.311347   49751 command_runner.go:130] >       ],
	I0806 07:58:21.311353   49751 command_runner.go:130] >       "size": "31470524",
	I0806 07:58:21.311361   49751 command_runner.go:130] >       "uid": null,
	I0806 07:58:21.311366   49751 command_runner.go:130] >       "username": "",
	I0806 07:58:21.311375   49751 command_runner.go:130] >       "spec": null,
	I0806 07:58:21.311383   49751 command_runner.go:130] >       "pinned": false
	I0806 07:58:21.311391   49751 command_runner.go:130] >     },
	I0806 07:58:21.311396   49751 command_runner.go:130] >     {
	I0806 07:58:21.311408   49751 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0806 07:58:21.311417   49751 command_runner.go:130] >       "repoTags": [
	I0806 07:58:21.311428   49751 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0806 07:58:21.311436   49751 command_runner.go:130] >       ],
	I0806 07:58:21.311445   49751 command_runner.go:130] >       "repoDigests": [
	I0806 07:58:21.311459   49751 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0806 07:58:21.311475   49751 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0806 07:58:21.311485   49751 command_runner.go:130] >       ],
	I0806 07:58:21.311491   49751 command_runner.go:130] >       "size": "61245718",
	I0806 07:58:21.311500   49751 command_runner.go:130] >       "uid": null,
	I0806 07:58:21.311506   49751 command_runner.go:130] >       "username": "nonroot",
	I0806 07:58:21.311516   49751 command_runner.go:130] >       "spec": null,
	I0806 07:58:21.311523   49751 command_runner.go:130] >       "pinned": false
	I0806 07:58:21.311532   49751 command_runner.go:130] >     },
	I0806 07:58:21.311539   49751 command_runner.go:130] >     {
	I0806 07:58:21.311552   49751 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0806 07:58:21.311562   49751 command_runner.go:130] >       "repoTags": [
	I0806 07:58:21.311572   49751 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0806 07:58:21.311580   49751 command_runner.go:130] >       ],
	I0806 07:58:21.311586   49751 command_runner.go:130] >       "repoDigests": [
	I0806 07:58:21.311600   49751 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0806 07:58:21.311615   49751 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0806 07:58:21.311624   49751 command_runner.go:130] >       ],
	I0806 07:58:21.311633   49751 command_runner.go:130] >       "size": "150779692",
	I0806 07:58:21.311641   49751 command_runner.go:130] >       "uid": {
	I0806 07:58:21.311647   49751 command_runner.go:130] >         "value": "0"
	I0806 07:58:21.311655   49751 command_runner.go:130] >       },
	I0806 07:58:21.311662   49751 command_runner.go:130] >       "username": "",
	I0806 07:58:21.311669   49751 command_runner.go:130] >       "spec": null,
	I0806 07:58:21.311678   49751 command_runner.go:130] >       "pinned": false
	I0806 07:58:21.311687   49751 command_runner.go:130] >     },
	I0806 07:58:21.311693   49751 command_runner.go:130] >     {
	I0806 07:58:21.311706   49751 command_runner.go:130] >       "id": "1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d",
	I0806 07:58:21.311715   49751 command_runner.go:130] >       "repoTags": [
	I0806 07:58:21.311724   49751 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.3"
	I0806 07:58:21.311732   49751 command_runner.go:130] >       ],
	I0806 07:58:21.311739   49751 command_runner.go:130] >       "repoDigests": [
	I0806 07:58:21.311752   49751 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c",
	I0806 07:58:21.311766   49751 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"
	I0806 07:58:21.311775   49751 command_runner.go:130] >       ],
	I0806 07:58:21.311786   49751 command_runner.go:130] >       "size": "117609954",
	I0806 07:58:21.311795   49751 command_runner.go:130] >       "uid": {
	I0806 07:58:21.311804   49751 command_runner.go:130] >         "value": "0"
	I0806 07:58:21.311813   49751 command_runner.go:130] >       },
	I0806 07:58:21.311821   49751 command_runner.go:130] >       "username": "",
	I0806 07:58:21.311827   49751 command_runner.go:130] >       "spec": null,
	I0806 07:58:21.311836   49751 command_runner.go:130] >       "pinned": false
	I0806 07:58:21.311841   49751 command_runner.go:130] >     },
	I0806 07:58:21.311855   49751 command_runner.go:130] >     {
	I0806 07:58:21.311866   49751 command_runner.go:130] >       "id": "76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e",
	I0806 07:58:21.311874   49751 command_runner.go:130] >       "repoTags": [
	I0806 07:58:21.311883   49751 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.3"
	I0806 07:58:21.311890   49751 command_runner.go:130] >       ],
	I0806 07:58:21.311896   49751 command_runner.go:130] >       "repoDigests": [
	I0806 07:58:21.311919   49751 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7",
	I0806 07:58:21.311933   49751 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"
	I0806 07:58:21.311938   49751 command_runner.go:130] >       ],
	I0806 07:58:21.311948   49751 command_runner.go:130] >       "size": "112198984",
	I0806 07:58:21.311953   49751 command_runner.go:130] >       "uid": {
	I0806 07:58:21.311961   49751 command_runner.go:130] >         "value": "0"
	I0806 07:58:21.311967   49751 command_runner.go:130] >       },
	I0806 07:58:21.311976   49751 command_runner.go:130] >       "username": "",
	I0806 07:58:21.311982   49751 command_runner.go:130] >       "spec": null,
	I0806 07:58:21.311990   49751 command_runner.go:130] >       "pinned": false
	I0806 07:58:21.311996   49751 command_runner.go:130] >     },
	I0806 07:58:21.312005   49751 command_runner.go:130] >     {
	I0806 07:58:21.312016   49751 command_runner.go:130] >       "id": "55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1",
	I0806 07:58:21.312022   49751 command_runner.go:130] >       "repoTags": [
	I0806 07:58:21.312027   49751 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.3"
	I0806 07:58:21.312031   49751 command_runner.go:130] >       ],
	I0806 07:58:21.312035   49751 command_runner.go:130] >       "repoDigests": [
	I0806 07:58:21.312042   49751 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80",
	I0806 07:58:21.312051   49751 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"
	I0806 07:58:21.312055   49751 command_runner.go:130] >       ],
	I0806 07:58:21.312059   49751 command_runner.go:130] >       "size": "85953945",
	I0806 07:58:21.312063   49751 command_runner.go:130] >       "uid": null,
	I0806 07:58:21.312066   49751 command_runner.go:130] >       "username": "",
	I0806 07:58:21.312070   49751 command_runner.go:130] >       "spec": null,
	I0806 07:58:21.312074   49751 command_runner.go:130] >       "pinned": false
	I0806 07:58:21.312079   49751 command_runner.go:130] >     },
	I0806 07:58:21.312082   49751 command_runner.go:130] >     {
	I0806 07:58:21.312088   49751 command_runner.go:130] >       "id": "3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2",
	I0806 07:58:21.312092   49751 command_runner.go:130] >       "repoTags": [
	I0806 07:58:21.312099   49751 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.3"
	I0806 07:58:21.312104   49751 command_runner.go:130] >       ],
	I0806 07:58:21.312109   49751 command_runner.go:130] >       "repoDigests": [
	I0806 07:58:21.312116   49751 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266",
	I0806 07:58:21.312128   49751 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"
	I0806 07:58:21.312132   49751 command_runner.go:130] >       ],
	I0806 07:58:21.312136   49751 command_runner.go:130] >       "size": "63051080",
	I0806 07:58:21.312140   49751 command_runner.go:130] >       "uid": {
	I0806 07:58:21.312147   49751 command_runner.go:130] >         "value": "0"
	I0806 07:58:21.312154   49751 command_runner.go:130] >       },
	I0806 07:58:21.312162   49751 command_runner.go:130] >       "username": "",
	I0806 07:58:21.312167   49751 command_runner.go:130] >       "spec": null,
	I0806 07:58:21.312177   49751 command_runner.go:130] >       "pinned": false
	I0806 07:58:21.312182   49751 command_runner.go:130] >     },
	I0806 07:58:21.312190   49751 command_runner.go:130] >     {
	I0806 07:58:21.312198   49751 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0806 07:58:21.312207   49751 command_runner.go:130] >       "repoTags": [
	I0806 07:58:21.312215   49751 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0806 07:58:21.312224   49751 command_runner.go:130] >       ],
	I0806 07:58:21.312230   49751 command_runner.go:130] >       "repoDigests": [
	I0806 07:58:21.312239   49751 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0806 07:58:21.312250   49751 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0806 07:58:21.312255   49751 command_runner.go:130] >       ],
	I0806 07:58:21.312261   49751 command_runner.go:130] >       "size": "750414",
	I0806 07:58:21.312270   49751 command_runner.go:130] >       "uid": {
	I0806 07:58:21.312276   49751 command_runner.go:130] >         "value": "65535"
	I0806 07:58:21.312283   49751 command_runner.go:130] >       },
	I0806 07:58:21.312289   49751 command_runner.go:130] >       "username": "",
	I0806 07:58:21.312299   49751 command_runner.go:130] >       "spec": null,
	I0806 07:58:21.312306   49751 command_runner.go:130] >       "pinned": true
	I0806 07:58:21.312314   49751 command_runner.go:130] >     }
	I0806 07:58:21.312320   49751 command_runner.go:130] >   ]
	I0806 07:58:21.312329   49751 command_runner.go:130] > }
	I0806 07:58:21.312485   49751 crio.go:514] all images are preloaded for cri-o runtime.
	I0806 07:58:21.312499   49751 cache_images.go:84] Images are preloaded, skipping loading
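	(Editor's note: the "Images are preloaded" decision above comes from matching the JSON printed by `sudo crictl images --output json` against the image set expected for Kubernetes v1.30.3 on cri-o. The following is only a minimal illustrative Go sketch of that kind of check, not minikube's actual implementation; the struct fields mirror the JSON fields shown in the log above, and the required tag list is taken from the same output.)

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
		"strings"
	)

	// crictlImages mirrors the shape of `crictl images --output json` as logged above.
	type crictlImages struct {
		Images []struct {
			ID       string   `json:"id"`
			RepoTags []string `json:"repoTags"`
		} `json:"images"`
	}

	func main() {
		out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
		if err != nil {
			panic(err)
		}
		var list crictlImages
		if err := json.Unmarshal(out, &list); err != nil {
			panic(err)
		}
		// Tags expected for a preloaded v1.30.3 / cri-o node, per the log output above.
		required := []string{
			"registry.k8s.io/kube-apiserver:v1.30.3",
			"registry.k8s.io/kube-controller-manager:v1.30.3",
			"registry.k8s.io/kube-scheduler:v1.30.3",
			"registry.k8s.io/kube-proxy:v1.30.3",
			"registry.k8s.io/etcd:3.5.12-0",
			"registry.k8s.io/coredns/coredns:v1.11.1",
			"registry.k8s.io/pause:3.9",
		}
		have := map[string]bool{}
		for _, img := range list.Images {
			for _, tag := range img.RepoTags {
				have[tag] = true
			}
		}
		var missing []string
		for _, tag := range required {
			if !have[tag] {
				missing = append(missing, tag)
			}
		}
		if len(missing) == 0 {
			fmt.Println("all images are preloaded")
		} else {
			fmt.Println("missing images:", strings.Join(missing, ", "))
		}
	}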
	I0806 07:58:21.312507   49751 kubeadm.go:934] updating node { 192.168.39.186 8443 v1.30.3 crio true true} ...
	I0806 07:58:21.312607   49751 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-551471 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.186
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:multinode-551471 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0806 07:58:21.312671   49751 ssh_runner.go:195] Run: crio config
	I0806 07:58:21.354414   49751 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0806 07:58:21.354449   49751 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0806 07:58:21.354460   49751 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0806 07:58:21.354465   49751 command_runner.go:130] > #
	I0806 07:58:21.354476   49751 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0806 07:58:21.354485   49751 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0806 07:58:21.354494   49751 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0806 07:58:21.354512   49751 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0806 07:58:21.354517   49751 command_runner.go:130] > # reload'.
	I0806 07:58:21.354528   49751 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0806 07:58:21.354540   49751 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0806 07:58:21.354551   49751 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0806 07:58:21.354559   49751 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0806 07:58:21.354571   49751 command_runner.go:130] > [crio]
	I0806 07:58:21.354582   49751 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0806 07:58:21.354600   49751 command_runner.go:130] > # containers images, in this directory.
	I0806 07:58:21.354611   49751 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0806 07:58:21.354629   49751 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0806 07:58:21.354641   49751 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0806 07:58:21.354654   49751 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0806 07:58:21.354661   49751 command_runner.go:130] > # imagestore = ""
	I0806 07:58:21.354671   49751 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0806 07:58:21.354683   49751 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0806 07:58:21.354693   49751 command_runner.go:130] > storage_driver = "overlay"
	I0806 07:58:21.354706   49751 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0806 07:58:21.354719   49751 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0806 07:58:21.354729   49751 command_runner.go:130] > storage_option = [
	I0806 07:58:21.354738   49751 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0806 07:58:21.354747   49751 command_runner.go:130] > ]
	I0806 07:58:21.354757   49751 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0806 07:58:21.354769   49751 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0806 07:58:21.354776   49751 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0806 07:58:21.354787   49751 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0806 07:58:21.354799   49751 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0806 07:58:21.354809   49751 command_runner.go:130] > # always happen on a node reboot
	I0806 07:58:21.354818   49751 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0806 07:58:21.354834   49751 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0806 07:58:21.354853   49751 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0806 07:58:21.354864   49751 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0806 07:58:21.354873   49751 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0806 07:58:21.354888   49751 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0806 07:58:21.354904   49751 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0806 07:58:21.354916   49751 command_runner.go:130] > # internal_wipe = true
	I0806 07:58:21.354932   49751 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0806 07:58:21.354944   49751 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0806 07:58:21.354954   49751 command_runner.go:130] > # internal_repair = false
	I0806 07:58:21.354963   49751 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0806 07:58:21.354976   49751 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0806 07:58:21.354987   49751 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0806 07:58:21.354996   49751 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0806 07:58:21.355008   49751 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0806 07:58:21.355017   49751 command_runner.go:130] > [crio.api]
	I0806 07:58:21.355025   49751 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0806 07:58:21.355035   49751 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0806 07:58:21.355043   49751 command_runner.go:130] > # IP address on which the stream server will listen.
	I0806 07:58:21.355053   49751 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0806 07:58:21.355063   49751 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0806 07:58:21.355075   49751 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0806 07:58:21.355083   49751 command_runner.go:130] > # stream_port = "0"
	I0806 07:58:21.355093   49751 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0806 07:58:21.355103   49751 command_runner.go:130] > # stream_enable_tls = false
	I0806 07:58:21.355113   49751 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0806 07:58:21.355122   49751 command_runner.go:130] > # stream_idle_timeout = ""
	I0806 07:58:21.355132   49751 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0806 07:58:21.355145   49751 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0806 07:58:21.355150   49751 command_runner.go:130] > # minutes.
	I0806 07:58:21.355158   49751 command_runner.go:130] > # stream_tls_cert = ""
	I0806 07:58:21.355170   49751 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0806 07:58:21.355181   49751 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0806 07:58:21.355191   49751 command_runner.go:130] > # stream_tls_key = ""
	I0806 07:58:21.355201   49751 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0806 07:58:21.355213   49751 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0806 07:58:21.355231   49751 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0806 07:58:21.355241   49751 command_runner.go:130] > # stream_tls_ca = ""
	I0806 07:58:21.355252   49751 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0806 07:58:21.355262   49751 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0806 07:58:21.355275   49751 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0806 07:58:21.355285   49751 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0806 07:58:21.355296   49751 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0806 07:58:21.355308   49751 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0806 07:58:21.355316   49751 command_runner.go:130] > [crio.runtime]
	I0806 07:58:21.355325   49751 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0806 07:58:21.355336   49751 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0806 07:58:21.355342   49751 command_runner.go:130] > # "nofile=1024:2048"
	I0806 07:58:21.355354   49751 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0806 07:58:21.355364   49751 command_runner.go:130] > # default_ulimits = [
	I0806 07:58:21.355369   49751 command_runner.go:130] > # ]
	I0806 07:58:21.355382   49751 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0806 07:58:21.355390   49751 command_runner.go:130] > # no_pivot = false
	I0806 07:58:21.355398   49751 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0806 07:58:21.355410   49751 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0806 07:58:21.355420   49751 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0806 07:58:21.355430   49751 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0806 07:58:21.355439   49751 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0806 07:58:21.355448   49751 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0806 07:58:21.355458   49751 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0806 07:58:21.355464   49751 command_runner.go:130] > # Cgroup setting for conmon
	I0806 07:58:21.355476   49751 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0806 07:58:21.355486   49751 command_runner.go:130] > conmon_cgroup = "pod"
	I0806 07:58:21.355495   49751 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0806 07:58:21.355505   49751 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0806 07:58:21.355516   49751 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0806 07:58:21.355525   49751 command_runner.go:130] > conmon_env = [
	I0806 07:58:21.355536   49751 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0806 07:58:21.355545   49751 command_runner.go:130] > ]
	I0806 07:58:21.355554   49751 command_runner.go:130] > # Additional environment variables to set for all the
	I0806 07:58:21.355565   49751 command_runner.go:130] > # containers. These are overridden if set in the
	I0806 07:58:21.355575   49751 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0806 07:58:21.355584   49751 command_runner.go:130] > # default_env = [
	I0806 07:58:21.355590   49751 command_runner.go:130] > # ]
	I0806 07:58:21.355603   49751 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0806 07:58:21.355623   49751 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0806 07:58:21.355633   49751 command_runner.go:130] > # selinux = false
	I0806 07:58:21.355644   49751 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0806 07:58:21.355656   49751 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0806 07:58:21.355668   49751 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0806 07:58:21.355677   49751 command_runner.go:130] > # seccomp_profile = ""
	I0806 07:58:21.355686   49751 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0806 07:58:21.355697   49751 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0806 07:58:21.355707   49751 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0806 07:58:21.355715   49751 command_runner.go:130] > # which might increase security.
	I0806 07:58:21.355725   49751 command_runner.go:130] > # This option is currently deprecated,
	I0806 07:58:21.355734   49751 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0806 07:58:21.355747   49751 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0806 07:58:21.355760   49751 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0806 07:58:21.355772   49751 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0806 07:58:21.355785   49751 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0806 07:58:21.355799   49751 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0806 07:58:21.355810   49751 command_runner.go:130] > # This option supports live configuration reload.
	I0806 07:58:21.355822   49751 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0806 07:58:21.355834   49751 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0806 07:58:21.355851   49751 command_runner.go:130] > # the cgroup blockio controller.
	I0806 07:58:21.355861   49751 command_runner.go:130] > # blockio_config_file = ""
	I0806 07:58:21.355872   49751 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0806 07:58:21.355881   49751 command_runner.go:130] > # blockio parameters.
	I0806 07:58:21.355888   49751 command_runner.go:130] > # blockio_reload = false
	I0806 07:58:21.355902   49751 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0806 07:58:21.355911   49751 command_runner.go:130] > # irqbalance daemon.
	I0806 07:58:21.355919   49751 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0806 07:58:21.355930   49751 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0806 07:58:21.355941   49751 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0806 07:58:21.355954   49751 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0806 07:58:21.355966   49751 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0806 07:58:21.355978   49751 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0806 07:58:21.355988   49751 command_runner.go:130] > # This option supports live configuration reload.
	I0806 07:58:21.355999   49751 command_runner.go:130] > # rdt_config_file = ""
	I0806 07:58:21.356008   49751 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0806 07:58:21.356018   49751 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0806 07:58:21.356038   49751 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0806 07:58:21.356047   49751 command_runner.go:130] > # separate_pull_cgroup = ""
	I0806 07:58:21.356057   49751 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0806 07:58:21.356069   49751 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0806 07:58:21.356078   49751 command_runner.go:130] > # will be added.
	I0806 07:58:21.356088   49751 command_runner.go:130] > # default_capabilities = [
	I0806 07:58:21.356095   49751 command_runner.go:130] > # 	"CHOWN",
	I0806 07:58:21.356104   49751 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0806 07:58:21.356112   49751 command_runner.go:130] > # 	"FSETID",
	I0806 07:58:21.356124   49751 command_runner.go:130] > # 	"FOWNER",
	I0806 07:58:21.356130   49751 command_runner.go:130] > # 	"SETGID",
	I0806 07:58:21.356138   49751 command_runner.go:130] > # 	"SETUID",
	I0806 07:58:21.356144   49751 command_runner.go:130] > # 	"SETPCAP",
	I0806 07:58:21.356160   49751 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0806 07:58:21.356170   49751 command_runner.go:130] > # 	"KILL",
	I0806 07:58:21.356175   49751 command_runner.go:130] > # ]
	I0806 07:58:21.356190   49751 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0806 07:58:21.356204   49751 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0806 07:58:21.356213   49751 command_runner.go:130] > # add_inheritable_capabilities = false
	I0806 07:58:21.356224   49751 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0806 07:58:21.356237   49751 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0806 07:58:21.356246   49751 command_runner.go:130] > default_sysctls = [
	I0806 07:58:21.356255   49751 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0806 07:58:21.356263   49751 command_runner.go:130] > ]
	I0806 07:58:21.356271   49751 command_runner.go:130] > # List of devices on the host that a
	I0806 07:58:21.356284   49751 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0806 07:58:21.356295   49751 command_runner.go:130] > # allowed_devices = [
	I0806 07:58:21.356301   49751 command_runner.go:130] > # 	"/dev/fuse",
	I0806 07:58:21.356309   49751 command_runner.go:130] > # ]
	I0806 07:58:21.356318   49751 command_runner.go:130] > # List of additional devices. specified as
	I0806 07:58:21.356331   49751 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0806 07:58:21.356342   49751 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0806 07:58:21.356354   49751 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0806 07:58:21.356379   49751 command_runner.go:130] > # additional_devices = [
	I0806 07:58:21.356386   49751 command_runner.go:130] > # ]
	I0806 07:58:21.356397   49751 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0806 07:58:21.356404   49751 command_runner.go:130] > # cdi_spec_dirs = [
	I0806 07:58:21.356413   49751 command_runner.go:130] > # 	"/etc/cdi",
	I0806 07:58:21.356419   49751 command_runner.go:130] > # 	"/var/run/cdi",
	I0806 07:58:21.356428   49751 command_runner.go:130] > # ]
	I0806 07:58:21.356438   49751 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0806 07:58:21.356450   49751 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0806 07:58:21.356457   49751 command_runner.go:130] > # Defaults to false.
	I0806 07:58:21.356471   49751 command_runner.go:130] > # device_ownership_from_security_context = false
	I0806 07:58:21.356484   49751 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0806 07:58:21.356497   49751 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0806 07:58:21.356508   49751 command_runner.go:130] > # hooks_dir = [
	I0806 07:58:21.356518   49751 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0806 07:58:21.356526   49751 command_runner.go:130] > # ]
	I0806 07:58:21.356535   49751 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0806 07:58:21.356548   49751 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0806 07:58:21.356559   49751 command_runner.go:130] > # its default mounts from the following two files:
	I0806 07:58:21.356568   49751 command_runner.go:130] > #
	I0806 07:58:21.356579   49751 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0806 07:58:21.356591   49751 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0806 07:58:21.356602   49751 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0806 07:58:21.356611   49751 command_runner.go:130] > #
	I0806 07:58:21.356620   49751 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0806 07:58:21.356634   49751 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0806 07:58:21.356647   49751 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0806 07:58:21.356657   49751 command_runner.go:130] > #      only add mounts it finds in this file.
	I0806 07:58:21.356662   49751 command_runner.go:130] > #
	I0806 07:58:21.356674   49751 command_runner.go:130] > # default_mounts_file = ""
	I0806 07:58:21.356685   49751 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0806 07:58:21.356701   49751 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0806 07:58:21.356710   49751 command_runner.go:130] > pids_limit = 1024
	I0806 07:58:21.356720   49751 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0806 07:58:21.356732   49751 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0806 07:58:21.356744   49751 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0806 07:58:21.356758   49751 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0806 07:58:21.356768   49751 command_runner.go:130] > # log_size_max = -1
	I0806 07:58:21.356779   49751 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0806 07:58:21.356791   49751 command_runner.go:130] > # log_to_journald = false
	I0806 07:58:21.356803   49751 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0806 07:58:21.356814   49751 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0806 07:58:21.356824   49751 command_runner.go:130] > # Path to directory for container attach sockets.
	I0806 07:58:21.356835   49751 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0806 07:58:21.356871   49751 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0806 07:58:21.356883   49751 command_runner.go:130] > # bind_mount_prefix = ""
	I0806 07:58:21.356892   49751 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0806 07:58:21.356900   49751 command_runner.go:130] > # read_only = false
	I0806 07:58:21.356911   49751 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0806 07:58:21.356920   49751 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0806 07:58:21.356932   49751 command_runner.go:130] > # live configuration reload.
	I0806 07:58:21.356938   49751 command_runner.go:130] > # log_level = "info"
	I0806 07:58:21.356947   49751 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0806 07:58:21.356957   49751 command_runner.go:130] > # This option supports live configuration reload.
	I0806 07:58:21.356966   49751 command_runner.go:130] > # log_filter = ""
	I0806 07:58:21.356976   49751 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0806 07:58:21.356989   49751 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0806 07:58:21.357000   49751 command_runner.go:130] > # separated by comma.
	I0806 07:58:21.357012   49751 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0806 07:58:21.357023   49751 command_runner.go:130] > # uid_mappings = ""
	I0806 07:58:21.357033   49751 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0806 07:58:21.357046   49751 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0806 07:58:21.357055   49751 command_runner.go:130] > # separated by comma.
	I0806 07:58:21.357070   49751 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0806 07:58:21.357079   49751 command_runner.go:130] > # gid_mappings = ""
	I0806 07:58:21.357089   49751 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0806 07:58:21.357101   49751 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0806 07:58:21.357111   49751 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0806 07:58:21.357126   49751 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0806 07:58:21.357135   49751 command_runner.go:130] > # minimum_mappable_uid = -1
	I0806 07:58:21.357145   49751 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0806 07:58:21.357157   49751 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0806 07:58:21.357170   49751 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0806 07:58:21.357184   49751 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0806 07:58:21.357193   49751 command_runner.go:130] > # minimum_mappable_gid = -1
	I0806 07:58:21.357202   49751 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0806 07:58:21.357215   49751 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0806 07:58:21.357227   49751 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0806 07:58:21.357238   49751 command_runner.go:130] > # ctr_stop_timeout = 30
	I0806 07:58:21.357251   49751 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0806 07:58:21.357259   49751 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0806 07:58:21.357269   49751 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0806 07:58:21.357279   49751 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0806 07:58:21.357287   49751 command_runner.go:130] > drop_infra_ctr = false
	I0806 07:58:21.357296   49751 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0806 07:58:21.357307   49751 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0806 07:58:21.357320   49751 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0806 07:58:21.357328   49751 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0806 07:58:21.357342   49751 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0806 07:58:21.357353   49751 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0806 07:58:21.357365   49751 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0806 07:58:21.357378   49751 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0806 07:58:21.357388   49751 command_runner.go:130] > # shared_cpuset = ""
	I0806 07:58:21.357399   49751 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0806 07:58:21.357410   49751 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0806 07:58:21.357419   49751 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0806 07:58:21.357436   49751 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0806 07:58:21.357446   49751 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0806 07:58:21.357454   49751 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0806 07:58:21.357466   49751 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0806 07:58:21.357475   49751 command_runner.go:130] > # enable_criu_support = false
	I0806 07:58:21.357483   49751 command_runner.go:130] > # Enable/disable the generation of the container,
	I0806 07:58:21.357495   49751 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0806 07:58:21.357504   49751 command_runner.go:130] > # enable_pod_events = false
	I0806 07:58:21.357514   49751 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0806 07:58:21.357526   49751 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0806 07:58:21.357538   49751 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0806 07:58:21.357546   49751 command_runner.go:130] > # default_runtime = "runc"
	I0806 07:58:21.357557   49751 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0806 07:58:21.357570   49751 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0806 07:58:21.357589   49751 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0806 07:58:21.357600   49751 command_runner.go:130] > # creation as a file is not desired either.
	I0806 07:58:21.357614   49751 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0806 07:58:21.357626   49751 command_runner.go:130] > # the hostname is being managed dynamically.
	I0806 07:58:21.357636   49751 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0806 07:58:21.357643   49751 command_runner.go:130] > # ]
	I0806 07:58:21.357652   49751 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0806 07:58:21.357665   49751 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0806 07:58:21.357677   49751 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0806 07:58:21.357688   49751 command_runner.go:130] > # Each entry in the table should follow the format:
	I0806 07:58:21.357696   49751 command_runner.go:130] > #
	I0806 07:58:21.357704   49751 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0806 07:58:21.357715   49751 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0806 07:58:21.357764   49751 command_runner.go:130] > # runtime_type = "oci"
	I0806 07:58:21.357775   49751 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0806 07:58:21.357783   49751 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0806 07:58:21.357794   49751 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0806 07:58:21.357803   49751 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0806 07:58:21.357813   49751 command_runner.go:130] > # monitor_env = []
	I0806 07:58:21.357821   49751 command_runner.go:130] > # privileged_without_host_devices = false
	I0806 07:58:21.357832   49751 command_runner.go:130] > # allowed_annotations = []
	I0806 07:58:21.357849   49751 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0806 07:58:21.357858   49751 command_runner.go:130] > # Where:
	I0806 07:58:21.357869   49751 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0806 07:58:21.357881   49751 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0806 07:58:21.357894   49751 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0806 07:58:21.357906   49751 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0806 07:58:21.357912   49751 command_runner.go:130] > #   in $PATH.
	I0806 07:58:21.357925   49751 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0806 07:58:21.357935   49751 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0806 07:58:21.357945   49751 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0806 07:58:21.357954   49751 command_runner.go:130] > #   state.
	I0806 07:58:21.357964   49751 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0806 07:58:21.357976   49751 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0806 07:58:21.357989   49751 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0806 07:58:21.358002   49751 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0806 07:58:21.358016   49751 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0806 07:58:21.358030   49751 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0806 07:58:21.358040   49751 command_runner.go:130] > #   The currently recognized values are:
	I0806 07:58:21.358054   49751 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0806 07:58:21.358068   49751 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0806 07:58:21.358081   49751 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0806 07:58:21.358094   49751 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0806 07:58:21.358110   49751 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0806 07:58:21.358123   49751 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0806 07:58:21.358136   49751 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0806 07:58:21.358147   49751 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0806 07:58:21.358155   49751 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0806 07:58:21.358166   49751 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0806 07:58:21.358172   49751 command_runner.go:130] > #   deprecated option "conmon".
	I0806 07:58:21.358185   49751 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0806 07:58:21.358196   49751 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0806 07:58:21.358206   49751 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0806 07:58:21.358217   49751 command_runner.go:130] > #   should be moved to the container's cgroup
	I0806 07:58:21.358230   49751 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0806 07:58:21.358241   49751 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0806 07:58:21.358253   49751 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0806 07:58:21.358263   49751 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0806 07:58:21.358271   49751 command_runner.go:130] > #
	I0806 07:58:21.358279   49751 command_runner.go:130] > # Using the seccomp notifier feature:
	I0806 07:58:21.358286   49751 command_runner.go:130] > #
	I0806 07:58:21.358295   49751 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0806 07:58:21.358307   49751 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0806 07:58:21.358316   49751 command_runner.go:130] > #
	I0806 07:58:21.358327   49751 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0806 07:58:21.358340   49751 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0806 07:58:21.358348   49751 command_runner.go:130] > #
	I0806 07:58:21.358358   49751 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0806 07:58:21.358366   49751 command_runner.go:130] > # feature.
	I0806 07:58:21.358371   49751 command_runner.go:130] > #
	I0806 07:58:21.358381   49751 command_runner.go:130] > # If everything is set up, CRI-O will modify chosen seccomp profiles for
	I0806 07:58:21.358393   49751 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0806 07:58:21.358408   49751 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0806 07:58:21.358424   49751 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0806 07:58:21.358438   49751 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0806 07:58:21.358445   49751 command_runner.go:130] > #
	I0806 07:58:21.358455   49751 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0806 07:58:21.358471   49751 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0806 07:58:21.358479   49751 command_runner.go:130] > #
	I0806 07:58:21.358488   49751 command_runner.go:130] > # This also means that the Pod's "restartPolicy" has to be set to "Never",
	I0806 07:58:21.358499   49751 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0806 07:58:21.358508   49751 command_runner.go:130] > #
	I0806 07:58:21.358517   49751 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0806 07:58:21.358529   49751 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0806 07:58:21.358536   49751 command_runner.go:130] > # limitation.
	I0806 07:58:21.358547   49751 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0806 07:58:21.358553   49751 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0806 07:58:21.358562   49751 command_runner.go:130] > runtime_type = "oci"
	I0806 07:58:21.358569   49751 command_runner.go:130] > runtime_root = "/run/runc"
	I0806 07:58:21.358579   49751 command_runner.go:130] > runtime_config_path = ""
	I0806 07:58:21.358587   49751 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0806 07:58:21.358596   49751 command_runner.go:130] > monitor_cgroup = "pod"
	I0806 07:58:21.358603   49751 command_runner.go:130] > monitor_exec_cgroup = ""
	I0806 07:58:21.358612   49751 command_runner.go:130] > monitor_env = [
	I0806 07:58:21.358621   49751 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0806 07:58:21.358630   49751 command_runner.go:130] > ]
	I0806 07:58:21.358638   49751 command_runner.go:130] > privileged_without_host_devices = false
	I0806 07:58:21.358650   49751 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0806 07:58:21.358662   49751 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0806 07:58:21.358674   49751 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0806 07:58:21.358689   49751 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I0806 07:58:21.358704   49751 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0806 07:58:21.358716   49751 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0806 07:58:21.358733   49751 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0806 07:58:21.358750   49751 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0806 07:58:21.358762   49751 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0806 07:58:21.358776   49751 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0806 07:58:21.358783   49751 command_runner.go:130] > # Example:
	I0806 07:58:21.358797   49751 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0806 07:58:21.358805   49751 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0806 07:58:21.358812   49751 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0806 07:58:21.358820   49751 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0806 07:58:21.358826   49751 command_runner.go:130] > # cpuset = 0
	I0806 07:58:21.358832   49751 command_runner.go:130] > # cpushares = "0-1"
	I0806 07:58:21.358837   49751 command_runner.go:130] > # Where:
	I0806 07:58:21.358850   49751 command_runner.go:130] > # The workload name is workload-type.
	I0806 07:58:21.358862   49751 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0806 07:58:21.358870   49751 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0806 07:58:21.358879   49751 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0806 07:58:21.358891   49751 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0806 07:58:21.358901   49751 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0806 07:58:21.358909   49751 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0806 07:58:21.358919   49751 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0806 07:58:21.358926   49751 command_runner.go:130] > # Default value is set to true
	I0806 07:58:21.358933   49751 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0806 07:58:21.358941   49751 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0806 07:58:21.358948   49751 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0806 07:58:21.358954   49751 command_runner.go:130] > # Default value is set to 'false'
	I0806 07:58:21.358962   49751 command_runner.go:130] > # disable_hostport_mapping = false
	I0806 07:58:21.358970   49751 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0806 07:58:21.358975   49751 command_runner.go:130] > #
	I0806 07:58:21.358983   49751 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0806 07:58:21.358995   49751 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0806 07:58:21.359004   49751 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0806 07:58:21.359017   49751 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0806 07:58:21.359028   49751 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0806 07:58:21.359036   49751 command_runner.go:130] > [crio.image]
	I0806 07:58:21.359045   49751 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0806 07:58:21.359055   49751 command_runner.go:130] > # default_transport = "docker://"
	I0806 07:58:21.359065   49751 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0806 07:58:21.359077   49751 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0806 07:58:21.359085   49751 command_runner.go:130] > # global_auth_file = ""
	I0806 07:58:21.359093   49751 command_runner.go:130] > # The image used to instantiate infra containers.
	I0806 07:58:21.359106   49751 command_runner.go:130] > # This option supports live configuration reload.
	I0806 07:58:21.359113   49751 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0806 07:58:21.359126   49751 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0806 07:58:21.359137   49751 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0806 07:58:21.359148   49751 command_runner.go:130] > # This option supports live configuration reload.
	I0806 07:58:21.359155   49751 command_runner.go:130] > # pause_image_auth_file = ""
	I0806 07:58:21.359167   49751 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0806 07:58:21.359180   49751 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0806 07:58:21.359197   49751 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0806 07:58:21.359209   49751 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0806 07:58:21.359220   49751 command_runner.go:130] > # pause_command = "/pause"
	I0806 07:58:21.359232   49751 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0806 07:58:21.359245   49751 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0806 07:58:21.359259   49751 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0806 07:58:21.359270   49751 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0806 07:58:21.359281   49751 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0806 07:58:21.359292   49751 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0806 07:58:21.359302   49751 command_runner.go:130] > # pinned_images = [
	I0806 07:58:21.359311   49751 command_runner.go:130] > # ]
	I0806 07:58:21.359324   49751 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0806 07:58:21.359337   49751 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0806 07:58:21.359350   49751 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0806 07:58:21.359363   49751 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0806 07:58:21.359375   49751 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0806 07:58:21.359385   49751 command_runner.go:130] > # signature_policy = ""
	I0806 07:58:21.359397   49751 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0806 07:58:21.359411   49751 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0806 07:58:21.359424   49751 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0806 07:58:21.359438   49751 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0806 07:58:21.359451   49751 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0806 07:58:21.359462   49751 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0806 07:58:21.359474   49751 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0806 07:58:21.359494   49751 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0806 07:58:21.359504   49751 command_runner.go:130] > # changing them here.
	I0806 07:58:21.359514   49751 command_runner.go:130] > # insecure_registries = [
	I0806 07:58:21.359522   49751 command_runner.go:130] > # ]
	I0806 07:58:21.359535   49751 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0806 07:58:21.359546   49751 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0806 07:58:21.359556   49751 command_runner.go:130] > # image_volumes = "mkdir"
	I0806 07:58:21.359568   49751 command_runner.go:130] > # Temporary directory to use for storing big files
	I0806 07:58:21.359578   49751 command_runner.go:130] > # big_files_temporary_dir = ""
	I0806 07:58:21.359591   49751 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0806 07:58:21.359600   49751 command_runner.go:130] > # CNI plugins.
	I0806 07:58:21.359609   49751 command_runner.go:130] > [crio.network]
	I0806 07:58:21.359621   49751 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0806 07:58:21.359632   49751 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0806 07:58:21.359641   49751 command_runner.go:130] > # cni_default_network = ""
	I0806 07:58:21.359653   49751 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0806 07:58:21.359662   49751 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0806 07:58:21.359676   49751 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0806 07:58:21.359684   49751 command_runner.go:130] > # plugin_dirs = [
	I0806 07:58:21.359692   49751 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0806 07:58:21.359699   49751 command_runner.go:130] > # ]
	I0806 07:58:21.359708   49751 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0806 07:58:21.359716   49751 command_runner.go:130] > [crio.metrics]
	I0806 07:58:21.359727   49751 command_runner.go:130] > # Globally enable or disable metrics support.
	I0806 07:58:21.359737   49751 command_runner.go:130] > enable_metrics = true
	I0806 07:58:21.359747   49751 command_runner.go:130] > # Specify enabled metrics collectors.
	I0806 07:58:21.359758   49751 command_runner.go:130] > # Per default all metrics are enabled.
	I0806 07:58:21.359771   49751 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0806 07:58:21.359784   49751 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0806 07:58:21.359797   49751 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0806 07:58:21.359807   49751 command_runner.go:130] > # metrics_collectors = [
	I0806 07:58:21.359817   49751 command_runner.go:130] > # 	"operations",
	I0806 07:58:21.359827   49751 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0806 07:58:21.359837   49751 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0806 07:58:21.359854   49751 command_runner.go:130] > # 	"operations_errors",
	I0806 07:58:21.359863   49751 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0806 07:58:21.359869   49751 command_runner.go:130] > # 	"image_pulls_by_name",
	I0806 07:58:21.359879   49751 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0806 07:58:21.359887   49751 command_runner.go:130] > # 	"image_pulls_failures",
	I0806 07:58:21.359895   49751 command_runner.go:130] > # 	"image_pulls_successes",
	I0806 07:58:21.359904   49751 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0806 07:58:21.359910   49751 command_runner.go:130] > # 	"image_layer_reuse",
	I0806 07:58:21.359919   49751 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0806 07:58:21.359928   49751 command_runner.go:130] > # 	"containers_oom_total",
	I0806 07:58:21.359937   49751 command_runner.go:130] > # 	"containers_oom",
	I0806 07:58:21.359945   49751 command_runner.go:130] > # 	"processes_defunct",
	I0806 07:58:21.359950   49751 command_runner.go:130] > # 	"operations_total",
	I0806 07:58:21.359959   49751 command_runner.go:130] > # 	"operations_latency_seconds",
	I0806 07:58:21.359968   49751 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0806 07:58:21.359976   49751 command_runner.go:130] > # 	"operations_errors_total",
	I0806 07:58:21.359985   49751 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0806 07:58:21.359994   49751 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0806 07:58:21.360004   49751 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0806 07:58:21.360014   49751 command_runner.go:130] > # 	"image_pulls_success_total",
	I0806 07:58:21.360022   49751 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0806 07:58:21.360031   49751 command_runner.go:130] > # 	"containers_oom_count_total",
	I0806 07:58:21.360039   49751 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0806 07:58:21.360048   49751 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0806 07:58:21.360063   49751 command_runner.go:130] > # ]
	I0806 07:58:21.360072   49751 command_runner.go:130] > # The port on which the metrics server will listen.
	I0806 07:58:21.360081   49751 command_runner.go:130] > # metrics_port = 9090
	I0806 07:58:21.360090   49751 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0806 07:58:21.360098   49751 command_runner.go:130] > # metrics_socket = ""
	I0806 07:58:21.360108   49751 command_runner.go:130] > # The certificate for the secure metrics server.
	I0806 07:58:21.360119   49751 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0806 07:58:21.360131   49751 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0806 07:58:21.360141   49751 command_runner.go:130] > # certificate on any modification event.
	I0806 07:58:21.360150   49751 command_runner.go:130] > # metrics_cert = ""
	I0806 07:58:21.360160   49751 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0806 07:58:21.360172   49751 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0806 07:58:21.360181   49751 command_runner.go:130] > # metrics_key = ""
	I0806 07:58:21.360192   49751 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0806 07:58:21.360202   49751 command_runner.go:130] > [crio.tracing]
	I0806 07:58:21.360211   49751 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0806 07:58:21.360219   49751 command_runner.go:130] > # enable_tracing = false
	I0806 07:58:21.360229   49751 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0806 07:58:21.360239   49751 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0806 07:58:21.360251   49751 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0806 07:58:21.360260   49751 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0806 07:58:21.360269   49751 command_runner.go:130] > # CRI-O NRI configuration.
	I0806 07:58:21.360277   49751 command_runner.go:130] > [crio.nri]
	I0806 07:58:21.360288   49751 command_runner.go:130] > # Globally enable or disable NRI.
	I0806 07:58:21.360294   49751 command_runner.go:130] > # enable_nri = false
	I0806 07:58:21.360304   49751 command_runner.go:130] > # NRI socket to listen on.
	I0806 07:58:21.360314   49751 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0806 07:58:21.360324   49751 command_runner.go:130] > # NRI plugin directory to use.
	I0806 07:58:21.360334   49751 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0806 07:58:21.360341   49751 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0806 07:58:21.360349   49751 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0806 07:58:21.360358   49751 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0806 07:58:21.360378   49751 command_runner.go:130] > # nri_disable_connections = false
	I0806 07:58:21.360389   49751 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0806 07:58:21.360398   49751 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0806 07:58:21.360405   49751 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0806 07:58:21.360413   49751 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0806 07:58:21.360424   49751 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0806 07:58:21.360432   49751 command_runner.go:130] > [crio.stats]
	I0806 07:58:21.360441   49751 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0806 07:58:21.360452   49751 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0806 07:58:21.360462   49751 command_runner.go:130] > # stats_collection_period = 0
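	The rendered config above leaves most values at their commented defaults but does set enable_metrics = true, so CRI-O exposes a Prometheus endpoint (metrics_port defaults to 9090 per the commented line). A minimal Go sketch for checking from the node that the exporter answers; the address and port are assumptions taken from those defaults, not something the test asserts:
	package main
	
	import (
		"fmt"
		"io"
		"net/http"
		"strings"
	)
	
	// Minimal sketch: scrape the CRI-O Prometheus endpoint enabled above and
	// count the CRI-O metric lines. Assumes metrics_port was left at its
	// commented default of 9090 and that this runs on the node itself.
	func main() {
		resp, err := http.Get("http://127.0.0.1:9090/metrics")
		if err != nil {
			fmt.Println("metrics endpoint not reachable:", err)
			return
		}
		defer resp.Body.Close()
	
		body, err := io.ReadAll(resp.Body)
		if err != nil {
			fmt.Println("read failed:", err)
			return
		}
		count := 0
		for _, line := range strings.Split(string(body), "\n") {
			if strings.HasPrefix(line, "crio_") || strings.HasPrefix(line, "container_runtime_") {
				count++
			}
		}
		fmt.Printf("saw %d crio metric lines\n", count)
	}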
	I0806 07:58:21.360998   49751 command_runner.go:130] ! time="2024-08-06 07:58:21.322762724Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0806 07:58:21.361017   49751 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0806 07:58:21.361130   49751 cni.go:84] Creating CNI manager for ""
	I0806 07:58:21.361139   49751 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0806 07:58:21.361148   49751 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0806 07:58:21.361168   49751 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.186 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-551471 NodeName:multinode-551471 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.186"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.186 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0806 07:58:21.361323   49751 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.186
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-551471"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.186
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.186"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
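	The block above is the multi-document kubeadm config that gets written to /var/tmp/minikube/kubeadm.yaml.new (see the scp step earlier in the log). As a quick sanity check that all four documents made it onto the node intact, a minimal Go sketch (not part of minikube; running it on the node is assumed):
	package main
	
	import (
		"fmt"
		"os"
		"strings"
	)
	
	// Minimal sketch: split the generated kubeadm config into its YAML
	// documents and print each document's kind. The expected output is
	// InitConfiguration, ClusterConfiguration, KubeletConfiguration and
	// KubeProxyConfiguration, matching the config shown above.
	func main() {
		data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
		if err != nil {
			fmt.Println("read failed:", err)
			return
		}
		for _, doc := range strings.Split(string(data), "\n---\n") {
			for _, line := range strings.Split(doc, "\n") {
				if strings.HasPrefix(strings.TrimSpace(line), "kind:") {
					fmt.Println(strings.TrimSpace(line))
				}
			}
		}
	}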
	
	I0806 07:58:21.361396   49751 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0806 07:58:21.371367   49751 command_runner.go:130] > kubeadm
	I0806 07:58:21.371388   49751 command_runner.go:130] > kubectl
	I0806 07:58:21.371395   49751 command_runner.go:130] > kubelet
	I0806 07:58:21.371555   49751 binaries.go:44] Found k8s binaries, skipping transfer
	I0806 07:58:21.371621   49751 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0806 07:58:21.381576   49751 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0806 07:58:21.399164   49751 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0806 07:58:21.416552   49751 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I0806 07:58:21.433748   49751 ssh_runner.go:195] Run: grep 192.168.39.186	control-plane.minikube.internal$ /etc/hosts
	I0806 07:58:21.437757   49751 command_runner.go:130] > 192.168.39.186	control-plane.minikube.internal
	I0806 07:58:21.437829   49751 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 07:58:21.572421   49751 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0806 07:58:21.587702   49751 certs.go:68] Setting up /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/multinode-551471 for IP: 192.168.39.186
	I0806 07:58:21.587727   49751 certs.go:194] generating shared ca certs ...
	I0806 07:58:21.587752   49751 certs.go:226] acquiring lock for ca certs: {Name:mkf5c5aed91f4108054d9f2c742cad66c86afbed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 07:58:21.587917   49751 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19370-13057/.minikube/ca.key
	I0806 07:58:21.587965   49751 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.key
	I0806 07:58:21.587975   49751 certs.go:256] generating profile certs ...
	I0806 07:58:21.588049   49751 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/multinode-551471/client.key
	I0806 07:58:21.588111   49751 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/multinode-551471/apiserver.key.7045bc00
	I0806 07:58:21.588148   49751 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/multinode-551471/proxy-client.key
	I0806 07:58:21.588160   49751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0806 07:58:21.588171   49751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0806 07:58:21.588186   49751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0806 07:58:21.588195   49751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0806 07:58:21.588204   49751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/multinode-551471/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0806 07:58:21.588227   49751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/multinode-551471/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0806 07:58:21.588243   49751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/multinode-551471/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0806 07:58:21.588255   49751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/multinode-551471/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0806 07:58:21.588307   49751 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/20246.pem (1338 bytes)
	W0806 07:58:21.588336   49751 certs.go:480] ignoring /home/jenkins/minikube-integration/19370-13057/.minikube/certs/20246_empty.pem, impossibly tiny 0 bytes
	I0806 07:58:21.588346   49751 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca-key.pem (1675 bytes)
	I0806 07:58:21.588386   49751 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem (1078 bytes)
	I0806 07:58:21.588407   49751 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/cert.pem (1123 bytes)
	I0806 07:58:21.588431   49751 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/key.pem (1675 bytes)
	I0806 07:58:21.588467   49751 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem (1708 bytes)
	I0806 07:58:21.588493   49751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0806 07:58:21.588507   49751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/20246.pem -> /usr/share/ca-certificates/20246.pem
	I0806 07:58:21.588518   49751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem -> /usr/share/ca-certificates/202462.pem
	I0806 07:58:21.589129   49751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0806 07:58:21.616200   49751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0806 07:58:21.642041   49751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0806 07:58:21.666247   49751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0806 07:58:21.690867   49751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/multinode-551471/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0806 07:58:21.715060   49751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/multinode-551471/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0806 07:58:21.738980   49751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/multinode-551471/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0806 07:58:21.762708   49751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/multinode-551471/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0806 07:58:21.788411   49751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0806 07:58:21.813739   49751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/certs/20246.pem --> /usr/share/ca-certificates/20246.pem (1338 bytes)
	I0806 07:58:21.838768   49751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem --> /usr/share/ca-certificates/202462.pem (1708 bytes)
	I0806 07:58:21.862900   49751 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0806 07:58:21.879557   49751 ssh_runner.go:195] Run: openssl version
	I0806 07:58:21.885712   49751 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0806 07:58:21.885833   49751 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0806 07:58:21.896253   49751 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0806 07:58:21.900892   49751 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Aug  6 07:07 /usr/share/ca-certificates/minikubeCA.pem
	I0806 07:58:21.900943   49751 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  6 07:07 /usr/share/ca-certificates/minikubeCA.pem
	I0806 07:58:21.900981   49751 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0806 07:58:21.906656   49751 command_runner.go:130] > b5213941
	I0806 07:58:21.906723   49751 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0806 07:58:21.916072   49751 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20246.pem && ln -fs /usr/share/ca-certificates/20246.pem /etc/ssl/certs/20246.pem"
	I0806 07:58:21.926913   49751 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20246.pem
	I0806 07:58:21.931212   49751 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Aug  6 07:17 /usr/share/ca-certificates/20246.pem
	I0806 07:58:21.931534   49751 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  6 07:17 /usr/share/ca-certificates/20246.pem
	I0806 07:58:21.931610   49751 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20246.pem
	I0806 07:58:21.937229   49751 command_runner.go:130] > 51391683
	I0806 07:58:21.937354   49751 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20246.pem /etc/ssl/certs/51391683.0"
	I0806 07:58:21.947273   49751 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202462.pem && ln -fs /usr/share/ca-certificates/202462.pem /etc/ssl/certs/202462.pem"
	I0806 07:58:21.958269   49751 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202462.pem
	I0806 07:58:21.962934   49751 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Aug  6 07:17 /usr/share/ca-certificates/202462.pem
	I0806 07:58:21.962962   49751 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  6 07:17 /usr/share/ca-certificates/202462.pem
	I0806 07:58:21.962994   49751 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202462.pem
	I0806 07:58:21.968654   49751 command_runner.go:130] > 3ec20f2e
	I0806 07:58:21.968716   49751 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202462.pem /etc/ssl/certs/3ec20f2e.0"
	I0806 07:58:21.978472   49751 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0806 07:58:21.982815   49751 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0806 07:58:21.982843   49751 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0806 07:58:21.982852   49751 command_runner.go:130] > Device: 253,1	Inode: 1056811     Links: 1
	I0806 07:58:21.982860   49751 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0806 07:58:21.982869   49751 command_runner.go:130] > Access: 2024-08-06 07:51:21.084894930 +0000
	I0806 07:58:21.982876   49751 command_runner.go:130] > Modify: 2024-08-06 07:51:21.084894930 +0000
	I0806 07:58:21.982883   49751 command_runner.go:130] > Change: 2024-08-06 07:51:21.084894930 +0000
	I0806 07:58:21.982894   49751 command_runner.go:130] >  Birth: 2024-08-06 07:51:21.084894930 +0000
	I0806 07:58:21.982938   49751 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0806 07:58:21.988509   49751 command_runner.go:130] > Certificate will not expire
	I0806 07:58:21.988748   49751 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0806 07:58:21.994294   49751 command_runner.go:130] > Certificate will not expire
	I0806 07:58:21.994351   49751 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0806 07:58:21.999843   49751 command_runner.go:130] > Certificate will not expire
	I0806 07:58:22.000049   49751 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0806 07:58:22.005882   49751 command_runner.go:130] > Certificate will not expire
	I0806 07:58:22.005967   49751 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0806 07:58:22.011728   49751 command_runner.go:130] > Certificate will not expire
	I0806 07:58:22.011780   49751 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0806 07:58:22.017150   49751 command_runner.go:130] > Certificate will not expire
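	Each openssl x509 -checkend 86400 call above asks whether the certificate expires within the next 24 hours. The same check expressed as a minimal Go sketch using only the standard library (the path is one of the certs checked above; running it on the node is assumed):
	package main
	
	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)
	
	// Minimal sketch of what `openssl x509 -checkend 86400` verifies: whether
	// the certificate's NotAfter falls within the next 24 hours.
	func main() {
		raw, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
		if err != nil {
			fmt.Println("read failed:", err)
			return
		}
		block, _ := pem.Decode(raw)
		if block == nil {
			fmt.Println("no PEM block found")
			return
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			fmt.Println("parse failed:", err)
			return
		}
		if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
			fmt.Println("Certificate will expire")
		} else {
			fmt.Println("Certificate will not expire")
		}
	}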
	I0806 07:58:22.017391   49751 kubeadm.go:392] StartCluster: {Name:multinode-551471 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
3 ClusterName:multinode-551471 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.186 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.134 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.189 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disable
Optimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 07:58:22.017537   49751 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0806 07:58:22.017579   49751 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0806 07:58:22.055400   49751 command_runner.go:130] > 231998fa150bd615568fd89931d682cd6eb3900552e3e0c414c2ee3a6fb749ad
	I0806 07:58:22.055421   49751 command_runner.go:130] > f1f1d9862a98539cf5a1ed1d209cef2b0e4e7061910368640a15be822397ebb9
	I0806 07:58:22.055428   49751 command_runner.go:130] > d7923487019be61d3f46ec909607d2c85a99a8b64a5c00f89ef7bf1200b38cd4
	I0806 07:58:22.055433   49751 command_runner.go:130] > 74e91b8e25d98b20fd885a292fff4c3200fc87c06bdea6de195993b8d96c6480
	I0806 07:58:22.055439   49751 command_runner.go:130] > 4b56e43215ad57bc3f2aed6d531db767cca6cf0f89b2f01bb78c33d7c3d76692
	I0806 07:58:22.055444   49751 command_runner.go:130] > beb56425a40e2135964c6eb3931935babc4cf81b32a1e8c70735f1995effbb77
	I0806 07:58:22.055449   49751 command_runner.go:130] > 474e29aac6711f79764feef65f30a2d86dcc7d05632b305853bee3f8a4aaecb0
	I0806 07:58:22.055456   49751 command_runner.go:130] > 1cf9151b1941b0e938e94a6d7247c74e3d0e1f26fddaf0350cc2cb6a211618a7
	I0806 07:58:22.055473   49751 cri.go:89] found id: "231998fa150bd615568fd89931d682cd6eb3900552e3e0c414c2ee3a6fb749ad"
	I0806 07:58:22.055480   49751 cri.go:89] found id: "f1f1d9862a98539cf5a1ed1d209cef2b0e4e7061910368640a15be822397ebb9"
	I0806 07:58:22.055485   49751 cri.go:89] found id: "d7923487019be61d3f46ec909607d2c85a99a8b64a5c00f89ef7bf1200b38cd4"
	I0806 07:58:22.055490   49751 cri.go:89] found id: "74e91b8e25d98b20fd885a292fff4c3200fc87c06bdea6de195993b8d96c6480"
	I0806 07:58:22.055494   49751 cri.go:89] found id: "4b56e43215ad57bc3f2aed6d531db767cca6cf0f89b2f01bb78c33d7c3d76692"
	I0806 07:58:22.055505   49751 cri.go:89] found id: "beb56425a40e2135964c6eb3931935babc4cf81b32a1e8c70735f1995effbb77"
	I0806 07:58:22.055509   49751 cri.go:89] found id: "474e29aac6711f79764feef65f30a2d86dcc7d05632b305853bee3f8a4aaecb0"
	I0806 07:58:22.055513   49751 cri.go:89] found id: "1cf9151b1941b0e938e94a6d7247c74e3d0e1f26fddaf0350cc2cb6a211618a7"
	I0806 07:58:22.055517   49751 cri.go:89] found id: ""
	I0806 07:58:22.055556   49751 ssh_runner.go:195] Run: sudo runc list -f json
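	The container IDs listed above come from the crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system call a few lines earlier. A minimal Go sketch that reproduces that listing step on the node (assumes crictl is on PATH and the command is run with root privileges):
	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	// Minimal sketch: list all kube-system container IDs with the same crictl
	// flags the log shows, then print how many were found.
	func main() {
		out, err := exec.Command("crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			fmt.Println("crictl failed:", err)
			return
		}
		ids := strings.Fields(strings.TrimSpace(string(out)))
		fmt.Printf("found %d kube-system containers\n", len(ids))
		for _, id := range ids {
			fmt.Println(id)
		}
	}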
	
	
	==> CRI-O <==
	Aug 06 08:00:05 multinode-551471 crio[2891]: time="2024-08-06 08:00:05.972165857Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722931205972141586,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143053,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=70612830-1255-484a-8d6a-2fa90e51e778 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 06 08:00:05 multinode-551471 crio[2891]: time="2024-08-06 08:00:05.972934785Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f79437a7-b3c9-418d-8746-905dd551502c name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 08:00:05 multinode-551471 crio[2891]: time="2024-08-06 08:00:05.973014800Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f79437a7-b3c9-418d-8746-905dd551502c name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 08:00:05 multinode-551471 crio[2891]: time="2024-08-06 08:00:05.973431100Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a638ddbb5051a7433b97f22d82ce6388f43365138206ecd8cb6e55eba6cf3e12,PodSandboxId:004df81af52bb64ccc8579bfc09bee86d7d7ba50320b5350755d68a984049c42,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722931142979511780,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-rkhtz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3112feb4-bb59-4877-8fa8-6c47dcb17168,},Annotations:map[string]string{io.kubernetes.container.hash: 5f745e54,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9e8ef0973a44f9939c9abcc18a63a3f102a88e15aaabd57f22a28bab85519b1,PodSandboxId:84abefc52e5b3c26e1d8d05b33bf643540c37c828cd07b41a04926373472f340,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_RUNNING,CreatedAt:1722931109497750212,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9dc7g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51aee9c7-0107-4ffb-a335-2bb3b56f0882,},Annotations:map[string]string{io.kubernetes.container.hash: 6562290,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kuber
netes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b90f5bc008f90cfa2fbf2aa3da853823559aa023f76aa906f5246e026bb0f0a0,PodSandboxId:1d94fa01ad36cc4f8b1a37a7f295b0b795c35ce7beae310d48d7e594a2e20216,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722931109425680485,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-k2kl8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 434b5086-628b-4c6d-98bf-38e4b2758290,},Annotations:map[string]string{io.kubernetes.container.hash: d8bac289,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\"
:\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7df59d82af251b33479683110323543bab2abee160632a2e863732021d0972e,PodSandboxId:01b7c91f4c977cc0a5bc30555d5e5f88f380f57f178b3d6b8e3d7021ace17ef1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722931109384014548,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b41a9dc-8b36-4c97-ba13-e8d86bbed5b1,},Ann
otations:map[string]string{io.kubernetes.container.hash: fec17a41,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7796edadd615e4a6b72780d99c72833e29376e47f21ace8d1b9846c888ce57ac,PodSandboxId:1e7e6eea84a536a4142ed32fd0956fe4e4ba6cb788ffe82ebffe5d5433d011f3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722931109344692941,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8t88h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8497743e-1315-4ab3-9097-9170fb8a373b,},Annotations:map[string]string{io.kub
ernetes.container.hash: aea06c0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3988dbeb8b095b0f961141d409ed6af31a49a47f792dce6092c49c0f2e02e14a,PodSandboxId:80e8af9bc0b31252bf2950b2a0851edfa325624f738d9ae95a51c9dba426cf21,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722931104440668799,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-551471,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7eaa700938e917acefeeb3216b02e55a,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 95688fab,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f179bdc8d4add339e4a2b7c1af524fafe3d8f5fb06c47686220ddf2a8b0ce165,PodSandboxId:bbabc537c895ca5a5e1a204fc3f26c7dff2bbb1a3b73e64f8252fa7971df633d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722931104457689984,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-551471,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce0f55a14e9bac0289c2f4eff6d6d50d,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c
8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91ce52b52a5faa28ab026d7d550e9c3edc2df4766c37591a5aa778d3c8b4ffee,PodSandboxId:9409cd35efe496f7cc125d775b3ed284ef00f3e7308d43bd8f582a3eb8810168,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722931104428954111,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-551471,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e39e809d728843cc0e09f10061b40994,},Annotations:map[string]string{io.kubernetes.container.hash: e1eecb92,io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91679dd7275cf0c0d0c7264208fe3ea4af2c52d892de4f2536fd9d3a5f8a7ef9,PodSandboxId:a20b86b6ff58c204dc2eb847559803ae9a47943c69abbd6c249bc0acae212687,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722931104345651342,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-551471,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71095cd596aca73ba17d34793adffd9c,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.res
tartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93cf5cb7a8319c91c2ef8585a3093e4fc951acd31cb84587ecbf9da395640e8b,PodSandboxId:5321b01ab24b006147f307b73b923098acb5fdde2cdeed6d364facca7b9851d8,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722930773814765536,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-rkhtz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3112feb4-bb59-4877-8fa8-6c47dcb17168,},Annotations:map[string]string{io.kubernetes.container.hash: 5f745e54,io.kubernetes.container.resta
rtCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:231998fa150bd615568fd89931d682cd6eb3900552e3e0c414c2ee3a6fb749ad,PodSandboxId:f2f8123ae9c32ea663b58990f1516f0b3653886bc0b8d1c82fda9dffb77f71cb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722930720473670689,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-k2kl8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 434b5086-628b-4c6d-98bf-38e4b2758290,},Annotations:map[string]string{io.kubernetes.container.hash: d8bac289,io.kubernetes.container.ports: [{\"name\":\"dns\",\"container
Port\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1f1d9862a98539cf5a1ed1d209cef2b0e4e7061910368640a15be822397ebb9,PodSandboxId:09178607b7fcafb7b5b2e4a1234f64d7351badd06de29ae18d06ed5c4c1405d1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722930720418403516,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 3b41a9dc-8b36-4c97-ba13-e8d86bbed5b1,},Annotations:map[string]string{io.kubernetes.container.hash: fec17a41,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7923487019be61d3f46ec909607d2c85a99a8b64a5c00f89ef7bf1200b38cd4,PodSandboxId:dd0f9c7acfd8f65254417dd6e5b967b4595356cfd57f453a91158bfff54e9166,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_EXITED,CreatedAt:1722930708296776676,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9dc7g,io.kubernetes.pod.nam
espace: kube-system,io.kubernetes.pod.uid: 51aee9c7-0107-4ffb-a335-2bb3b56f0882,},Annotations:map[string]string{io.kubernetes.container.hash: 6562290,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74e91b8e25d98b20fd885a292fff4c3200fc87c06bdea6de195993b8d96c6480,PodSandboxId:8fb8a4a95cedd5849da9cf5abf6d16e481584589b23c7dc1d0606eaea3566160,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722930705813526750,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8t88h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 8497743e-1315-4ab3-9097-9170fb8a373b,},Annotations:map[string]string{io.kubernetes.container.hash: aea06c0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b56e43215ad57bc3f2aed6d531db767cca6cf0f89b2f01bb78c33d7c3d76692,PodSandboxId:cfbd4fcb42df0f43658221b8e736d4a5e449003e8675e6adf7cd914e51514deb,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722930684828005169,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-551471,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e39e809d728843cc0e09f10061b40994,}
,Annotations:map[string]string{io.kubernetes.container.hash: e1eecb92,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:beb56425a40e2135964c6eb3931935babc4cf81b32a1e8c70735f1995effbb77,PodSandboxId:807a9beaa4c51df4f993e70e904c5ce01a26723825bd9a04ef78eb566ab34a88,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722930684808536266,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-551471,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71095cd596aca73ba17d34
793adffd9c,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1cf9151b1941b0e938e94a6d7247c74e3d0e1f26fddaf0350cc2cb6a211618a7,PodSandboxId:a643154dcadf4eadd19864c10b75cd0258590418620383448484ba09d11a46eb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722930684746183306,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-551471,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce0f55a14e9bac0289c2f4eff6d6d50d,},An
notations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:474e29aac6711f79764feef65f30a2d86dcc7d05632b305853bee3f8a4aaecb0,PodSandboxId:bdb6c31cf15795d1a277d665e76704ee8ee04bb0ff0122fd5bec8298726be9be,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722930684752835026,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-551471,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7eaa700938e917acefeeb3216b02e55a,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 95688fab,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f79437a7-b3c9-418d-8746-905dd551502c name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 08:00:06 multinode-551471 crio[2891]: time="2024-08-06 08:00:06.016809910Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8cdf45a2-dd43-4473-95c0-f14ae20cc1e4 name=/runtime.v1.RuntimeService/Version
	Aug 06 08:00:06 multinode-551471 crio[2891]: time="2024-08-06 08:00:06.016912314Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8cdf45a2-dd43-4473-95c0-f14ae20cc1e4 name=/runtime.v1.RuntimeService/Version
	Aug 06 08:00:06 multinode-551471 crio[2891]: time="2024-08-06 08:00:06.018104786Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b4b8b7d6-7d24-43ca-870d-d586b5d57af4 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 06 08:00:06 multinode-551471 crio[2891]: time="2024-08-06 08:00:06.018605837Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722931206018581469,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143053,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b4b8b7d6-7d24-43ca-870d-d586b5d57af4 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 06 08:00:06 multinode-551471 crio[2891]: time="2024-08-06 08:00:06.019194459Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3657a2b4-a16d-4973-b91a-caee57a8f8d8 name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 08:00:06 multinode-551471 crio[2891]: time="2024-08-06 08:00:06.019256854Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3657a2b4-a16d-4973-b91a-caee57a8f8d8 name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 08:00:06 multinode-551471 crio[2891]: time="2024-08-06 08:00:06.020040067Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a638ddbb5051a7433b97f22d82ce6388f43365138206ecd8cb6e55eba6cf3e12,PodSandboxId:004df81af52bb64ccc8579bfc09bee86d7d7ba50320b5350755d68a984049c42,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722931142979511780,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-rkhtz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3112feb4-bb59-4877-8fa8-6c47dcb17168,},Annotations:map[string]string{io.kubernetes.container.hash: 5f745e54,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9e8ef0973a44f9939c9abcc18a63a3f102a88e15aaabd57f22a28bab85519b1,PodSandboxId:84abefc52e5b3c26e1d8d05b33bf643540c37c828cd07b41a04926373472f340,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_RUNNING,CreatedAt:1722931109497750212,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9dc7g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51aee9c7-0107-4ffb-a335-2bb3b56f0882,},Annotations:map[string]string{io.kubernetes.container.hash: 6562290,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kuber
netes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b90f5bc008f90cfa2fbf2aa3da853823559aa023f76aa906f5246e026bb0f0a0,PodSandboxId:1d94fa01ad36cc4f8b1a37a7f295b0b795c35ce7beae310d48d7e594a2e20216,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722931109425680485,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-k2kl8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 434b5086-628b-4c6d-98bf-38e4b2758290,},Annotations:map[string]string{io.kubernetes.container.hash: d8bac289,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\"
:\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7df59d82af251b33479683110323543bab2abee160632a2e863732021d0972e,PodSandboxId:01b7c91f4c977cc0a5bc30555d5e5f88f380f57f178b3d6b8e3d7021ace17ef1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722931109384014548,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b41a9dc-8b36-4c97-ba13-e8d86bbed5b1,},Ann
otations:map[string]string{io.kubernetes.container.hash: fec17a41,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7796edadd615e4a6b72780d99c72833e29376e47f21ace8d1b9846c888ce57ac,PodSandboxId:1e7e6eea84a536a4142ed32fd0956fe4e4ba6cb788ffe82ebffe5d5433d011f3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722931109344692941,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8t88h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8497743e-1315-4ab3-9097-9170fb8a373b,},Annotations:map[string]string{io.kub
ernetes.container.hash: aea06c0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3988dbeb8b095b0f961141d409ed6af31a49a47f792dce6092c49c0f2e02e14a,PodSandboxId:80e8af9bc0b31252bf2950b2a0851edfa325624f738d9ae95a51c9dba426cf21,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722931104440668799,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-551471,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7eaa700938e917acefeeb3216b02e55a,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 95688fab,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f179bdc8d4add339e4a2b7c1af524fafe3d8f5fb06c47686220ddf2a8b0ce165,PodSandboxId:bbabc537c895ca5a5e1a204fc3f26c7dff2bbb1a3b73e64f8252fa7971df633d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722931104457689984,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-551471,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce0f55a14e9bac0289c2f4eff6d6d50d,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c
8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91ce52b52a5faa28ab026d7d550e9c3edc2df4766c37591a5aa778d3c8b4ffee,PodSandboxId:9409cd35efe496f7cc125d775b3ed284ef00f3e7308d43bd8f582a3eb8810168,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722931104428954111,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-551471,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e39e809d728843cc0e09f10061b40994,},Annotations:map[string]string{io.kubernetes.container.hash: e1eecb92,io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91679dd7275cf0c0d0c7264208fe3ea4af2c52d892de4f2536fd9d3a5f8a7ef9,PodSandboxId:a20b86b6ff58c204dc2eb847559803ae9a47943c69abbd6c249bc0acae212687,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722931104345651342,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-551471,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71095cd596aca73ba17d34793adffd9c,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.res
tartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93cf5cb7a8319c91c2ef8585a3093e4fc951acd31cb84587ecbf9da395640e8b,PodSandboxId:5321b01ab24b006147f307b73b923098acb5fdde2cdeed6d364facca7b9851d8,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722930773814765536,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-rkhtz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3112feb4-bb59-4877-8fa8-6c47dcb17168,},Annotations:map[string]string{io.kubernetes.container.hash: 5f745e54,io.kubernetes.container.resta
rtCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:231998fa150bd615568fd89931d682cd6eb3900552e3e0c414c2ee3a6fb749ad,PodSandboxId:f2f8123ae9c32ea663b58990f1516f0b3653886bc0b8d1c82fda9dffb77f71cb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722930720473670689,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-k2kl8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 434b5086-628b-4c6d-98bf-38e4b2758290,},Annotations:map[string]string{io.kubernetes.container.hash: d8bac289,io.kubernetes.container.ports: [{\"name\":\"dns\",\"container
Port\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1f1d9862a98539cf5a1ed1d209cef2b0e4e7061910368640a15be822397ebb9,PodSandboxId:09178607b7fcafb7b5b2e4a1234f64d7351badd06de29ae18d06ed5c4c1405d1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722930720418403516,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 3b41a9dc-8b36-4c97-ba13-e8d86bbed5b1,},Annotations:map[string]string{io.kubernetes.container.hash: fec17a41,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7923487019be61d3f46ec909607d2c85a99a8b64a5c00f89ef7bf1200b38cd4,PodSandboxId:dd0f9c7acfd8f65254417dd6e5b967b4595356cfd57f453a91158bfff54e9166,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_EXITED,CreatedAt:1722930708296776676,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9dc7g,io.kubernetes.pod.nam
espace: kube-system,io.kubernetes.pod.uid: 51aee9c7-0107-4ffb-a335-2bb3b56f0882,},Annotations:map[string]string{io.kubernetes.container.hash: 6562290,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74e91b8e25d98b20fd885a292fff4c3200fc87c06bdea6de195993b8d96c6480,PodSandboxId:8fb8a4a95cedd5849da9cf5abf6d16e481584589b23c7dc1d0606eaea3566160,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722930705813526750,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8t88h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 8497743e-1315-4ab3-9097-9170fb8a373b,},Annotations:map[string]string{io.kubernetes.container.hash: aea06c0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b56e43215ad57bc3f2aed6d531db767cca6cf0f89b2f01bb78c33d7c3d76692,PodSandboxId:cfbd4fcb42df0f43658221b8e736d4a5e449003e8675e6adf7cd914e51514deb,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722930684828005169,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-551471,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e39e809d728843cc0e09f10061b40994,}
,Annotations:map[string]string{io.kubernetes.container.hash: e1eecb92,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:beb56425a40e2135964c6eb3931935babc4cf81b32a1e8c70735f1995effbb77,PodSandboxId:807a9beaa4c51df4f993e70e904c5ce01a26723825bd9a04ef78eb566ab34a88,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722930684808536266,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-551471,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71095cd596aca73ba17d34
793adffd9c,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1cf9151b1941b0e938e94a6d7247c74e3d0e1f26fddaf0350cc2cb6a211618a7,PodSandboxId:a643154dcadf4eadd19864c10b75cd0258590418620383448484ba09d11a46eb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722930684746183306,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-551471,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce0f55a14e9bac0289c2f4eff6d6d50d,},An
notations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:474e29aac6711f79764feef65f30a2d86dcc7d05632b305853bee3f8a4aaecb0,PodSandboxId:bdb6c31cf15795d1a277d665e76704ee8ee04bb0ff0122fd5bec8298726be9be,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722930684752835026,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-551471,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7eaa700938e917acefeeb3216b02e55a,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 95688fab,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3657a2b4-a16d-4973-b91a-caee57a8f8d8 name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 08:00:06 multinode-551471 crio[2891]: time="2024-08-06 08:00:06.071383717Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=33fe2995-1392-4d38-83cf-e101590555e9 name=/runtime.v1.RuntimeService/Version
	Aug 06 08:00:06 multinode-551471 crio[2891]: time="2024-08-06 08:00:06.071463164Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=33fe2995-1392-4d38-83cf-e101590555e9 name=/runtime.v1.RuntimeService/Version
	Aug 06 08:00:06 multinode-551471 crio[2891]: time="2024-08-06 08:00:06.072622259Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f051df57-0c04-4710-92cb-5013882363f6 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 06 08:00:06 multinode-551471 crio[2891]: time="2024-08-06 08:00:06.073020901Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722931206072999984,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143053,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f051df57-0c04-4710-92cb-5013882363f6 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 06 08:00:06 multinode-551471 crio[2891]: time="2024-08-06 08:00:06.073594672Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b46cdd65-3d09-439f-aea6-4144dab51f73 name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 08:00:06 multinode-551471 crio[2891]: time="2024-08-06 08:00:06.073669394Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b46cdd65-3d09-439f-aea6-4144dab51f73 name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 08:00:06 multinode-551471 crio[2891]: time="2024-08-06 08:00:06.074083934Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a638ddbb5051a7433b97f22d82ce6388f43365138206ecd8cb6e55eba6cf3e12,PodSandboxId:004df81af52bb64ccc8579bfc09bee86d7d7ba50320b5350755d68a984049c42,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722931142979511780,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-rkhtz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3112feb4-bb59-4877-8fa8-6c47dcb17168,},Annotations:map[string]string{io.kubernetes.container.hash: 5f745e54,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9e8ef0973a44f9939c9abcc18a63a3f102a88e15aaabd57f22a28bab85519b1,PodSandboxId:84abefc52e5b3c26e1d8d05b33bf643540c37c828cd07b41a04926373472f340,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_RUNNING,CreatedAt:1722931109497750212,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9dc7g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51aee9c7-0107-4ffb-a335-2bb3b56f0882,},Annotations:map[string]string{io.kubernetes.container.hash: 6562290,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kuber
netes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b90f5bc008f90cfa2fbf2aa3da853823559aa023f76aa906f5246e026bb0f0a0,PodSandboxId:1d94fa01ad36cc4f8b1a37a7f295b0b795c35ce7beae310d48d7e594a2e20216,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722931109425680485,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-k2kl8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 434b5086-628b-4c6d-98bf-38e4b2758290,},Annotations:map[string]string{io.kubernetes.container.hash: d8bac289,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\"
:\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7df59d82af251b33479683110323543bab2abee160632a2e863732021d0972e,PodSandboxId:01b7c91f4c977cc0a5bc30555d5e5f88f380f57f178b3d6b8e3d7021ace17ef1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722931109384014548,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b41a9dc-8b36-4c97-ba13-e8d86bbed5b1,},Ann
otations:map[string]string{io.kubernetes.container.hash: fec17a41,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7796edadd615e4a6b72780d99c72833e29376e47f21ace8d1b9846c888ce57ac,PodSandboxId:1e7e6eea84a536a4142ed32fd0956fe4e4ba6cb788ffe82ebffe5d5433d011f3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722931109344692941,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8t88h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8497743e-1315-4ab3-9097-9170fb8a373b,},Annotations:map[string]string{io.kub
ernetes.container.hash: aea06c0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3988dbeb8b095b0f961141d409ed6af31a49a47f792dce6092c49c0f2e02e14a,PodSandboxId:80e8af9bc0b31252bf2950b2a0851edfa325624f738d9ae95a51c9dba426cf21,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722931104440668799,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-551471,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7eaa700938e917acefeeb3216b02e55a,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 95688fab,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f179bdc8d4add339e4a2b7c1af524fafe3d8f5fb06c47686220ddf2a8b0ce165,PodSandboxId:bbabc537c895ca5a5e1a204fc3f26c7dff2bbb1a3b73e64f8252fa7971df633d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722931104457689984,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-551471,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce0f55a14e9bac0289c2f4eff6d6d50d,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c
8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91ce52b52a5faa28ab026d7d550e9c3edc2df4766c37591a5aa778d3c8b4ffee,PodSandboxId:9409cd35efe496f7cc125d775b3ed284ef00f3e7308d43bd8f582a3eb8810168,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722931104428954111,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-551471,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e39e809d728843cc0e09f10061b40994,},Annotations:map[string]string{io.kubernetes.container.hash: e1eecb92,io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91679dd7275cf0c0d0c7264208fe3ea4af2c52d892de4f2536fd9d3a5f8a7ef9,PodSandboxId:a20b86b6ff58c204dc2eb847559803ae9a47943c69abbd6c249bc0acae212687,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722931104345651342,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-551471,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71095cd596aca73ba17d34793adffd9c,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.res
tartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93cf5cb7a8319c91c2ef8585a3093e4fc951acd31cb84587ecbf9da395640e8b,PodSandboxId:5321b01ab24b006147f307b73b923098acb5fdde2cdeed6d364facca7b9851d8,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722930773814765536,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-rkhtz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3112feb4-bb59-4877-8fa8-6c47dcb17168,},Annotations:map[string]string{io.kubernetes.container.hash: 5f745e54,io.kubernetes.container.resta
rtCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:231998fa150bd615568fd89931d682cd6eb3900552e3e0c414c2ee3a6fb749ad,PodSandboxId:f2f8123ae9c32ea663b58990f1516f0b3653886bc0b8d1c82fda9dffb77f71cb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722930720473670689,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-k2kl8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 434b5086-628b-4c6d-98bf-38e4b2758290,},Annotations:map[string]string{io.kubernetes.container.hash: d8bac289,io.kubernetes.container.ports: [{\"name\":\"dns\",\"container
Port\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1f1d9862a98539cf5a1ed1d209cef2b0e4e7061910368640a15be822397ebb9,PodSandboxId:09178607b7fcafb7b5b2e4a1234f64d7351badd06de29ae18d06ed5c4c1405d1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722930720418403516,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 3b41a9dc-8b36-4c97-ba13-e8d86bbed5b1,},Annotations:map[string]string{io.kubernetes.container.hash: fec17a41,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7923487019be61d3f46ec909607d2c85a99a8b64a5c00f89ef7bf1200b38cd4,PodSandboxId:dd0f9c7acfd8f65254417dd6e5b967b4595356cfd57f453a91158bfff54e9166,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_EXITED,CreatedAt:1722930708296776676,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9dc7g,io.kubernetes.pod.nam
espace: kube-system,io.kubernetes.pod.uid: 51aee9c7-0107-4ffb-a335-2bb3b56f0882,},Annotations:map[string]string{io.kubernetes.container.hash: 6562290,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74e91b8e25d98b20fd885a292fff4c3200fc87c06bdea6de195993b8d96c6480,PodSandboxId:8fb8a4a95cedd5849da9cf5abf6d16e481584589b23c7dc1d0606eaea3566160,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722930705813526750,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8t88h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 8497743e-1315-4ab3-9097-9170fb8a373b,},Annotations:map[string]string{io.kubernetes.container.hash: aea06c0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b56e43215ad57bc3f2aed6d531db767cca6cf0f89b2f01bb78c33d7c3d76692,PodSandboxId:cfbd4fcb42df0f43658221b8e736d4a5e449003e8675e6adf7cd914e51514deb,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722930684828005169,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-551471,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e39e809d728843cc0e09f10061b40994,}
,Annotations:map[string]string{io.kubernetes.container.hash: e1eecb92,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:beb56425a40e2135964c6eb3931935babc4cf81b32a1e8c70735f1995effbb77,PodSandboxId:807a9beaa4c51df4f993e70e904c5ce01a26723825bd9a04ef78eb566ab34a88,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722930684808536266,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-551471,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71095cd596aca73ba17d34
793adffd9c,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1cf9151b1941b0e938e94a6d7247c74e3d0e1f26fddaf0350cc2cb6a211618a7,PodSandboxId:a643154dcadf4eadd19864c10b75cd0258590418620383448484ba09d11a46eb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722930684746183306,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-551471,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce0f55a14e9bac0289c2f4eff6d6d50d,},An
notations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:474e29aac6711f79764feef65f30a2d86dcc7d05632b305853bee3f8a4aaecb0,PodSandboxId:bdb6c31cf15795d1a277d665e76704ee8ee04bb0ff0122fd5bec8298726be9be,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722930684752835026,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-551471,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7eaa700938e917acefeeb3216b02e55a,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 95688fab,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b46cdd65-3d09-439f-aea6-4144dab51f73 name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 08:00:06 multinode-551471 crio[2891]: time="2024-08-06 08:00:06.116961872Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ee5a3be7-0485-46a0-91ac-a85746e14502 name=/runtime.v1.RuntimeService/Version
	Aug 06 08:00:06 multinode-551471 crio[2891]: time="2024-08-06 08:00:06.117040842Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ee5a3be7-0485-46a0-91ac-a85746e14502 name=/runtime.v1.RuntimeService/Version
	Aug 06 08:00:06 multinode-551471 crio[2891]: time="2024-08-06 08:00:06.118239755Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2b547b72-e989-4b9c-a258-252e5f2abfd8 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 06 08:00:06 multinode-551471 crio[2891]: time="2024-08-06 08:00:06.118789246Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722931206118760195,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143053,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2b547b72-e989-4b9c-a258-252e5f2abfd8 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 06 08:00:06 multinode-551471 crio[2891]: time="2024-08-06 08:00:06.119369851Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ab436c33-ff5d-4388-b51e-01bf9665d0f7 name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 08:00:06 multinode-551471 crio[2891]: time="2024-08-06 08:00:06.119457069Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ab436c33-ff5d-4388-b51e-01bf9665d0f7 name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 08:00:06 multinode-551471 crio[2891]: time="2024-08-06 08:00:06.119805684Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a638ddbb5051a7433b97f22d82ce6388f43365138206ecd8cb6e55eba6cf3e12,PodSandboxId:004df81af52bb64ccc8579bfc09bee86d7d7ba50320b5350755d68a984049c42,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722931142979511780,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-rkhtz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3112feb4-bb59-4877-8fa8-6c47dcb17168,},Annotations:map[string]string{io.kubernetes.container.hash: 5f745e54,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9e8ef0973a44f9939c9abcc18a63a3f102a88e15aaabd57f22a28bab85519b1,PodSandboxId:84abefc52e5b3c26e1d8d05b33bf643540c37c828cd07b41a04926373472f340,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_RUNNING,CreatedAt:1722931109497750212,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9dc7g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51aee9c7-0107-4ffb-a335-2bb3b56f0882,},Annotations:map[string]string{io.kubernetes.container.hash: 6562290,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kuber
netes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b90f5bc008f90cfa2fbf2aa3da853823559aa023f76aa906f5246e026bb0f0a0,PodSandboxId:1d94fa01ad36cc4f8b1a37a7f295b0b795c35ce7beae310d48d7e594a2e20216,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722931109425680485,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-k2kl8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 434b5086-628b-4c6d-98bf-38e4b2758290,},Annotations:map[string]string{io.kubernetes.container.hash: d8bac289,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\"
:\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7df59d82af251b33479683110323543bab2abee160632a2e863732021d0972e,PodSandboxId:01b7c91f4c977cc0a5bc30555d5e5f88f380f57f178b3d6b8e3d7021ace17ef1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722931109384014548,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b41a9dc-8b36-4c97-ba13-e8d86bbed5b1,},Ann
otations:map[string]string{io.kubernetes.container.hash: fec17a41,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7796edadd615e4a6b72780d99c72833e29376e47f21ace8d1b9846c888ce57ac,PodSandboxId:1e7e6eea84a536a4142ed32fd0956fe4e4ba6cb788ffe82ebffe5d5433d011f3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722931109344692941,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8t88h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8497743e-1315-4ab3-9097-9170fb8a373b,},Annotations:map[string]string{io.kub
ernetes.container.hash: aea06c0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3988dbeb8b095b0f961141d409ed6af31a49a47f792dce6092c49c0f2e02e14a,PodSandboxId:80e8af9bc0b31252bf2950b2a0851edfa325624f738d9ae95a51c9dba426cf21,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722931104440668799,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-551471,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7eaa700938e917acefeeb3216b02e55a,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 95688fab,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f179bdc8d4add339e4a2b7c1af524fafe3d8f5fb06c47686220ddf2a8b0ce165,PodSandboxId:bbabc537c895ca5a5e1a204fc3f26c7dff2bbb1a3b73e64f8252fa7971df633d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722931104457689984,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-551471,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce0f55a14e9bac0289c2f4eff6d6d50d,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c
8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91ce52b52a5faa28ab026d7d550e9c3edc2df4766c37591a5aa778d3c8b4ffee,PodSandboxId:9409cd35efe496f7cc125d775b3ed284ef00f3e7308d43bd8f582a3eb8810168,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722931104428954111,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-551471,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e39e809d728843cc0e09f10061b40994,},Annotations:map[string]string{io.kubernetes.container.hash: e1eecb92,io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91679dd7275cf0c0d0c7264208fe3ea4af2c52d892de4f2536fd9d3a5f8a7ef9,PodSandboxId:a20b86b6ff58c204dc2eb847559803ae9a47943c69abbd6c249bc0acae212687,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722931104345651342,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-551471,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71095cd596aca73ba17d34793adffd9c,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.res
tartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93cf5cb7a8319c91c2ef8585a3093e4fc951acd31cb84587ecbf9da395640e8b,PodSandboxId:5321b01ab24b006147f307b73b923098acb5fdde2cdeed6d364facca7b9851d8,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722930773814765536,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-rkhtz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3112feb4-bb59-4877-8fa8-6c47dcb17168,},Annotations:map[string]string{io.kubernetes.container.hash: 5f745e54,io.kubernetes.container.resta
rtCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:231998fa150bd615568fd89931d682cd6eb3900552e3e0c414c2ee3a6fb749ad,PodSandboxId:f2f8123ae9c32ea663b58990f1516f0b3653886bc0b8d1c82fda9dffb77f71cb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722930720473670689,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-k2kl8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 434b5086-628b-4c6d-98bf-38e4b2758290,},Annotations:map[string]string{io.kubernetes.container.hash: d8bac289,io.kubernetes.container.ports: [{\"name\":\"dns\",\"container
Port\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1f1d9862a98539cf5a1ed1d209cef2b0e4e7061910368640a15be822397ebb9,PodSandboxId:09178607b7fcafb7b5b2e4a1234f64d7351badd06de29ae18d06ed5c4c1405d1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722930720418403516,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 3b41a9dc-8b36-4c97-ba13-e8d86bbed5b1,},Annotations:map[string]string{io.kubernetes.container.hash: fec17a41,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7923487019be61d3f46ec909607d2c85a99a8b64a5c00f89ef7bf1200b38cd4,PodSandboxId:dd0f9c7acfd8f65254417dd6e5b967b4595356cfd57f453a91158bfff54e9166,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_EXITED,CreatedAt:1722930708296776676,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9dc7g,io.kubernetes.pod.nam
espace: kube-system,io.kubernetes.pod.uid: 51aee9c7-0107-4ffb-a335-2bb3b56f0882,},Annotations:map[string]string{io.kubernetes.container.hash: 6562290,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74e91b8e25d98b20fd885a292fff4c3200fc87c06bdea6de195993b8d96c6480,PodSandboxId:8fb8a4a95cedd5849da9cf5abf6d16e481584589b23c7dc1d0606eaea3566160,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722930705813526750,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8t88h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 8497743e-1315-4ab3-9097-9170fb8a373b,},Annotations:map[string]string{io.kubernetes.container.hash: aea06c0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b56e43215ad57bc3f2aed6d531db767cca6cf0f89b2f01bb78c33d7c3d76692,PodSandboxId:cfbd4fcb42df0f43658221b8e736d4a5e449003e8675e6adf7cd914e51514deb,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722930684828005169,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-551471,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e39e809d728843cc0e09f10061b40994,}
,Annotations:map[string]string{io.kubernetes.container.hash: e1eecb92,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:beb56425a40e2135964c6eb3931935babc4cf81b32a1e8c70735f1995effbb77,PodSandboxId:807a9beaa4c51df4f993e70e904c5ce01a26723825bd9a04ef78eb566ab34a88,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722930684808536266,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-551471,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71095cd596aca73ba17d34
793adffd9c,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1cf9151b1941b0e938e94a6d7247c74e3d0e1f26fddaf0350cc2cb6a211618a7,PodSandboxId:a643154dcadf4eadd19864c10b75cd0258590418620383448484ba09d11a46eb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722930684746183306,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-551471,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce0f55a14e9bac0289c2f4eff6d6d50d,},An
notations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:474e29aac6711f79764feef65f30a2d86dcc7d05632b305853bee3f8a4aaecb0,PodSandboxId:bdb6c31cf15795d1a277d665e76704ee8ee04bb0ff0122fd5bec8298726be9be,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722930684752835026,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-551471,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7eaa700938e917acefeeb3216b02e55a,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 95688fab,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ab436c33-ff5d-4388-b51e-01bf9665d0f7 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	a638ddbb5051a       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      About a minute ago   Running             busybox                   1                   004df81af52bb       busybox-fc5497c4f-rkhtz
	c9e8ef0973a44       917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557                                      About a minute ago   Running             kindnet-cni               1                   84abefc52e5b3       kindnet-9dc7g
	b90f5bc008f90       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      About a minute ago   Running             coredns                   1                   1d94fa01ad36c       coredns-7db6d8ff4d-k2kl8
	a7df59d82af25       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       1                   01b7c91f4c977       storage-provisioner
	7796edadd615e       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      About a minute ago   Running             kube-proxy                1                   1e7e6eea84a53       kube-proxy-8t88h
	f179bdc8d4add       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      About a minute ago   Running             kube-scheduler            1                   bbabc537c895c       kube-scheduler-multinode-551471
	3988dbeb8b095       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      About a minute ago   Running             kube-apiserver            1                   80e8af9bc0b31       kube-apiserver-multinode-551471
	91ce52b52a5fa       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      About a minute ago   Running             etcd                      1                   9409cd35efe49       etcd-multinode-551471
	91679dd7275cf       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      About a minute ago   Running             kube-controller-manager   1                   a20b86b6ff58c       kube-controller-manager-multinode-551471
	93cf5cb7a8319       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   7 minutes ago        Exited              busybox                   0                   5321b01ab24b0       busybox-fc5497c4f-rkhtz
	231998fa150bd       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      8 minutes ago        Exited              coredns                   0                   f2f8123ae9c32       coredns-7db6d8ff4d-k2kl8
	f1f1d9862a985       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      8 minutes ago        Exited              storage-provisioner       0                   09178607b7fca       storage-provisioner
	d7923487019be       docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3    8 minutes ago        Exited              kindnet-cni               0                   dd0f9c7acfd8f       kindnet-9dc7g
	74e91b8e25d98       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      8 minutes ago        Exited              kube-proxy                0                   8fb8a4a95cedd       kube-proxy-8t88h
	4b56e43215ad5       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      8 minutes ago        Exited              etcd                      0                   cfbd4fcb42df0       etcd-multinode-551471
	beb56425a40e2       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      8 minutes ago        Exited              kube-controller-manager   0                   807a9beaa4c51       kube-controller-manager-multinode-551471
	474e29aac6711       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      8 minutes ago        Exited              kube-apiserver            0                   bdb6c31cf1579       kube-apiserver-multinode-551471
	1cf9151b1941b       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      8 minutes ago        Exited              kube-scheduler            0                   a643154dcadf4       kube-scheduler-multinode-551471
	
	
	==> coredns [231998fa150bd615568fd89931d682cd6eb3900552e3e0c414c2ee3a6fb749ad] <==
	[INFO] 10.244.0.3:35082 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001916689s
	[INFO] 10.244.0.3:33490 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000096442s
	[INFO] 10.244.0.3:60401 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000080282s
	[INFO] 10.244.0.3:51055 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001488103s
	[INFO] 10.244.0.3:58516 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000045192s
	[INFO] 10.244.0.3:55971 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000056963s
	[INFO] 10.244.0.3:58259 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000074335s
	[INFO] 10.244.1.2:51261 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000202716s
	[INFO] 10.244.1.2:56013 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00008516s
	[INFO] 10.244.1.2:37549 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000096782s
	[INFO] 10.244.1.2:53521 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000067543s
	[INFO] 10.244.0.3:44334 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000132902s
	[INFO] 10.244.0.3:35687 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000104415s
	[INFO] 10.244.0.3:53100 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000045344s
	[INFO] 10.244.0.3:50135 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000079774s
	[INFO] 10.244.1.2:37210 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000154836s
	[INFO] 10.244.1.2:52512 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000147443s
	[INFO] 10.244.1.2:47941 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00011877s
	[INFO] 10.244.1.2:36295 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000104739s
	[INFO] 10.244.0.3:52327 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000091545s
	[INFO] 10.244.0.3:56632 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000099425s
	[INFO] 10.244.0.3:38284 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000091828s
	[INFO] 10.244.0.3:58697 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000063258s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [b90f5bc008f90cfa2fbf2aa3da853823559aa023f76aa906f5246e026bb0f0a0] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:37963 - 52055 "HINFO IN 2755026096945446625.896548438532864949. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.016096574s
	
	
	==> describe nodes <==
	Name:               multinode-551471
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-551471
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e92cb06692f5ea1ba801d10d148e5e92e807f9c8
	                    minikube.k8s.io/name=multinode-551471
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_06T07_51_31_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 06 Aug 2024 07:51:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-551471
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 06 Aug 2024 08:00:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 06 Aug 2024 07:58:28 +0000   Tue, 06 Aug 2024 07:51:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 06 Aug 2024 07:58:28 +0000   Tue, 06 Aug 2024 07:51:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 06 Aug 2024 07:58:28 +0000   Tue, 06 Aug 2024 07:51:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 06 Aug 2024 07:58:28 +0000   Tue, 06 Aug 2024 07:51:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.186
	  Hostname:    multinode-551471
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 404b782436a441b6a246f834972f2722
	  System UUID:                404b7824-36a4-41b6-a246-f834972f2722
	  Boot ID:                    48ae4806-1015-46ea-99e2-4a672a102a84
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-rkhtz                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m16s
	  kube-system                 coredns-7db6d8ff4d-k2kl8                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m23s
	  kube-system                 etcd-multinode-551471                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         8m36s
	  kube-system                 kindnet-9dc7g                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      8m23s
	  kube-system                 kube-apiserver-multinode-551471             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m36s
	  kube-system                 kube-controller-manager-multinode-551471    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m36s
	  kube-system                 kube-proxy-8t88h                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m23s
	  kube-system                 kube-scheduler-multinode-551471             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m36s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m21s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 8m20s                  kube-proxy       
	  Normal  Starting                 96s                    kube-proxy       
	  Normal  NodeAllocatableEnforced  8m36s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m36s (x2 over 8m36s)  kubelet          Node multinode-551471 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m36s (x2 over 8m36s)  kubelet          Node multinode-551471 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m36s (x2 over 8m36s)  kubelet          Node multinode-551471 status is now: NodeHasSufficientPID
	  Normal  Starting                 8m36s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           8m23s                  node-controller  Node multinode-551471 event: Registered Node multinode-551471 in Controller
	  Normal  NodeReady                8m7s                   kubelet          Node multinode-551471 status is now: NodeReady
	  Normal  Starting                 103s                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  103s (x8 over 103s)    kubelet          Node multinode-551471 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    103s (x8 over 103s)    kubelet          Node multinode-551471 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     103s (x7 over 103s)    kubelet          Node multinode-551471 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  103s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           85s                    node-controller  Node multinode-551471 event: Registered Node multinode-551471 in Controller
	
	
	Name:               multinode-551471-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-551471-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e92cb06692f5ea1ba801d10d148e5e92e807f9c8
	                    minikube.k8s.io/name=multinode-551471
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_06T07_59_04_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 06 Aug 2024 07:59:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-551471-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 06 Aug 2024 08:00:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 06 Aug 2024 07:59:33 +0000   Tue, 06 Aug 2024 07:59:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 06 Aug 2024 07:59:33 +0000   Tue, 06 Aug 2024 07:59:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 06 Aug 2024 07:59:33 +0000   Tue, 06 Aug 2024 07:59:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 06 Aug 2024 07:59:33 +0000   Tue, 06 Aug 2024 07:59:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.134
	  Hostname:    multinode-551471-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 bf8d76ce939e4437873bfaba68c51663
	  System UUID:                bf8d76ce-939e-4437-873b-faba68c51663
	  Boot ID:                    9f87ecbf-c3ed-443c-9089-34920f426293
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-7qztc    0 (0%)        0 (0%)      0 (0%)           0 (0%)         66s
	  kube-system                 kindnet-4tl5t              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m38s
	  kube-system                 kube-proxy-7n7gg           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m38s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 7m33s                  kube-proxy  
	  Normal  Starting                 58s                    kube-proxy  
	  Normal  NodeHasSufficientMemory  7m38s (x2 over 7m38s)  kubelet     Node multinode-551471-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m38s (x2 over 7m38s)  kubelet     Node multinode-551471-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m38s (x2 over 7m38s)  kubelet     Node multinode-551471-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m38s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                7m18s                  kubelet     Node multinode-551471-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  63s (x2 over 63s)      kubelet     Node multinode-551471-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    63s (x2 over 63s)      kubelet     Node multinode-551471-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     63s (x2 over 63s)      kubelet     Node multinode-551471-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  63s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                44s                    kubelet     Node multinode-551471-m02 status is now: NodeReady
	
	
	Name:               multinode-551471-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-551471-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e92cb06692f5ea1ba801d10d148e5e92e807f9c8
	                    minikube.k8s.io/name=multinode-551471
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_06T07_59_43_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 06 Aug 2024 07:59:43 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-551471-m03
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 06 Aug 2024 08:00:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 06 Aug 2024 08:00:03 +0000   Tue, 06 Aug 2024 07:59:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 06 Aug 2024 08:00:03 +0000   Tue, 06 Aug 2024 07:59:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 06 Aug 2024 08:00:03 +0000   Tue, 06 Aug 2024 07:59:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 06 Aug 2024 08:00:03 +0000   Tue, 06 Aug 2024 08:00:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.189
	  Hostname:    multinode-551471-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 9eb7ccd267234750a9997a8c070d2704
	  System UUID:                9eb7ccd2-6723-4750-a999-7a8c070d2704
	  Boot ID:                    59ce16ca-4b96-404d-bf87-db1da5dd7fda
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-mbm4z       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m39s
	  kube-system                 kube-proxy-m6r5n    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m39s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 6m33s                  kube-proxy  
	  Normal  Starting                 18s                    kube-proxy  
	  Normal  Starting                 5m44s                  kube-proxy  
	  Normal  NodeHasSufficientMemory  6m39s (x2 over 6m39s)  kubelet     Node multinode-551471-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m39s (x2 over 6m39s)  kubelet     Node multinode-551471-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m39s (x2 over 6m39s)  kubelet     Node multinode-551471-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m39s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                6m19s                  kubelet     Node multinode-551471-m03 status is now: NodeReady
	  Normal  NodeHasNoDiskPressure    5m49s (x2 over 5m49s)  kubelet     Node multinode-551471-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m49s (x2 over 5m49s)  kubelet     Node multinode-551471-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m49s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m49s (x2 over 5m49s)  kubelet     Node multinode-551471-m03 status is now: NodeHasSufficientMemory
	  Normal  Starting                 5m49s                  kubelet     Starting kubelet.
	  Normal  NodeReady                5m29s                  kubelet     Node multinode-551471-m03 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  23s (x2 over 23s)      kubelet     Node multinode-551471-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23s (x2 over 23s)      kubelet     Node multinode-551471-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23s (x2 over 23s)      kubelet     Node multinode-551471-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  23s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                3s                     kubelet     Node multinode-551471-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.056065] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.162444] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.127583] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +0.266054] systemd-fstab-generator[664]: Ignoring "noauto" option for root device
	[  +4.266928] systemd-fstab-generator[762]: Ignoring "noauto" option for root device
	[  +4.621521] systemd-fstab-generator[941]: Ignoring "noauto" option for root device
	[  +0.063120] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.513551] systemd-fstab-generator[1283]: Ignoring "noauto" option for root device
	[  +0.090341] kauditd_printk_skb: 69 callbacks suppressed
	[ +14.106804] kauditd_printk_skb: 21 callbacks suppressed
	[  +0.031004] systemd-fstab-generator[1499]: Ignoring "noauto" option for root device
	[  +5.058521] kauditd_printk_skb: 59 callbacks suppressed
	[Aug 6 07:52] kauditd_printk_skb: 14 callbacks suppressed
	[Aug 6 07:58] systemd-fstab-generator[2808]: Ignoring "noauto" option for root device
	[  +0.151533] systemd-fstab-generator[2820]: Ignoring "noauto" option for root device
	[  +0.178308] systemd-fstab-generator[2834]: Ignoring "noauto" option for root device
	[  +0.149957] systemd-fstab-generator[2846]: Ignoring "noauto" option for root device
	[  +0.294232] systemd-fstab-generator[2874]: Ignoring "noauto" option for root device
	[  +6.612031] systemd-fstab-generator[2973]: Ignoring "noauto" option for root device
	[  +0.078076] kauditd_printk_skb: 100 callbacks suppressed
	[  +1.911378] systemd-fstab-generator[3099]: Ignoring "noauto" option for root device
	[  +5.764827] kauditd_printk_skb: 74 callbacks suppressed
	[ +10.450574] systemd-fstab-generator[3885]: Ignoring "noauto" option for root device
	[  +0.106825] kauditd_printk_skb: 32 callbacks suppressed
	[Aug 6 07:59] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [4b56e43215ad57bc3f2aed6d531db767cca6cf0f89b2f01bb78c33d7c3d76692] <==
	{"level":"info","ts":"2024-08-06T07:52:28.96643Z","caller":"traceutil/trace.go:171","msg":"trace[2077002954] transaction","detail":"{read_only:false; response_revision:447; number_of_response:1; }","duration":"235.59679ms","start":"2024-08-06T07:52:28.730798Z","end":"2024-08-06T07:52:28.966394Z","steps":["trace[2077002954] 'process raft request'  (duration: 215.628153ms)","trace[2077002954] 'compare'  (duration: 19.654704ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-06T07:52:28.967713Z","caller":"traceutil/trace.go:171","msg":"trace[321245351] linearizableReadLoop","detail":"{readStateIndex:466; appliedIndex:465; }","duration":"225.620073ms","start":"2024-08-06T07:52:28.742079Z","end":"2024-08-06T07:52:28.967699Z","steps":["trace[321245351] 'read index received'  (duration: 204.464103ms)","trace[321245351] 'applied index is now lower than readState.Index'  (duration: 21.154162ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-06T07:52:28.967985Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"225.808558ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/csinodes/multinode-551471-m02\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-06T07:52:28.968048Z","caller":"traceutil/trace.go:171","msg":"trace[1656718840] range","detail":"{range_begin:/registry/csinodes/multinode-551471-m02; range_end:; response_count:0; response_revision:447; }","duration":"225.986238ms","start":"2024-08-06T07:52:28.742054Z","end":"2024-08-06T07:52:28.96804Z","steps":["trace[1656718840] 'agreement among raft nodes before linearized reading'  (duration: 225.763862ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-06T07:52:28.970844Z","caller":"traceutil/trace.go:171","msg":"trace[577163816] transaction","detail":"{read_only:false; response_revision:448; number_of_response:1; }","duration":"200.280697ms","start":"2024-08-06T07:52:28.770555Z","end":"2024-08-06T07:52:28.970836Z","steps":["trace[577163816] 'process raft request'  (duration: 196.222458ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-06T07:52:28.971106Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"202.8304ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1116"}
	{"level":"info","ts":"2024-08-06T07:52:28.978077Z","caller":"traceutil/trace.go:171","msg":"trace[1187038607] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:448; }","duration":"209.823934ms","start":"2024-08-06T07:52:28.76824Z","end":"2024-08-06T07:52:28.978064Z","steps":["trace[1187038607] 'agreement among raft nodes before linearized reading'  (duration: 202.786148ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-06T07:52:28.971165Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"223.038702ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-551471-m02\" ","response":"range_response_count:1 size:1926"}
	{"level":"info","ts":"2024-08-06T07:52:28.978174Z","caller":"traceutil/trace.go:171","msg":"trace[281815515] range","detail":"{range_begin:/registry/minions/multinode-551471-m02; range_end:; response_count:1; response_revision:448; }","duration":"230.079213ms","start":"2024-08-06T07:52:28.748089Z","end":"2024-08-06T07:52:28.978168Z","steps":["trace[281815515] 'agreement among raft nodes before linearized reading'  (duration: 223.016088ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-06T07:53:27.536534Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"156.273853ms","expected-duration":"100ms","prefix":"","request":"header:<ID:12886365504069230174 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:579 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1028 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-08-06T07:53:27.537422Z","caller":"traceutil/trace.go:171","msg":"trace[108812265] transaction","detail":"{read_only:false; response_revision:585; number_of_response:1; }","duration":"232.771963ms","start":"2024-08-06T07:53:27.304617Z","end":"2024-08-06T07:53:27.537389Z","steps":["trace[108812265] 'process raft request'  (duration: 74.833807ms)","trace[108812265] 'compare'  (duration: 156.172305ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-06T07:53:27.537452Z","caller":"traceutil/trace.go:171","msg":"trace[2049791661] linearizableReadLoop","detail":"{readStateIndex:622; appliedIndex:620; }","duration":"136.946587ms","start":"2024-08-06T07:53:27.400494Z","end":"2024-08-06T07:53:27.53744Z","steps":["trace[2049791661] 'read index received'  (duration: 135.091798ms)","trace[2049791661] 'applied index is now lower than readState.Index'  (duration: 1.854167ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-06T07:53:27.537684Z","caller":"traceutil/trace.go:171","msg":"trace[2045106609] transaction","detail":"{read_only:false; response_revision:586; number_of_response:1; }","duration":"165.131551ms","start":"2024-08-06T07:53:27.372544Z","end":"2024-08-06T07:53:27.537676Z","steps":["trace[2045106609] 'process raft request'  (duration: 164.830358ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-06T07:53:27.537839Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"137.331754ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-551471-m03\" ","response":"range_response_count:1 size:1926"}
	{"level":"info","ts":"2024-08-06T07:53:27.537905Z","caller":"traceutil/trace.go:171","msg":"trace[1002770888] range","detail":"{range_begin:/registry/minions/multinode-551471-m03; range_end:; response_count:1; response_revision:586; }","duration":"137.423354ms","start":"2024-08-06T07:53:27.40047Z","end":"2024-08-06T07:53:27.537893Z","steps":["trace[1002770888] 'agreement among raft nodes before linearized reading'  (duration: 137.218666ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-06T07:56:42.64652Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-08-06T07:56:42.646638Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"multinode-551471","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.186:2380"],"advertise-client-urls":["https://192.168.39.186:2379"]}
	{"level":"warn","ts":"2024-08-06T07:56:42.650128Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-06T07:56:42.650268Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-06T07:56:42.707739Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.186:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-06T07:56:42.707876Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.186:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-06T07:56:42.707997Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"1bfd5d64eb00b2d5","current-leader-member-id":"1bfd5d64eb00b2d5"}
	{"level":"info","ts":"2024-08-06T07:56:42.712382Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.186:2380"}
	{"level":"info","ts":"2024-08-06T07:56:42.712535Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.186:2380"}
	{"level":"info","ts":"2024-08-06T07:56:42.712566Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"multinode-551471","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.186:2380"],"advertise-client-urls":["https://192.168.39.186:2379"]}
	
	
	==> etcd [91ce52b52a5faa28ab026d7d550e9c3edc2df4766c37591a5aa778d3c8b4ffee] <==
	{"level":"info","ts":"2024-08-06T07:58:24.878101Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1bfd5d64eb00b2d5 switched to configuration voters=(2016870896152654549)"}
	{"level":"info","ts":"2024-08-06T07:58:24.878169Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"7d06a36b1777ee5c","local-member-id":"1bfd5d64eb00b2d5","added-peer-id":"1bfd5d64eb00b2d5","added-peer-peer-urls":["https://192.168.39.186:2380"]}
	{"level":"info","ts":"2024-08-06T07:58:24.878402Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"7d06a36b1777ee5c","local-member-id":"1bfd5d64eb00b2d5","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-06T07:58:24.878455Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-06T07:58:24.889927Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-06T07:58:24.892543Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"1bfd5d64eb00b2d5","initial-advertise-peer-urls":["https://192.168.39.186:2380"],"listen-peer-urls":["https://192.168.39.186:2380"],"advertise-client-urls":["https://192.168.39.186:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.186:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-06T07:58:24.892606Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-06T07:58:24.892756Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.186:2380"}
	{"level":"info","ts":"2024-08-06T07:58:24.892783Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.186:2380"}
	{"level":"info","ts":"2024-08-06T07:58:26.698225Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1bfd5d64eb00b2d5 is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-06T07:58:26.698297Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1bfd5d64eb00b2d5 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-06T07:58:26.698418Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1bfd5d64eb00b2d5 received MsgPreVoteResp from 1bfd5d64eb00b2d5 at term 2"}
	{"level":"info","ts":"2024-08-06T07:58:26.698437Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1bfd5d64eb00b2d5 became candidate at term 3"}
	{"level":"info","ts":"2024-08-06T07:58:26.698443Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1bfd5d64eb00b2d5 received MsgVoteResp from 1bfd5d64eb00b2d5 at term 3"}
	{"level":"info","ts":"2024-08-06T07:58:26.698452Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1bfd5d64eb00b2d5 became leader at term 3"}
	{"level":"info","ts":"2024-08-06T07:58:26.698459Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 1bfd5d64eb00b2d5 elected leader 1bfd5d64eb00b2d5 at term 3"}
	{"level":"info","ts":"2024-08-06T07:58:26.705923Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"1bfd5d64eb00b2d5","local-member-attributes":"{Name:multinode-551471 ClientURLs:[https://192.168.39.186:2379]}","request-path":"/0/members/1bfd5d64eb00b2d5/attributes","cluster-id":"7d06a36b1777ee5c","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-06T07:58:26.705933Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-06T07:58:26.706204Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-06T07:58:26.706308Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-06T07:58:26.70638Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-06T07:58:26.708313Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-06T07:58:26.708464Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.186:2379"}
	{"level":"warn","ts":"2024-08-06T07:59:07.426441Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"100.363881ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-551471-m02\" ","response":"range_response_count:1 size:3118"}
	{"level":"info","ts":"2024-08-06T07:59:07.426721Z","caller":"traceutil/trace.go:171","msg":"trace[446410568] range","detail":"{range_begin:/registry/minions/multinode-551471-m02; range_end:; response_count:1; response_revision:1042; }","duration":"100.702503ms","start":"2024-08-06T07:59:07.325995Z","end":"2024-08-06T07:59:07.426698Z","steps":["trace[446410568] 'range keys from in-memory index tree'  (duration: 100.167791ms)"],"step_count":1}
	
	
	==> kernel <==
	 08:00:06 up 9 min,  0 users,  load average: 0.19, 0.21, 0.11
	Linux multinode-551471 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [c9e8ef0973a44f9939c9abcc18a63a3f102a88e15aaabd57f22a28bab85519b1] <==
	I0806 07:59:20.411993       1 main.go:322] Node multinode-551471-m03 has CIDR [10.244.3.0/24] 
	I0806 07:59:30.411564       1 main.go:295] Handling node with IPs: map[192.168.39.186:{}]
	I0806 07:59:30.411726       1 main.go:299] handling current node
	I0806 07:59:30.411757       1 main.go:295] Handling node with IPs: map[192.168.39.134:{}]
	I0806 07:59:30.411776       1 main.go:322] Node multinode-551471-m02 has CIDR [10.244.1.0/24] 
	I0806 07:59:30.411947       1 main.go:295] Handling node with IPs: map[192.168.39.189:{}]
	I0806 07:59:30.411975       1 main.go:322] Node multinode-551471-m03 has CIDR [10.244.3.0/24] 
	I0806 07:59:40.413617       1 main.go:295] Handling node with IPs: map[192.168.39.186:{}]
	I0806 07:59:40.413741       1 main.go:299] handling current node
	I0806 07:59:40.413772       1 main.go:295] Handling node with IPs: map[192.168.39.134:{}]
	I0806 07:59:40.413800       1 main.go:322] Node multinode-551471-m02 has CIDR [10.244.1.0/24] 
	I0806 07:59:40.414055       1 main.go:295] Handling node with IPs: map[192.168.39.189:{}]
	I0806 07:59:40.414113       1 main.go:322] Node multinode-551471-m03 has CIDR [10.244.3.0/24] 
	I0806 07:59:50.411270       1 main.go:295] Handling node with IPs: map[192.168.39.186:{}]
	I0806 07:59:50.411317       1 main.go:299] handling current node
	I0806 07:59:50.411373       1 main.go:295] Handling node with IPs: map[192.168.39.134:{}]
	I0806 07:59:50.411381       1 main.go:322] Node multinode-551471-m02 has CIDR [10.244.1.0/24] 
	I0806 07:59:50.411521       1 main.go:295] Handling node with IPs: map[192.168.39.189:{}]
	I0806 07:59:50.411542       1 main.go:322] Node multinode-551471-m03 has CIDR [10.244.2.0/24] 
	I0806 08:00:00.414618       1 main.go:295] Handling node with IPs: map[192.168.39.186:{}]
	I0806 08:00:00.414763       1 main.go:299] handling current node
	I0806 08:00:00.414798       1 main.go:295] Handling node with IPs: map[192.168.39.134:{}]
	I0806 08:00:00.414819       1 main.go:322] Node multinode-551471-m02 has CIDR [10.244.1.0/24] 
	I0806 08:00:00.414978       1 main.go:295] Handling node with IPs: map[192.168.39.189:{}]
	I0806 08:00:00.415001       1 main.go:322] Node multinode-551471-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kindnet [d7923487019be61d3f46ec909607d2c85a99a8b64a5c00f89ef7bf1200b38cd4] <==
	I0806 07:55:59.424142       1 main.go:322] Node multinode-551471-m03 has CIDR [10.244.3.0/24] 
	I0806 07:56:09.422456       1 main.go:295] Handling node with IPs: map[192.168.39.189:{}]
	I0806 07:56:09.422500       1 main.go:322] Node multinode-551471-m03 has CIDR [10.244.3.0/24] 
	I0806 07:56:09.422620       1 main.go:295] Handling node with IPs: map[192.168.39.186:{}]
	I0806 07:56:09.422644       1 main.go:299] handling current node
	I0806 07:56:09.422656       1 main.go:295] Handling node with IPs: map[192.168.39.134:{}]
	I0806 07:56:09.422661       1 main.go:322] Node multinode-551471-m02 has CIDR [10.244.1.0/24] 
	I0806 07:56:19.415204       1 main.go:295] Handling node with IPs: map[192.168.39.186:{}]
	I0806 07:56:19.415317       1 main.go:299] handling current node
	I0806 07:56:19.415426       1 main.go:295] Handling node with IPs: map[192.168.39.134:{}]
	I0806 07:56:19.415467       1 main.go:322] Node multinode-551471-m02 has CIDR [10.244.1.0/24] 
	I0806 07:56:19.415619       1 main.go:295] Handling node with IPs: map[192.168.39.189:{}]
	I0806 07:56:19.415642       1 main.go:322] Node multinode-551471-m03 has CIDR [10.244.3.0/24] 
	I0806 07:56:29.421483       1 main.go:295] Handling node with IPs: map[192.168.39.186:{}]
	I0806 07:56:29.421602       1 main.go:299] handling current node
	I0806 07:56:29.421653       1 main.go:295] Handling node with IPs: map[192.168.39.134:{}]
	I0806 07:56:29.421658       1 main.go:322] Node multinode-551471-m02 has CIDR [10.244.1.0/24] 
	I0806 07:56:29.421858       1 main.go:295] Handling node with IPs: map[192.168.39.189:{}]
	I0806 07:56:29.421884       1 main.go:322] Node multinode-551471-m03 has CIDR [10.244.3.0/24] 
	I0806 07:56:39.421617       1 main.go:295] Handling node with IPs: map[192.168.39.134:{}]
	I0806 07:56:39.421855       1 main.go:322] Node multinode-551471-m02 has CIDR [10.244.1.0/24] 
	I0806 07:56:39.422012       1 main.go:295] Handling node with IPs: map[192.168.39.189:{}]
	I0806 07:56:39.422035       1 main.go:322] Node multinode-551471-m03 has CIDR [10.244.3.0/24] 
	I0806 07:56:39.422105       1 main.go:295] Handling node with IPs: map[192.168.39.186:{}]
	I0806 07:56:39.422124       1 main.go:299] handling current node
	
	
	==> kube-apiserver [3988dbeb8b095b0f961141d409ed6af31a49a47f792dce6092c49c0f2e02e14a] <==
	I0806 07:58:28.078165       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0806 07:58:28.092209       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0806 07:58:28.092421       1 policy_source.go:224] refreshing policies
	I0806 07:58:28.093886       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0806 07:58:28.096884       1 shared_informer.go:320] Caches are synced for configmaps
	I0806 07:58:28.099016       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0806 07:58:28.101852       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0806 07:58:28.113200       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0806 07:58:28.118581       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0806 07:58:28.118613       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0806 07:58:28.121498       1 aggregator.go:165] initial CRD sync complete...
	I0806 07:58:28.121543       1 autoregister_controller.go:141] Starting autoregister controller
	I0806 07:58:28.121550       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0806 07:58:28.121556       1 cache.go:39] Caches are synced for autoregister controller
	I0806 07:58:28.125150       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0806 07:58:28.137472       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	E0806 07:58:28.167900       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0806 07:58:29.012103       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0806 07:58:30.278005       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0806 07:58:30.423791       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0806 07:58:30.434209       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0806 07:58:30.500083       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0806 07:58:30.510202       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0806 07:58:41.143308       1 controller.go:615] quota admission added evaluator for: endpoints
	I0806 07:58:41.299758       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [474e29aac6711f79764feef65f30a2d86dcc7d05632b305853bee3f8a4aaecb0] <==
	I0806 07:56:42.673004       1 controller.go:167] Shutting down OpenAPI controller
	I0806 07:56:42.673038       1 naming_controller.go:302] Shutting down NamingConditionController
	I0806 07:56:42.673060       1 apf_controller.go:386] Shutting down API Priority and Fairness config worker
	I0806 07:56:42.673100       1 autoregister_controller.go:165] Shutting down autoregister controller
	I0806 07:56:42.673133       1 system_namespaces_controller.go:77] Shutting down system namespaces controller
	I0806 07:56:42.673162       1 customresource_discovery_controller.go:325] Shutting down DiscoveryController
	I0806 07:56:42.673201       1 crdregistration_controller.go:142] Shutting down crd-autoregister controller
	I0806 07:56:42.673244       1 available_controller.go:439] Shutting down AvailableConditionController
	I0806 07:56:42.673258       1 gc_controller.go:91] Shutting down apiserver lease garbage collector
	I0806 07:56:42.673302       1 controller.go:129] Ending legacy_token_tracking_controller
	I0806 07:56:42.673367       1 controller.go:130] Shutting down legacy_token_tracking_controller
	I0806 07:56:42.673387       1 apiservice_controller.go:131] Shutting down APIServiceRegistrationController
	I0806 07:56:42.673462       1 controller.go:157] Shutting down quota evaluator
	I0806 07:56:42.673496       1 controller.go:176] quota evaluator worker shutdown
	I0806 07:56:42.674085       1 controller.go:176] quota evaluator worker shutdown
	I0806 07:56:42.674119       1 controller.go:176] quota evaluator worker shutdown
	I0806 07:56:42.674125       1 controller.go:176] quota evaluator worker shutdown
	I0806 07:56:42.674140       1 controller.go:176] quota evaluator worker shutdown
	I0806 07:56:42.674513       1 dynamic_cafile_content.go:171] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0806 07:56:42.674858       1 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0806 07:56:42.675076       1 controller.go:86] Shutting down OpenAPI V3 AggregationController
	I0806 07:56:42.675117       1 controller.go:84] Shutting down OpenAPI AggregationController
	I0806 07:56:42.675221       1 dynamic_cafile_content.go:171] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0806 07:56:42.680557       1 secure_serving.go:258] Stopped listening on [::]:8443
	I0806 07:56:42.680629       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	
	
	==> kube-controller-manager [91679dd7275cf0c0d0c7264208fe3ea4af2c52d892de4f2536fd9d3a5f8a7ef9] <==
	I0806 07:58:41.729021       1 shared_informer.go:320] Caches are synced for garbage collector
	I0806 07:58:41.796840       1 shared_informer.go:320] Caches are synced for garbage collector
	I0806 07:58:41.796889       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0806 07:59:00.028421       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="35.659613ms"
	I0806 07:59:00.037262       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.784308ms"
	I0806 07:59:00.037393       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="87.668µs"
	I0806 07:59:03.291924       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-551471-m02\" does not exist"
	I0806 07:59:03.304920       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-551471-m02" podCIDRs=["10.244.1.0/24"]
	I0806 07:59:05.178825       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="60.697µs"
	I0806 07:59:05.205643       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="42.452µs"
	I0806 07:59:05.221186       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="105.666µs"
	I0806 07:59:05.251197       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="41.889µs"
	I0806 07:59:05.260315       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="40.95µs"
	I0806 07:59:05.262532       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="39.298µs"
	I0806 07:59:11.534844       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="49.483µs"
	I0806 07:59:22.998158       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-551471-m02"
	I0806 07:59:23.021791       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="106.353µs"
	I0806 07:59:23.035957       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="50.016µs"
	I0806 07:59:26.456854       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="5.398406ms"
	I0806 07:59:26.457244       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="40.818µs"
	I0806 07:59:42.341679       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-551471-m02"
	I0806 07:59:43.336639       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-551471-m03\" does not exist"
	I0806 07:59:43.337180       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-551471-m02"
	I0806 07:59:43.360176       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-551471-m03" podCIDRs=["10.244.2.0/24"]
	I0806 08:00:03.140401       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-551471-m02"
	
	
	==> kube-controller-manager [beb56425a40e2135964c6eb3931935babc4cf81b32a1e8c70735f1995effbb77] <==
	I0806 07:52:28.973595       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-551471-m02\" does not exist"
	I0806 07:52:29.023121       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-551471-m02" podCIDRs=["10.244.1.0/24"]
	I0806 07:52:33.027484       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-551471-m02"
	I0806 07:52:48.053605       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-551471-m02"
	I0806 07:52:50.599828       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="51.143455ms"
	I0806 07:52:50.644059       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="44.018782ms"
	I0806 07:52:50.644151       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="41.932µs"
	I0806 07:52:50.646997       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="46.389µs"
	I0806 07:52:53.821429       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="19.468204ms"
	I0806 07:52:53.821665       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="42.322µs"
	I0806 07:52:54.780428       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.985158ms"
	I0806 07:52:54.780639       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="44.893µs"
	I0806 07:53:27.544709       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-551471-m02"
	I0806 07:53:27.552773       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-551471-m03\" does not exist"
	I0806 07:53:27.580971       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-551471-m03" podCIDRs=["10.244.2.0/24"]
	I0806 07:53:28.046201       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-551471-m03"
	I0806 07:53:47.841607       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-551471-m02"
	I0806 07:54:16.274381       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-551471-m02"
	I0806 07:54:17.307217       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-551471-m02"
	I0806 07:54:17.308838       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-551471-m03\" does not exist"
	I0806 07:54:17.320721       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-551471-m03" podCIDRs=["10.244.3.0/24"]
	I0806 07:54:37.089250       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-551471-m02"
	I0806 07:55:18.102837       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-551471-m02"
	I0806 07:55:23.201497       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="33.341894ms"
	I0806 07:55:23.201819       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="101.88µs"
	
	
	==> kube-proxy [74e91b8e25d98b20fd885a292fff4c3200fc87c06bdea6de195993b8d96c6480] <==
	I0806 07:51:45.998060       1 server_linux.go:69] "Using iptables proxy"
	I0806 07:51:46.013068       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.186"]
	I0806 07:51:46.049454       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0806 07:51:46.049500       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0806 07:51:46.049516       1 server_linux.go:165] "Using iptables Proxier"
	I0806 07:51:46.052468       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0806 07:51:46.052831       1 server.go:872] "Version info" version="v1.30.3"
	I0806 07:51:46.052844       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0806 07:51:46.054833       1 config.go:192] "Starting service config controller"
	I0806 07:51:46.055188       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0806 07:51:46.055248       1 config.go:101] "Starting endpoint slice config controller"
	I0806 07:51:46.055266       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0806 07:51:46.057212       1 config.go:319] "Starting node config controller"
	I0806 07:51:46.057252       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0806 07:51:46.155924       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0806 07:51:46.155994       1 shared_informer.go:320] Caches are synced for service config
	I0806 07:51:46.157420       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [7796edadd615e4a6b72780d99c72833e29376e47f21ace8d1b9846c888ce57ac] <==
	I0806 07:58:29.738094       1 server_linux.go:69] "Using iptables proxy"
	I0806 07:58:29.767124       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.186"]
	I0806 07:58:29.915698       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0806 07:58:29.915734       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0806 07:58:29.915756       1 server_linux.go:165] "Using iptables Proxier"
	I0806 07:58:29.927588       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0806 07:58:29.927810       1 server.go:872] "Version info" version="v1.30.3"
	I0806 07:58:29.927848       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0806 07:58:29.933640       1 config.go:192] "Starting service config controller"
	I0806 07:58:29.933680       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0806 07:58:29.933707       1 config.go:101] "Starting endpoint slice config controller"
	I0806 07:58:29.933711       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0806 07:58:29.934317       1 config.go:319] "Starting node config controller"
	I0806 07:58:29.939416       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0806 07:58:30.035472       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0806 07:58:30.035490       1 shared_informer.go:320] Caches are synced for service config
	I0806 07:58:30.040422       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [1cf9151b1941b0e938e94a6d7247c74e3d0e1f26fddaf0350cc2cb6a211618a7] <==
	E0806 07:51:27.637881       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0806 07:51:27.633745       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0806 07:51:27.637940       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0806 07:51:28.439025       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0806 07:51:28.439190       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0806 07:51:28.478253       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0806 07:51:28.478406       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0806 07:51:28.485809       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0806 07:51:28.485975       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0806 07:51:28.707587       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0806 07:51:28.707637       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0806 07:51:28.776183       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0806 07:51:28.776275       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0806 07:51:28.815277       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0806 07:51:28.815838       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0806 07:51:28.859575       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0806 07:51:28.859634       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0806 07:51:28.881894       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0806 07:51:28.882001       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0806 07:51:28.966413       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0806 07:51:28.966525       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0806 07:51:31.426619       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0806 07:56:42.642844       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0806 07:56:42.646986       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	E0806 07:56:42.651740       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [f179bdc8d4add339e4a2b7c1af524fafe3d8f5fb06c47686220ddf2a8b0ce165] <==
	I0806 07:58:25.814301       1 serving.go:380] Generated self-signed cert in-memory
	W0806 07:58:28.057491       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0806 07:58:28.057578       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0806 07:58:28.057589       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0806 07:58:28.057595       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0806 07:58:28.107875       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0806 07:58:28.107917       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0806 07:58:28.109635       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0806 07:58:28.109789       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0806 07:58:28.109827       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0806 07:58:28.109842       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0806 07:58:28.210668       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 06 07:58:25 multinode-551471 kubelet[3106]: I0806 07:58:25.207364    3106 kubelet_node_status.go:73] "Attempting to register node" node="multinode-551471"
	Aug 06 07:58:28 multinode-551471 kubelet[3106]: I0806 07:58:28.200625    3106 kubelet_node_status.go:112] "Node was previously registered" node="multinode-551471"
	Aug 06 07:58:28 multinode-551471 kubelet[3106]: I0806 07:58:28.200744    3106 kubelet_node_status.go:76] "Successfully registered node" node="multinode-551471"
	Aug 06 07:58:28 multinode-551471 kubelet[3106]: I0806 07:58:28.202481    3106 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Aug 06 07:58:28 multinode-551471 kubelet[3106]: I0806 07:58:28.203633    3106 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Aug 06 07:58:28 multinode-551471 kubelet[3106]: E0806 07:58:28.504784    3106 kubelet.go:1937] "Failed creating a mirror pod for" err="pods \"kube-apiserver-multinode-551471\" already exists" pod="kube-system/kube-apiserver-multinode-551471"
	Aug 06 07:58:28 multinode-551471 kubelet[3106]: I0806 07:58:28.682947    3106 apiserver.go:52] "Watching apiserver"
	Aug 06 07:58:28 multinode-551471 kubelet[3106]: I0806 07:58:28.689229    3106 topology_manager.go:215] "Topology Admit Handler" podUID="51aee9c7-0107-4ffb-a335-2bb3b56f0882" podNamespace="kube-system" podName="kindnet-9dc7g"
	Aug 06 07:58:28 multinode-551471 kubelet[3106]: I0806 07:58:28.690573    3106 topology_manager.go:215] "Topology Admit Handler" podUID="8497743e-1315-4ab3-9097-9170fb8a373b" podNamespace="kube-system" podName="kube-proxy-8t88h"
	Aug 06 07:58:28 multinode-551471 kubelet[3106]: I0806 07:58:28.690664    3106 topology_manager.go:215] "Topology Admit Handler" podUID="434b5086-628b-4c6d-98bf-38e4b2758290" podNamespace="kube-system" podName="coredns-7db6d8ff4d-k2kl8"
	Aug 06 07:58:28 multinode-551471 kubelet[3106]: I0806 07:58:28.690707    3106 topology_manager.go:215] "Topology Admit Handler" podUID="3b41a9dc-8b36-4c97-ba13-e8d86bbed5b1" podNamespace="kube-system" podName="storage-provisioner"
	Aug 06 07:58:28 multinode-551471 kubelet[3106]: I0806 07:58:28.690794    3106 topology_manager.go:215] "Topology Admit Handler" podUID="3112feb4-bb59-4877-8fa8-6c47dcb17168" podNamespace="default" podName="busybox-fc5497c4f-rkhtz"
	Aug 06 07:58:28 multinode-551471 kubelet[3106]: I0806 07:58:28.699926    3106 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Aug 06 07:58:28 multinode-551471 kubelet[3106]: I0806 07:58:28.719631    3106 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/51aee9c7-0107-4ffb-a335-2bb3b56f0882-xtables-lock\") pod \"kindnet-9dc7g\" (UID: \"51aee9c7-0107-4ffb-a335-2bb3b56f0882\") " pod="kube-system/kindnet-9dc7g"
	Aug 06 07:58:28 multinode-551471 kubelet[3106]: I0806 07:58:28.719764    3106 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/51aee9c7-0107-4ffb-a335-2bb3b56f0882-lib-modules\") pod \"kindnet-9dc7g\" (UID: \"51aee9c7-0107-4ffb-a335-2bb3b56f0882\") " pod="kube-system/kindnet-9dc7g"
	Aug 06 07:58:28 multinode-551471 kubelet[3106]: I0806 07:58:28.719814    3106 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8497743e-1315-4ab3-9097-9170fb8a373b-lib-modules\") pod \"kube-proxy-8t88h\" (UID: \"8497743e-1315-4ab3-9097-9170fb8a373b\") " pod="kube-system/kube-proxy-8t88h"
	Aug 06 07:58:28 multinode-551471 kubelet[3106]: I0806 07:58:28.719884    3106 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/3b41a9dc-8b36-4c97-ba13-e8d86bbed5b1-tmp\") pod \"storage-provisioner\" (UID: \"3b41a9dc-8b36-4c97-ba13-e8d86bbed5b1\") " pod="kube-system/storage-provisioner"
	Aug 06 07:58:28 multinode-551471 kubelet[3106]: I0806 07:58:28.719939    3106 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/51aee9c7-0107-4ffb-a335-2bb3b56f0882-cni-cfg\") pod \"kindnet-9dc7g\" (UID: \"51aee9c7-0107-4ffb-a335-2bb3b56f0882\") " pod="kube-system/kindnet-9dc7g"
	Aug 06 07:58:28 multinode-551471 kubelet[3106]: I0806 07:58:28.720117    3106 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8497743e-1315-4ab3-9097-9170fb8a373b-xtables-lock\") pod \"kube-proxy-8t88h\" (UID: \"8497743e-1315-4ab3-9097-9170fb8a373b\") " pod="kube-system/kube-proxy-8t88h"
	Aug 06 07:58:37 multinode-551471 kubelet[3106]: I0806 07:58:37.284625    3106 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Aug 06 07:59:23 multinode-551471 kubelet[3106]: E0806 07:59:23.742436    3106 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 06 07:59:23 multinode-551471 kubelet[3106]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 06 07:59:23 multinode-551471 kubelet[3106]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 06 07:59:23 multinode-551471 kubelet[3106]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 06 07:59:23 multinode-551471 kubelet[3106]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0806 08:00:05.684294   50862 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19370-13057/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-551471 -n multinode-551471
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-551471 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (327.81s)
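Note on the stderr above: "bufio.Scanner: token too long" is the Go standard-library error raised when a single line exceeds bufio.Scanner's default 64 KiB token limit (bufio.MaxScanTokenSize), which is why the post-mortem could not re-read lastStart.txt. The following is a minimal illustrative sketch of reading a file with very long lines by enlarging the scanner buffer; it is not minikube's actual logs.go code, and the file path and the 10 MiB cap are assumed values for illustration only.

	package main

	import (
		"bufio"
		"fmt"
		"os"
	)

	func main() {
		f, err := os.Open("lastStart.txt") // assumed path, for illustration
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		// The default max token size is bufio.MaxScanTokenSize (64 KiB); a longer
		// single line triggers "bufio.Scanner: token too long". Raise the cap.
		sc.Buffer(make([]byte, 0, 64*1024), 10*1024*1024)
		for sc.Scan() {
			fmt.Println(sc.Text())
		}
		if err := sc.Err(); err != nil {
			fmt.Fprintln(os.Stderr, "read error:", err)
		}
	}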

                                                
                                    
TestMultiNode/serial/StopMultiNode (141.44s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-551471 stop
E0806 08:01:21.825278   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/functional-811691/client.crt: no such file or directory
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-551471 stop: exit status 82 (2m0.481029251s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-551471-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-linux-amd64 -p multinode-551471 stop": exit status 82
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-551471 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-551471 status: exit status 3 (18.787421218s)

                                                
                                                
-- stdout --
	multinode-551471
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-551471-m02
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0806 08:02:29.316700   51521 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.134:22: connect: no route to host
	E0806 08:02:29.316735   51521 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.134:22: connect: no route to host

                                                
                                                
** /stderr **
multinode_test.go:354: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-551471 status" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-551471 -n multinode-551471
helpers_test.go:244: <<< TestMultiNode/serial/StopMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-551471 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-551471 logs -n 25: (1.509695095s)
helpers_test.go:252: TestMultiNode/serial/StopMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-551471 ssh -n                                                                 | multinode-551471 | jenkins | v1.33.1 | 06 Aug 24 07:53 UTC | 06 Aug 24 07:53 UTC |
	|         | multinode-551471-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-551471 cp multinode-551471-m02:/home/docker/cp-test.txt                       | multinode-551471 | jenkins | v1.33.1 | 06 Aug 24 07:53 UTC | 06 Aug 24 07:53 UTC |
	|         | multinode-551471:/home/docker/cp-test_multinode-551471-m02_multinode-551471.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-551471 ssh -n                                                                 | multinode-551471 | jenkins | v1.33.1 | 06 Aug 24 07:53 UTC | 06 Aug 24 07:53 UTC |
	|         | multinode-551471-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-551471 ssh -n multinode-551471 sudo cat                                       | multinode-551471 | jenkins | v1.33.1 | 06 Aug 24 07:53 UTC | 06 Aug 24 07:53 UTC |
	|         | /home/docker/cp-test_multinode-551471-m02_multinode-551471.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-551471 cp multinode-551471-m02:/home/docker/cp-test.txt                       | multinode-551471 | jenkins | v1.33.1 | 06 Aug 24 07:53 UTC | 06 Aug 24 07:53 UTC |
	|         | multinode-551471-m03:/home/docker/cp-test_multinode-551471-m02_multinode-551471-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-551471 ssh -n                                                                 | multinode-551471 | jenkins | v1.33.1 | 06 Aug 24 07:53 UTC | 06 Aug 24 07:53 UTC |
	|         | multinode-551471-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-551471 ssh -n multinode-551471-m03 sudo cat                                   | multinode-551471 | jenkins | v1.33.1 | 06 Aug 24 07:53 UTC | 06 Aug 24 07:53 UTC |
	|         | /home/docker/cp-test_multinode-551471-m02_multinode-551471-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-551471 cp testdata/cp-test.txt                                                | multinode-551471 | jenkins | v1.33.1 | 06 Aug 24 07:53 UTC | 06 Aug 24 07:53 UTC |
	|         | multinode-551471-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-551471 ssh -n                                                                 | multinode-551471 | jenkins | v1.33.1 | 06 Aug 24 07:53 UTC | 06 Aug 24 07:53 UTC |
	|         | multinode-551471-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-551471 cp multinode-551471-m03:/home/docker/cp-test.txt                       | multinode-551471 | jenkins | v1.33.1 | 06 Aug 24 07:53 UTC | 06 Aug 24 07:53 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile2280163446/001/cp-test_multinode-551471-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-551471 ssh -n                                                                 | multinode-551471 | jenkins | v1.33.1 | 06 Aug 24 07:53 UTC | 06 Aug 24 07:53 UTC |
	|         | multinode-551471-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-551471 cp multinode-551471-m03:/home/docker/cp-test.txt                       | multinode-551471 | jenkins | v1.33.1 | 06 Aug 24 07:53 UTC | 06 Aug 24 07:53 UTC |
	|         | multinode-551471:/home/docker/cp-test_multinode-551471-m03_multinode-551471.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-551471 ssh -n                                                                 | multinode-551471 | jenkins | v1.33.1 | 06 Aug 24 07:53 UTC | 06 Aug 24 07:53 UTC |
	|         | multinode-551471-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-551471 ssh -n multinode-551471 sudo cat                                       | multinode-551471 | jenkins | v1.33.1 | 06 Aug 24 07:53 UTC | 06 Aug 24 07:53 UTC |
	|         | /home/docker/cp-test_multinode-551471-m03_multinode-551471.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-551471 cp multinode-551471-m03:/home/docker/cp-test.txt                       | multinode-551471 | jenkins | v1.33.1 | 06 Aug 24 07:53 UTC | 06 Aug 24 07:53 UTC |
	|         | multinode-551471-m02:/home/docker/cp-test_multinode-551471-m03_multinode-551471-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-551471 ssh -n                                                                 | multinode-551471 | jenkins | v1.33.1 | 06 Aug 24 07:53 UTC | 06 Aug 24 07:53 UTC |
	|         | multinode-551471-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-551471 ssh -n multinode-551471-m02 sudo cat                                   | multinode-551471 | jenkins | v1.33.1 | 06 Aug 24 07:53 UTC | 06 Aug 24 07:53 UTC |
	|         | /home/docker/cp-test_multinode-551471-m03_multinode-551471-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-551471 node stop m03                                                          | multinode-551471 | jenkins | v1.33.1 | 06 Aug 24 07:53 UTC | 06 Aug 24 07:53 UTC |
	| node    | multinode-551471 node start                                                             | multinode-551471 | jenkins | v1.33.1 | 06 Aug 24 07:53 UTC | 06 Aug 24 07:54 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-551471                                                                | multinode-551471 | jenkins | v1.33.1 | 06 Aug 24 07:54 UTC |                     |
	| stop    | -p multinode-551471                                                                     | multinode-551471 | jenkins | v1.33.1 | 06 Aug 24 07:54 UTC |                     |
	| start   | -p multinode-551471                                                                     | multinode-551471 | jenkins | v1.33.1 | 06 Aug 24 07:56 UTC | 06 Aug 24 08:00 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-551471                                                                | multinode-551471 | jenkins | v1.33.1 | 06 Aug 24 08:00 UTC |                     |
	| node    | multinode-551471 node delete                                                            | multinode-551471 | jenkins | v1.33.1 | 06 Aug 24 08:00 UTC | 06 Aug 24 08:00 UTC |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	| stop    | multinode-551471 stop                                                                   | multinode-551471 | jenkins | v1.33.1 | 06 Aug 24 08:00 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/06 07:56:41
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0806 07:56:41.682964   49751 out.go:291] Setting OutFile to fd 1 ...
	I0806 07:56:41.683290   49751 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 07:56:41.683302   49751 out.go:304] Setting ErrFile to fd 2...
	I0806 07:56:41.683307   49751 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 07:56:41.683475   49751 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19370-13057/.minikube/bin
	I0806 07:56:41.684020   49751 out.go:298] Setting JSON to false
	I0806 07:56:41.684974   49751 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5948,"bootTime":1722925054,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0806 07:56:41.685034   49751 start.go:139] virtualization: kvm guest
	I0806 07:56:41.687438   49751 out.go:177] * [multinode-551471] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0806 07:56:41.688902   49751 notify.go:220] Checking for updates...
	I0806 07:56:41.688933   49751 out.go:177]   - MINIKUBE_LOCATION=19370
	I0806 07:56:41.690656   49751 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0806 07:56:41.692250   49751 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19370-13057/kubeconfig
	I0806 07:56:41.693871   49751 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19370-13057/.minikube
	I0806 07:56:41.695379   49751 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0806 07:56:41.696899   49751 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0806 07:56:41.698647   49751 config.go:182] Loaded profile config "multinode-551471": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0806 07:56:41.698780   49751 driver.go:392] Setting default libvirt URI to qemu:///system
	I0806 07:56:41.699223   49751 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:56:41.699279   49751 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:56:41.715418   49751 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45661
	I0806 07:56:41.715800   49751 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:56:41.716348   49751 main.go:141] libmachine: Using API Version  1
	I0806 07:56:41.716395   49751 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:56:41.716905   49751 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:56:41.717151   49751 main.go:141] libmachine: (multinode-551471) Calling .DriverName
	I0806 07:56:41.754160   49751 out.go:177] * Using the kvm2 driver based on existing profile
	I0806 07:56:41.755474   49751 start.go:297] selected driver: kvm2
	I0806 07:56:41.755488   49751 start.go:901] validating driver "kvm2" against &{Name:multinode-551471 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.30.3 ClusterName:multinode-551471 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.186 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.134 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.189 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ing
ress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryM
irror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 07:56:41.755711   49751 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0806 07:56:41.756162   49751 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 07:56:41.756246   49751 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19370-13057/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0806 07:56:41.772290   49751 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0806 07:56:41.773027   49751 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0806 07:56:41.773115   49751 cni.go:84] Creating CNI manager for ""
	I0806 07:56:41.773143   49751 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0806 07:56:41.773220   49751 start.go:340] cluster config:
	{Name:multinode-551471 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-551471 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.186 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.134 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.189 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false
kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwareP
ath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 07:56:41.773363   49751 iso.go:125] acquiring lock: {Name:mke58d71de28babe305c5eb0c31c274e9713bb3f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 07:56:41.775244   49751 out.go:177] * Starting "multinode-551471" primary control-plane node in "multinode-551471" cluster
	I0806 07:56:41.776470   49751 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0806 07:56:41.776513   49751 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19370-13057/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0806 07:56:41.776526   49751 cache.go:56] Caching tarball of preloaded images
	I0806 07:56:41.776628   49751 preload.go:172] Found /home/jenkins/minikube-integration/19370-13057/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0806 07:56:41.776641   49751 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
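The three preload lines above show the cache short-circuit: if the preloaded image tarball is already on disk, the download is skipped. A minimal Go sketch of that existence check (the helper name and cache layout are inferred from the paths in the log, not taken from minikube's source):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// preloadCached reports whether the preloaded tarball for the given
// Kubernetes version and container runtime already exists under the
// minikube home directory, mirroring the "Found local preload ...
// skipping download" step logged above. Illustrative only.
func preloadCached(minikubeHome, k8sVersion, runtime string) (string, bool) {
	name := fmt.Sprintf("preloaded-images-k8s-v18-%s-%s-overlay-amd64.tar.lz4", k8sVersion, runtime)
	path := filepath.Join(minikubeHome, "cache", "preloaded-tarball", name)
	_, err := os.Stat(path)
	return path, err == nil
}

func main() {
	path, ok := preloadCached(os.Getenv("MINIKUBE_HOME"), "v1.30.3", "cri-o")
	fmt.Printf("preload %s cached: %v\n", path, ok)
}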
	I0806 07:56:41.776774   49751 profile.go:143] Saving config to /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/multinode-551471/config.json ...
	I0806 07:56:41.776994   49751 start.go:360] acquireMachinesLock for multinode-551471: {Name:mkc602ac9c8cc3360dcc0fcb8104303837aaab61 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0806 07:56:41.777044   49751 start.go:364] duration metric: took 30.41µs to acquireMachinesLock for "multinode-551471"
	I0806 07:56:41.777062   49751 start.go:96] Skipping create...Using existing machine configuration
	I0806 07:56:41.777072   49751 fix.go:54] fixHost starting: 
	I0806 07:56:41.777370   49751 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:56:41.777416   49751 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:56:41.792768   49751 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36409
	I0806 07:56:41.793216   49751 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:56:41.793675   49751 main.go:141] libmachine: Using API Version  1
	I0806 07:56:41.793697   49751 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:56:41.794088   49751 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:56:41.794328   49751 main.go:141] libmachine: (multinode-551471) Calling .DriverName
	I0806 07:56:41.794585   49751 main.go:141] libmachine: (multinode-551471) Calling .GetState
	I0806 07:56:41.796564   49751 fix.go:112] recreateIfNeeded on multinode-551471: state=Running err=<nil>
	W0806 07:56:41.796599   49751 fix.go:138] unexpected machine state, will restart: <nil>
	I0806 07:56:41.798536   49751 out.go:177] * Updating the running kvm2 "multinode-551471" VM ...
	I0806 07:56:41.799758   49751 machine.go:94] provisionDockerMachine start ...
	I0806 07:56:41.799785   49751 main.go:141] libmachine: (multinode-551471) Calling .DriverName
	I0806 07:56:41.800027   49751 main.go:141] libmachine: (multinode-551471) Calling .GetSSHHostname
	I0806 07:56:41.803172   49751 main.go:141] libmachine: (multinode-551471) DBG | domain multinode-551471 has defined MAC address 52:54:00:e9:96:6b in network mk-multinode-551471
	I0806 07:56:41.803608   49751 main.go:141] libmachine: (multinode-551471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:96:6b", ip: ""} in network mk-multinode-551471: {Iface:virbr1 ExpiryTime:2024-08-06 08:51:03 +0000 UTC Type:0 Mac:52:54:00:e9:96:6b Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:multinode-551471 Clientid:01:52:54:00:e9:96:6b}
	I0806 07:56:41.803634   49751 main.go:141] libmachine: (multinode-551471) DBG | domain multinode-551471 has defined IP address 192.168.39.186 and MAC address 52:54:00:e9:96:6b in network mk-multinode-551471
	I0806 07:56:41.803811   49751 main.go:141] libmachine: (multinode-551471) Calling .GetSSHPort
	I0806 07:56:41.803966   49751 main.go:141] libmachine: (multinode-551471) Calling .GetSSHKeyPath
	I0806 07:56:41.804108   49751 main.go:141] libmachine: (multinode-551471) Calling .GetSSHKeyPath
	I0806 07:56:41.804271   49751 main.go:141] libmachine: (multinode-551471) Calling .GetSSHUsername
	I0806 07:56:41.804459   49751 main.go:141] libmachine: Using SSH client type: native
	I0806 07:56:41.804652   49751 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.186 22 <nil> <nil>}
	I0806 07:56:41.804673   49751 main.go:141] libmachine: About to run SSH command:
	hostname
	I0806 07:56:41.914685   49751 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-551471
	
	I0806 07:56:41.914710   49751 main.go:141] libmachine: (multinode-551471) Calling .GetMachineName
	I0806 07:56:41.914978   49751 buildroot.go:166] provisioning hostname "multinode-551471"
	I0806 07:56:41.915002   49751 main.go:141] libmachine: (multinode-551471) Calling .GetMachineName
	I0806 07:56:41.915202   49751 main.go:141] libmachine: (multinode-551471) Calling .GetSSHHostname
	I0806 07:56:41.918165   49751 main.go:141] libmachine: (multinode-551471) DBG | domain multinode-551471 has defined MAC address 52:54:00:e9:96:6b in network mk-multinode-551471
	I0806 07:56:41.918464   49751 main.go:141] libmachine: (multinode-551471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:96:6b", ip: ""} in network mk-multinode-551471: {Iface:virbr1 ExpiryTime:2024-08-06 08:51:03 +0000 UTC Type:0 Mac:52:54:00:e9:96:6b Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:multinode-551471 Clientid:01:52:54:00:e9:96:6b}
	I0806 07:56:41.918508   49751 main.go:141] libmachine: (multinode-551471) DBG | domain multinode-551471 has defined IP address 192.168.39.186 and MAC address 52:54:00:e9:96:6b in network mk-multinode-551471
	I0806 07:56:41.918598   49751 main.go:141] libmachine: (multinode-551471) Calling .GetSSHPort
	I0806 07:56:41.918778   49751 main.go:141] libmachine: (multinode-551471) Calling .GetSSHKeyPath
	I0806 07:56:41.918946   49751 main.go:141] libmachine: (multinode-551471) Calling .GetSSHKeyPath
	I0806 07:56:41.919116   49751 main.go:141] libmachine: (multinode-551471) Calling .GetSSHUsername
	I0806 07:56:41.919273   49751 main.go:141] libmachine: Using SSH client type: native
	I0806 07:56:41.919441   49751 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.186 22 <nil> <nil>}
	I0806 07:56:41.919452   49751 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-551471 && echo "multinode-551471" | sudo tee /etc/hostname
	I0806 07:56:42.037455   49751 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-551471
	
	I0806 07:56:42.037483   49751 main.go:141] libmachine: (multinode-551471) Calling .GetSSHHostname
	I0806 07:56:42.040203   49751 main.go:141] libmachine: (multinode-551471) DBG | domain multinode-551471 has defined MAC address 52:54:00:e9:96:6b in network mk-multinode-551471
	I0806 07:56:42.040630   49751 main.go:141] libmachine: (multinode-551471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:96:6b", ip: ""} in network mk-multinode-551471: {Iface:virbr1 ExpiryTime:2024-08-06 08:51:03 +0000 UTC Type:0 Mac:52:54:00:e9:96:6b Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:multinode-551471 Clientid:01:52:54:00:e9:96:6b}
	I0806 07:56:42.040675   49751 main.go:141] libmachine: (multinode-551471) DBG | domain multinode-551471 has defined IP address 192.168.39.186 and MAC address 52:54:00:e9:96:6b in network mk-multinode-551471
	I0806 07:56:42.040934   49751 main.go:141] libmachine: (multinode-551471) Calling .GetSSHPort
	I0806 07:56:42.041146   49751 main.go:141] libmachine: (multinode-551471) Calling .GetSSHKeyPath
	I0806 07:56:42.041335   49751 main.go:141] libmachine: (multinode-551471) Calling .GetSSHKeyPath
	I0806 07:56:42.041515   49751 main.go:141] libmachine: (multinode-551471) Calling .GetSSHUsername
	I0806 07:56:42.041713   49751 main.go:141] libmachine: Using SSH client type: native
	I0806 07:56:42.041933   49751 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.186 22 <nil> <nil>}
	I0806 07:56:42.041962   49751 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-551471' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-551471/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-551471' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0806 07:56:42.145956   49751 main.go:141] libmachine: SSH cmd err, output: <nil>: 
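The SSH snippet above is the idempotent /etc/hosts update: rewrite an existing 127.0.1.1 entry or append one, but only when the hostname is not already present. A small Go sketch that builds the same guarded command string (hypothetical helper, shown only to make the quoting explicit):

package main

import "fmt"

// hostsEntryCmd reproduces the shell run over SSH above: skip if the
// hostname already appears in /etc/hosts, otherwise replace the 127.0.1.1
// line or append a new one. Illustrative only.
func hostsEntryCmd(hostname string) string {
	return fmt.Sprintf(`if ! grep -xq '.*\s%[1]s' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
  else
    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
  fi
fi`, hostname)
}

func main() {
	fmt.Println(hostsEntryCmd("multinode-551471"))
}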
	I0806 07:56:42.145991   49751 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19370-13057/.minikube CaCertPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19370-13057/.minikube}
	I0806 07:56:42.146009   49751 buildroot.go:174] setting up certificates
	I0806 07:56:42.146017   49751 provision.go:84] configureAuth start
	I0806 07:56:42.146025   49751 main.go:141] libmachine: (multinode-551471) Calling .GetMachineName
	I0806 07:56:42.146379   49751 main.go:141] libmachine: (multinode-551471) Calling .GetIP
	I0806 07:56:42.148959   49751 main.go:141] libmachine: (multinode-551471) DBG | domain multinode-551471 has defined MAC address 52:54:00:e9:96:6b in network mk-multinode-551471
	I0806 07:56:42.149347   49751 main.go:141] libmachine: (multinode-551471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:96:6b", ip: ""} in network mk-multinode-551471: {Iface:virbr1 ExpiryTime:2024-08-06 08:51:03 +0000 UTC Type:0 Mac:52:54:00:e9:96:6b Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:multinode-551471 Clientid:01:52:54:00:e9:96:6b}
	I0806 07:56:42.149368   49751 main.go:141] libmachine: (multinode-551471) DBG | domain multinode-551471 has defined IP address 192.168.39.186 and MAC address 52:54:00:e9:96:6b in network mk-multinode-551471
	I0806 07:56:42.149526   49751 main.go:141] libmachine: (multinode-551471) Calling .GetSSHHostname
	I0806 07:56:42.151457   49751 main.go:141] libmachine: (multinode-551471) DBG | domain multinode-551471 has defined MAC address 52:54:00:e9:96:6b in network mk-multinode-551471
	I0806 07:56:42.151750   49751 main.go:141] libmachine: (multinode-551471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:96:6b", ip: ""} in network mk-multinode-551471: {Iface:virbr1 ExpiryTime:2024-08-06 08:51:03 +0000 UTC Type:0 Mac:52:54:00:e9:96:6b Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:multinode-551471 Clientid:01:52:54:00:e9:96:6b}
	I0806 07:56:42.151779   49751 main.go:141] libmachine: (multinode-551471) DBG | domain multinode-551471 has defined IP address 192.168.39.186 and MAC address 52:54:00:e9:96:6b in network mk-multinode-551471
	I0806 07:56:42.151919   49751 provision.go:143] copyHostCerts
	I0806 07:56:42.151964   49751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19370-13057/.minikube/ca.pem
	I0806 07:56:42.152014   49751 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-13057/.minikube/ca.pem, removing ...
	I0806 07:56:42.152026   49751 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-13057/.minikube/ca.pem
	I0806 07:56:42.152108   49751 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19370-13057/.minikube/ca.pem (1078 bytes)
	I0806 07:56:42.152200   49751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19370-13057/.minikube/cert.pem
	I0806 07:56:42.152225   49751 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-13057/.minikube/cert.pem, removing ...
	I0806 07:56:42.152232   49751 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-13057/.minikube/cert.pem
	I0806 07:56:42.152274   49751 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19370-13057/.minikube/cert.pem (1123 bytes)
	I0806 07:56:42.152339   49751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19370-13057/.minikube/key.pem
	I0806 07:56:42.152377   49751 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-13057/.minikube/key.pem, removing ...
	I0806 07:56:42.152386   49751 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-13057/.minikube/key.pem
	I0806 07:56:42.152421   49751 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19370-13057/.minikube/key.pem (1675 bytes)
	I0806 07:56:42.152530   49751 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19370-13057/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca-key.pem org=jenkins.multinode-551471 san=[127.0.0.1 192.168.39.186 localhost minikube multinode-551471]
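The server certificate above is generated with both IP and DNS SANs (127.0.0.1, 192.168.39.186, localhost, minikube, multinode-551471) and signed by the profile CA. A rough Go sketch of building such a template with crypto/x509; the CA signing against ca.pem/ca-key.pem is omitted, and the 26280h lifetime is taken from the CertExpiration field in the config dump above:

package main

import (
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

// serverCertTemplate splits the SAN list into IP and DNS entries the way a
// server certificate for the node would need them. Illustrative only; it
// does not perform the actual CA signing.
func serverCertTemplate(org string, sans []string) *x509.Certificate {
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{org}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour),
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	for _, san := range sans {
		if ip := net.ParseIP(san); ip != nil {
			tmpl.IPAddresses = append(tmpl.IPAddresses, ip)
		} else {
			tmpl.DNSNames = append(tmpl.DNSNames, san)
		}
	}
	return tmpl
}

func main() {
	t := serverCertTemplate("jenkins.multinode-551471",
		[]string{"127.0.0.1", "192.168.39.186", "localhost", "minikube", "multinode-551471"})
	fmt.Println(t.DNSNames, t.IPAddresses)
}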
	I0806 07:56:42.349045   49751 provision.go:177] copyRemoteCerts
	I0806 07:56:42.349114   49751 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0806 07:56:42.349144   49751 main.go:141] libmachine: (multinode-551471) Calling .GetSSHHostname
	I0806 07:56:42.352050   49751 main.go:141] libmachine: (multinode-551471) DBG | domain multinode-551471 has defined MAC address 52:54:00:e9:96:6b in network mk-multinode-551471
	I0806 07:56:42.352525   49751 main.go:141] libmachine: (multinode-551471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:96:6b", ip: ""} in network mk-multinode-551471: {Iface:virbr1 ExpiryTime:2024-08-06 08:51:03 +0000 UTC Type:0 Mac:52:54:00:e9:96:6b Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:multinode-551471 Clientid:01:52:54:00:e9:96:6b}
	I0806 07:56:42.352552   49751 main.go:141] libmachine: (multinode-551471) DBG | domain multinode-551471 has defined IP address 192.168.39.186 and MAC address 52:54:00:e9:96:6b in network mk-multinode-551471
	I0806 07:56:42.352748   49751 main.go:141] libmachine: (multinode-551471) Calling .GetSSHPort
	I0806 07:56:42.352993   49751 main.go:141] libmachine: (multinode-551471) Calling .GetSSHKeyPath
	I0806 07:56:42.353169   49751 main.go:141] libmachine: (multinode-551471) Calling .GetSSHUsername
	I0806 07:56:42.353275   49751 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/multinode-551471/id_rsa Username:docker}
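The ssh client entry above (IP 192.168.39.186, port 22, the per-machine id_rsa key, user docker) is what every subsequent ssh_runner command reuses. A minimal golang.org/x/crypto/ssh sketch of dialing with those parameters and running one command; this is not minikube's sshutil implementation, just the equivalent dial-and-run shape:

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// runOverSSH dials the node and runs a single command, the same shape as
// the ssh_runner calls in the log. Host-key checking is skipped here purely
// to keep the sketch short.
func runOverSSH(addr, user, keyPath, cmd string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	client, err := ssh.Dial("tcp", addr, &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
	})
	if err != nil {
		return "", err
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer session.Close()
	out, err := session.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	out, err := runOverSSH("192.168.39.186:22", "docker",
		"/home/jenkins/minikube-integration/19370-13057/.minikube/machines/multinode-551471/id_rsa",
		"cat /etc/os-release")
	fmt.Println(out, err)
}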
	I0806 07:56:42.434787   49751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0806 07:56:42.434873   49751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0806 07:56:42.461022   49751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0806 07:56:42.461085   49751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0806 07:56:42.488595   49751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0806 07:56:42.488692   49751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0806 07:56:42.514567   49751 provision.go:87] duration metric: took 368.540738ms to configureAuth
	I0806 07:56:42.514595   49751 buildroot.go:189] setting minikube options for container-runtime
	I0806 07:56:42.514820   49751 config.go:182] Loaded profile config "multinode-551471": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0806 07:56:42.514901   49751 main.go:141] libmachine: (multinode-551471) Calling .GetSSHHostname
	I0806 07:56:42.518220   49751 main.go:141] libmachine: (multinode-551471) DBG | domain multinode-551471 has defined MAC address 52:54:00:e9:96:6b in network mk-multinode-551471
	I0806 07:56:42.518664   49751 main.go:141] libmachine: (multinode-551471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:96:6b", ip: ""} in network mk-multinode-551471: {Iface:virbr1 ExpiryTime:2024-08-06 08:51:03 +0000 UTC Type:0 Mac:52:54:00:e9:96:6b Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:multinode-551471 Clientid:01:52:54:00:e9:96:6b}
	I0806 07:56:42.518691   49751 main.go:141] libmachine: (multinode-551471) DBG | domain multinode-551471 has defined IP address 192.168.39.186 and MAC address 52:54:00:e9:96:6b in network mk-multinode-551471
	I0806 07:56:42.518888   49751 main.go:141] libmachine: (multinode-551471) Calling .GetSSHPort
	I0806 07:56:42.519098   49751 main.go:141] libmachine: (multinode-551471) Calling .GetSSHKeyPath
	I0806 07:56:42.519258   49751 main.go:141] libmachine: (multinode-551471) Calling .GetSSHKeyPath
	I0806 07:56:42.519444   49751 main.go:141] libmachine: (multinode-551471) Calling .GetSSHUsername
	I0806 07:56:42.519654   49751 main.go:141] libmachine: Using SSH client type: native
	I0806 07:56:42.519871   49751 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.186 22 <nil> <nil>}
	I0806 07:56:42.519892   49751 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0806 07:58:13.440454   49751 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0806 07:58:13.440502   49751 machine.go:97] duration metric: took 1m31.640725322s to provisionDockerMachine
	I0806 07:58:13.440520   49751 start.go:293] postStartSetup for "multinode-551471" (driver="kvm2")
	I0806 07:58:13.440540   49751 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0806 07:58:13.440565   49751 main.go:141] libmachine: (multinode-551471) Calling .DriverName
	I0806 07:58:13.440987   49751 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0806 07:58:13.441020   49751 main.go:141] libmachine: (multinode-551471) Calling .GetSSHHostname
	I0806 07:58:13.444453   49751 main.go:141] libmachine: (multinode-551471) DBG | domain multinode-551471 has defined MAC address 52:54:00:e9:96:6b in network mk-multinode-551471
	I0806 07:58:13.444884   49751 main.go:141] libmachine: (multinode-551471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:96:6b", ip: ""} in network mk-multinode-551471: {Iface:virbr1 ExpiryTime:2024-08-06 08:51:03 +0000 UTC Type:0 Mac:52:54:00:e9:96:6b Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:multinode-551471 Clientid:01:52:54:00:e9:96:6b}
	I0806 07:58:13.444915   49751 main.go:141] libmachine: (multinode-551471) DBG | domain multinode-551471 has defined IP address 192.168.39.186 and MAC address 52:54:00:e9:96:6b in network mk-multinode-551471
	I0806 07:58:13.445111   49751 main.go:141] libmachine: (multinode-551471) Calling .GetSSHPort
	I0806 07:58:13.445362   49751 main.go:141] libmachine: (multinode-551471) Calling .GetSSHKeyPath
	I0806 07:58:13.445567   49751 main.go:141] libmachine: (multinode-551471) Calling .GetSSHUsername
	I0806 07:58:13.445757   49751 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/multinode-551471/id_rsa Username:docker}
	I0806 07:58:13.528642   49751 ssh_runner.go:195] Run: cat /etc/os-release
	I0806 07:58:13.532693   49751 command_runner.go:130] > NAME=Buildroot
	I0806 07:58:13.532718   49751 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0806 07:58:13.532724   49751 command_runner.go:130] > ID=buildroot
	I0806 07:58:13.532740   49751 command_runner.go:130] > VERSION_ID=2023.02.9
	I0806 07:58:13.532748   49751 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0806 07:58:13.532786   49751 info.go:137] Remote host: Buildroot 2023.02.9
	I0806 07:58:13.532802   49751 filesync.go:126] Scanning /home/jenkins/minikube-integration/19370-13057/.minikube/addons for local assets ...
	I0806 07:58:13.532893   49751 filesync.go:126] Scanning /home/jenkins/minikube-integration/19370-13057/.minikube/files for local assets ...
	I0806 07:58:13.532970   49751 filesync.go:149] local asset: /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem -> 202462.pem in /etc/ssl/certs
	I0806 07:58:13.532979   49751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem -> /etc/ssl/certs/202462.pem
	I0806 07:58:13.533062   49751 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0806 07:58:13.542678   49751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem --> /etc/ssl/certs/202462.pem (1708 bytes)
	I0806 07:58:13.568542   49751 start.go:296] duration metric: took 128.007794ms for postStartSetup
	I0806 07:58:13.568609   49751 fix.go:56] duration metric: took 1m31.791538245s for fixHost
	I0806 07:58:13.568638   49751 main.go:141] libmachine: (multinode-551471) Calling .GetSSHHostname
	I0806 07:58:13.571373   49751 main.go:141] libmachine: (multinode-551471) DBG | domain multinode-551471 has defined MAC address 52:54:00:e9:96:6b in network mk-multinode-551471
	I0806 07:58:13.571775   49751 main.go:141] libmachine: (multinode-551471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:96:6b", ip: ""} in network mk-multinode-551471: {Iface:virbr1 ExpiryTime:2024-08-06 08:51:03 +0000 UTC Type:0 Mac:52:54:00:e9:96:6b Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:multinode-551471 Clientid:01:52:54:00:e9:96:6b}
	I0806 07:58:13.571800   49751 main.go:141] libmachine: (multinode-551471) DBG | domain multinode-551471 has defined IP address 192.168.39.186 and MAC address 52:54:00:e9:96:6b in network mk-multinode-551471
	I0806 07:58:13.572073   49751 main.go:141] libmachine: (multinode-551471) Calling .GetSSHPort
	I0806 07:58:13.572320   49751 main.go:141] libmachine: (multinode-551471) Calling .GetSSHKeyPath
	I0806 07:58:13.572587   49751 main.go:141] libmachine: (multinode-551471) Calling .GetSSHKeyPath
	I0806 07:58:13.572736   49751 main.go:141] libmachine: (multinode-551471) Calling .GetSSHUsername
	I0806 07:58:13.573057   49751 main.go:141] libmachine: Using SSH client type: native
	I0806 07:58:13.573236   49751 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.186 22 <nil> <nil>}
	I0806 07:58:13.573251   49751 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0806 07:58:13.677647   49751 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722931093.655562722
	
	I0806 07:58:13.677676   49751 fix.go:216] guest clock: 1722931093.655562722
	I0806 07:58:13.677693   49751 fix.go:229] Guest: 2024-08-06 07:58:13.655562722 +0000 UTC Remote: 2024-08-06 07:58:13.568617668 +0000 UTC m=+91.922074622 (delta=86.945054ms)
	I0806 07:58:13.677750   49751 fix.go:200] guest clock delta is within tolerance: 86.945054ms
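The fixHost step above compares the guest clock (read via `date +%s.%N` over SSH) against the host's view of the same instant and only resyncs when the delta leaves a tolerance window; here the 86.945054ms delta is accepted. A minimal sketch of that comparison (the 2s tolerance is an assumption for illustration, not a value taken from the log):

package main

import (
	"fmt"
	"time"
)

// clockDelta returns the absolute difference between the guest and host
// clocks and whether it falls within the given tolerance, mirroring the
// "guest clock delta is within tolerance" check logged above.
func clockDelta(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := host.Sub(guest)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	guest := time.Unix(1722931093, 655562722) // parsed from `date +%s.%N`
	host := guest.Add(86945054 * time.Nanosecond)
	d, ok := clockDelta(guest, host, 2*time.Second)
	fmt.Printf("delta=%v within tolerance=%v\n", d, ok)
}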
	I0806 07:58:13.677762   49751 start.go:83] releasing machines lock for "multinode-551471", held for 1m31.900708135s
	I0806 07:58:13.677803   49751 main.go:141] libmachine: (multinode-551471) Calling .DriverName
	I0806 07:58:13.678083   49751 main.go:141] libmachine: (multinode-551471) Calling .GetIP
	I0806 07:58:13.681138   49751 main.go:141] libmachine: (multinode-551471) DBG | domain multinode-551471 has defined MAC address 52:54:00:e9:96:6b in network mk-multinode-551471
	I0806 07:58:13.681546   49751 main.go:141] libmachine: (multinode-551471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:96:6b", ip: ""} in network mk-multinode-551471: {Iface:virbr1 ExpiryTime:2024-08-06 08:51:03 +0000 UTC Type:0 Mac:52:54:00:e9:96:6b Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:multinode-551471 Clientid:01:52:54:00:e9:96:6b}
	I0806 07:58:13.681574   49751 main.go:141] libmachine: (multinode-551471) DBG | domain multinode-551471 has defined IP address 192.168.39.186 and MAC address 52:54:00:e9:96:6b in network mk-multinode-551471
	I0806 07:58:13.681701   49751 main.go:141] libmachine: (multinode-551471) Calling .DriverName
	I0806 07:58:13.682288   49751 main.go:141] libmachine: (multinode-551471) Calling .DriverName
	I0806 07:58:13.682472   49751 main.go:141] libmachine: (multinode-551471) Calling .DriverName
	I0806 07:58:13.682570   49751 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0806 07:58:13.682615   49751 main.go:141] libmachine: (multinode-551471) Calling .GetSSHHostname
	I0806 07:58:13.682725   49751 ssh_runner.go:195] Run: cat /version.json
	I0806 07:58:13.682758   49751 main.go:141] libmachine: (multinode-551471) Calling .GetSSHHostname
	I0806 07:58:13.685385   49751 main.go:141] libmachine: (multinode-551471) DBG | domain multinode-551471 has defined MAC address 52:54:00:e9:96:6b in network mk-multinode-551471
	I0806 07:58:13.685464   49751 main.go:141] libmachine: (multinode-551471) DBG | domain multinode-551471 has defined MAC address 52:54:00:e9:96:6b in network mk-multinode-551471
	I0806 07:58:13.685707   49751 main.go:141] libmachine: (multinode-551471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:96:6b", ip: ""} in network mk-multinode-551471: {Iface:virbr1 ExpiryTime:2024-08-06 08:51:03 +0000 UTC Type:0 Mac:52:54:00:e9:96:6b Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:multinode-551471 Clientid:01:52:54:00:e9:96:6b}
	I0806 07:58:13.685731   49751 main.go:141] libmachine: (multinode-551471) DBG | domain multinode-551471 has defined IP address 192.168.39.186 and MAC address 52:54:00:e9:96:6b in network mk-multinode-551471
	I0806 07:58:13.685879   49751 main.go:141] libmachine: (multinode-551471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:96:6b", ip: ""} in network mk-multinode-551471: {Iface:virbr1 ExpiryTime:2024-08-06 08:51:03 +0000 UTC Type:0 Mac:52:54:00:e9:96:6b Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:multinode-551471 Clientid:01:52:54:00:e9:96:6b}
	I0806 07:58:13.685890   49751 main.go:141] libmachine: (multinode-551471) Calling .GetSSHPort
	I0806 07:58:13.685909   49751 main.go:141] libmachine: (multinode-551471) DBG | domain multinode-551471 has defined IP address 192.168.39.186 and MAC address 52:54:00:e9:96:6b in network mk-multinode-551471
	I0806 07:58:13.686030   49751 main.go:141] libmachine: (multinode-551471) Calling .GetSSHPort
	I0806 07:58:13.686099   49751 main.go:141] libmachine: (multinode-551471) Calling .GetSSHKeyPath
	I0806 07:58:13.686273   49751 main.go:141] libmachine: (multinode-551471) Calling .GetSSHKeyPath
	I0806 07:58:13.686452   49751 main.go:141] libmachine: (multinode-551471) Calling .GetSSHUsername
	I0806 07:58:13.686463   49751 main.go:141] libmachine: (multinode-551471) Calling .GetSSHUsername
	I0806 07:58:13.686606   49751 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/multinode-551471/id_rsa Username:docker}
	I0806 07:58:13.686664   49751 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/multinode-551471/id_rsa Username:docker}
	I0806 07:58:13.762247   49751 command_runner.go:130] > {"iso_version": "v1.33.1-1722248113-19339", "kicbase_version": "v0.0.44-1721902582-19326", "minikube_version": "v1.33.1", "commit": "b8389556a97747a5bbaa1906d238251ad536d76e"}
	I0806 07:58:13.762474   49751 ssh_runner.go:195] Run: systemctl --version
	I0806 07:58:13.784898   49751 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0806 07:58:13.785667   49751 command_runner.go:130] > systemd 252 (252)
	I0806 07:58:13.785724   49751 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0806 07:58:13.785801   49751 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0806 07:58:13.947440   49751 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0806 07:58:13.955455   49751 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0806 07:58:13.955736   49751 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0806 07:58:13.955815   49751 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0806 07:58:13.968521   49751 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0806 07:58:13.968553   49751 start.go:495] detecting cgroup driver to use...
	I0806 07:58:13.968613   49751 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0806 07:58:13.990927   49751 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0806 07:58:14.006987   49751 docker.go:217] disabling cri-docker service (if available) ...
	I0806 07:58:14.007049   49751 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0806 07:58:14.024014   49751 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0806 07:58:14.040416   49751 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0806 07:58:14.195553   49751 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0806 07:58:14.337404   49751 docker.go:233] disabling docker service ...
	I0806 07:58:14.337486   49751 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0806 07:58:14.356268   49751 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0806 07:58:14.374416   49751 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0806 07:58:14.519097   49751 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0806 07:58:14.669629   49751 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0806 07:58:14.685116   49751 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0806 07:58:14.704986   49751 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0806 07:58:14.705050   49751 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0806 07:58:14.705116   49751 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 07:58:14.717029   49751 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0806 07:58:14.717127   49751 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 07:58:14.729142   49751 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 07:58:14.740593   49751 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 07:58:14.752524   49751 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0806 07:58:14.764629   49751 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 07:58:14.776538   49751 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 07:58:14.788228   49751 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
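The run of sed commands above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pause image to registry.k8s.io/pause:3.9, cgroup_manager to cgroupfs, conmon_cgroup to pod, plus the unprivileged-port sysctl. A small Go sketch of the two central substitutions, operating on the file contents as a string (hypothetical helper; the real changes are made with sed over SSH as logged):

package main

import (
	"fmt"
	"regexp"
)

// applyCrioOverrides performs the same line rewrites as the sed commands in
// the log: point CRI-O at the desired pause image and cgroup manager.
func applyCrioOverrides(conf, pauseImage, cgroupManager string) string {
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, fmt.Sprintf("pause_image = %q", pauseImage))
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, fmt.Sprintf("cgroup_manager = %q", cgroupManager))
	return conf
}

func main() {
	in := "pause_image = \"registry.k8s.io/pause:3.8\"\ncgroup_manager = \"systemd\"\n"
	fmt.Print(applyCrioOverrides(in, "registry.k8s.io/pause:3.9", "cgroupfs"))
}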
	I0806 07:58:14.799904   49751 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0806 07:58:14.810436   49751 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0806 07:58:14.810520   49751 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0806 07:58:14.820939   49751 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 07:58:14.964414   49751 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0806 07:58:21.103667   49751 ssh_runner.go:235] Completed: sudo systemctl restart crio: (6.139212478s)
	I0806 07:58:21.103698   49751 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0806 07:58:21.103753   49751 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0806 07:58:21.109024   49751 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0806 07:58:21.109050   49751 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0806 07:58:21.109059   49751 command_runner.go:130] > Device: 0,22	Inode: 1340        Links: 1
	I0806 07:58:21.109069   49751 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0806 07:58:21.109077   49751 command_runner.go:130] > Access: 2024-08-06 07:58:20.971324906 +0000
	I0806 07:58:21.109086   49751 command_runner.go:130] > Modify: 2024-08-06 07:58:20.971324906 +0000
	I0806 07:58:21.109106   49751 command_runner.go:130] > Change: 2024-08-06 07:58:20.971324906 +0000
	I0806 07:58:21.109118   49751 command_runner.go:130] >  Birth: -
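After restarting CRI-O the runner waits up to 60s for /var/run/crio/crio.sock to appear; the stat output above confirms it exists and is a socket. A minimal polling sketch of that wait (the 500ms poll interval is an assumption):

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until the given path exists as a unix socket or the
// timeout expires, like the "Will wait 60s for socket path" step above.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Println(err)
	}
}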
	I0806 07:58:21.109135   49751 start.go:563] Will wait 60s for crictl version
	I0806 07:58:21.109184   49751 ssh_runner.go:195] Run: which crictl
	I0806 07:58:21.113153   49751 command_runner.go:130] > /usr/bin/crictl
	I0806 07:58:21.113232   49751 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0806 07:58:21.156643   49751 command_runner.go:130] > Version:  0.1.0
	I0806 07:58:21.156667   49751 command_runner.go:130] > RuntimeName:  cri-o
	I0806 07:58:21.156675   49751 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0806 07:58:21.156683   49751 command_runner.go:130] > RuntimeApiVersion:  v1
	I0806 07:58:21.156708   49751 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0806 07:58:21.156787   49751 ssh_runner.go:195] Run: crio --version
	I0806 07:58:21.186189   49751 command_runner.go:130] > crio version 1.29.1
	I0806 07:58:21.186216   49751 command_runner.go:130] > Version:        1.29.1
	I0806 07:58:21.186224   49751 command_runner.go:130] > GitCommit:      unknown
	I0806 07:58:21.186232   49751 command_runner.go:130] > GitCommitDate:  unknown
	I0806 07:58:21.186239   49751 command_runner.go:130] > GitTreeState:   clean
	I0806 07:58:21.186249   49751 command_runner.go:130] > BuildDate:      2024-07-29T16:04:01Z
	I0806 07:58:21.186256   49751 command_runner.go:130] > GoVersion:      go1.21.6
	I0806 07:58:21.186264   49751 command_runner.go:130] > Compiler:       gc
	I0806 07:58:21.186271   49751 command_runner.go:130] > Platform:       linux/amd64
	I0806 07:58:21.186275   49751 command_runner.go:130] > Linkmode:       dynamic
	I0806 07:58:21.186279   49751 command_runner.go:130] > BuildTags:      
	I0806 07:58:21.186283   49751 command_runner.go:130] >   containers_image_ostree_stub
	I0806 07:58:21.186288   49751 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0806 07:58:21.186295   49751 command_runner.go:130] >   btrfs_noversion
	I0806 07:58:21.186300   49751 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0806 07:58:21.186305   49751 command_runner.go:130] >   libdm_no_deferred_remove
	I0806 07:58:21.186308   49751 command_runner.go:130] >   seccomp
	I0806 07:58:21.186312   49751 command_runner.go:130] > LDFlags:          unknown
	I0806 07:58:21.186324   49751 command_runner.go:130] > SeccompEnabled:   true
	I0806 07:58:21.186332   49751 command_runner.go:130] > AppArmorEnabled:  false
	I0806 07:58:21.187464   49751 ssh_runner.go:195] Run: crio --version
	I0806 07:58:21.215814   49751 command_runner.go:130] > crio version 1.29.1
	I0806 07:58:21.215838   49751 command_runner.go:130] > Version:        1.29.1
	I0806 07:58:21.215859   49751 command_runner.go:130] > GitCommit:      unknown
	I0806 07:58:21.215865   49751 command_runner.go:130] > GitCommitDate:  unknown
	I0806 07:58:21.215877   49751 command_runner.go:130] > GitTreeState:   clean
	I0806 07:58:21.215886   49751 command_runner.go:130] > BuildDate:      2024-07-29T16:04:01Z
	I0806 07:58:21.215893   49751 command_runner.go:130] > GoVersion:      go1.21.6
	I0806 07:58:21.215899   49751 command_runner.go:130] > Compiler:       gc
	I0806 07:58:21.215905   49751 command_runner.go:130] > Platform:       linux/amd64
	I0806 07:58:21.215913   49751 command_runner.go:130] > Linkmode:       dynamic
	I0806 07:58:21.215917   49751 command_runner.go:130] > BuildTags:      
	I0806 07:58:21.215922   49751 command_runner.go:130] >   containers_image_ostree_stub
	I0806 07:58:21.215927   49751 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0806 07:58:21.215931   49751 command_runner.go:130] >   btrfs_noversion
	I0806 07:58:21.215935   49751 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0806 07:58:21.215940   49751 command_runner.go:130] >   libdm_no_deferred_remove
	I0806 07:58:21.215943   49751 command_runner.go:130] >   seccomp
	I0806 07:58:21.215947   49751 command_runner.go:130] > LDFlags:          unknown
	I0806 07:58:21.215954   49751 command_runner.go:130] > SeccompEnabled:   true
	I0806 07:58:21.215960   49751 command_runner.go:130] > AppArmorEnabled:  false
	I0806 07:58:21.219946   49751 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0806 07:58:21.221262   49751 main.go:141] libmachine: (multinode-551471) Calling .GetIP
	I0806 07:58:21.224168   49751 main.go:141] libmachine: (multinode-551471) DBG | domain multinode-551471 has defined MAC address 52:54:00:e9:96:6b in network mk-multinode-551471
	I0806 07:58:21.224729   49751 main.go:141] libmachine: (multinode-551471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:96:6b", ip: ""} in network mk-multinode-551471: {Iface:virbr1 ExpiryTime:2024-08-06 08:51:03 +0000 UTC Type:0 Mac:52:54:00:e9:96:6b Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:multinode-551471 Clientid:01:52:54:00:e9:96:6b}
	I0806 07:58:21.224763   49751 main.go:141] libmachine: (multinode-551471) DBG | domain multinode-551471 has defined IP address 192.168.39.186 and MAC address 52:54:00:e9:96:6b in network mk-multinode-551471
	I0806 07:58:21.224970   49751 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0806 07:58:21.229462   49751 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0806 07:58:21.229565   49751 kubeadm.go:883] updating cluster {Name:multinode-551471 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
30.3 ClusterName:multinode-551471 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.186 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.134 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.189 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:fa
lse inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disa
bleOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0806 07:58:21.229691   49751 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0806 07:58:21.229739   49751 ssh_runner.go:195] Run: sudo crictl images --output json
	I0806 07:58:21.271124   49751 command_runner.go:130] > {
	I0806 07:58:21.271145   49751 command_runner.go:130] >   "images": [
	I0806 07:58:21.271149   49751 command_runner.go:130] >     {
	I0806 07:58:21.271157   49751 command_runner.go:130] >       "id": "5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f",
	I0806 07:58:21.271162   49751 command_runner.go:130] >       "repoTags": [
	I0806 07:58:21.271170   49751 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240715-585640e9"
	I0806 07:58:21.271173   49751 command_runner.go:130] >       ],
	I0806 07:58:21.271177   49751 command_runner.go:130] >       "repoDigests": [
	I0806 07:58:21.271184   49751 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115",
	I0806 07:58:21.271191   49751 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"
	I0806 07:58:21.271194   49751 command_runner.go:130] >       ],
	I0806 07:58:21.271199   49751 command_runner.go:130] >       "size": "87165492",
	I0806 07:58:21.271202   49751 command_runner.go:130] >       "uid": null,
	I0806 07:58:21.271207   49751 command_runner.go:130] >       "username": "",
	I0806 07:58:21.271217   49751 command_runner.go:130] >       "spec": null,
	I0806 07:58:21.271221   49751 command_runner.go:130] >       "pinned": false
	I0806 07:58:21.271224   49751 command_runner.go:130] >     },
	I0806 07:58:21.271234   49751 command_runner.go:130] >     {
	I0806 07:58:21.271240   49751 command_runner.go:130] >       "id": "917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557",
	I0806 07:58:21.271244   49751 command_runner.go:130] >       "repoTags": [
	I0806 07:58:21.271249   49751 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240730-75a5af0c"
	I0806 07:58:21.271255   49751 command_runner.go:130] >       ],
	I0806 07:58:21.271260   49751 command_runner.go:130] >       "repoDigests": [
	I0806 07:58:21.271267   49751 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3",
	I0806 07:58:21.271274   49751 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a"
	I0806 07:58:21.271278   49751 command_runner.go:130] >       ],
	I0806 07:58:21.271285   49751 command_runner.go:130] >       "size": "87165492",
	I0806 07:58:21.271288   49751 command_runner.go:130] >       "uid": null,
	I0806 07:58:21.271296   49751 command_runner.go:130] >       "username": "",
	I0806 07:58:21.271302   49751 command_runner.go:130] >       "spec": null,
	I0806 07:58:21.271306   49751 command_runner.go:130] >       "pinned": false
	I0806 07:58:21.271309   49751 command_runner.go:130] >     },
	I0806 07:58:21.271312   49751 command_runner.go:130] >     {
	I0806 07:58:21.271318   49751 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0806 07:58:21.271324   49751 command_runner.go:130] >       "repoTags": [
	I0806 07:58:21.271329   49751 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0806 07:58:21.271333   49751 command_runner.go:130] >       ],
	I0806 07:58:21.271337   49751 command_runner.go:130] >       "repoDigests": [
	I0806 07:58:21.271344   49751 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0806 07:58:21.271351   49751 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0806 07:58:21.271355   49751 command_runner.go:130] >       ],
	I0806 07:58:21.271360   49751 command_runner.go:130] >       "size": "1363676",
	I0806 07:58:21.271364   49751 command_runner.go:130] >       "uid": null,
	I0806 07:58:21.271368   49751 command_runner.go:130] >       "username": "",
	I0806 07:58:21.271373   49751 command_runner.go:130] >       "spec": null,
	I0806 07:58:21.271377   49751 command_runner.go:130] >       "pinned": false
	I0806 07:58:21.271381   49751 command_runner.go:130] >     },
	I0806 07:58:21.271385   49751 command_runner.go:130] >     {
	I0806 07:58:21.271391   49751 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0806 07:58:21.271395   49751 command_runner.go:130] >       "repoTags": [
	I0806 07:58:21.271400   49751 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0806 07:58:21.271404   49751 command_runner.go:130] >       ],
	I0806 07:58:21.271410   49751 command_runner.go:130] >       "repoDigests": [
	I0806 07:58:21.271422   49751 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0806 07:58:21.271436   49751 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0806 07:58:21.271442   49751 command_runner.go:130] >       ],
	I0806 07:58:21.271446   49751 command_runner.go:130] >       "size": "31470524",
	I0806 07:58:21.271450   49751 command_runner.go:130] >       "uid": null,
	I0806 07:58:21.271456   49751 command_runner.go:130] >       "username": "",
	I0806 07:58:21.271460   49751 command_runner.go:130] >       "spec": null,
	I0806 07:58:21.271463   49751 command_runner.go:130] >       "pinned": false
	I0806 07:58:21.271469   49751 command_runner.go:130] >     },
	I0806 07:58:21.271472   49751 command_runner.go:130] >     {
	I0806 07:58:21.271478   49751 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0806 07:58:21.271484   49751 command_runner.go:130] >       "repoTags": [
	I0806 07:58:21.271490   49751 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0806 07:58:21.271493   49751 command_runner.go:130] >       ],
	I0806 07:58:21.271497   49751 command_runner.go:130] >       "repoDigests": [
	I0806 07:58:21.271503   49751 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0806 07:58:21.271511   49751 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0806 07:58:21.271516   49751 command_runner.go:130] >       ],
	I0806 07:58:21.271520   49751 command_runner.go:130] >       "size": "61245718",
	I0806 07:58:21.271524   49751 command_runner.go:130] >       "uid": null,
	I0806 07:58:21.271528   49751 command_runner.go:130] >       "username": "nonroot",
	I0806 07:58:21.271532   49751 command_runner.go:130] >       "spec": null,
	I0806 07:58:21.271535   49751 command_runner.go:130] >       "pinned": false
	I0806 07:58:21.271539   49751 command_runner.go:130] >     },
	I0806 07:58:21.271542   49751 command_runner.go:130] >     {
	I0806 07:58:21.271548   49751 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0806 07:58:21.271554   49751 command_runner.go:130] >       "repoTags": [
	I0806 07:58:21.271558   49751 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0806 07:58:21.271561   49751 command_runner.go:130] >       ],
	I0806 07:58:21.271565   49751 command_runner.go:130] >       "repoDigests": [
	I0806 07:58:21.271571   49751 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0806 07:58:21.271581   49751 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0806 07:58:21.271584   49751 command_runner.go:130] >       ],
	I0806 07:58:21.271592   49751 command_runner.go:130] >       "size": "150779692",
	I0806 07:58:21.271595   49751 command_runner.go:130] >       "uid": {
	I0806 07:58:21.271601   49751 command_runner.go:130] >         "value": "0"
	I0806 07:58:21.271609   49751 command_runner.go:130] >       },
	I0806 07:58:21.271615   49751 command_runner.go:130] >       "username": "",
	I0806 07:58:21.271619   49751 command_runner.go:130] >       "spec": null,
	I0806 07:58:21.271627   49751 command_runner.go:130] >       "pinned": false
	I0806 07:58:21.271630   49751 command_runner.go:130] >     },
	I0806 07:58:21.271633   49751 command_runner.go:130] >     {
	I0806 07:58:21.271639   49751 command_runner.go:130] >       "id": "1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d",
	I0806 07:58:21.271645   49751 command_runner.go:130] >       "repoTags": [
	I0806 07:58:21.271649   49751 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.3"
	I0806 07:58:21.271652   49751 command_runner.go:130] >       ],
	I0806 07:58:21.271656   49751 command_runner.go:130] >       "repoDigests": [
	I0806 07:58:21.271663   49751 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c",
	I0806 07:58:21.271672   49751 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"
	I0806 07:58:21.271675   49751 command_runner.go:130] >       ],
	I0806 07:58:21.271679   49751 command_runner.go:130] >       "size": "117609954",
	I0806 07:58:21.271684   49751 command_runner.go:130] >       "uid": {
	I0806 07:58:21.271688   49751 command_runner.go:130] >         "value": "0"
	I0806 07:58:21.271694   49751 command_runner.go:130] >       },
	I0806 07:58:21.271697   49751 command_runner.go:130] >       "username": "",
	I0806 07:58:21.271701   49751 command_runner.go:130] >       "spec": null,
	I0806 07:58:21.271705   49751 command_runner.go:130] >       "pinned": false
	I0806 07:58:21.271709   49751 command_runner.go:130] >     },
	I0806 07:58:21.271712   49751 command_runner.go:130] >     {
	I0806 07:58:21.271718   49751 command_runner.go:130] >       "id": "76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e",
	I0806 07:58:21.271723   49751 command_runner.go:130] >       "repoTags": [
	I0806 07:58:21.271728   49751 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.3"
	I0806 07:58:21.271734   49751 command_runner.go:130] >       ],
	I0806 07:58:21.271738   49751 command_runner.go:130] >       "repoDigests": [
	I0806 07:58:21.271758   49751 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7",
	I0806 07:58:21.271768   49751 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"
	I0806 07:58:21.271772   49751 command_runner.go:130] >       ],
	I0806 07:58:21.271776   49751 command_runner.go:130] >       "size": "112198984",
	I0806 07:58:21.271779   49751 command_runner.go:130] >       "uid": {
	I0806 07:58:21.271783   49751 command_runner.go:130] >         "value": "0"
	I0806 07:58:21.271787   49751 command_runner.go:130] >       },
	I0806 07:58:21.271790   49751 command_runner.go:130] >       "username": "",
	I0806 07:58:21.271798   49751 command_runner.go:130] >       "spec": null,
	I0806 07:58:21.271802   49751 command_runner.go:130] >       "pinned": false
	I0806 07:58:21.271805   49751 command_runner.go:130] >     },
	I0806 07:58:21.271808   49751 command_runner.go:130] >     {
	I0806 07:58:21.271813   49751 command_runner.go:130] >       "id": "55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1",
	I0806 07:58:21.271817   49751 command_runner.go:130] >       "repoTags": [
	I0806 07:58:21.271821   49751 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.3"
	I0806 07:58:21.271824   49751 command_runner.go:130] >       ],
	I0806 07:58:21.271827   49751 command_runner.go:130] >       "repoDigests": [
	I0806 07:58:21.271834   49751 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80",
	I0806 07:58:21.271842   49751 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"
	I0806 07:58:21.271846   49751 command_runner.go:130] >       ],
	I0806 07:58:21.271850   49751 command_runner.go:130] >       "size": "85953945",
	I0806 07:58:21.271854   49751 command_runner.go:130] >       "uid": null,
	I0806 07:58:21.271858   49751 command_runner.go:130] >       "username": "",
	I0806 07:58:21.271861   49751 command_runner.go:130] >       "spec": null,
	I0806 07:58:21.271865   49751 command_runner.go:130] >       "pinned": false
	I0806 07:58:21.271868   49751 command_runner.go:130] >     },
	I0806 07:58:21.271872   49751 command_runner.go:130] >     {
	I0806 07:58:21.271877   49751 command_runner.go:130] >       "id": "3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2",
	I0806 07:58:21.271884   49751 command_runner.go:130] >       "repoTags": [
	I0806 07:58:21.271888   49751 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.3"
	I0806 07:58:21.271892   49751 command_runner.go:130] >       ],
	I0806 07:58:21.271895   49751 command_runner.go:130] >       "repoDigests": [
	I0806 07:58:21.271903   49751 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266",
	I0806 07:58:21.271912   49751 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"
	I0806 07:58:21.271916   49751 command_runner.go:130] >       ],
	I0806 07:58:21.271920   49751 command_runner.go:130] >       "size": "63051080",
	I0806 07:58:21.271923   49751 command_runner.go:130] >       "uid": {
	I0806 07:58:21.271927   49751 command_runner.go:130] >         "value": "0"
	I0806 07:58:21.271931   49751 command_runner.go:130] >       },
	I0806 07:58:21.271935   49751 command_runner.go:130] >       "username": "",
	I0806 07:58:21.271940   49751 command_runner.go:130] >       "spec": null,
	I0806 07:58:21.271944   49751 command_runner.go:130] >       "pinned": false
	I0806 07:58:21.271947   49751 command_runner.go:130] >     },
	I0806 07:58:21.271951   49751 command_runner.go:130] >     {
	I0806 07:58:21.271961   49751 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0806 07:58:21.271967   49751 command_runner.go:130] >       "repoTags": [
	I0806 07:58:21.271971   49751 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0806 07:58:21.271974   49751 command_runner.go:130] >       ],
	I0806 07:58:21.271981   49751 command_runner.go:130] >       "repoDigests": [
	I0806 07:58:21.271989   49751 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0806 07:58:21.271996   49751 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0806 07:58:21.272001   49751 command_runner.go:130] >       ],
	I0806 07:58:21.272005   49751 command_runner.go:130] >       "size": "750414",
	I0806 07:58:21.272008   49751 command_runner.go:130] >       "uid": {
	I0806 07:58:21.272012   49751 command_runner.go:130] >         "value": "65535"
	I0806 07:58:21.272015   49751 command_runner.go:130] >       },
	I0806 07:58:21.272019   49751 command_runner.go:130] >       "username": "",
	I0806 07:58:21.272024   49751 command_runner.go:130] >       "spec": null,
	I0806 07:58:21.272029   49751 command_runner.go:130] >       "pinned": true
	I0806 07:58:21.272033   49751 command_runner.go:130] >     }
	I0806 07:58:21.272036   49751 command_runner.go:130] >   ]
	I0806 07:58:21.272039   49751 command_runner.go:130] > }
	I0806 07:58:21.273129   49751 crio.go:514] all images are preloaded for cri-o runtime.
	I0806 07:58:21.273147   49751 crio.go:433] Images already preloaded, skipping extraction
	I0806 07:58:21.273203   49751 ssh_runner.go:195] Run: sudo crictl images --output json
	I0806 07:58:21.310771   49751 command_runner.go:130] > {
	I0806 07:58:21.310799   49751 command_runner.go:130] >   "images": [
	I0806 07:58:21.310805   49751 command_runner.go:130] >     {
	I0806 07:58:21.310818   49751 command_runner.go:130] >       "id": "5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f",
	I0806 07:58:21.310827   49751 command_runner.go:130] >       "repoTags": [
	I0806 07:58:21.310836   49751 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240715-585640e9"
	I0806 07:58:21.310841   49751 command_runner.go:130] >       ],
	I0806 07:58:21.310865   49751 command_runner.go:130] >       "repoDigests": [
	I0806 07:58:21.310879   49751 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115",
	I0806 07:58:21.310890   49751 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"
	I0806 07:58:21.310897   49751 command_runner.go:130] >       ],
	I0806 07:58:21.310905   49751 command_runner.go:130] >       "size": "87165492",
	I0806 07:58:21.310915   49751 command_runner.go:130] >       "uid": null,
	I0806 07:58:21.310922   49751 command_runner.go:130] >       "username": "",
	I0806 07:58:21.310933   49751 command_runner.go:130] >       "spec": null,
	I0806 07:58:21.310940   49751 command_runner.go:130] >       "pinned": false
	I0806 07:58:21.310948   49751 command_runner.go:130] >     },
	I0806 07:58:21.310954   49751 command_runner.go:130] >     {
	I0806 07:58:21.310966   49751 command_runner.go:130] >       "id": "917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557",
	I0806 07:58:21.310976   49751 command_runner.go:130] >       "repoTags": [
	I0806 07:58:21.310985   49751 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240730-75a5af0c"
	I0806 07:58:21.310995   49751 command_runner.go:130] >       ],
	I0806 07:58:21.311005   49751 command_runner.go:130] >       "repoDigests": [
	I0806 07:58:21.311019   49751 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3",
	I0806 07:58:21.311036   49751 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a"
	I0806 07:58:21.311045   49751 command_runner.go:130] >       ],
	I0806 07:58:21.311055   49751 command_runner.go:130] >       "size": "87165492",
	I0806 07:58:21.311064   49751 command_runner.go:130] >       "uid": null,
	I0806 07:58:21.311073   49751 command_runner.go:130] >       "username": "",
	I0806 07:58:21.311082   49751 command_runner.go:130] >       "spec": null,
	I0806 07:58:21.311090   49751 command_runner.go:130] >       "pinned": false
	I0806 07:58:21.311099   49751 command_runner.go:130] >     },
	I0806 07:58:21.311104   49751 command_runner.go:130] >     {
	I0806 07:58:21.311116   49751 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0806 07:58:21.311127   49751 command_runner.go:130] >       "repoTags": [
	I0806 07:58:21.311137   49751 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0806 07:58:21.311145   49751 command_runner.go:130] >       ],
	I0806 07:58:21.311153   49751 command_runner.go:130] >       "repoDigests": [
	I0806 07:58:21.311167   49751 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0806 07:58:21.311180   49751 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0806 07:58:21.311188   49751 command_runner.go:130] >       ],
	I0806 07:58:21.311197   49751 command_runner.go:130] >       "size": "1363676",
	I0806 07:58:21.311205   49751 command_runner.go:130] >       "uid": null,
	I0806 07:58:21.311213   49751 command_runner.go:130] >       "username": "",
	I0806 07:58:21.311222   49751 command_runner.go:130] >       "spec": null,
	I0806 07:58:21.311231   49751 command_runner.go:130] >       "pinned": false
	I0806 07:58:21.311244   49751 command_runner.go:130] >     },
	I0806 07:58:21.311252   49751 command_runner.go:130] >     {
	I0806 07:58:21.311261   49751 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0806 07:58:21.311269   49751 command_runner.go:130] >       "repoTags": [
	I0806 07:58:21.311278   49751 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0806 07:58:21.311286   49751 command_runner.go:130] >       ],
	I0806 07:58:21.311293   49751 command_runner.go:130] >       "repoDigests": [
	I0806 07:58:21.311321   49751 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0806 07:58:21.311341   49751 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0806 07:58:21.311347   49751 command_runner.go:130] >       ],
	I0806 07:58:21.311353   49751 command_runner.go:130] >       "size": "31470524",
	I0806 07:58:21.311361   49751 command_runner.go:130] >       "uid": null,
	I0806 07:58:21.311366   49751 command_runner.go:130] >       "username": "",
	I0806 07:58:21.311375   49751 command_runner.go:130] >       "spec": null,
	I0806 07:58:21.311383   49751 command_runner.go:130] >       "pinned": false
	I0806 07:58:21.311391   49751 command_runner.go:130] >     },
	I0806 07:58:21.311396   49751 command_runner.go:130] >     {
	I0806 07:58:21.311408   49751 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0806 07:58:21.311417   49751 command_runner.go:130] >       "repoTags": [
	I0806 07:58:21.311428   49751 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0806 07:58:21.311436   49751 command_runner.go:130] >       ],
	I0806 07:58:21.311445   49751 command_runner.go:130] >       "repoDigests": [
	I0806 07:58:21.311459   49751 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0806 07:58:21.311475   49751 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0806 07:58:21.311485   49751 command_runner.go:130] >       ],
	I0806 07:58:21.311491   49751 command_runner.go:130] >       "size": "61245718",
	I0806 07:58:21.311500   49751 command_runner.go:130] >       "uid": null,
	I0806 07:58:21.311506   49751 command_runner.go:130] >       "username": "nonroot",
	I0806 07:58:21.311516   49751 command_runner.go:130] >       "spec": null,
	I0806 07:58:21.311523   49751 command_runner.go:130] >       "pinned": false
	I0806 07:58:21.311532   49751 command_runner.go:130] >     },
	I0806 07:58:21.311539   49751 command_runner.go:130] >     {
	I0806 07:58:21.311552   49751 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0806 07:58:21.311562   49751 command_runner.go:130] >       "repoTags": [
	I0806 07:58:21.311572   49751 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0806 07:58:21.311580   49751 command_runner.go:130] >       ],
	I0806 07:58:21.311586   49751 command_runner.go:130] >       "repoDigests": [
	I0806 07:58:21.311600   49751 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0806 07:58:21.311615   49751 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0806 07:58:21.311624   49751 command_runner.go:130] >       ],
	I0806 07:58:21.311633   49751 command_runner.go:130] >       "size": "150779692",
	I0806 07:58:21.311641   49751 command_runner.go:130] >       "uid": {
	I0806 07:58:21.311647   49751 command_runner.go:130] >         "value": "0"
	I0806 07:58:21.311655   49751 command_runner.go:130] >       },
	I0806 07:58:21.311662   49751 command_runner.go:130] >       "username": "",
	I0806 07:58:21.311669   49751 command_runner.go:130] >       "spec": null,
	I0806 07:58:21.311678   49751 command_runner.go:130] >       "pinned": false
	I0806 07:58:21.311687   49751 command_runner.go:130] >     },
	I0806 07:58:21.311693   49751 command_runner.go:130] >     {
	I0806 07:58:21.311706   49751 command_runner.go:130] >       "id": "1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d",
	I0806 07:58:21.311715   49751 command_runner.go:130] >       "repoTags": [
	I0806 07:58:21.311724   49751 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.3"
	I0806 07:58:21.311732   49751 command_runner.go:130] >       ],
	I0806 07:58:21.311739   49751 command_runner.go:130] >       "repoDigests": [
	I0806 07:58:21.311752   49751 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c",
	I0806 07:58:21.311766   49751 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"
	I0806 07:58:21.311775   49751 command_runner.go:130] >       ],
	I0806 07:58:21.311786   49751 command_runner.go:130] >       "size": "117609954",
	I0806 07:58:21.311795   49751 command_runner.go:130] >       "uid": {
	I0806 07:58:21.311804   49751 command_runner.go:130] >         "value": "0"
	I0806 07:58:21.311813   49751 command_runner.go:130] >       },
	I0806 07:58:21.311821   49751 command_runner.go:130] >       "username": "",
	I0806 07:58:21.311827   49751 command_runner.go:130] >       "spec": null,
	I0806 07:58:21.311836   49751 command_runner.go:130] >       "pinned": false
	I0806 07:58:21.311841   49751 command_runner.go:130] >     },
	I0806 07:58:21.311855   49751 command_runner.go:130] >     {
	I0806 07:58:21.311866   49751 command_runner.go:130] >       "id": "76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e",
	I0806 07:58:21.311874   49751 command_runner.go:130] >       "repoTags": [
	I0806 07:58:21.311883   49751 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.3"
	I0806 07:58:21.311890   49751 command_runner.go:130] >       ],
	I0806 07:58:21.311896   49751 command_runner.go:130] >       "repoDigests": [
	I0806 07:58:21.311919   49751 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7",
	I0806 07:58:21.311933   49751 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"
	I0806 07:58:21.311938   49751 command_runner.go:130] >       ],
	I0806 07:58:21.311948   49751 command_runner.go:130] >       "size": "112198984",
	I0806 07:58:21.311953   49751 command_runner.go:130] >       "uid": {
	I0806 07:58:21.311961   49751 command_runner.go:130] >         "value": "0"
	I0806 07:58:21.311967   49751 command_runner.go:130] >       },
	I0806 07:58:21.311976   49751 command_runner.go:130] >       "username": "",
	I0806 07:58:21.311982   49751 command_runner.go:130] >       "spec": null,
	I0806 07:58:21.311990   49751 command_runner.go:130] >       "pinned": false
	I0806 07:58:21.311996   49751 command_runner.go:130] >     },
	I0806 07:58:21.312005   49751 command_runner.go:130] >     {
	I0806 07:58:21.312016   49751 command_runner.go:130] >       "id": "55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1",
	I0806 07:58:21.312022   49751 command_runner.go:130] >       "repoTags": [
	I0806 07:58:21.312027   49751 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.3"
	I0806 07:58:21.312031   49751 command_runner.go:130] >       ],
	I0806 07:58:21.312035   49751 command_runner.go:130] >       "repoDigests": [
	I0806 07:58:21.312042   49751 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80",
	I0806 07:58:21.312051   49751 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"
	I0806 07:58:21.312055   49751 command_runner.go:130] >       ],
	I0806 07:58:21.312059   49751 command_runner.go:130] >       "size": "85953945",
	I0806 07:58:21.312063   49751 command_runner.go:130] >       "uid": null,
	I0806 07:58:21.312066   49751 command_runner.go:130] >       "username": "",
	I0806 07:58:21.312070   49751 command_runner.go:130] >       "spec": null,
	I0806 07:58:21.312074   49751 command_runner.go:130] >       "pinned": false
	I0806 07:58:21.312079   49751 command_runner.go:130] >     },
	I0806 07:58:21.312082   49751 command_runner.go:130] >     {
	I0806 07:58:21.312088   49751 command_runner.go:130] >       "id": "3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2",
	I0806 07:58:21.312092   49751 command_runner.go:130] >       "repoTags": [
	I0806 07:58:21.312099   49751 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.3"
	I0806 07:58:21.312104   49751 command_runner.go:130] >       ],
	I0806 07:58:21.312109   49751 command_runner.go:130] >       "repoDigests": [
	I0806 07:58:21.312116   49751 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266",
	I0806 07:58:21.312128   49751 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"
	I0806 07:58:21.312132   49751 command_runner.go:130] >       ],
	I0806 07:58:21.312136   49751 command_runner.go:130] >       "size": "63051080",
	I0806 07:58:21.312140   49751 command_runner.go:130] >       "uid": {
	I0806 07:58:21.312147   49751 command_runner.go:130] >         "value": "0"
	I0806 07:58:21.312154   49751 command_runner.go:130] >       },
	I0806 07:58:21.312162   49751 command_runner.go:130] >       "username": "",
	I0806 07:58:21.312167   49751 command_runner.go:130] >       "spec": null,
	I0806 07:58:21.312177   49751 command_runner.go:130] >       "pinned": false
	I0806 07:58:21.312182   49751 command_runner.go:130] >     },
	I0806 07:58:21.312190   49751 command_runner.go:130] >     {
	I0806 07:58:21.312198   49751 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0806 07:58:21.312207   49751 command_runner.go:130] >       "repoTags": [
	I0806 07:58:21.312215   49751 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0806 07:58:21.312224   49751 command_runner.go:130] >       ],
	I0806 07:58:21.312230   49751 command_runner.go:130] >       "repoDigests": [
	I0806 07:58:21.312239   49751 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0806 07:58:21.312250   49751 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0806 07:58:21.312255   49751 command_runner.go:130] >       ],
	I0806 07:58:21.312261   49751 command_runner.go:130] >       "size": "750414",
	I0806 07:58:21.312270   49751 command_runner.go:130] >       "uid": {
	I0806 07:58:21.312276   49751 command_runner.go:130] >         "value": "65535"
	I0806 07:58:21.312283   49751 command_runner.go:130] >       },
	I0806 07:58:21.312289   49751 command_runner.go:130] >       "username": "",
	I0806 07:58:21.312299   49751 command_runner.go:130] >       "spec": null,
	I0806 07:58:21.312306   49751 command_runner.go:130] >       "pinned": true
	I0806 07:58:21.312314   49751 command_runner.go:130] >     }
	I0806 07:58:21.312320   49751 command_runner.go:130] >   ]
	I0806 07:58:21.312329   49751 command_runner.go:130] > }
	I0806 07:58:21.312485   49751 crio.go:514] all images are preloaded for cri-o runtime.
	I0806 07:58:21.312499   49751 cache_images.go:84] Images are preloaded, skipping loading
	I0806 07:58:21.312507   49751 kubeadm.go:934] updating node { 192.168.39.186 8443 v1.30.3 crio true true} ...
	I0806 07:58:21.312607   49751 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-551471 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.186
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:multinode-551471 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0806 07:58:21.312671   49751 ssh_runner.go:195] Run: crio config
	I0806 07:58:21.354414   49751 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0806 07:58:21.354449   49751 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0806 07:58:21.354460   49751 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0806 07:58:21.354465   49751 command_runner.go:130] > #
	I0806 07:58:21.354476   49751 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0806 07:58:21.354485   49751 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0806 07:58:21.354494   49751 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0806 07:58:21.354512   49751 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0806 07:58:21.354517   49751 command_runner.go:130] > # reload'.
	I0806 07:58:21.354528   49751 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0806 07:58:21.354540   49751 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0806 07:58:21.354551   49751 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0806 07:58:21.354559   49751 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0806 07:58:21.354571   49751 command_runner.go:130] > [crio]
	I0806 07:58:21.354582   49751 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0806 07:58:21.354600   49751 command_runner.go:130] > # containers images, in this directory.
	I0806 07:58:21.354611   49751 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0806 07:58:21.354629   49751 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0806 07:58:21.354641   49751 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0806 07:58:21.354654   49751 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0806 07:58:21.354661   49751 command_runner.go:130] > # imagestore = ""
	I0806 07:58:21.354671   49751 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0806 07:58:21.354683   49751 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0806 07:58:21.354693   49751 command_runner.go:130] > storage_driver = "overlay"
	I0806 07:58:21.354706   49751 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0806 07:58:21.354719   49751 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0806 07:58:21.354729   49751 command_runner.go:130] > storage_option = [
	I0806 07:58:21.354738   49751 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0806 07:58:21.354747   49751 command_runner.go:130] > ]
	I0806 07:58:21.354757   49751 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0806 07:58:21.354769   49751 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0806 07:58:21.354776   49751 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0806 07:58:21.354787   49751 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0806 07:58:21.354799   49751 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0806 07:58:21.354809   49751 command_runner.go:130] > # always happen on a node reboot
	I0806 07:58:21.354818   49751 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0806 07:58:21.354834   49751 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0806 07:58:21.354853   49751 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0806 07:58:21.354864   49751 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0806 07:58:21.354873   49751 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0806 07:58:21.354888   49751 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0806 07:58:21.354904   49751 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0806 07:58:21.354916   49751 command_runner.go:130] > # internal_wipe = true
	I0806 07:58:21.354932   49751 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0806 07:58:21.354944   49751 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0806 07:58:21.354954   49751 command_runner.go:130] > # internal_repair = false
	I0806 07:58:21.354963   49751 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0806 07:58:21.354976   49751 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0806 07:58:21.354987   49751 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0806 07:58:21.354996   49751 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0806 07:58:21.355008   49751 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0806 07:58:21.355017   49751 command_runner.go:130] > [crio.api]
	I0806 07:58:21.355025   49751 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0806 07:58:21.355035   49751 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0806 07:58:21.355043   49751 command_runner.go:130] > # IP address on which the stream server will listen.
	I0806 07:58:21.355053   49751 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0806 07:58:21.355063   49751 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0806 07:58:21.355075   49751 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0806 07:58:21.355083   49751 command_runner.go:130] > # stream_port = "0"
	I0806 07:58:21.355093   49751 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0806 07:58:21.355103   49751 command_runner.go:130] > # stream_enable_tls = false
	I0806 07:58:21.355113   49751 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0806 07:58:21.355122   49751 command_runner.go:130] > # stream_idle_timeout = ""
	I0806 07:58:21.355132   49751 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0806 07:58:21.355145   49751 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0806 07:58:21.355150   49751 command_runner.go:130] > # minutes.
	I0806 07:58:21.355158   49751 command_runner.go:130] > # stream_tls_cert = ""
	I0806 07:58:21.355170   49751 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0806 07:58:21.355181   49751 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0806 07:58:21.355191   49751 command_runner.go:130] > # stream_tls_key = ""
	I0806 07:58:21.355201   49751 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0806 07:58:21.355213   49751 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0806 07:58:21.355231   49751 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0806 07:58:21.355241   49751 command_runner.go:130] > # stream_tls_ca = ""
	I0806 07:58:21.355252   49751 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0806 07:58:21.355262   49751 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0806 07:58:21.355275   49751 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0806 07:58:21.355285   49751 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0806 07:58:21.355296   49751 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0806 07:58:21.355308   49751 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0806 07:58:21.355316   49751 command_runner.go:130] > [crio.runtime]
	I0806 07:58:21.355325   49751 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0806 07:58:21.355336   49751 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0806 07:58:21.355342   49751 command_runner.go:130] > # "nofile=1024:2048"
	I0806 07:58:21.355354   49751 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0806 07:58:21.355364   49751 command_runner.go:130] > # default_ulimits = [
	I0806 07:58:21.355369   49751 command_runner.go:130] > # ]
	I0806 07:58:21.355382   49751 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0806 07:58:21.355390   49751 command_runner.go:130] > # no_pivot = false
	I0806 07:58:21.355398   49751 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0806 07:58:21.355410   49751 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0806 07:58:21.355420   49751 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0806 07:58:21.355430   49751 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0806 07:58:21.355439   49751 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0806 07:58:21.355448   49751 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0806 07:58:21.355458   49751 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0806 07:58:21.355464   49751 command_runner.go:130] > # Cgroup setting for conmon
	I0806 07:58:21.355476   49751 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0806 07:58:21.355486   49751 command_runner.go:130] > conmon_cgroup = "pod"
	I0806 07:58:21.355495   49751 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0806 07:58:21.355505   49751 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0806 07:58:21.355516   49751 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0806 07:58:21.355525   49751 command_runner.go:130] > conmon_env = [
	I0806 07:58:21.355536   49751 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0806 07:58:21.355545   49751 command_runner.go:130] > ]
	I0806 07:58:21.355554   49751 command_runner.go:130] > # Additional environment variables to set for all the
	I0806 07:58:21.355565   49751 command_runner.go:130] > # containers. These are overridden if set in the
	I0806 07:58:21.355575   49751 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0806 07:58:21.355584   49751 command_runner.go:130] > # default_env = [
	I0806 07:58:21.355590   49751 command_runner.go:130] > # ]
	I0806 07:58:21.355603   49751 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0806 07:58:21.355623   49751 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0806 07:58:21.355633   49751 command_runner.go:130] > # selinux = false
	I0806 07:58:21.355644   49751 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0806 07:58:21.355656   49751 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0806 07:58:21.355668   49751 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0806 07:58:21.355677   49751 command_runner.go:130] > # seccomp_profile = ""
	I0806 07:58:21.355686   49751 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0806 07:58:21.355697   49751 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0806 07:58:21.355707   49751 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0806 07:58:21.355715   49751 command_runner.go:130] > # which might increase security.
	I0806 07:58:21.355725   49751 command_runner.go:130] > # This option is currently deprecated,
	I0806 07:58:21.355734   49751 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0806 07:58:21.355747   49751 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0806 07:58:21.355760   49751 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0806 07:58:21.355772   49751 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0806 07:58:21.355785   49751 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0806 07:58:21.355799   49751 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0806 07:58:21.355810   49751 command_runner.go:130] > # This option supports live configuration reload.
	I0806 07:58:21.355822   49751 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0806 07:58:21.355834   49751 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0806 07:58:21.355851   49751 command_runner.go:130] > # the cgroup blockio controller.
	I0806 07:58:21.355861   49751 command_runner.go:130] > # blockio_config_file = ""
	I0806 07:58:21.355872   49751 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0806 07:58:21.355881   49751 command_runner.go:130] > # blockio parameters.
	I0806 07:58:21.355888   49751 command_runner.go:130] > # blockio_reload = false
	I0806 07:58:21.355902   49751 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0806 07:58:21.355911   49751 command_runner.go:130] > # irqbalance daemon.
	I0806 07:58:21.355919   49751 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0806 07:58:21.355930   49751 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0806 07:58:21.355941   49751 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0806 07:58:21.355954   49751 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0806 07:58:21.355966   49751 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0806 07:58:21.355978   49751 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0806 07:58:21.355988   49751 command_runner.go:130] > # This option supports live configuration reload.
	I0806 07:58:21.355999   49751 command_runner.go:130] > # rdt_config_file = ""
	I0806 07:58:21.356008   49751 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0806 07:58:21.356018   49751 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0806 07:58:21.356038   49751 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0806 07:58:21.356047   49751 command_runner.go:130] > # separate_pull_cgroup = ""
	I0806 07:58:21.356057   49751 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0806 07:58:21.356069   49751 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0806 07:58:21.356078   49751 command_runner.go:130] > # will be added.
	I0806 07:58:21.356088   49751 command_runner.go:130] > # default_capabilities = [
	I0806 07:58:21.356095   49751 command_runner.go:130] > # 	"CHOWN",
	I0806 07:58:21.356104   49751 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0806 07:58:21.356112   49751 command_runner.go:130] > # 	"FSETID",
	I0806 07:58:21.356124   49751 command_runner.go:130] > # 	"FOWNER",
	I0806 07:58:21.356130   49751 command_runner.go:130] > # 	"SETGID",
	I0806 07:58:21.356138   49751 command_runner.go:130] > # 	"SETUID",
	I0806 07:58:21.356144   49751 command_runner.go:130] > # 	"SETPCAP",
	I0806 07:58:21.356160   49751 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0806 07:58:21.356170   49751 command_runner.go:130] > # 	"KILL",
	I0806 07:58:21.356175   49751 command_runner.go:130] > # ]
	I0806 07:58:21.356190   49751 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0806 07:58:21.356204   49751 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0806 07:58:21.356213   49751 command_runner.go:130] > # add_inheritable_capabilities = false
	I0806 07:58:21.356224   49751 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0806 07:58:21.356237   49751 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0806 07:58:21.356246   49751 command_runner.go:130] > default_sysctls = [
	I0806 07:58:21.356255   49751 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0806 07:58:21.356263   49751 command_runner.go:130] > ]
	I0806 07:58:21.356271   49751 command_runner.go:130] > # List of devices on the host that a
	I0806 07:58:21.356284   49751 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0806 07:58:21.356295   49751 command_runner.go:130] > # allowed_devices = [
	I0806 07:58:21.356301   49751 command_runner.go:130] > # 	"/dev/fuse",
	I0806 07:58:21.356309   49751 command_runner.go:130] > # ]
	I0806 07:58:21.356318   49751 command_runner.go:130] > # List of additional devices. specified as
	I0806 07:58:21.356331   49751 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0806 07:58:21.356342   49751 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0806 07:58:21.356354   49751 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0806 07:58:21.356379   49751 command_runner.go:130] > # additional_devices = [
	I0806 07:58:21.356386   49751 command_runner.go:130] > # ]
	I0806 07:58:21.356397   49751 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0806 07:58:21.356404   49751 command_runner.go:130] > # cdi_spec_dirs = [
	I0806 07:58:21.356413   49751 command_runner.go:130] > # 	"/etc/cdi",
	I0806 07:58:21.356419   49751 command_runner.go:130] > # 	"/var/run/cdi",
	I0806 07:58:21.356428   49751 command_runner.go:130] > # ]
	I0806 07:58:21.356438   49751 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0806 07:58:21.356450   49751 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0806 07:58:21.356457   49751 command_runner.go:130] > # Defaults to false.
	I0806 07:58:21.356471   49751 command_runner.go:130] > # device_ownership_from_security_context = false
	I0806 07:58:21.356484   49751 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0806 07:58:21.356497   49751 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0806 07:58:21.356508   49751 command_runner.go:130] > # hooks_dir = [
	I0806 07:58:21.356518   49751 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0806 07:58:21.356526   49751 command_runner.go:130] > # ]
	I0806 07:58:21.356535   49751 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0806 07:58:21.356548   49751 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0806 07:58:21.356559   49751 command_runner.go:130] > # its default mounts from the following two files:
	I0806 07:58:21.356568   49751 command_runner.go:130] > #
	I0806 07:58:21.356579   49751 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0806 07:58:21.356591   49751 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0806 07:58:21.356602   49751 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0806 07:58:21.356611   49751 command_runner.go:130] > #
	I0806 07:58:21.356620   49751 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0806 07:58:21.356634   49751 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0806 07:58:21.356647   49751 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0806 07:58:21.356657   49751 command_runner.go:130] > #      only add mounts it finds in this file.
	I0806 07:58:21.356662   49751 command_runner.go:130] > #
	I0806 07:58:21.356674   49751 command_runner.go:130] > # default_mounts_file = ""
	I0806 07:58:21.356685   49751 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0806 07:58:21.356701   49751 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0806 07:58:21.356710   49751 command_runner.go:130] > pids_limit = 1024
	I0806 07:58:21.356720   49751 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0806 07:58:21.356732   49751 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0806 07:58:21.356744   49751 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0806 07:58:21.356758   49751 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0806 07:58:21.356768   49751 command_runner.go:130] > # log_size_max = -1
	I0806 07:58:21.356779   49751 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0806 07:58:21.356791   49751 command_runner.go:130] > # log_to_journald = false
	I0806 07:58:21.356803   49751 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0806 07:58:21.356814   49751 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0806 07:58:21.356824   49751 command_runner.go:130] > # Path to directory for container attach sockets.
	I0806 07:58:21.356835   49751 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0806 07:58:21.356871   49751 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0806 07:58:21.356883   49751 command_runner.go:130] > # bind_mount_prefix = ""
	I0806 07:58:21.356892   49751 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0806 07:58:21.356900   49751 command_runner.go:130] > # read_only = false
	I0806 07:58:21.356911   49751 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0806 07:58:21.356920   49751 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0806 07:58:21.356932   49751 command_runner.go:130] > # live configuration reload.
	I0806 07:58:21.356938   49751 command_runner.go:130] > # log_level = "info"
	I0806 07:58:21.356947   49751 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0806 07:58:21.356957   49751 command_runner.go:130] > # This option supports live configuration reload.
	I0806 07:58:21.356966   49751 command_runner.go:130] > # log_filter = ""
	I0806 07:58:21.356976   49751 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0806 07:58:21.356989   49751 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0806 07:58:21.357000   49751 command_runner.go:130] > # separated by comma.
	I0806 07:58:21.357012   49751 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0806 07:58:21.357023   49751 command_runner.go:130] > # uid_mappings = ""
	I0806 07:58:21.357033   49751 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0806 07:58:21.357046   49751 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0806 07:58:21.357055   49751 command_runner.go:130] > # separated by comma.
	I0806 07:58:21.357070   49751 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0806 07:58:21.357079   49751 command_runner.go:130] > # gid_mappings = ""
	I0806 07:58:21.357089   49751 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0806 07:58:21.357101   49751 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0806 07:58:21.357111   49751 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0806 07:58:21.357126   49751 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0806 07:58:21.357135   49751 command_runner.go:130] > # minimum_mappable_uid = -1
	I0806 07:58:21.357145   49751 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0806 07:58:21.357157   49751 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0806 07:58:21.357170   49751 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0806 07:58:21.357184   49751 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0806 07:58:21.357193   49751 command_runner.go:130] > # minimum_mappable_gid = -1
	I0806 07:58:21.357202   49751 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0806 07:58:21.357215   49751 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0806 07:58:21.357227   49751 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0806 07:58:21.357238   49751 command_runner.go:130] > # ctr_stop_timeout = 30
	I0806 07:58:21.357251   49751 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0806 07:58:21.357259   49751 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0806 07:58:21.357269   49751 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0806 07:58:21.357279   49751 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0806 07:58:21.357287   49751 command_runner.go:130] > drop_infra_ctr = false
	I0806 07:58:21.357296   49751 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0806 07:58:21.357307   49751 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0806 07:58:21.357320   49751 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0806 07:58:21.357328   49751 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0806 07:58:21.357342   49751 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0806 07:58:21.357353   49751 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0806 07:58:21.357365   49751 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0806 07:58:21.357378   49751 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0806 07:58:21.357388   49751 command_runner.go:130] > # shared_cpuset = ""
	I0806 07:58:21.357399   49751 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0806 07:58:21.357410   49751 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0806 07:58:21.357419   49751 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0806 07:58:21.357436   49751 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0806 07:58:21.357446   49751 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0806 07:58:21.357454   49751 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0806 07:58:21.357466   49751 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0806 07:58:21.357475   49751 command_runner.go:130] > # enable_criu_support = false
	I0806 07:58:21.357483   49751 command_runner.go:130] > # Enable/disable the generation of the container,
	I0806 07:58:21.357495   49751 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0806 07:58:21.357504   49751 command_runner.go:130] > # enable_pod_events = false
	I0806 07:58:21.357514   49751 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0806 07:58:21.357538   49751 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0806 07:58:21.357546   49751 command_runner.go:130] > # default_runtime = "runc"
	I0806 07:58:21.357557   49751 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0806 07:58:21.357570   49751 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0806 07:58:21.357589   49751 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0806 07:58:21.357600   49751 command_runner.go:130] > # creation as a file is not desired either.
	I0806 07:58:21.357614   49751 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0806 07:58:21.357626   49751 command_runner.go:130] > # the hostname is being managed dynamically.
	I0806 07:58:21.357636   49751 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0806 07:58:21.357643   49751 command_runner.go:130] > # ]
	I0806 07:58:21.357652   49751 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0806 07:58:21.357665   49751 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0806 07:58:21.357677   49751 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0806 07:58:21.357688   49751 command_runner.go:130] > # Each entry in the table should follow the format:
	I0806 07:58:21.357696   49751 command_runner.go:130] > #
	I0806 07:58:21.357704   49751 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0806 07:58:21.357715   49751 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0806 07:58:21.357764   49751 command_runner.go:130] > # runtime_type = "oci"
	I0806 07:58:21.357775   49751 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0806 07:58:21.357783   49751 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0806 07:58:21.357794   49751 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0806 07:58:21.357803   49751 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0806 07:58:21.357813   49751 command_runner.go:130] > # monitor_env = []
	I0806 07:58:21.357821   49751 command_runner.go:130] > # privileged_without_host_devices = false
	I0806 07:58:21.357832   49751 command_runner.go:130] > # allowed_annotations = []
	I0806 07:58:21.357849   49751 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0806 07:58:21.357858   49751 command_runner.go:130] > # Where:
	I0806 07:58:21.357869   49751 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0806 07:58:21.357881   49751 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0806 07:58:21.357894   49751 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0806 07:58:21.357906   49751 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0806 07:58:21.357912   49751 command_runner.go:130] > #   in $PATH.
	I0806 07:58:21.357925   49751 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0806 07:58:21.357935   49751 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0806 07:58:21.357945   49751 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0806 07:58:21.357954   49751 command_runner.go:130] > #   state.
	I0806 07:58:21.357964   49751 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0806 07:58:21.357976   49751 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0806 07:58:21.357989   49751 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0806 07:58:21.358002   49751 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0806 07:58:21.358016   49751 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0806 07:58:21.358030   49751 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0806 07:58:21.358040   49751 command_runner.go:130] > #   The currently recognized values are:
	I0806 07:58:21.358054   49751 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0806 07:58:21.358068   49751 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0806 07:58:21.358081   49751 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0806 07:58:21.358094   49751 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0806 07:58:21.358110   49751 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0806 07:58:21.358123   49751 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0806 07:58:21.358136   49751 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0806 07:58:21.358147   49751 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0806 07:58:21.358155   49751 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0806 07:58:21.358166   49751 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0806 07:58:21.358172   49751 command_runner.go:130] > #   deprecated option "conmon".
	I0806 07:58:21.358185   49751 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0806 07:58:21.358196   49751 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0806 07:58:21.358206   49751 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0806 07:58:21.358217   49751 command_runner.go:130] > #   should be moved to the container's cgroup
	I0806 07:58:21.358230   49751 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0806 07:58:21.358241   49751 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0806 07:58:21.358253   49751 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0806 07:58:21.358263   49751 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0806 07:58:21.358271   49751 command_runner.go:130] > #
	I0806 07:58:21.358279   49751 command_runner.go:130] > # Using the seccomp notifier feature:
	I0806 07:58:21.358286   49751 command_runner.go:130] > #
	I0806 07:58:21.358295   49751 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0806 07:58:21.358307   49751 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0806 07:58:21.358316   49751 command_runner.go:130] > #
	I0806 07:58:21.358327   49751 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0806 07:58:21.358340   49751 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0806 07:58:21.358348   49751 command_runner.go:130] > #
	I0806 07:58:21.358358   49751 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0806 07:58:21.358366   49751 command_runner.go:130] > # feature.
	I0806 07:58:21.358371   49751 command_runner.go:130] > #
	I0806 07:58:21.358381   49751 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0806 07:58:21.358393   49751 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0806 07:58:21.358408   49751 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0806 07:58:21.358424   49751 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0806 07:58:21.358438   49751 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0806 07:58:21.358445   49751 command_runner.go:130] > #
	I0806 07:58:21.358455   49751 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0806 07:58:21.358471   49751 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0806 07:58:21.358479   49751 command_runner.go:130] > #
	I0806 07:58:21.358488   49751 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0806 07:58:21.358499   49751 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0806 07:58:21.358508   49751 command_runner.go:130] > #
	I0806 07:58:21.358517   49751 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0806 07:58:21.358529   49751 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0806 07:58:21.358536   49751 command_runner.go:130] > # limitation.
	I0806 07:58:21.358547   49751 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0806 07:58:21.358553   49751 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0806 07:58:21.358562   49751 command_runner.go:130] > runtime_type = "oci"
	I0806 07:58:21.358569   49751 command_runner.go:130] > runtime_root = "/run/runc"
	I0806 07:58:21.358579   49751 command_runner.go:130] > runtime_config_path = ""
	I0806 07:58:21.358587   49751 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0806 07:58:21.358596   49751 command_runner.go:130] > monitor_cgroup = "pod"
	I0806 07:58:21.358603   49751 command_runner.go:130] > monitor_exec_cgroup = ""
	I0806 07:58:21.358612   49751 command_runner.go:130] > monitor_env = [
	I0806 07:58:21.358621   49751 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0806 07:58:21.358630   49751 command_runner.go:130] > ]
	I0806 07:58:21.358638   49751 command_runner.go:130] > privileged_without_host_devices = false
	I0806 07:58:21.358650   49751 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0806 07:58:21.358662   49751 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0806 07:58:21.358674   49751 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0806 07:58:21.358689   49751 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0806 07:58:21.358704   49751 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0806 07:58:21.358716   49751 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0806 07:58:21.358733   49751 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0806 07:58:21.358750   49751 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0806 07:58:21.358762   49751 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0806 07:58:21.358776   49751 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0806 07:58:21.358783   49751 command_runner.go:130] > # Example:
	I0806 07:58:21.358797   49751 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0806 07:58:21.358805   49751 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0806 07:58:21.358812   49751 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0806 07:58:21.358820   49751 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0806 07:58:21.358826   49751 command_runner.go:130] > # cpuset = 0
	I0806 07:58:21.358832   49751 command_runner.go:130] > # cpushares = "0-1"
	I0806 07:58:21.358837   49751 command_runner.go:130] > # Where:
	I0806 07:58:21.358850   49751 command_runner.go:130] > # The workload name is workload-type.
	I0806 07:58:21.358862   49751 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0806 07:58:21.358870   49751 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0806 07:58:21.358879   49751 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0806 07:58:21.358891   49751 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0806 07:58:21.358901   49751 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0806 07:58:21.358909   49751 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0806 07:58:21.358919   49751 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0806 07:58:21.358926   49751 command_runner.go:130] > # Default value is set to true
	I0806 07:58:21.358933   49751 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0806 07:58:21.358941   49751 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0806 07:58:21.358948   49751 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0806 07:58:21.358954   49751 command_runner.go:130] > # Default value is set to 'false'
	I0806 07:58:21.358962   49751 command_runner.go:130] > # disable_hostport_mapping = false
	I0806 07:58:21.358970   49751 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0806 07:58:21.358975   49751 command_runner.go:130] > #
	I0806 07:58:21.358983   49751 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0806 07:58:21.358995   49751 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0806 07:58:21.359004   49751 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0806 07:58:21.359017   49751 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0806 07:58:21.359028   49751 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0806 07:58:21.359036   49751 command_runner.go:130] > [crio.image]
	I0806 07:58:21.359045   49751 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0806 07:58:21.359055   49751 command_runner.go:130] > # default_transport = "docker://"
	I0806 07:58:21.359065   49751 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0806 07:58:21.359077   49751 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0806 07:58:21.359085   49751 command_runner.go:130] > # global_auth_file = ""
	I0806 07:58:21.359093   49751 command_runner.go:130] > # The image used to instantiate infra containers.
	I0806 07:58:21.359106   49751 command_runner.go:130] > # This option supports live configuration reload.
	I0806 07:58:21.359113   49751 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0806 07:58:21.359126   49751 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0806 07:58:21.359137   49751 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0806 07:58:21.359148   49751 command_runner.go:130] > # This option supports live configuration reload.
	I0806 07:58:21.359155   49751 command_runner.go:130] > # pause_image_auth_file = ""
	I0806 07:58:21.359167   49751 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0806 07:58:21.359180   49751 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0806 07:58:21.359197   49751 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0806 07:58:21.359209   49751 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0806 07:58:21.359220   49751 command_runner.go:130] > # pause_command = "/pause"
	I0806 07:58:21.359232   49751 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0806 07:58:21.359245   49751 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0806 07:58:21.359259   49751 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0806 07:58:21.359270   49751 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0806 07:58:21.359281   49751 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0806 07:58:21.359292   49751 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0806 07:58:21.359302   49751 command_runner.go:130] > # pinned_images = [
	I0806 07:58:21.359311   49751 command_runner.go:130] > # ]
	I0806 07:58:21.359324   49751 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0806 07:58:21.359337   49751 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0806 07:58:21.359350   49751 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0806 07:58:21.359363   49751 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0806 07:58:21.359375   49751 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0806 07:58:21.359385   49751 command_runner.go:130] > # signature_policy = ""
	I0806 07:58:21.359397   49751 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0806 07:58:21.359411   49751 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0806 07:58:21.359424   49751 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0806 07:58:21.359438   49751 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0806 07:58:21.359451   49751 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0806 07:58:21.359462   49751 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0806 07:58:21.359474   49751 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0806 07:58:21.359494   49751 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0806 07:58:21.359504   49751 command_runner.go:130] > # changing them here.
	I0806 07:58:21.359514   49751 command_runner.go:130] > # insecure_registries = [
	I0806 07:58:21.359522   49751 command_runner.go:130] > # ]
	I0806 07:58:21.359535   49751 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0806 07:58:21.359546   49751 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0806 07:58:21.359556   49751 command_runner.go:130] > # image_volumes = "mkdir"
	I0806 07:58:21.359568   49751 command_runner.go:130] > # Temporary directory to use for storing big files
	I0806 07:58:21.359578   49751 command_runner.go:130] > # big_files_temporary_dir = ""
	I0806 07:58:21.359591   49751 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0806 07:58:21.359600   49751 command_runner.go:130] > # CNI plugins.
	I0806 07:58:21.359609   49751 command_runner.go:130] > [crio.network]
	I0806 07:58:21.359621   49751 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0806 07:58:21.359632   49751 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0806 07:58:21.359641   49751 command_runner.go:130] > # cni_default_network = ""
	I0806 07:58:21.359653   49751 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0806 07:58:21.359662   49751 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0806 07:58:21.359676   49751 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0806 07:58:21.359684   49751 command_runner.go:130] > # plugin_dirs = [
	I0806 07:58:21.359692   49751 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0806 07:58:21.359699   49751 command_runner.go:130] > # ]
	I0806 07:58:21.359708   49751 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0806 07:58:21.359716   49751 command_runner.go:130] > [crio.metrics]
	I0806 07:58:21.359727   49751 command_runner.go:130] > # Globally enable or disable metrics support.
	I0806 07:58:21.359737   49751 command_runner.go:130] > enable_metrics = true
	I0806 07:58:21.359747   49751 command_runner.go:130] > # Specify enabled metrics collectors.
	I0806 07:58:21.359758   49751 command_runner.go:130] > # Per default all metrics are enabled.
	I0806 07:58:21.359771   49751 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0806 07:58:21.359784   49751 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0806 07:58:21.359797   49751 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0806 07:58:21.359807   49751 command_runner.go:130] > # metrics_collectors = [
	I0806 07:58:21.359817   49751 command_runner.go:130] > # 	"operations",
	I0806 07:58:21.359827   49751 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0806 07:58:21.359837   49751 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0806 07:58:21.359854   49751 command_runner.go:130] > # 	"operations_errors",
	I0806 07:58:21.359863   49751 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0806 07:58:21.359869   49751 command_runner.go:130] > # 	"image_pulls_by_name",
	I0806 07:58:21.359879   49751 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0806 07:58:21.359887   49751 command_runner.go:130] > # 	"image_pulls_failures",
	I0806 07:58:21.359895   49751 command_runner.go:130] > # 	"image_pulls_successes",
	I0806 07:58:21.359904   49751 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0806 07:58:21.359910   49751 command_runner.go:130] > # 	"image_layer_reuse",
	I0806 07:58:21.359919   49751 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0806 07:58:21.359928   49751 command_runner.go:130] > # 	"containers_oom_total",
	I0806 07:58:21.359937   49751 command_runner.go:130] > # 	"containers_oom",
	I0806 07:58:21.359945   49751 command_runner.go:130] > # 	"processes_defunct",
	I0806 07:58:21.359950   49751 command_runner.go:130] > # 	"operations_total",
	I0806 07:58:21.359959   49751 command_runner.go:130] > # 	"operations_latency_seconds",
	I0806 07:58:21.359968   49751 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0806 07:58:21.359976   49751 command_runner.go:130] > # 	"operations_errors_total",
	I0806 07:58:21.359985   49751 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0806 07:58:21.359994   49751 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0806 07:58:21.360004   49751 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0806 07:58:21.360014   49751 command_runner.go:130] > # 	"image_pulls_success_total",
	I0806 07:58:21.360022   49751 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0806 07:58:21.360031   49751 command_runner.go:130] > # 	"containers_oom_count_total",
	I0806 07:58:21.360039   49751 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0806 07:58:21.360048   49751 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0806 07:58:21.360063   49751 command_runner.go:130] > # ]
	I0806 07:58:21.360072   49751 command_runner.go:130] > # The port on which the metrics server will listen.
	I0806 07:58:21.360081   49751 command_runner.go:130] > # metrics_port = 9090
	I0806 07:58:21.360090   49751 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0806 07:58:21.360098   49751 command_runner.go:130] > # metrics_socket = ""
	I0806 07:58:21.360108   49751 command_runner.go:130] > # The certificate for the secure metrics server.
	I0806 07:58:21.360119   49751 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0806 07:58:21.360131   49751 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0806 07:58:21.360141   49751 command_runner.go:130] > # certificate on any modification event.
	I0806 07:58:21.360150   49751 command_runner.go:130] > # metrics_cert = ""
	I0806 07:58:21.360160   49751 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0806 07:58:21.360172   49751 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0806 07:58:21.360181   49751 command_runner.go:130] > # metrics_key = ""
	I0806 07:58:21.360192   49751 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0806 07:58:21.360202   49751 command_runner.go:130] > [crio.tracing]
	I0806 07:58:21.360211   49751 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0806 07:58:21.360219   49751 command_runner.go:130] > # enable_tracing = false
	I0806 07:58:21.360229   49751 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0806 07:58:21.360239   49751 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0806 07:58:21.360251   49751 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0806 07:58:21.360260   49751 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0806 07:58:21.360269   49751 command_runner.go:130] > # CRI-O NRI configuration.
	I0806 07:58:21.360277   49751 command_runner.go:130] > [crio.nri]
	I0806 07:58:21.360288   49751 command_runner.go:130] > # Globally enable or disable NRI.
	I0806 07:58:21.360294   49751 command_runner.go:130] > # enable_nri = false
	I0806 07:58:21.360304   49751 command_runner.go:130] > # NRI socket to listen on.
	I0806 07:58:21.360314   49751 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0806 07:58:21.360324   49751 command_runner.go:130] > # NRI plugin directory to use.
	I0806 07:58:21.360334   49751 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0806 07:58:21.360341   49751 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0806 07:58:21.360349   49751 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0806 07:58:21.360358   49751 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0806 07:58:21.360378   49751 command_runner.go:130] > # nri_disable_connections = false
	I0806 07:58:21.360389   49751 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0806 07:58:21.360398   49751 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0806 07:58:21.360405   49751 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0806 07:58:21.360413   49751 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0806 07:58:21.360424   49751 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0806 07:58:21.360432   49751 command_runner.go:130] > [crio.stats]
	I0806 07:58:21.360441   49751 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0806 07:58:21.360452   49751 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0806 07:58:21.360462   49751 command_runner.go:130] > # stats_collection_period = 0
	I0806 07:58:21.360998   49751 command_runner.go:130] ! time="2024-08-06 07:58:21.322762724Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0806 07:58:21.361017   49751 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
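	The block above is CRI-O echoing its effective TOML configuration back through minikube's command runner. As a rough illustration of how a few of those fields could be read programmatically, here is a minimal Go sketch; it assumes the github.com/BurntSushi/toml package and the conventional /etc/crio/crio.conf path, declares only the handful of fields it inspects, and is not how minikube or CRI-O themselves load the file.

	package main

	import (
		"fmt"
		"log"

		"github.com/BurntSushi/toml"
	)

	// A minimal subset of the crio.conf tables dumped above; fields not
	// declared here are simply ignored by the decoder.
	type crioConfig struct {
		Crio struct {
			Runtime struct {
				DefaultRuntime string `toml:"default_runtime"`
				PinnsPath      string `toml:"pinns_path"`
				DropInfraCtr   bool   `toml:"drop_infra_ctr"`
			} `toml:"runtime"`
			Image struct {
				PauseImage string `toml:"pause_image"`
			} `toml:"image"`
			Metrics struct {
				EnableMetrics bool `toml:"enable_metrics"`
				MetricsPort   int  `toml:"metrics_port"`
			} `toml:"metrics"`
		} `toml:"crio"`
	}

	func main() {
		var cfg crioConfig
		// /etc/crio/crio.conf is the conventional location; adjust as needed.
		if _, err := toml.DecodeFile("/etc/crio/crio.conf", &cfg); err != nil {
			log.Fatal(err)
		}
		fmt.Println("default_runtime:", cfg.Crio.Runtime.DefaultRuntime)
		fmt.Println("pause_image:", cfg.Crio.Image.PauseImage)
		fmt.Println("metrics enabled:", cfg.Crio.Metrics.EnableMetrics, "port:", cfg.Crio.Metrics.MetricsPort)
	}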
	I0806 07:58:21.361130   49751 cni.go:84] Creating CNI manager for ""
	I0806 07:58:21.361139   49751 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0806 07:58:21.361148   49751 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0806 07:58:21.361168   49751 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.186 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-551471 NodeName:multinode-551471 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.186"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.186 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0806 07:58:21.361323   49751 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.186
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-551471"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.186
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.186"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
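	The kubeadm configuration printed above is generated by minikube from the cluster parameters (node IP, node name, pod and service CIDRs, Kubernetes version). A minimal, illustrative Go sketch of producing a similar document with text/template follows; the template and parameter names are invented for this example and deliberately cover only a subset of the fields shown above.

	package main

	import (
		"os"
		"text/template"
	)

	// Illustrative parameters only; the real values come from the cluster
	// config shown in the log above (node IP 192.168.39.186, multinode-551471).
	type kubeadmParams struct {
		AdvertiseAddress  string
		BindPort          int
		NodeName          string
		PodSubnet         string
		ServiceSubnet     string
		KubernetesVersion string
	}

	const initConfigTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.AdvertiseAddress}}
	  bindPort: {{.BindPort}}
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "{{.NodeName}}"
	  kubeletExtraArgs:
	    node-ip: {{.AdvertiseAddress}}
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	kubernetesVersion: {{.KubernetesVersion}}
	networking:
	  podSubnet: "{{.PodSubnet}}"
	  serviceSubnet: {{.ServiceSubnet}}
	`

	func main() {
		p := kubeadmParams{
			AdvertiseAddress:  "192.168.39.186",
			BindPort:          8443,
			NodeName:          "multinode-551471",
			PodSubnet:         "10.244.0.0/16",
			ServiceSubnet:     "10.96.0.0/12",
			KubernetesVersion: "v1.30.3",
		}
		tmpl := template.Must(template.New("kubeadm").Parse(initConfigTmpl))
		if err := tmpl.Execute(os.Stdout, p); err != nil {
			panic(err)
		}
	}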
	I0806 07:58:21.361396   49751 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0806 07:58:21.371367   49751 command_runner.go:130] > kubeadm
	I0806 07:58:21.371388   49751 command_runner.go:130] > kubectl
	I0806 07:58:21.371395   49751 command_runner.go:130] > kubelet
	I0806 07:58:21.371555   49751 binaries.go:44] Found k8s binaries, skipping transfer
	I0806 07:58:21.371621   49751 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0806 07:58:21.381576   49751 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0806 07:58:21.399164   49751 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0806 07:58:21.416552   49751 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I0806 07:58:21.433748   49751 ssh_runner.go:195] Run: grep 192.168.39.186	control-plane.minikube.internal$ /etc/hosts
	I0806 07:58:21.437757   49751 command_runner.go:130] > 192.168.39.186	control-plane.minikube.internal
	I0806 07:58:21.437829   49751 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 07:58:21.572421   49751 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0806 07:58:21.587702   49751 certs.go:68] Setting up /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/multinode-551471 for IP: 192.168.39.186
	I0806 07:58:21.587727   49751 certs.go:194] generating shared ca certs ...
	I0806 07:58:21.587752   49751 certs.go:226] acquiring lock for ca certs: {Name:mkf5c5aed91f4108054d9f2c742cad66c86afbed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 07:58:21.587917   49751 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19370-13057/.minikube/ca.key
	I0806 07:58:21.587965   49751 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.key
	I0806 07:58:21.587975   49751 certs.go:256] generating profile certs ...
	I0806 07:58:21.588049   49751 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/multinode-551471/client.key
	I0806 07:58:21.588111   49751 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/multinode-551471/apiserver.key.7045bc00
	I0806 07:58:21.588148   49751 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/multinode-551471/proxy-client.key
	I0806 07:58:21.588160   49751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0806 07:58:21.588171   49751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0806 07:58:21.588186   49751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0806 07:58:21.588195   49751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0806 07:58:21.588204   49751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/multinode-551471/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0806 07:58:21.588227   49751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/multinode-551471/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0806 07:58:21.588243   49751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/multinode-551471/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0806 07:58:21.588255   49751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/multinode-551471/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0806 07:58:21.588307   49751 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/20246.pem (1338 bytes)
	W0806 07:58:21.588336   49751 certs.go:480] ignoring /home/jenkins/minikube-integration/19370-13057/.minikube/certs/20246_empty.pem, impossibly tiny 0 bytes
	I0806 07:58:21.588346   49751 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca-key.pem (1675 bytes)
	I0806 07:58:21.588386   49751 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem (1078 bytes)
	I0806 07:58:21.588407   49751 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/cert.pem (1123 bytes)
	I0806 07:58:21.588431   49751 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/key.pem (1675 bytes)
	I0806 07:58:21.588467   49751 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem (1708 bytes)
	I0806 07:58:21.588493   49751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0806 07:58:21.588507   49751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/20246.pem -> /usr/share/ca-certificates/20246.pem
	I0806 07:58:21.588518   49751 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem -> /usr/share/ca-certificates/202462.pem
	I0806 07:58:21.589129   49751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0806 07:58:21.616200   49751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0806 07:58:21.642041   49751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0806 07:58:21.666247   49751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0806 07:58:21.690867   49751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/multinode-551471/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0806 07:58:21.715060   49751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/multinode-551471/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0806 07:58:21.738980   49751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/multinode-551471/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0806 07:58:21.762708   49751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/multinode-551471/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0806 07:58:21.788411   49751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0806 07:58:21.813739   49751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/certs/20246.pem --> /usr/share/ca-certificates/20246.pem (1338 bytes)
	I0806 07:58:21.838768   49751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem --> /usr/share/ca-certificates/202462.pem (1708 bytes)
	I0806 07:58:21.862900   49751 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0806 07:58:21.879557   49751 ssh_runner.go:195] Run: openssl version
	I0806 07:58:21.885712   49751 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0806 07:58:21.885833   49751 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0806 07:58:21.896253   49751 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0806 07:58:21.900892   49751 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Aug  6 07:07 /usr/share/ca-certificates/minikubeCA.pem
	I0806 07:58:21.900943   49751 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  6 07:07 /usr/share/ca-certificates/minikubeCA.pem
	I0806 07:58:21.900981   49751 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0806 07:58:21.906656   49751 command_runner.go:130] > b5213941
	I0806 07:58:21.906723   49751 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0806 07:58:21.916072   49751 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20246.pem && ln -fs /usr/share/ca-certificates/20246.pem /etc/ssl/certs/20246.pem"
	I0806 07:58:21.926913   49751 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20246.pem
	I0806 07:58:21.931212   49751 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Aug  6 07:17 /usr/share/ca-certificates/20246.pem
	I0806 07:58:21.931534   49751 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  6 07:17 /usr/share/ca-certificates/20246.pem
	I0806 07:58:21.931610   49751 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20246.pem
	I0806 07:58:21.937229   49751 command_runner.go:130] > 51391683
	I0806 07:58:21.937354   49751 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20246.pem /etc/ssl/certs/51391683.0"
	I0806 07:58:21.947273   49751 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202462.pem && ln -fs /usr/share/ca-certificates/202462.pem /etc/ssl/certs/202462.pem"
	I0806 07:58:21.958269   49751 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202462.pem
	I0806 07:58:21.962934   49751 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Aug  6 07:17 /usr/share/ca-certificates/202462.pem
	I0806 07:58:21.962962   49751 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  6 07:17 /usr/share/ca-certificates/202462.pem
	I0806 07:58:21.962994   49751 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202462.pem
	I0806 07:58:21.968654   49751 command_runner.go:130] > 3ec20f2e
	I0806 07:58:21.968716   49751 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202462.pem /etc/ssl/certs/3ec20f2e.0"
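	The commands above compute each CA certificate's OpenSSL subject hash and expose it under /etc/ssl/certs/<hash>.0 so system TLS lookups can find it. A small Go sketch of the same two steps, assuming an illustrative certificate path taken from the log (this shells out to openssl rather than reimplementing the subject hash):

	package main

	import (
		"fmt"
		"log"
		"os"
		"os/exec"
		"strings"
	)

	// linkCACert mirrors the shell steps in the log above: compute the OpenSSL
	// subject hash of a CA certificate and expose it as /etc/ssl/certs/<hash>.0.
	func linkCACert(certPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return fmt.Errorf("hashing %s: %w", certPath, err)
		}
		hash := strings.TrimSpace(string(out))
		link := "/etc/ssl/certs/" + hash + ".0"
		// Equivalent of "ln -fs": drop any stale link, then recreate it.
		_ = os.Remove(link)
		return os.Symlink(certPath, link)
	}

	func main() {
		if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			log.Fatal(err)
		}
	}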
	I0806 07:58:21.978472   49751 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0806 07:58:21.982815   49751 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0806 07:58:21.982843   49751 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0806 07:58:21.982852   49751 command_runner.go:130] > Device: 253,1	Inode: 1056811     Links: 1
	I0806 07:58:21.982860   49751 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0806 07:58:21.982869   49751 command_runner.go:130] > Access: 2024-08-06 07:51:21.084894930 +0000
	I0806 07:58:21.982876   49751 command_runner.go:130] > Modify: 2024-08-06 07:51:21.084894930 +0000
	I0806 07:58:21.982883   49751 command_runner.go:130] > Change: 2024-08-06 07:51:21.084894930 +0000
	I0806 07:58:21.982894   49751 command_runner.go:130] >  Birth: 2024-08-06 07:51:21.084894930 +0000
	I0806 07:58:21.982938   49751 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0806 07:58:21.988509   49751 command_runner.go:130] > Certificate will not expire
	I0806 07:58:21.988748   49751 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0806 07:58:21.994294   49751 command_runner.go:130] > Certificate will not expire
	I0806 07:58:21.994351   49751 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0806 07:58:21.999843   49751 command_runner.go:130] > Certificate will not expire
	I0806 07:58:22.000049   49751 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0806 07:58:22.005882   49751 command_runner.go:130] > Certificate will not expire
	I0806 07:58:22.005967   49751 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0806 07:58:22.011728   49751 command_runner.go:130] > Certificate will not expire
	I0806 07:58:22.011780   49751 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0806 07:58:22.017150   49751 command_runner.go:130] > Certificate will not expire
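	The -checkend 86400 invocations above ask OpenSSL whether each certificate expires within the next 24 hours. The same check can be done natively with crypto/x509; the sketch below mirrors that logic and assumes one of the certificate paths from the log:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"log"
		"os"
		"time"
	)

	// expiresWithin reports whether the first certificate in the PEM file will
	// expire within the given window, mirroring "openssl x509 -checkend 86400".
	func expiresWithin(path string, window time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(window).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			log.Fatal(err)
		}
		if soon {
			fmt.Println("Certificate will expire")
		} else {
			fmt.Println("Certificate will not expire")
		}
	}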
	I0806 07:58:22.017391   49751 kubeadm.go:392] StartCluster: {Name:multinode-551471 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
3 ClusterName:multinode-551471 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.186 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.134 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.189 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disable
Optimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 07:58:22.017537   49751 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0806 07:58:22.017579   49751 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0806 07:58:22.055400   49751 command_runner.go:130] > 231998fa150bd615568fd89931d682cd6eb3900552e3e0c414c2ee3a6fb749ad
	I0806 07:58:22.055421   49751 command_runner.go:130] > f1f1d9862a98539cf5a1ed1d209cef2b0e4e7061910368640a15be822397ebb9
	I0806 07:58:22.055428   49751 command_runner.go:130] > d7923487019be61d3f46ec909607d2c85a99a8b64a5c00f89ef7bf1200b38cd4
	I0806 07:58:22.055433   49751 command_runner.go:130] > 74e91b8e25d98b20fd885a292fff4c3200fc87c06bdea6de195993b8d96c6480
	I0806 07:58:22.055439   49751 command_runner.go:130] > 4b56e43215ad57bc3f2aed6d531db767cca6cf0f89b2f01bb78c33d7c3d76692
	I0806 07:58:22.055444   49751 command_runner.go:130] > beb56425a40e2135964c6eb3931935babc4cf81b32a1e8c70735f1995effbb77
	I0806 07:58:22.055449   49751 command_runner.go:130] > 474e29aac6711f79764feef65f30a2d86dcc7d05632b305853bee3f8a4aaecb0
	I0806 07:58:22.055456   49751 command_runner.go:130] > 1cf9151b1941b0e938e94a6d7247c74e3d0e1f26fddaf0350cc2cb6a211618a7
	I0806 07:58:22.055473   49751 cri.go:89] found id: "231998fa150bd615568fd89931d682cd6eb3900552e3e0c414c2ee3a6fb749ad"
	I0806 07:58:22.055480   49751 cri.go:89] found id: "f1f1d9862a98539cf5a1ed1d209cef2b0e4e7061910368640a15be822397ebb9"
	I0806 07:58:22.055485   49751 cri.go:89] found id: "d7923487019be61d3f46ec909607d2c85a99a8b64a5c00f89ef7bf1200b38cd4"
	I0806 07:58:22.055490   49751 cri.go:89] found id: "74e91b8e25d98b20fd885a292fff4c3200fc87c06bdea6de195993b8d96c6480"
	I0806 07:58:22.055494   49751 cri.go:89] found id: "4b56e43215ad57bc3f2aed6d531db767cca6cf0f89b2f01bb78c33d7c3d76692"
	I0806 07:58:22.055505   49751 cri.go:89] found id: "beb56425a40e2135964c6eb3931935babc4cf81b32a1e8c70735f1995effbb77"
	I0806 07:58:22.055509   49751 cri.go:89] found id: "474e29aac6711f79764feef65f30a2d86dcc7d05632b305853bee3f8a4aaecb0"
	I0806 07:58:22.055513   49751 cri.go:89] found id: "1cf9151b1941b0e938e94a6d7247c74e3d0e1f26fddaf0350cc2cb6a211618a7"
	I0806 07:58:22.055517   49751 cri.go:89] found id: ""
	I0806 07:58:22.055556   49751 ssh_runner.go:195] Run: sudo runc list -f json
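	The container IDs listed above come from running crictl with a namespace label filter and splitting its quiet output line by line. A minimal Go sketch of that step (illustrative only, not minikube's actual implementation in cri.go):

	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	// listKubeSystemContainers shells out to crictl the same way the log above
	// does, returning one container ID per non-empty output line.
	func listKubeSystemContainers() ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			return nil, err
		}
		var ids []string
		for _, line := range strings.Split(string(out), "\n") {
			if line = strings.TrimSpace(line); line != "" {
				ids = append(ids, line)
			}
		}
		return ids, nil
	}

	func main() {
		ids, err := listKubeSystemContainers()
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("found %d kube-system containers\n", len(ids))
	}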
	
	
	==> CRI-O <==
	Aug 06 08:02:29 multinode-551471 crio[2891]: time="2024-08-06 08:02:29.957195756Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722931349957173102,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143053,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c025d413-c090-44ff-b3bd-aedee5450f5c name=/runtime.v1.ImageService/ImageFsInfo
	Aug 06 08:02:29 multinode-551471 crio[2891]: time="2024-08-06 08:02:29.957682494Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=05343a56-a716-4e67-af83-78776f5f2438 name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 08:02:29 multinode-551471 crio[2891]: time="2024-08-06 08:02:29.957755996Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=05343a56-a716-4e67-af83-78776f5f2438 name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 08:02:29 multinode-551471 crio[2891]: time="2024-08-06 08:02:29.958120277Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a638ddbb5051a7433b97f22d82ce6388f43365138206ecd8cb6e55eba6cf3e12,PodSandboxId:004df81af52bb64ccc8579bfc09bee86d7d7ba50320b5350755d68a984049c42,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722931142979511780,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-rkhtz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3112feb4-bb59-4877-8fa8-6c47dcb17168,},Annotations:map[string]string{io.kubernetes.container.hash: 5f745e54,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9e8ef0973a44f9939c9abcc18a63a3f102a88e15aaabd57f22a28bab85519b1,PodSandboxId:84abefc52e5b3c26e1d8d05b33bf643540c37c828cd07b41a04926373472f340,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_RUNNING,CreatedAt:1722931109497750212,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9dc7g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51aee9c7-0107-4ffb-a335-2bb3b56f0882,},Annotations:map[string]string{io.kubernetes.container.hash: 6562290,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kuber
netes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b90f5bc008f90cfa2fbf2aa3da853823559aa023f76aa906f5246e026bb0f0a0,PodSandboxId:1d94fa01ad36cc4f8b1a37a7f295b0b795c35ce7beae310d48d7e594a2e20216,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722931109425680485,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-k2kl8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 434b5086-628b-4c6d-98bf-38e4b2758290,},Annotations:map[string]string{io.kubernetes.container.hash: d8bac289,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\"
:\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7df59d82af251b33479683110323543bab2abee160632a2e863732021d0972e,PodSandboxId:01b7c91f4c977cc0a5bc30555d5e5f88f380f57f178b3d6b8e3d7021ace17ef1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722931109384014548,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b41a9dc-8b36-4c97-ba13-e8d86bbed5b1,},Ann
otations:map[string]string{io.kubernetes.container.hash: fec17a41,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7796edadd615e4a6b72780d99c72833e29376e47f21ace8d1b9846c888ce57ac,PodSandboxId:1e7e6eea84a536a4142ed32fd0956fe4e4ba6cb788ffe82ebffe5d5433d011f3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722931109344692941,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8t88h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8497743e-1315-4ab3-9097-9170fb8a373b,},Annotations:map[string]string{io.kub
ernetes.container.hash: aea06c0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3988dbeb8b095b0f961141d409ed6af31a49a47f792dce6092c49c0f2e02e14a,PodSandboxId:80e8af9bc0b31252bf2950b2a0851edfa325624f738d9ae95a51c9dba426cf21,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722931104440668799,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-551471,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7eaa700938e917acefeeb3216b02e55a,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 95688fab,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f179bdc8d4add339e4a2b7c1af524fafe3d8f5fb06c47686220ddf2a8b0ce165,PodSandboxId:bbabc537c895ca5a5e1a204fc3f26c7dff2bbb1a3b73e64f8252fa7971df633d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722931104457689984,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-551471,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce0f55a14e9bac0289c2f4eff6d6d50d,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c
8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91ce52b52a5faa28ab026d7d550e9c3edc2df4766c37591a5aa778d3c8b4ffee,PodSandboxId:9409cd35efe496f7cc125d775b3ed284ef00f3e7308d43bd8f582a3eb8810168,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722931104428954111,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-551471,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e39e809d728843cc0e09f10061b40994,},Annotations:map[string]string{io.kubernetes.container.hash: e1eecb92,io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91679dd7275cf0c0d0c7264208fe3ea4af2c52d892de4f2536fd9d3a5f8a7ef9,PodSandboxId:a20b86b6ff58c204dc2eb847559803ae9a47943c69abbd6c249bc0acae212687,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722931104345651342,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-551471,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71095cd596aca73ba17d34793adffd9c,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.res
tartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93cf5cb7a8319c91c2ef8585a3093e4fc951acd31cb84587ecbf9da395640e8b,PodSandboxId:5321b01ab24b006147f307b73b923098acb5fdde2cdeed6d364facca7b9851d8,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722930773814765536,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-rkhtz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3112feb4-bb59-4877-8fa8-6c47dcb17168,},Annotations:map[string]string{io.kubernetes.container.hash: 5f745e54,io.kubernetes.container.resta
rtCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:231998fa150bd615568fd89931d682cd6eb3900552e3e0c414c2ee3a6fb749ad,PodSandboxId:f2f8123ae9c32ea663b58990f1516f0b3653886bc0b8d1c82fda9dffb77f71cb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722930720473670689,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-k2kl8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 434b5086-628b-4c6d-98bf-38e4b2758290,},Annotations:map[string]string{io.kubernetes.container.hash: d8bac289,io.kubernetes.container.ports: [{\"name\":\"dns\",\"container
Port\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1f1d9862a98539cf5a1ed1d209cef2b0e4e7061910368640a15be822397ebb9,PodSandboxId:09178607b7fcafb7b5b2e4a1234f64d7351badd06de29ae18d06ed5c4c1405d1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722930720418403516,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 3b41a9dc-8b36-4c97-ba13-e8d86bbed5b1,},Annotations:map[string]string{io.kubernetes.container.hash: fec17a41,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7923487019be61d3f46ec909607d2c85a99a8b64a5c00f89ef7bf1200b38cd4,PodSandboxId:dd0f9c7acfd8f65254417dd6e5b967b4595356cfd57f453a91158bfff54e9166,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_EXITED,CreatedAt:1722930708296776676,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9dc7g,io.kubernetes.pod.nam
espace: kube-system,io.kubernetes.pod.uid: 51aee9c7-0107-4ffb-a335-2bb3b56f0882,},Annotations:map[string]string{io.kubernetes.container.hash: 6562290,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74e91b8e25d98b20fd885a292fff4c3200fc87c06bdea6de195993b8d96c6480,PodSandboxId:8fb8a4a95cedd5849da9cf5abf6d16e481584589b23c7dc1d0606eaea3566160,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722930705813526750,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8t88h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 8497743e-1315-4ab3-9097-9170fb8a373b,},Annotations:map[string]string{io.kubernetes.container.hash: aea06c0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b56e43215ad57bc3f2aed6d531db767cca6cf0f89b2f01bb78c33d7c3d76692,PodSandboxId:cfbd4fcb42df0f43658221b8e736d4a5e449003e8675e6adf7cd914e51514deb,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722930684828005169,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-551471,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e39e809d728843cc0e09f10061b40994,}
,Annotations:map[string]string{io.kubernetes.container.hash: e1eecb92,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:beb56425a40e2135964c6eb3931935babc4cf81b32a1e8c70735f1995effbb77,PodSandboxId:807a9beaa4c51df4f993e70e904c5ce01a26723825bd9a04ef78eb566ab34a88,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722930684808536266,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-551471,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71095cd596aca73ba17d34
793adffd9c,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1cf9151b1941b0e938e94a6d7247c74e3d0e1f26fddaf0350cc2cb6a211618a7,PodSandboxId:a643154dcadf4eadd19864c10b75cd0258590418620383448484ba09d11a46eb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722930684746183306,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-551471,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce0f55a14e9bac0289c2f4eff6d6d50d,},An
notations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:474e29aac6711f79764feef65f30a2d86dcc7d05632b305853bee3f8a4aaecb0,PodSandboxId:bdb6c31cf15795d1a277d665e76704ee8ee04bb0ff0122fd5bec8298726be9be,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722930684752835026,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-551471,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7eaa700938e917acefeeb3216b02e55a,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 95688fab,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=05343a56-a716-4e67-af83-78776f5f2438 name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 08:02:30 multinode-551471 crio[2891]: time="2024-08-06 08:02:30.002620814Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=dae87096-dcc1-4b5e-aa9b-724b716293c1 name=/runtime.v1.RuntimeService/Version
	Aug 06 08:02:30 multinode-551471 crio[2891]: time="2024-08-06 08:02:30.002717384Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=dae87096-dcc1-4b5e-aa9b-724b716293c1 name=/runtime.v1.RuntimeService/Version
	Aug 06 08:02:30 multinode-551471 crio[2891]: time="2024-08-06 08:02:30.003997343Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9eb804b9-fc53-476b-ba00-2d948b1aef5d name=/runtime.v1.ImageService/ImageFsInfo
	Aug 06 08:02:30 multinode-551471 crio[2891]: time="2024-08-06 08:02:30.004504944Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722931350004479664,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143053,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9eb804b9-fc53-476b-ba00-2d948b1aef5d name=/runtime.v1.ImageService/ImageFsInfo
	Aug 06 08:02:30 multinode-551471 crio[2891]: time="2024-08-06 08:02:30.005071518Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=97596536-984b-43c5-83b2-adc62c8e58b3 name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 08:02:30 multinode-551471 crio[2891]: time="2024-08-06 08:02:30.005142187Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=97596536-984b-43c5-83b2-adc62c8e58b3 name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 08:02:30 multinode-551471 crio[2891]: time="2024-08-06 08:02:30.005614811Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a638ddbb5051a7433b97f22d82ce6388f43365138206ecd8cb6e55eba6cf3e12,PodSandboxId:004df81af52bb64ccc8579bfc09bee86d7d7ba50320b5350755d68a984049c42,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722931142979511780,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-rkhtz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3112feb4-bb59-4877-8fa8-6c47dcb17168,},Annotations:map[string]string{io.kubernetes.container.hash: 5f745e54,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9e8ef0973a44f9939c9abcc18a63a3f102a88e15aaabd57f22a28bab85519b1,PodSandboxId:84abefc52e5b3c26e1d8d05b33bf643540c37c828cd07b41a04926373472f340,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_RUNNING,CreatedAt:1722931109497750212,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9dc7g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51aee9c7-0107-4ffb-a335-2bb3b56f0882,},Annotations:map[string]string{io.kubernetes.container.hash: 6562290,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kuber
netes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b90f5bc008f90cfa2fbf2aa3da853823559aa023f76aa906f5246e026bb0f0a0,PodSandboxId:1d94fa01ad36cc4f8b1a37a7f295b0b795c35ce7beae310d48d7e594a2e20216,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722931109425680485,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-k2kl8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 434b5086-628b-4c6d-98bf-38e4b2758290,},Annotations:map[string]string{io.kubernetes.container.hash: d8bac289,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\"
:\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7df59d82af251b33479683110323543bab2abee160632a2e863732021d0972e,PodSandboxId:01b7c91f4c977cc0a5bc30555d5e5f88f380f57f178b3d6b8e3d7021ace17ef1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722931109384014548,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b41a9dc-8b36-4c97-ba13-e8d86bbed5b1,},Ann
otations:map[string]string{io.kubernetes.container.hash: fec17a41,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7796edadd615e4a6b72780d99c72833e29376e47f21ace8d1b9846c888ce57ac,PodSandboxId:1e7e6eea84a536a4142ed32fd0956fe4e4ba6cb788ffe82ebffe5d5433d011f3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722931109344692941,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8t88h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8497743e-1315-4ab3-9097-9170fb8a373b,},Annotations:map[string]string{io.kub
ernetes.container.hash: aea06c0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3988dbeb8b095b0f961141d409ed6af31a49a47f792dce6092c49c0f2e02e14a,PodSandboxId:80e8af9bc0b31252bf2950b2a0851edfa325624f738d9ae95a51c9dba426cf21,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722931104440668799,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-551471,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7eaa700938e917acefeeb3216b02e55a,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 95688fab,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f179bdc8d4add339e4a2b7c1af524fafe3d8f5fb06c47686220ddf2a8b0ce165,PodSandboxId:bbabc537c895ca5a5e1a204fc3f26c7dff2bbb1a3b73e64f8252fa7971df633d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722931104457689984,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-551471,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce0f55a14e9bac0289c2f4eff6d6d50d,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c
8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91ce52b52a5faa28ab026d7d550e9c3edc2df4766c37591a5aa778d3c8b4ffee,PodSandboxId:9409cd35efe496f7cc125d775b3ed284ef00f3e7308d43bd8f582a3eb8810168,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722931104428954111,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-551471,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e39e809d728843cc0e09f10061b40994,},Annotations:map[string]string{io.kubernetes.container.hash: e1eecb92,io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91679dd7275cf0c0d0c7264208fe3ea4af2c52d892de4f2536fd9d3a5f8a7ef9,PodSandboxId:a20b86b6ff58c204dc2eb847559803ae9a47943c69abbd6c249bc0acae212687,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722931104345651342,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-551471,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71095cd596aca73ba17d34793adffd9c,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.res
tartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93cf5cb7a8319c91c2ef8585a3093e4fc951acd31cb84587ecbf9da395640e8b,PodSandboxId:5321b01ab24b006147f307b73b923098acb5fdde2cdeed6d364facca7b9851d8,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722930773814765536,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-rkhtz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3112feb4-bb59-4877-8fa8-6c47dcb17168,},Annotations:map[string]string{io.kubernetes.container.hash: 5f745e54,io.kubernetes.container.resta
rtCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:231998fa150bd615568fd89931d682cd6eb3900552e3e0c414c2ee3a6fb749ad,PodSandboxId:f2f8123ae9c32ea663b58990f1516f0b3653886bc0b8d1c82fda9dffb77f71cb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722930720473670689,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-k2kl8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 434b5086-628b-4c6d-98bf-38e4b2758290,},Annotations:map[string]string{io.kubernetes.container.hash: d8bac289,io.kubernetes.container.ports: [{\"name\":\"dns\",\"container
Port\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1f1d9862a98539cf5a1ed1d209cef2b0e4e7061910368640a15be822397ebb9,PodSandboxId:09178607b7fcafb7b5b2e4a1234f64d7351badd06de29ae18d06ed5c4c1405d1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722930720418403516,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 3b41a9dc-8b36-4c97-ba13-e8d86bbed5b1,},Annotations:map[string]string{io.kubernetes.container.hash: fec17a41,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7923487019be61d3f46ec909607d2c85a99a8b64a5c00f89ef7bf1200b38cd4,PodSandboxId:dd0f9c7acfd8f65254417dd6e5b967b4595356cfd57f453a91158bfff54e9166,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_EXITED,CreatedAt:1722930708296776676,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9dc7g,io.kubernetes.pod.nam
espace: kube-system,io.kubernetes.pod.uid: 51aee9c7-0107-4ffb-a335-2bb3b56f0882,},Annotations:map[string]string{io.kubernetes.container.hash: 6562290,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74e91b8e25d98b20fd885a292fff4c3200fc87c06bdea6de195993b8d96c6480,PodSandboxId:8fb8a4a95cedd5849da9cf5abf6d16e481584589b23c7dc1d0606eaea3566160,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722930705813526750,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8t88h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 8497743e-1315-4ab3-9097-9170fb8a373b,},Annotations:map[string]string{io.kubernetes.container.hash: aea06c0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b56e43215ad57bc3f2aed6d531db767cca6cf0f89b2f01bb78c33d7c3d76692,PodSandboxId:cfbd4fcb42df0f43658221b8e736d4a5e449003e8675e6adf7cd914e51514deb,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722930684828005169,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-551471,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e39e809d728843cc0e09f10061b40994,}
,Annotations:map[string]string{io.kubernetes.container.hash: e1eecb92,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:beb56425a40e2135964c6eb3931935babc4cf81b32a1e8c70735f1995effbb77,PodSandboxId:807a9beaa4c51df4f993e70e904c5ce01a26723825bd9a04ef78eb566ab34a88,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722930684808536266,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-551471,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71095cd596aca73ba17d34
793adffd9c,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1cf9151b1941b0e938e94a6d7247c74e3d0e1f26fddaf0350cc2cb6a211618a7,PodSandboxId:a643154dcadf4eadd19864c10b75cd0258590418620383448484ba09d11a46eb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722930684746183306,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-551471,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce0f55a14e9bac0289c2f4eff6d6d50d,},An
notations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:474e29aac6711f79764feef65f30a2d86dcc7d05632b305853bee3f8a4aaecb0,PodSandboxId:bdb6c31cf15795d1a277d665e76704ee8ee04bb0ff0122fd5bec8298726be9be,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722930684752835026,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-551471,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7eaa700938e917acefeeb3216b02e55a,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 95688fab,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=97596536-984b-43c5-83b2-adc62c8e58b3 name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 08:02:30 multinode-551471 crio[2891]: time="2024-08-06 08:02:30.050810898Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9fd7ff60-eb90-4154-8a81-024a3da3752c name=/runtime.v1.RuntimeService/Version
	Aug 06 08:02:30 multinode-551471 crio[2891]: time="2024-08-06 08:02:30.050907916Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9fd7ff60-eb90-4154-8a81-024a3da3752c name=/runtime.v1.RuntimeService/Version
	Aug 06 08:02:30 multinode-551471 crio[2891]: time="2024-08-06 08:02:30.052142333Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3579dbb6-8f81-4dc7-92c7-4a9b0bb5a2bd name=/runtime.v1.ImageService/ImageFsInfo
	Aug 06 08:02:30 multinode-551471 crio[2891]: time="2024-08-06 08:02:30.052780962Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722931350052754970,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143053,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3579dbb6-8f81-4dc7-92c7-4a9b0bb5a2bd name=/runtime.v1.ImageService/ImageFsInfo
	Aug 06 08:02:30 multinode-551471 crio[2891]: time="2024-08-06 08:02:30.053216768Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0e35150c-cd89-4e12-bdf5-4236b2215472 name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 08:02:30 multinode-551471 crio[2891]: time="2024-08-06 08:02:30.053287028Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0e35150c-cd89-4e12-bdf5-4236b2215472 name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 08:02:30 multinode-551471 crio[2891]: time="2024-08-06 08:02:30.053699026Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a638ddbb5051a7433b97f22d82ce6388f43365138206ecd8cb6e55eba6cf3e12,PodSandboxId:004df81af52bb64ccc8579bfc09bee86d7d7ba50320b5350755d68a984049c42,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722931142979511780,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-rkhtz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3112feb4-bb59-4877-8fa8-6c47dcb17168,},Annotations:map[string]string{io.kubernetes.container.hash: 5f745e54,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9e8ef0973a44f9939c9abcc18a63a3f102a88e15aaabd57f22a28bab85519b1,PodSandboxId:84abefc52e5b3c26e1d8d05b33bf643540c37c828cd07b41a04926373472f340,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_RUNNING,CreatedAt:1722931109497750212,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9dc7g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51aee9c7-0107-4ffb-a335-2bb3b56f0882,},Annotations:map[string]string{io.kubernetes.container.hash: 6562290,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kuber
netes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b90f5bc008f90cfa2fbf2aa3da853823559aa023f76aa906f5246e026bb0f0a0,PodSandboxId:1d94fa01ad36cc4f8b1a37a7f295b0b795c35ce7beae310d48d7e594a2e20216,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722931109425680485,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-k2kl8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 434b5086-628b-4c6d-98bf-38e4b2758290,},Annotations:map[string]string{io.kubernetes.container.hash: d8bac289,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\"
:\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7df59d82af251b33479683110323543bab2abee160632a2e863732021d0972e,PodSandboxId:01b7c91f4c977cc0a5bc30555d5e5f88f380f57f178b3d6b8e3d7021ace17ef1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722931109384014548,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b41a9dc-8b36-4c97-ba13-e8d86bbed5b1,},Ann
otations:map[string]string{io.kubernetes.container.hash: fec17a41,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7796edadd615e4a6b72780d99c72833e29376e47f21ace8d1b9846c888ce57ac,PodSandboxId:1e7e6eea84a536a4142ed32fd0956fe4e4ba6cb788ffe82ebffe5d5433d011f3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722931109344692941,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8t88h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8497743e-1315-4ab3-9097-9170fb8a373b,},Annotations:map[string]string{io.kub
ernetes.container.hash: aea06c0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3988dbeb8b095b0f961141d409ed6af31a49a47f792dce6092c49c0f2e02e14a,PodSandboxId:80e8af9bc0b31252bf2950b2a0851edfa325624f738d9ae95a51c9dba426cf21,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722931104440668799,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-551471,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7eaa700938e917acefeeb3216b02e55a,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 95688fab,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f179bdc8d4add339e4a2b7c1af524fafe3d8f5fb06c47686220ddf2a8b0ce165,PodSandboxId:bbabc537c895ca5a5e1a204fc3f26c7dff2bbb1a3b73e64f8252fa7971df633d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722931104457689984,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-551471,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce0f55a14e9bac0289c2f4eff6d6d50d,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c
8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91ce52b52a5faa28ab026d7d550e9c3edc2df4766c37591a5aa778d3c8b4ffee,PodSandboxId:9409cd35efe496f7cc125d775b3ed284ef00f3e7308d43bd8f582a3eb8810168,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722931104428954111,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-551471,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e39e809d728843cc0e09f10061b40994,},Annotations:map[string]string{io.kubernetes.container.hash: e1eecb92,io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91679dd7275cf0c0d0c7264208fe3ea4af2c52d892de4f2536fd9d3a5f8a7ef9,PodSandboxId:a20b86b6ff58c204dc2eb847559803ae9a47943c69abbd6c249bc0acae212687,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722931104345651342,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-551471,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71095cd596aca73ba17d34793adffd9c,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.res
tartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93cf5cb7a8319c91c2ef8585a3093e4fc951acd31cb84587ecbf9da395640e8b,PodSandboxId:5321b01ab24b006147f307b73b923098acb5fdde2cdeed6d364facca7b9851d8,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722930773814765536,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-rkhtz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3112feb4-bb59-4877-8fa8-6c47dcb17168,},Annotations:map[string]string{io.kubernetes.container.hash: 5f745e54,io.kubernetes.container.resta
rtCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:231998fa150bd615568fd89931d682cd6eb3900552e3e0c414c2ee3a6fb749ad,PodSandboxId:f2f8123ae9c32ea663b58990f1516f0b3653886bc0b8d1c82fda9dffb77f71cb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722930720473670689,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-k2kl8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 434b5086-628b-4c6d-98bf-38e4b2758290,},Annotations:map[string]string{io.kubernetes.container.hash: d8bac289,io.kubernetes.container.ports: [{\"name\":\"dns\",\"container
Port\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1f1d9862a98539cf5a1ed1d209cef2b0e4e7061910368640a15be822397ebb9,PodSandboxId:09178607b7fcafb7b5b2e4a1234f64d7351badd06de29ae18d06ed5c4c1405d1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722930720418403516,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 3b41a9dc-8b36-4c97-ba13-e8d86bbed5b1,},Annotations:map[string]string{io.kubernetes.container.hash: fec17a41,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7923487019be61d3f46ec909607d2c85a99a8b64a5c00f89ef7bf1200b38cd4,PodSandboxId:dd0f9c7acfd8f65254417dd6e5b967b4595356cfd57f453a91158bfff54e9166,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_EXITED,CreatedAt:1722930708296776676,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9dc7g,io.kubernetes.pod.nam
espace: kube-system,io.kubernetes.pod.uid: 51aee9c7-0107-4ffb-a335-2bb3b56f0882,},Annotations:map[string]string{io.kubernetes.container.hash: 6562290,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74e91b8e25d98b20fd885a292fff4c3200fc87c06bdea6de195993b8d96c6480,PodSandboxId:8fb8a4a95cedd5849da9cf5abf6d16e481584589b23c7dc1d0606eaea3566160,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722930705813526750,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8t88h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 8497743e-1315-4ab3-9097-9170fb8a373b,},Annotations:map[string]string{io.kubernetes.container.hash: aea06c0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b56e43215ad57bc3f2aed6d531db767cca6cf0f89b2f01bb78c33d7c3d76692,PodSandboxId:cfbd4fcb42df0f43658221b8e736d4a5e449003e8675e6adf7cd914e51514deb,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722930684828005169,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-551471,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e39e809d728843cc0e09f10061b40994,}
,Annotations:map[string]string{io.kubernetes.container.hash: e1eecb92,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:beb56425a40e2135964c6eb3931935babc4cf81b32a1e8c70735f1995effbb77,PodSandboxId:807a9beaa4c51df4f993e70e904c5ce01a26723825bd9a04ef78eb566ab34a88,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722930684808536266,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-551471,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71095cd596aca73ba17d34
793adffd9c,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1cf9151b1941b0e938e94a6d7247c74e3d0e1f26fddaf0350cc2cb6a211618a7,PodSandboxId:a643154dcadf4eadd19864c10b75cd0258590418620383448484ba09d11a46eb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722930684746183306,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-551471,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce0f55a14e9bac0289c2f4eff6d6d50d,},An
notations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:474e29aac6711f79764feef65f30a2d86dcc7d05632b305853bee3f8a4aaecb0,PodSandboxId:bdb6c31cf15795d1a277d665e76704ee8ee04bb0ff0122fd5bec8298726be9be,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722930684752835026,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-551471,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7eaa700938e917acefeeb3216b02e55a,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 95688fab,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0e35150c-cd89-4e12-bdf5-4236b2215472 name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 08:02:30 multinode-551471 crio[2891]: time="2024-08-06 08:02:30.097387213Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=454fa571-edd5-4061-a8c6-7d2732f7be53 name=/runtime.v1.RuntimeService/Version
	Aug 06 08:02:30 multinode-551471 crio[2891]: time="2024-08-06 08:02:30.097479526Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=454fa571-edd5-4061-a8c6-7d2732f7be53 name=/runtime.v1.RuntimeService/Version
	Aug 06 08:02:30 multinode-551471 crio[2891]: time="2024-08-06 08:02:30.098760842Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3cb668a3-afce-4387-b8e9-94905a1426d2 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 06 08:02:30 multinode-551471 crio[2891]: time="2024-08-06 08:02:30.099270305Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722931350099246635,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143053,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3cb668a3-afce-4387-b8e9-94905a1426d2 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 06 08:02:30 multinode-551471 crio[2891]: time="2024-08-06 08:02:30.099721231Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3a93f5d3-8654-4e87-bb4b-92acf3fe6178 name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 08:02:30 multinode-551471 crio[2891]: time="2024-08-06 08:02:30.099794829Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3a93f5d3-8654-4e87-bb4b-92acf3fe6178 name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 08:02:30 multinode-551471 crio[2891]: time="2024-08-06 08:02:30.100140202Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a638ddbb5051a7433b97f22d82ce6388f43365138206ecd8cb6e55eba6cf3e12,PodSandboxId:004df81af52bb64ccc8579bfc09bee86d7d7ba50320b5350755d68a984049c42,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722931142979511780,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-rkhtz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3112feb4-bb59-4877-8fa8-6c47dcb17168,},Annotations:map[string]string{io.kubernetes.container.hash: 5f745e54,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9e8ef0973a44f9939c9abcc18a63a3f102a88e15aaabd57f22a28bab85519b1,PodSandboxId:84abefc52e5b3c26e1d8d05b33bf643540c37c828cd07b41a04926373472f340,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_RUNNING,CreatedAt:1722931109497750212,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9dc7g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51aee9c7-0107-4ffb-a335-2bb3b56f0882,},Annotations:map[string]string{io.kubernetes.container.hash: 6562290,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kuber
netes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b90f5bc008f90cfa2fbf2aa3da853823559aa023f76aa906f5246e026bb0f0a0,PodSandboxId:1d94fa01ad36cc4f8b1a37a7f295b0b795c35ce7beae310d48d7e594a2e20216,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722931109425680485,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-k2kl8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 434b5086-628b-4c6d-98bf-38e4b2758290,},Annotations:map[string]string{io.kubernetes.container.hash: d8bac289,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\"
:\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7df59d82af251b33479683110323543bab2abee160632a2e863732021d0972e,PodSandboxId:01b7c91f4c977cc0a5bc30555d5e5f88f380f57f178b3d6b8e3d7021ace17ef1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722931109384014548,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b41a9dc-8b36-4c97-ba13-e8d86bbed5b1,},Ann
otations:map[string]string{io.kubernetes.container.hash: fec17a41,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7796edadd615e4a6b72780d99c72833e29376e47f21ace8d1b9846c888ce57ac,PodSandboxId:1e7e6eea84a536a4142ed32fd0956fe4e4ba6cb788ffe82ebffe5d5433d011f3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722931109344692941,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8t88h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8497743e-1315-4ab3-9097-9170fb8a373b,},Annotations:map[string]string{io.kub
ernetes.container.hash: aea06c0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3988dbeb8b095b0f961141d409ed6af31a49a47f792dce6092c49c0f2e02e14a,PodSandboxId:80e8af9bc0b31252bf2950b2a0851edfa325624f738d9ae95a51c9dba426cf21,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722931104440668799,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-551471,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7eaa700938e917acefeeb3216b02e55a,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 95688fab,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f179bdc8d4add339e4a2b7c1af524fafe3d8f5fb06c47686220ddf2a8b0ce165,PodSandboxId:bbabc537c895ca5a5e1a204fc3f26c7dff2bbb1a3b73e64f8252fa7971df633d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722931104457689984,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-551471,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce0f55a14e9bac0289c2f4eff6d6d50d,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c
8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91ce52b52a5faa28ab026d7d550e9c3edc2df4766c37591a5aa778d3c8b4ffee,PodSandboxId:9409cd35efe496f7cc125d775b3ed284ef00f3e7308d43bd8f582a3eb8810168,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722931104428954111,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-551471,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e39e809d728843cc0e09f10061b40994,},Annotations:map[string]string{io.kubernetes.container.hash: e1eecb92,io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91679dd7275cf0c0d0c7264208fe3ea4af2c52d892de4f2536fd9d3a5f8a7ef9,PodSandboxId:a20b86b6ff58c204dc2eb847559803ae9a47943c69abbd6c249bc0acae212687,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722931104345651342,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-551471,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71095cd596aca73ba17d34793adffd9c,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.res
tartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93cf5cb7a8319c91c2ef8585a3093e4fc951acd31cb84587ecbf9da395640e8b,PodSandboxId:5321b01ab24b006147f307b73b923098acb5fdde2cdeed6d364facca7b9851d8,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722930773814765536,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-rkhtz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3112feb4-bb59-4877-8fa8-6c47dcb17168,},Annotations:map[string]string{io.kubernetes.container.hash: 5f745e54,io.kubernetes.container.resta
rtCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:231998fa150bd615568fd89931d682cd6eb3900552e3e0c414c2ee3a6fb749ad,PodSandboxId:f2f8123ae9c32ea663b58990f1516f0b3653886bc0b8d1c82fda9dffb77f71cb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722930720473670689,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-k2kl8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 434b5086-628b-4c6d-98bf-38e4b2758290,},Annotations:map[string]string{io.kubernetes.container.hash: d8bac289,io.kubernetes.container.ports: [{\"name\":\"dns\",\"container
Port\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1f1d9862a98539cf5a1ed1d209cef2b0e4e7061910368640a15be822397ebb9,PodSandboxId:09178607b7fcafb7b5b2e4a1234f64d7351badd06de29ae18d06ed5c4c1405d1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722930720418403516,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 3b41a9dc-8b36-4c97-ba13-e8d86bbed5b1,},Annotations:map[string]string{io.kubernetes.container.hash: fec17a41,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7923487019be61d3f46ec909607d2c85a99a8b64a5c00f89ef7bf1200b38cd4,PodSandboxId:dd0f9c7acfd8f65254417dd6e5b967b4595356cfd57f453a91158bfff54e9166,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_EXITED,CreatedAt:1722930708296776676,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9dc7g,io.kubernetes.pod.nam
espace: kube-system,io.kubernetes.pod.uid: 51aee9c7-0107-4ffb-a335-2bb3b56f0882,},Annotations:map[string]string{io.kubernetes.container.hash: 6562290,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74e91b8e25d98b20fd885a292fff4c3200fc87c06bdea6de195993b8d96c6480,PodSandboxId:8fb8a4a95cedd5849da9cf5abf6d16e481584589b23c7dc1d0606eaea3566160,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722930705813526750,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8t88h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 8497743e-1315-4ab3-9097-9170fb8a373b,},Annotations:map[string]string{io.kubernetes.container.hash: aea06c0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b56e43215ad57bc3f2aed6d531db767cca6cf0f89b2f01bb78c33d7c3d76692,PodSandboxId:cfbd4fcb42df0f43658221b8e736d4a5e449003e8675e6adf7cd914e51514deb,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722930684828005169,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-551471,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e39e809d728843cc0e09f10061b40994,}
,Annotations:map[string]string{io.kubernetes.container.hash: e1eecb92,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:beb56425a40e2135964c6eb3931935babc4cf81b32a1e8c70735f1995effbb77,PodSandboxId:807a9beaa4c51df4f993e70e904c5ce01a26723825bd9a04ef78eb566ab34a88,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722930684808536266,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-551471,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71095cd596aca73ba17d34
793adffd9c,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1cf9151b1941b0e938e94a6d7247c74e3d0e1f26fddaf0350cc2cb6a211618a7,PodSandboxId:a643154dcadf4eadd19864c10b75cd0258590418620383448484ba09d11a46eb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722930684746183306,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-551471,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce0f55a14e9bac0289c2f4eff6d6d50d,},An
notations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:474e29aac6711f79764feef65f30a2d86dcc7d05632b305853bee3f8a4aaecb0,PodSandboxId:bdb6c31cf15795d1a277d665e76704ee8ee04bb0ff0122fd5bec8298726be9be,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722930684752835026,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-551471,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7eaa700938e917acefeeb3216b02e55a,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 95688fab,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3a93f5d3-8654-4e87-bb4b-92acf3fe6178 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	a638ddbb5051a       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      3 minutes ago       Running             busybox                   1                   004df81af52bb       busybox-fc5497c4f-rkhtz
	c9e8ef0973a44       917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557                                      4 minutes ago       Running             kindnet-cni               1                   84abefc52e5b3       kindnet-9dc7g
	b90f5bc008f90       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      4 minutes ago       Running             coredns                   1                   1d94fa01ad36c       coredns-7db6d8ff4d-k2kl8
	a7df59d82af25       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Running             storage-provisioner       1                   01b7c91f4c977       storage-provisioner
	7796edadd615e       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      4 minutes ago       Running             kube-proxy                1                   1e7e6eea84a53       kube-proxy-8t88h
	f179bdc8d4add       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      4 minutes ago       Running             kube-scheduler            1                   bbabc537c895c       kube-scheduler-multinode-551471
	3988dbeb8b095       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      4 minutes ago       Running             kube-apiserver            1                   80e8af9bc0b31       kube-apiserver-multinode-551471
	91ce52b52a5fa       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      4 minutes ago       Running             etcd                      1                   9409cd35efe49       etcd-multinode-551471
	91679dd7275cf       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      4 minutes ago       Running             kube-controller-manager   1                   a20b86b6ff58c       kube-controller-manager-multinode-551471
	93cf5cb7a8319       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   9 minutes ago       Exited              busybox                   0                   5321b01ab24b0       busybox-fc5497c4f-rkhtz
	231998fa150bd       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      10 minutes ago      Exited              coredns                   0                   f2f8123ae9c32       coredns-7db6d8ff4d-k2kl8
	f1f1d9862a985       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      10 minutes ago      Exited              storage-provisioner       0                   09178607b7fca       storage-provisioner
	d7923487019be       docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3    10 minutes ago      Exited              kindnet-cni               0                   dd0f9c7acfd8f       kindnet-9dc7g
	74e91b8e25d98       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      10 minutes ago      Exited              kube-proxy                0                   8fb8a4a95cedd       kube-proxy-8t88h
	4b56e43215ad5       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      11 minutes ago      Exited              etcd                      0                   cfbd4fcb42df0       etcd-multinode-551471
	beb56425a40e2       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      11 minutes ago      Exited              kube-controller-manager   0                   807a9beaa4c51       kube-controller-manager-multinode-551471
	474e29aac6711       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      11 minutes ago      Exited              kube-apiserver            0                   bdb6c31cf1579       kube-apiserver-multinode-551471
	1cf9151b1941b       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      11 minutes ago      Exited              kube-scheduler            0                   a643154dcadf4       kube-scheduler-multinode-551471
	
	
	==> coredns [231998fa150bd615568fd89931d682cd6eb3900552e3e0c414c2ee3a6fb749ad] <==
	[INFO] 10.244.0.3:35082 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001916689s
	[INFO] 10.244.0.3:33490 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000096442s
	[INFO] 10.244.0.3:60401 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000080282s
	[INFO] 10.244.0.3:51055 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001488103s
	[INFO] 10.244.0.3:58516 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000045192s
	[INFO] 10.244.0.3:55971 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000056963s
	[INFO] 10.244.0.3:58259 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000074335s
	[INFO] 10.244.1.2:51261 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000202716s
	[INFO] 10.244.1.2:56013 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00008516s
	[INFO] 10.244.1.2:37549 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000096782s
	[INFO] 10.244.1.2:53521 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000067543s
	[INFO] 10.244.0.3:44334 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000132902s
	[INFO] 10.244.0.3:35687 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000104415s
	[INFO] 10.244.0.3:53100 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000045344s
	[INFO] 10.244.0.3:50135 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000079774s
	[INFO] 10.244.1.2:37210 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000154836s
	[INFO] 10.244.1.2:52512 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000147443s
	[INFO] 10.244.1.2:47941 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00011877s
	[INFO] 10.244.1.2:36295 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000104739s
	[INFO] 10.244.0.3:52327 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000091545s
	[INFO] 10.244.0.3:56632 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000099425s
	[INFO] 10.244.0.3:38284 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000091828s
	[INFO] 10.244.0.3:58697 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000063258s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [b90f5bc008f90cfa2fbf2aa3da853823559aa023f76aa906f5246e026bb0f0a0] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:37963 - 52055 "HINFO IN 2755026096945446625.896548438532864949. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.016096574s
	
	
	==> describe nodes <==
	Name:               multinode-551471
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-551471
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e92cb06692f5ea1ba801d10d148e5e92e807f9c8
	                    minikube.k8s.io/name=multinode-551471
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_06T07_51_31_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 06 Aug 2024 07:51:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-551471
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 06 Aug 2024 08:02:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 06 Aug 2024 07:58:28 +0000   Tue, 06 Aug 2024 07:51:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 06 Aug 2024 07:58:28 +0000   Tue, 06 Aug 2024 07:51:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 06 Aug 2024 07:58:28 +0000   Tue, 06 Aug 2024 07:51:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 06 Aug 2024 07:58:28 +0000   Tue, 06 Aug 2024 07:51:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.186
	  Hostname:    multinode-551471
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 404b782436a441b6a246f834972f2722
	  System UUID:                404b7824-36a4-41b6-a246-f834972f2722
	  Boot ID:                    48ae4806-1015-46ea-99e2-4a672a102a84
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-rkhtz                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m40s
	  kube-system                 coredns-7db6d8ff4d-k2kl8                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     10m
	  kube-system                 etcd-multinode-551471                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         11m
	  kube-system                 kindnet-9dc7g                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-apiserver-multinode-551471             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-multinode-551471    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-8t88h                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-multinode-551471             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 10m                  kube-proxy       
	  Normal  Starting                 4m                   kube-proxy       
	  Normal  NodeAllocatableEnforced  11m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  11m (x2 over 11m)    kubelet          Node multinode-551471 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m (x2 over 11m)    kubelet          Node multinode-551471 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m (x2 over 11m)    kubelet          Node multinode-551471 status is now: NodeHasSufficientPID
	  Normal  Starting                 11m                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           10m                  node-controller  Node multinode-551471 event: Registered Node multinode-551471 in Controller
	  Normal  NodeReady                10m                  kubelet          Node multinode-551471 status is now: NodeReady
	  Normal  Starting                 4m7s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m7s (x8 over 4m7s)  kubelet          Node multinode-551471 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m7s (x8 over 4m7s)  kubelet          Node multinode-551471 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m7s (x7 over 4m7s)  kubelet          Node multinode-551471 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m7s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m49s                node-controller  Node multinode-551471 event: Registered Node multinode-551471 in Controller
	
	
	Name:               multinode-551471-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-551471-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e92cb06692f5ea1ba801d10d148e5e92e807f9c8
	                    minikube.k8s.io/name=multinode-551471
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_06T07_59_04_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 06 Aug 2024 07:59:03 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-551471-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 06 Aug 2024 08:00:04 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Tue, 06 Aug 2024 07:59:33 +0000   Tue, 06 Aug 2024 08:00:46 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Tue, 06 Aug 2024 07:59:33 +0000   Tue, 06 Aug 2024 08:00:46 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Tue, 06 Aug 2024 07:59:33 +0000   Tue, 06 Aug 2024 08:00:46 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Tue, 06 Aug 2024 07:59:33 +0000   Tue, 06 Aug 2024 08:00:46 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.134
	  Hostname:    multinode-551471-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 bf8d76ce939e4437873bfaba68c51663
	  System UUID:                bf8d76ce-939e-4437-873b-faba68c51663
	  Boot ID:                    9f87ecbf-c3ed-443c-9089-34920f426293
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-7qztc    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m30s
	  kube-system                 kindnet-4tl5t              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-proxy-7n7gg           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m22s                  kube-proxy       
	  Normal  Starting                 9m57s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  10m (x2 over 10m)      kubelet          Node multinode-551471-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x2 over 10m)      kubelet          Node multinode-551471-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x2 over 10m)      kubelet          Node multinode-551471-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                9m42s                  kubelet          Node multinode-551471-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  3m27s (x2 over 3m27s)  kubelet          Node multinode-551471-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m27s (x2 over 3m27s)  kubelet          Node multinode-551471-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m27s (x2 over 3m27s)  kubelet          Node multinode-551471-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m27s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                3m8s                   kubelet          Node multinode-551471-m02 status is now: NodeReady
	  Normal  NodeNotReady             104s                   node-controller  Node multinode-551471-m02 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.056065] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.162444] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.127583] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +0.266054] systemd-fstab-generator[664]: Ignoring "noauto" option for root device
	[  +4.266928] systemd-fstab-generator[762]: Ignoring "noauto" option for root device
	[  +4.621521] systemd-fstab-generator[941]: Ignoring "noauto" option for root device
	[  +0.063120] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.513551] systemd-fstab-generator[1283]: Ignoring "noauto" option for root device
	[  +0.090341] kauditd_printk_skb: 69 callbacks suppressed
	[ +14.106804] kauditd_printk_skb: 21 callbacks suppressed
	[  +0.031004] systemd-fstab-generator[1499]: Ignoring "noauto" option for root device
	[  +5.058521] kauditd_printk_skb: 59 callbacks suppressed
	[Aug 6 07:52] kauditd_printk_skb: 14 callbacks suppressed
	[Aug 6 07:58] systemd-fstab-generator[2808]: Ignoring "noauto" option for root device
	[  +0.151533] systemd-fstab-generator[2820]: Ignoring "noauto" option for root device
	[  +0.178308] systemd-fstab-generator[2834]: Ignoring "noauto" option for root device
	[  +0.149957] systemd-fstab-generator[2846]: Ignoring "noauto" option for root device
	[  +0.294232] systemd-fstab-generator[2874]: Ignoring "noauto" option for root device
	[  +6.612031] systemd-fstab-generator[2973]: Ignoring "noauto" option for root device
	[  +0.078076] kauditd_printk_skb: 100 callbacks suppressed
	[  +1.911378] systemd-fstab-generator[3099]: Ignoring "noauto" option for root device
	[  +5.764827] kauditd_printk_skb: 74 callbacks suppressed
	[ +10.450574] systemd-fstab-generator[3885]: Ignoring "noauto" option for root device
	[  +0.106825] kauditd_printk_skb: 32 callbacks suppressed
	[Aug 6 07:59] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [4b56e43215ad57bc3f2aed6d531db767cca6cf0f89b2f01bb78c33d7c3d76692] <==
	{"level":"info","ts":"2024-08-06T07:52:28.96643Z","caller":"traceutil/trace.go:171","msg":"trace[2077002954] transaction","detail":"{read_only:false; response_revision:447; number_of_response:1; }","duration":"235.59679ms","start":"2024-08-06T07:52:28.730798Z","end":"2024-08-06T07:52:28.966394Z","steps":["trace[2077002954] 'process raft request'  (duration: 215.628153ms)","trace[2077002954] 'compare'  (duration: 19.654704ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-06T07:52:28.967713Z","caller":"traceutil/trace.go:171","msg":"trace[321245351] linearizableReadLoop","detail":"{readStateIndex:466; appliedIndex:465; }","duration":"225.620073ms","start":"2024-08-06T07:52:28.742079Z","end":"2024-08-06T07:52:28.967699Z","steps":["trace[321245351] 'read index received'  (duration: 204.464103ms)","trace[321245351] 'applied index is now lower than readState.Index'  (duration: 21.154162ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-06T07:52:28.967985Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"225.808558ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/csinodes/multinode-551471-m02\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-06T07:52:28.968048Z","caller":"traceutil/trace.go:171","msg":"trace[1656718840] range","detail":"{range_begin:/registry/csinodes/multinode-551471-m02; range_end:; response_count:0; response_revision:447; }","duration":"225.986238ms","start":"2024-08-06T07:52:28.742054Z","end":"2024-08-06T07:52:28.96804Z","steps":["trace[1656718840] 'agreement among raft nodes before linearized reading'  (duration: 225.763862ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-06T07:52:28.970844Z","caller":"traceutil/trace.go:171","msg":"trace[577163816] transaction","detail":"{read_only:false; response_revision:448; number_of_response:1; }","duration":"200.280697ms","start":"2024-08-06T07:52:28.770555Z","end":"2024-08-06T07:52:28.970836Z","steps":["trace[577163816] 'process raft request'  (duration: 196.222458ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-06T07:52:28.971106Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"202.8304ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1116"}
	{"level":"info","ts":"2024-08-06T07:52:28.978077Z","caller":"traceutil/trace.go:171","msg":"trace[1187038607] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:448; }","duration":"209.823934ms","start":"2024-08-06T07:52:28.76824Z","end":"2024-08-06T07:52:28.978064Z","steps":["trace[1187038607] 'agreement among raft nodes before linearized reading'  (duration: 202.786148ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-06T07:52:28.971165Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"223.038702ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-551471-m02\" ","response":"range_response_count:1 size:1926"}
	{"level":"info","ts":"2024-08-06T07:52:28.978174Z","caller":"traceutil/trace.go:171","msg":"trace[281815515] range","detail":"{range_begin:/registry/minions/multinode-551471-m02; range_end:; response_count:1; response_revision:448; }","duration":"230.079213ms","start":"2024-08-06T07:52:28.748089Z","end":"2024-08-06T07:52:28.978168Z","steps":["trace[281815515] 'agreement among raft nodes before linearized reading'  (duration: 223.016088ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-06T07:53:27.536534Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"156.273853ms","expected-duration":"100ms","prefix":"","request":"header:<ID:12886365504069230174 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:579 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1028 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-08-06T07:53:27.537422Z","caller":"traceutil/trace.go:171","msg":"trace[108812265] transaction","detail":"{read_only:false; response_revision:585; number_of_response:1; }","duration":"232.771963ms","start":"2024-08-06T07:53:27.304617Z","end":"2024-08-06T07:53:27.537389Z","steps":["trace[108812265] 'process raft request'  (duration: 74.833807ms)","trace[108812265] 'compare'  (duration: 156.172305ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-06T07:53:27.537452Z","caller":"traceutil/trace.go:171","msg":"trace[2049791661] linearizableReadLoop","detail":"{readStateIndex:622; appliedIndex:620; }","duration":"136.946587ms","start":"2024-08-06T07:53:27.400494Z","end":"2024-08-06T07:53:27.53744Z","steps":["trace[2049791661] 'read index received'  (duration: 135.091798ms)","trace[2049791661] 'applied index is now lower than readState.Index'  (duration: 1.854167ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-06T07:53:27.537684Z","caller":"traceutil/trace.go:171","msg":"trace[2045106609] transaction","detail":"{read_only:false; response_revision:586; number_of_response:1; }","duration":"165.131551ms","start":"2024-08-06T07:53:27.372544Z","end":"2024-08-06T07:53:27.537676Z","steps":["trace[2045106609] 'process raft request'  (duration: 164.830358ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-06T07:53:27.537839Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"137.331754ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-551471-m03\" ","response":"range_response_count:1 size:1926"}
	{"level":"info","ts":"2024-08-06T07:53:27.537905Z","caller":"traceutil/trace.go:171","msg":"trace[1002770888] range","detail":"{range_begin:/registry/minions/multinode-551471-m03; range_end:; response_count:1; response_revision:586; }","duration":"137.423354ms","start":"2024-08-06T07:53:27.40047Z","end":"2024-08-06T07:53:27.537893Z","steps":["trace[1002770888] 'agreement among raft nodes before linearized reading'  (duration: 137.218666ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-06T07:56:42.64652Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-08-06T07:56:42.646638Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"multinode-551471","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.186:2380"],"advertise-client-urls":["https://192.168.39.186:2379"]}
	{"level":"warn","ts":"2024-08-06T07:56:42.650128Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-06T07:56:42.650268Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-06T07:56:42.707739Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.186:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-06T07:56:42.707876Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.186:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-06T07:56:42.707997Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"1bfd5d64eb00b2d5","current-leader-member-id":"1bfd5d64eb00b2d5"}
	{"level":"info","ts":"2024-08-06T07:56:42.712382Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.186:2380"}
	{"level":"info","ts":"2024-08-06T07:56:42.712535Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.186:2380"}
	{"level":"info","ts":"2024-08-06T07:56:42.712566Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"multinode-551471","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.186:2380"],"advertise-client-urls":["https://192.168.39.186:2379"]}
	
	
	==> etcd [91ce52b52a5faa28ab026d7d550e9c3edc2df4766c37591a5aa778d3c8b4ffee] <==
	{"level":"info","ts":"2024-08-06T07:58:24.878101Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1bfd5d64eb00b2d5 switched to configuration voters=(2016870896152654549)"}
	{"level":"info","ts":"2024-08-06T07:58:24.878169Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"7d06a36b1777ee5c","local-member-id":"1bfd5d64eb00b2d5","added-peer-id":"1bfd5d64eb00b2d5","added-peer-peer-urls":["https://192.168.39.186:2380"]}
	{"level":"info","ts":"2024-08-06T07:58:24.878402Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"7d06a36b1777ee5c","local-member-id":"1bfd5d64eb00b2d5","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-06T07:58:24.878455Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-06T07:58:24.889927Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-06T07:58:24.892543Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"1bfd5d64eb00b2d5","initial-advertise-peer-urls":["https://192.168.39.186:2380"],"listen-peer-urls":["https://192.168.39.186:2380"],"advertise-client-urls":["https://192.168.39.186:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.186:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-06T07:58:24.892606Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-06T07:58:24.892756Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.186:2380"}
	{"level":"info","ts":"2024-08-06T07:58:24.892783Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.186:2380"}
	{"level":"info","ts":"2024-08-06T07:58:26.698225Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1bfd5d64eb00b2d5 is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-06T07:58:26.698297Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1bfd5d64eb00b2d5 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-06T07:58:26.698418Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1bfd5d64eb00b2d5 received MsgPreVoteResp from 1bfd5d64eb00b2d5 at term 2"}
	{"level":"info","ts":"2024-08-06T07:58:26.698437Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1bfd5d64eb00b2d5 became candidate at term 3"}
	{"level":"info","ts":"2024-08-06T07:58:26.698443Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1bfd5d64eb00b2d5 received MsgVoteResp from 1bfd5d64eb00b2d5 at term 3"}
	{"level":"info","ts":"2024-08-06T07:58:26.698452Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1bfd5d64eb00b2d5 became leader at term 3"}
	{"level":"info","ts":"2024-08-06T07:58:26.698459Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 1bfd5d64eb00b2d5 elected leader 1bfd5d64eb00b2d5 at term 3"}
	{"level":"info","ts":"2024-08-06T07:58:26.705923Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"1bfd5d64eb00b2d5","local-member-attributes":"{Name:multinode-551471 ClientURLs:[https://192.168.39.186:2379]}","request-path":"/0/members/1bfd5d64eb00b2d5/attributes","cluster-id":"7d06a36b1777ee5c","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-06T07:58:26.705933Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-06T07:58:26.706204Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-06T07:58:26.706308Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-06T07:58:26.70638Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-06T07:58:26.708313Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-06T07:58:26.708464Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.186:2379"}
	{"level":"warn","ts":"2024-08-06T07:59:07.426441Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"100.363881ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-551471-m02\" ","response":"range_response_count:1 size:3118"}
	{"level":"info","ts":"2024-08-06T07:59:07.426721Z","caller":"traceutil/trace.go:171","msg":"trace[446410568] range","detail":"{range_begin:/registry/minions/multinode-551471-m02; range_end:; response_count:1; response_revision:1042; }","duration":"100.702503ms","start":"2024-08-06T07:59:07.325995Z","end":"2024-08-06T07:59:07.426698Z","steps":["trace[446410568] 'range keys from in-memory index tree'  (duration: 100.167791ms)"],"step_count":1}
	
	
	==> kernel <==
	 08:02:30 up 11 min,  0 users,  load average: 0.11, 0.18, 0.11
	Linux multinode-551471 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [c9e8ef0973a44f9939c9abcc18a63a3f102a88e15aaabd57f22a28bab85519b1] <==
	I0806 08:01:30.411569       1 main.go:322] Node multinode-551471-m02 has CIDR [10.244.1.0/24] 
	I0806 08:01:40.414133       1 main.go:295] Handling node with IPs: map[192.168.39.134:{}]
	I0806 08:01:40.414186       1 main.go:322] Node multinode-551471-m02 has CIDR [10.244.1.0/24] 
	I0806 08:01:40.414403       1 main.go:295] Handling node with IPs: map[192.168.39.186:{}]
	I0806 08:01:40.414427       1 main.go:299] handling current node
	I0806 08:01:50.411798       1 main.go:295] Handling node with IPs: map[192.168.39.186:{}]
	I0806 08:01:50.411909       1 main.go:299] handling current node
	I0806 08:01:50.411937       1 main.go:295] Handling node with IPs: map[192.168.39.134:{}]
	I0806 08:01:50.411954       1 main.go:322] Node multinode-551471-m02 has CIDR [10.244.1.0/24] 
	I0806 08:02:00.410881       1 main.go:295] Handling node with IPs: map[192.168.39.186:{}]
	I0806 08:02:00.410955       1 main.go:299] handling current node
	I0806 08:02:00.410974       1 main.go:295] Handling node with IPs: map[192.168.39.134:{}]
	I0806 08:02:00.410982       1 main.go:322] Node multinode-551471-m02 has CIDR [10.244.1.0/24] 
	I0806 08:02:10.419938       1 main.go:295] Handling node with IPs: map[192.168.39.186:{}]
	I0806 08:02:10.420041       1 main.go:299] handling current node
	I0806 08:02:10.420069       1 main.go:295] Handling node with IPs: map[192.168.39.134:{}]
	I0806 08:02:10.420086       1 main.go:322] Node multinode-551471-m02 has CIDR [10.244.1.0/24] 
	I0806 08:02:20.414630       1 main.go:295] Handling node with IPs: map[192.168.39.186:{}]
	I0806 08:02:20.414680       1 main.go:299] handling current node
	I0806 08:02:20.414703       1 main.go:295] Handling node with IPs: map[192.168.39.134:{}]
	I0806 08:02:20.414709       1 main.go:322] Node multinode-551471-m02 has CIDR [10.244.1.0/24] 
	I0806 08:02:30.411265       1 main.go:295] Handling node with IPs: map[192.168.39.186:{}]
	I0806 08:02:30.411646       1 main.go:299] handling current node
	I0806 08:02:30.411715       1 main.go:295] Handling node with IPs: map[192.168.39.134:{}]
	I0806 08:02:30.411737       1 main.go:322] Node multinode-551471-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kindnet [d7923487019be61d3f46ec909607d2c85a99a8b64a5c00f89ef7bf1200b38cd4] <==
	I0806 07:55:59.424142       1 main.go:322] Node multinode-551471-m03 has CIDR [10.244.3.0/24] 
	I0806 07:56:09.422456       1 main.go:295] Handling node with IPs: map[192.168.39.189:{}]
	I0806 07:56:09.422500       1 main.go:322] Node multinode-551471-m03 has CIDR [10.244.3.0/24] 
	I0806 07:56:09.422620       1 main.go:295] Handling node with IPs: map[192.168.39.186:{}]
	I0806 07:56:09.422644       1 main.go:299] handling current node
	I0806 07:56:09.422656       1 main.go:295] Handling node with IPs: map[192.168.39.134:{}]
	I0806 07:56:09.422661       1 main.go:322] Node multinode-551471-m02 has CIDR [10.244.1.0/24] 
	I0806 07:56:19.415204       1 main.go:295] Handling node with IPs: map[192.168.39.186:{}]
	I0806 07:56:19.415317       1 main.go:299] handling current node
	I0806 07:56:19.415426       1 main.go:295] Handling node with IPs: map[192.168.39.134:{}]
	I0806 07:56:19.415467       1 main.go:322] Node multinode-551471-m02 has CIDR [10.244.1.0/24] 
	I0806 07:56:19.415619       1 main.go:295] Handling node with IPs: map[192.168.39.189:{}]
	I0806 07:56:19.415642       1 main.go:322] Node multinode-551471-m03 has CIDR [10.244.3.0/24] 
	I0806 07:56:29.421483       1 main.go:295] Handling node with IPs: map[192.168.39.186:{}]
	I0806 07:56:29.421602       1 main.go:299] handling current node
	I0806 07:56:29.421653       1 main.go:295] Handling node with IPs: map[192.168.39.134:{}]
	I0806 07:56:29.421658       1 main.go:322] Node multinode-551471-m02 has CIDR [10.244.1.0/24] 
	I0806 07:56:29.421858       1 main.go:295] Handling node with IPs: map[192.168.39.189:{}]
	I0806 07:56:29.421884       1 main.go:322] Node multinode-551471-m03 has CIDR [10.244.3.0/24] 
	I0806 07:56:39.421617       1 main.go:295] Handling node with IPs: map[192.168.39.134:{}]
	I0806 07:56:39.421855       1 main.go:322] Node multinode-551471-m02 has CIDR [10.244.1.0/24] 
	I0806 07:56:39.422012       1 main.go:295] Handling node with IPs: map[192.168.39.189:{}]
	I0806 07:56:39.422035       1 main.go:322] Node multinode-551471-m03 has CIDR [10.244.3.0/24] 
	I0806 07:56:39.422105       1 main.go:295] Handling node with IPs: map[192.168.39.186:{}]
	I0806 07:56:39.422124       1 main.go:299] handling current node
	
	
	==> kube-apiserver [3988dbeb8b095b0f961141d409ed6af31a49a47f792dce6092c49c0f2e02e14a] <==
	I0806 07:58:28.078165       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0806 07:58:28.092209       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0806 07:58:28.092421       1 policy_source.go:224] refreshing policies
	I0806 07:58:28.093886       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0806 07:58:28.096884       1 shared_informer.go:320] Caches are synced for configmaps
	I0806 07:58:28.099016       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0806 07:58:28.101852       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0806 07:58:28.113200       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0806 07:58:28.118581       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0806 07:58:28.118613       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0806 07:58:28.121498       1 aggregator.go:165] initial CRD sync complete...
	I0806 07:58:28.121543       1 autoregister_controller.go:141] Starting autoregister controller
	I0806 07:58:28.121550       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0806 07:58:28.121556       1 cache.go:39] Caches are synced for autoregister controller
	I0806 07:58:28.125150       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0806 07:58:28.137472       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	E0806 07:58:28.167900       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0806 07:58:29.012103       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0806 07:58:30.278005       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0806 07:58:30.423791       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0806 07:58:30.434209       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0806 07:58:30.500083       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0806 07:58:30.510202       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0806 07:58:41.143308       1 controller.go:615] quota admission added evaluator for: endpoints
	I0806 07:58:41.299758       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [474e29aac6711f79764feef65f30a2d86dcc7d05632b305853bee3f8a4aaecb0] <==
	I0806 07:56:42.673004       1 controller.go:167] Shutting down OpenAPI controller
	I0806 07:56:42.673038       1 naming_controller.go:302] Shutting down NamingConditionController
	I0806 07:56:42.673060       1 apf_controller.go:386] Shutting down API Priority and Fairness config worker
	I0806 07:56:42.673100       1 autoregister_controller.go:165] Shutting down autoregister controller
	I0806 07:56:42.673133       1 system_namespaces_controller.go:77] Shutting down system namespaces controller
	I0806 07:56:42.673162       1 customresource_discovery_controller.go:325] Shutting down DiscoveryController
	I0806 07:56:42.673201       1 crdregistration_controller.go:142] Shutting down crd-autoregister controller
	I0806 07:56:42.673244       1 available_controller.go:439] Shutting down AvailableConditionController
	I0806 07:56:42.673258       1 gc_controller.go:91] Shutting down apiserver lease garbage collector
	I0806 07:56:42.673302       1 controller.go:129] Ending legacy_token_tracking_controller
	I0806 07:56:42.673367       1 controller.go:130] Shutting down legacy_token_tracking_controller
	I0806 07:56:42.673387       1 apiservice_controller.go:131] Shutting down APIServiceRegistrationController
	I0806 07:56:42.673462       1 controller.go:157] Shutting down quota evaluator
	I0806 07:56:42.673496       1 controller.go:176] quota evaluator worker shutdown
	I0806 07:56:42.674085       1 controller.go:176] quota evaluator worker shutdown
	I0806 07:56:42.674119       1 controller.go:176] quota evaluator worker shutdown
	I0806 07:56:42.674125       1 controller.go:176] quota evaluator worker shutdown
	I0806 07:56:42.674140       1 controller.go:176] quota evaluator worker shutdown
	I0806 07:56:42.674513       1 dynamic_cafile_content.go:171] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0806 07:56:42.674858       1 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0806 07:56:42.675076       1 controller.go:86] Shutting down OpenAPI V3 AggregationController
	I0806 07:56:42.675117       1 controller.go:84] Shutting down OpenAPI AggregationController
	I0806 07:56:42.675221       1 dynamic_cafile_content.go:171] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0806 07:56:42.680557       1 secure_serving.go:258] Stopped listening on [::]:8443
	I0806 07:56:42.680629       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	
	
	==> kube-controller-manager [91679dd7275cf0c0d0c7264208fe3ea4af2c52d892de4f2536fd9d3a5f8a7ef9] <==
	I0806 07:59:03.304920       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-551471-m02" podCIDRs=["10.244.1.0/24"]
	I0806 07:59:05.178825       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="60.697µs"
	I0806 07:59:05.205643       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="42.452µs"
	I0806 07:59:05.221186       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="105.666µs"
	I0806 07:59:05.251197       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="41.889µs"
	I0806 07:59:05.260315       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="40.95µs"
	I0806 07:59:05.262532       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="39.298µs"
	I0806 07:59:11.534844       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="49.483µs"
	I0806 07:59:22.998158       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-551471-m02"
	I0806 07:59:23.021791       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="106.353µs"
	I0806 07:59:23.035957       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="50.016µs"
	I0806 07:59:26.456854       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="5.398406ms"
	I0806 07:59:26.457244       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="40.818µs"
	I0806 07:59:42.341679       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-551471-m02"
	I0806 07:59:43.336639       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-551471-m03\" does not exist"
	I0806 07:59:43.337180       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-551471-m02"
	I0806 07:59:43.360176       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-551471-m03" podCIDRs=["10.244.2.0/24"]
	I0806 08:00:03.140401       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-551471-m02"
	I0806 08:00:08.727583       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-551471-m02"
	I0806 08:00:46.325985       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="18.803615ms"
	I0806 08:00:46.327455       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="146.691µs"
	I0806 08:01:01.037769       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-mbm4z"
	I0806 08:01:01.067176       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-mbm4z"
	I0806 08:01:01.067268       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-m6r5n"
	I0806 08:01:01.096826       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-m6r5n"
	
	
	==> kube-controller-manager [beb56425a40e2135964c6eb3931935babc4cf81b32a1e8c70735f1995effbb77] <==
	I0806 07:52:28.973595       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-551471-m02\" does not exist"
	I0806 07:52:29.023121       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-551471-m02" podCIDRs=["10.244.1.0/24"]
	I0806 07:52:33.027484       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-551471-m02"
	I0806 07:52:48.053605       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-551471-m02"
	I0806 07:52:50.599828       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="51.143455ms"
	I0806 07:52:50.644059       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="44.018782ms"
	I0806 07:52:50.644151       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="41.932µs"
	I0806 07:52:50.646997       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="46.389µs"
	I0806 07:52:53.821429       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="19.468204ms"
	I0806 07:52:53.821665       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="42.322µs"
	I0806 07:52:54.780428       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.985158ms"
	I0806 07:52:54.780639       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="44.893µs"
	I0806 07:53:27.544709       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-551471-m02"
	I0806 07:53:27.552773       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-551471-m03\" does not exist"
	I0806 07:53:27.580971       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-551471-m03" podCIDRs=["10.244.2.0/24"]
	I0806 07:53:28.046201       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-551471-m03"
	I0806 07:53:47.841607       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-551471-m02"
	I0806 07:54:16.274381       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-551471-m02"
	I0806 07:54:17.307217       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-551471-m02"
	I0806 07:54:17.308838       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-551471-m03\" does not exist"
	I0806 07:54:17.320721       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-551471-m03" podCIDRs=["10.244.3.0/24"]
	I0806 07:54:37.089250       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-551471-m02"
	I0806 07:55:18.102837       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-551471-m02"
	I0806 07:55:23.201497       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="33.341894ms"
	I0806 07:55:23.201819       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="101.88µs"
	
	
	==> kube-proxy [74e91b8e25d98b20fd885a292fff4c3200fc87c06bdea6de195993b8d96c6480] <==
	I0806 07:51:45.998060       1 server_linux.go:69] "Using iptables proxy"
	I0806 07:51:46.013068       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.186"]
	I0806 07:51:46.049454       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0806 07:51:46.049500       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0806 07:51:46.049516       1 server_linux.go:165] "Using iptables Proxier"
	I0806 07:51:46.052468       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0806 07:51:46.052831       1 server.go:872] "Version info" version="v1.30.3"
	I0806 07:51:46.052844       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0806 07:51:46.054833       1 config.go:192] "Starting service config controller"
	I0806 07:51:46.055188       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0806 07:51:46.055248       1 config.go:101] "Starting endpoint slice config controller"
	I0806 07:51:46.055266       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0806 07:51:46.057212       1 config.go:319] "Starting node config controller"
	I0806 07:51:46.057252       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0806 07:51:46.155924       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0806 07:51:46.155994       1 shared_informer.go:320] Caches are synced for service config
	I0806 07:51:46.157420       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [7796edadd615e4a6b72780d99c72833e29376e47f21ace8d1b9846c888ce57ac] <==
	I0806 07:58:29.738094       1 server_linux.go:69] "Using iptables proxy"
	I0806 07:58:29.767124       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.186"]
	I0806 07:58:29.915698       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0806 07:58:29.915734       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0806 07:58:29.915756       1 server_linux.go:165] "Using iptables Proxier"
	I0806 07:58:29.927588       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0806 07:58:29.927810       1 server.go:872] "Version info" version="v1.30.3"
	I0806 07:58:29.927848       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0806 07:58:29.933640       1 config.go:192] "Starting service config controller"
	I0806 07:58:29.933680       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0806 07:58:29.933707       1 config.go:101] "Starting endpoint slice config controller"
	I0806 07:58:29.933711       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0806 07:58:29.934317       1 config.go:319] "Starting node config controller"
	I0806 07:58:29.939416       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0806 07:58:30.035472       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0806 07:58:30.035490       1 shared_informer.go:320] Caches are synced for service config
	I0806 07:58:30.040422       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [1cf9151b1941b0e938e94a6d7247c74e3d0e1f26fddaf0350cc2cb6a211618a7] <==
	E0806 07:51:27.637881       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0806 07:51:27.633745       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0806 07:51:27.637940       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0806 07:51:28.439025       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0806 07:51:28.439190       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0806 07:51:28.478253       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0806 07:51:28.478406       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0806 07:51:28.485809       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0806 07:51:28.485975       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0806 07:51:28.707587       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0806 07:51:28.707637       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0806 07:51:28.776183       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0806 07:51:28.776275       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0806 07:51:28.815277       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0806 07:51:28.815838       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0806 07:51:28.859575       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0806 07:51:28.859634       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0806 07:51:28.881894       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0806 07:51:28.882001       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0806 07:51:28.966413       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0806 07:51:28.966525       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0806 07:51:31.426619       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0806 07:56:42.642844       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0806 07:56:42.646986       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	E0806 07:56:42.651740       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [f179bdc8d4add339e4a2b7c1af524fafe3d8f5fb06c47686220ddf2a8b0ce165] <==
	I0806 07:58:25.814301       1 serving.go:380] Generated self-signed cert in-memory
	W0806 07:58:28.057491       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0806 07:58:28.057578       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0806 07:58:28.057589       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0806 07:58:28.057595       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0806 07:58:28.107875       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0806 07:58:28.107917       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0806 07:58:28.109635       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0806 07:58:28.109789       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0806 07:58:28.109827       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0806 07:58:28.109842       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0806 07:58:28.210668       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 06 07:58:28 multinode-551471 kubelet[3106]: I0806 07:58:28.719814    3106 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8497743e-1315-4ab3-9097-9170fb8a373b-lib-modules\") pod \"kube-proxy-8t88h\" (UID: \"8497743e-1315-4ab3-9097-9170fb8a373b\") " pod="kube-system/kube-proxy-8t88h"
	Aug 06 07:58:28 multinode-551471 kubelet[3106]: I0806 07:58:28.719884    3106 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/3b41a9dc-8b36-4c97-ba13-e8d86bbed5b1-tmp\") pod \"storage-provisioner\" (UID: \"3b41a9dc-8b36-4c97-ba13-e8d86bbed5b1\") " pod="kube-system/storage-provisioner"
	Aug 06 07:58:28 multinode-551471 kubelet[3106]: I0806 07:58:28.719939    3106 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/51aee9c7-0107-4ffb-a335-2bb3b56f0882-cni-cfg\") pod \"kindnet-9dc7g\" (UID: \"51aee9c7-0107-4ffb-a335-2bb3b56f0882\") " pod="kube-system/kindnet-9dc7g"
	Aug 06 07:58:28 multinode-551471 kubelet[3106]: I0806 07:58:28.720117    3106 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8497743e-1315-4ab3-9097-9170fb8a373b-xtables-lock\") pod \"kube-proxy-8t88h\" (UID: \"8497743e-1315-4ab3-9097-9170fb8a373b\") " pod="kube-system/kube-proxy-8t88h"
	Aug 06 07:58:37 multinode-551471 kubelet[3106]: I0806 07:58:37.284625    3106 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Aug 06 07:59:23 multinode-551471 kubelet[3106]: E0806 07:59:23.742436    3106 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 06 07:59:23 multinode-551471 kubelet[3106]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 06 07:59:23 multinode-551471 kubelet[3106]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 06 07:59:23 multinode-551471 kubelet[3106]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 06 07:59:23 multinode-551471 kubelet[3106]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 06 08:00:23 multinode-551471 kubelet[3106]: E0806 08:00:23.745730    3106 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 06 08:00:23 multinode-551471 kubelet[3106]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 06 08:00:23 multinode-551471 kubelet[3106]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 06 08:00:23 multinode-551471 kubelet[3106]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 06 08:00:23 multinode-551471 kubelet[3106]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 06 08:01:23 multinode-551471 kubelet[3106]: E0806 08:01:23.743582    3106 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 06 08:01:23 multinode-551471 kubelet[3106]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 06 08:01:23 multinode-551471 kubelet[3106]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 06 08:01:23 multinode-551471 kubelet[3106]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 06 08:01:23 multinode-551471 kubelet[3106]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 06 08:02:23 multinode-551471 kubelet[3106]: E0806 08:02:23.742574    3106 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 06 08:02:23 multinode-551471 kubelet[3106]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 06 08:02:23 multinode-551471 kubelet[3106]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 06 08:02:23 multinode-551471 kubelet[3106]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 06 08:02:23 multinode-551471 kubelet[3106]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0806 08:02:29.659625   51663 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19370-13057/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-551471 -n multinode-551471
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-551471 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopMultiNode (141.44s)
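The "bufio.Scanner: token too long" failure in the stderr block above is Go's bufio.ErrTooLong: a Scanner refuses any single line larger than bufio.MaxScanTokenSize (64 KiB by default), and one very long line in lastStart.txt trips that limit. A minimal sketch, assuming a log reader that only needs a larger scan buffer (the file name and the 1 MiB cap are illustrative, not minikube's actual values):

	package main

	import (
		"bufio"
		"fmt"
		"log"
		"os"
	)

	func main() {
		// Illustrative path; the report above reads .minikube/logs/lastStart.txt.
		f, err := os.Open("lastStart.txt")
		if err != nil {
			log.Fatal(err)
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		// The default cap is bufio.MaxScanTokenSize (64 KiB); raising it keeps one
		// oversized line from failing with "bufio.Scanner: token too long".
		sc.Buffer(make([]byte, 0, 64*1024), 1024*1024)

		for sc.Scan() {
			fmt.Println(sc.Text())
		}
		if err := sc.Err(); err != nil {
			log.Fatal(err)
		}
	}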

                                                
                                    
x
+
TestPreload (278.7s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-561409 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-561409 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (2m15.660112598s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-561409 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-561409 image pull gcr.io/k8s-minikube/busybox: (3.07237605s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-561409
E0806 08:08:56.254297   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/addons-505457/client.crt: no such file or directory
E0806 08:09:13.203462   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/addons-505457/client.crt: no such file or directory
preload_test.go:58: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p test-preload-561409: exit status 82 (2m0.467117442s)

                                                
                                                
-- stdout --
	* Stopping node "test-preload-561409"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
preload_test.go:60: out/minikube-linux-amd64 stop -p test-preload-561409 failed: exit status 82
panic.go:626: *** TestPreload FAILED at 2024-08-06 08:10:45.547532652 +0000 UTC m=+3968.772343082
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-561409 -n test-preload-561409
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-561409 -n test-preload-561409: exit status 3 (18.588491413s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0806 08:11:04.132734   54589 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.144:22: connect: no route to host
	E0806 08:11:04.132754   54589 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.144:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "test-preload-561409" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "test-preload-561409" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-561409
--- FAIL: TestPreload (278.70s)
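The exit status 82 above is minikube's GUEST_STOP_TIMEOUT: the stop command keeps seeing the VM in state "Running" until its roughly two-minute budget is exhausted. A rough sketch of that poll-until-stopped pattern, purely illustrative rather than minikube's actual stop path (stopVM and vmState are hypothetical stand-ins for a driver's stop and status calls):

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// Hypothetical stand-ins for a machine driver's stop request and state query.
	func stopVM() error   { return nil }
	func vmState() string { return "Running" }

	// stopWithTimeout re-issues the stop request and polls the reported state
	// until the VM is "Stopped" or the deadline passes, at which point it gives
	// up the same way the stop attempt in the test run above did.
	func stopWithTimeout(timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if err := stopVM(); err != nil {
				return err
			}
			if vmState() == "Stopped" {
				return nil
			}
			time.Sleep(5 * time.Second)
		}
		return errors.New(`unable to stop vm, current state "Running"`)
	}

	func main() {
		if err := stopWithTimeout(2 * time.Minute); err != nil {
			fmt.Println("GUEST_STOP_TIMEOUT:", err)
		}
	}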

                                                
                                    
x
+
TestKubernetesUpgrade (464.53s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-122219 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E0806 08:16:04.870949   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/functional-811691/client.crt: no such file or directory
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-122219 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (4m51.593841172s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-122219] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19370
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19370-13057/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19370-13057/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-122219" primary control-plane node in "kubernetes-upgrade-122219" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0806 08:15:51.584228   60490 out.go:291] Setting OutFile to fd 1 ...
	I0806 08:15:51.584407   60490 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 08:15:51.584420   60490 out.go:304] Setting ErrFile to fd 2...
	I0806 08:15:51.584429   60490 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 08:15:51.584713   60490 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19370-13057/.minikube/bin
	I0806 08:15:51.585859   60490 out.go:298] Setting JSON to false
	I0806 08:15:51.587554   60490 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":7098,"bootTime":1722925054,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0806 08:15:51.587642   60490 start.go:139] virtualization: kvm guest
	I0806 08:15:51.589704   60490 out.go:177] * [kubernetes-upgrade-122219] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0806 08:15:51.591385   60490 notify.go:220] Checking for updates...
	I0806 08:15:51.591393   60490 out.go:177]   - MINIKUBE_LOCATION=19370
	I0806 08:15:51.592795   60490 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0806 08:15:51.594006   60490 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19370-13057/kubeconfig
	I0806 08:15:51.595300   60490 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19370-13057/.minikube
	I0806 08:15:51.596628   60490 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0806 08:15:51.597962   60490 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0806 08:15:51.599923   60490 config.go:182] Loaded profile config "NoKubernetes-681138": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I0806 08:15:51.600081   60490 config.go:182] Loaded profile config "cert-expiration-739400": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0806 08:15:51.600215   60490 driver.go:392] Setting default libvirt URI to qemu:///system
	I0806 08:15:51.639297   60490 out.go:177] * Using the kvm2 driver based on user configuration
	I0806 08:15:51.640984   60490 start.go:297] selected driver: kvm2
	I0806 08:15:51.641007   60490 start.go:901] validating driver "kvm2" against <nil>
	I0806 08:15:51.641023   60490 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0806 08:15:51.642164   60490 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 08:15:51.642286   60490 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19370-13057/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0806 08:15:51.658215   60490 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0806 08:15:51.658271   60490 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0806 08:15:51.658470   60490 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0806 08:15:51.658522   60490 cni.go:84] Creating CNI manager for ""
	I0806 08:15:51.658533   60490 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0806 08:15:51.658545   60490 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0806 08:15:51.658595   60490 start.go:340] cluster config:
	{Name:kubernetes-upgrade-122219 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-122219 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 08:15:51.658703   60490 iso.go:125] acquiring lock: {Name:mke58d71de28babe305c5eb0c31c274e9713bb3f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 08:15:51.660601   60490 out.go:177] * Starting "kubernetes-upgrade-122219" primary control-plane node in "kubernetes-upgrade-122219" cluster
	I0806 08:15:51.661976   60490 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0806 08:15:51.662023   60490 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19370-13057/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0806 08:15:51.662036   60490 cache.go:56] Caching tarball of preloaded images
	I0806 08:15:51.662116   60490 preload.go:172] Found /home/jenkins/minikube-integration/19370-13057/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0806 08:15:51.662125   60490 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0806 08:15:51.662205   60490 profile.go:143] Saving config to /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/kubernetes-upgrade-122219/config.json ...
	I0806 08:15:51.662221   60490 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/kubernetes-upgrade-122219/config.json: {Name:mk3ad84492f97b3476f65c25a4c8dad416f02f59 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 08:15:51.662350   60490 start.go:360] acquireMachinesLock for kubernetes-upgrade-122219: {Name:mkc602ac9c8cc3360dcc0fcb8104303837aaab61 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0806 08:16:13.045509   60490 start.go:364] duration metric: took 21.383123627s to acquireMachinesLock for "kubernetes-upgrade-122219"
	I0806 08:16:13.045597   60490 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-122219 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-122219 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0806 08:16:13.045714   60490 start.go:125] createHost starting for "" (driver="kvm2")
	I0806 08:16:13.048945   60490 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0806 08:16:13.049179   60490 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:16:13.049249   60490 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:16:13.070270   60490 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35759
	I0806 08:16:13.070846   60490 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:16:13.071505   60490 main.go:141] libmachine: Using API Version  1
	I0806 08:16:13.071525   60490 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:16:13.071925   60490 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:16:13.072096   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) Calling .GetMachineName
	I0806 08:16:13.072270   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) Calling .DriverName
	I0806 08:16:13.072463   60490 start.go:159] libmachine.API.Create for "kubernetes-upgrade-122219" (driver="kvm2")
	I0806 08:16:13.072498   60490 client.go:168] LocalClient.Create starting
	I0806 08:16:13.072545   60490 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem
	I0806 08:16:13.072585   60490 main.go:141] libmachine: Decoding PEM data...
	I0806 08:16:13.072613   60490 main.go:141] libmachine: Parsing certificate...
	I0806 08:16:13.072679   60490 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19370-13057/.minikube/certs/cert.pem
	I0806 08:16:13.072703   60490 main.go:141] libmachine: Decoding PEM data...
	I0806 08:16:13.072714   60490 main.go:141] libmachine: Parsing certificate...
	I0806 08:16:13.072728   60490 main.go:141] libmachine: Running pre-create checks...
	I0806 08:16:13.072737   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) Calling .PreCreateCheck
	I0806 08:16:13.073171   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) Calling .GetConfigRaw
	I0806 08:16:13.073617   60490 main.go:141] libmachine: Creating machine...
	I0806 08:16:13.073633   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) Calling .Create
	I0806 08:16:13.073762   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) Creating KVM machine...
	I0806 08:16:13.075374   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | found existing default KVM network
	I0806 08:16:13.076954   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | I0806 08:16:13.076760   60651 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:02:0b:6e} reservation:<nil>}
	I0806 08:16:13.078274   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | I0806 08:16:13.078193   60651 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000209e10}
	I0806 08:16:13.078351   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | created network xml: 
	I0806 08:16:13.078371   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | <network>
	I0806 08:16:13.078382   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG |   <name>mk-kubernetes-upgrade-122219</name>
	I0806 08:16:13.078397   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG |   <dns enable='no'/>
	I0806 08:16:13.078408   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG |   
	I0806 08:16:13.078417   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0806 08:16:13.078429   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG |     <dhcp>
	I0806 08:16:13.078441   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0806 08:16:13.078455   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG |     </dhcp>
	I0806 08:16:13.078465   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG |   </ip>
	I0806 08:16:13.078493   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG |   
	I0806 08:16:13.078517   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | </network>
	I0806 08:16:13.078542   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | 
	I0806 08:16:13.084587   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | trying to create private KVM network mk-kubernetes-upgrade-122219 192.168.50.0/24...
	I0806 08:16:13.155263   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | private KVM network mk-kubernetes-upgrade-122219 192.168.50.0/24 created
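
	Note: the DBG lines above show the exact network XML minikube generated before defining the isolated mk-kubernetes-upgrade-122219 network. As an illustration only (not minikube's own code), a minimal Go sketch of that define-and-start step, assuming the libvirt.org/go/libvirt bindings and reusing the XML from the log:

	// Sketch: define and start the private KVM network logged above
	// (equivalent of `virsh net-define` + `virsh net-start`).
	package main

	import (
		"log"

		libvirt "libvirt.org/go/libvirt"
	)

	const networkXML = `<network>
	  <name>mk-kubernetes-upgrade-122219</name>
	  <dns enable='no'/>
	  <ip address='192.168.50.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.50.2' end='192.168.50.253'/>
	    </dhcp>
	  </ip>
	</network>`

	func main() {
		conn, err := libvirt.NewConnect("qemu:///system")
		if err != nil {
			log.Fatalf("connect to libvirt: %v", err)
		}
		defer conn.Close()

		net, err := conn.NetworkDefineXML(networkXML) // persistent definition
		if err != nil {
			log.Fatalf("define network: %v", err)
		}
		defer net.Free()
		if err := net.Create(); err != nil { // actually start it
			log.Fatalf("start network: %v", err)
		}
		log.Println("private KVM network created")
	}
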
	I0806 08:16:13.155302   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) Setting up store path in /home/jenkins/minikube-integration/19370-13057/.minikube/machines/kubernetes-upgrade-122219 ...
	I0806 08:16:13.155316   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) Building disk image from file:///home/jenkins/minikube-integration/19370-13057/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
	I0806 08:16:13.155335   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | I0806 08:16:13.155294   60651 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19370-13057/.minikube
	I0806 08:16:13.155506   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) Downloading /home/jenkins/minikube-integration/19370-13057/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19370-13057/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0806 08:16:13.391908   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | I0806 08:16:13.391719   60651 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19370-13057/.minikube/machines/kubernetes-upgrade-122219/id_rsa...
	I0806 08:16:13.607073   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | I0806 08:16:13.606915   60651 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19370-13057/.minikube/machines/kubernetes-upgrade-122219/kubernetes-upgrade-122219.rawdisk...
	I0806 08:16:13.607106   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | Writing magic tar header
	I0806 08:16:13.607119   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | Writing SSH key tar header
	I0806 08:16:13.607127   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | I0806 08:16:13.607031   60651 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19370-13057/.minikube/machines/kubernetes-upgrade-122219 ...
	I0806 08:16:13.607138   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19370-13057/.minikube/machines/kubernetes-upgrade-122219
	I0806 08:16:13.607227   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19370-13057/.minikube/machines
	I0806 08:16:13.607247   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19370-13057/.minikube
	I0806 08:16:13.607261   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) Setting executable bit set on /home/jenkins/minikube-integration/19370-13057/.minikube/machines/kubernetes-upgrade-122219 (perms=drwx------)
	I0806 08:16:13.607279   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) Setting executable bit set on /home/jenkins/minikube-integration/19370-13057/.minikube/machines (perms=drwxr-xr-x)
	I0806 08:16:13.607293   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19370-13057
	I0806 08:16:13.607312   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) Setting executable bit set on /home/jenkins/minikube-integration/19370-13057/.minikube (perms=drwxr-xr-x)
	I0806 08:16:13.607323   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) Setting executable bit set on /home/jenkins/minikube-integration/19370-13057 (perms=drwxrwxr-x)
	I0806 08:16:13.607333   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0806 08:16:13.607346   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0806 08:16:13.607355   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0806 08:16:13.607401   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | Checking permissions on dir: /home/jenkins
	I0806 08:16:13.607427   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) Creating domain...
	I0806 08:16:13.607436   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | Checking permissions on dir: /home
	I0806 08:16:13.607448   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | Skipping /home - not owner
	I0806 08:16:13.608671   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) define libvirt domain using xml: 
	I0806 08:16:13.608701   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) <domain type='kvm'>
	I0806 08:16:13.608715   60490 main.go:141] libmachine: (kubernetes-upgrade-122219)   <name>kubernetes-upgrade-122219</name>
	I0806 08:16:13.608725   60490 main.go:141] libmachine: (kubernetes-upgrade-122219)   <memory unit='MiB'>2200</memory>
	I0806 08:16:13.608739   60490 main.go:141] libmachine: (kubernetes-upgrade-122219)   <vcpu>2</vcpu>
	I0806 08:16:13.608748   60490 main.go:141] libmachine: (kubernetes-upgrade-122219)   <features>
	I0806 08:16:13.608760   60490 main.go:141] libmachine: (kubernetes-upgrade-122219)     <acpi/>
	I0806 08:16:13.608769   60490 main.go:141] libmachine: (kubernetes-upgrade-122219)     <apic/>
	I0806 08:16:13.608819   60490 main.go:141] libmachine: (kubernetes-upgrade-122219)     <pae/>
	I0806 08:16:13.608840   60490 main.go:141] libmachine: (kubernetes-upgrade-122219)     
	I0806 08:16:13.608851   60490 main.go:141] libmachine: (kubernetes-upgrade-122219)   </features>
	I0806 08:16:13.608863   60490 main.go:141] libmachine: (kubernetes-upgrade-122219)   <cpu mode='host-passthrough'>
	I0806 08:16:13.608872   60490 main.go:141] libmachine: (kubernetes-upgrade-122219)   
	I0806 08:16:13.608883   60490 main.go:141] libmachine: (kubernetes-upgrade-122219)   </cpu>
	I0806 08:16:13.608892   60490 main.go:141] libmachine: (kubernetes-upgrade-122219)   <os>
	I0806 08:16:13.608903   60490 main.go:141] libmachine: (kubernetes-upgrade-122219)     <type>hvm</type>
	I0806 08:16:13.608912   60490 main.go:141] libmachine: (kubernetes-upgrade-122219)     <boot dev='cdrom'/>
	I0806 08:16:13.608923   60490 main.go:141] libmachine: (kubernetes-upgrade-122219)     <boot dev='hd'/>
	I0806 08:16:13.608932   60490 main.go:141] libmachine: (kubernetes-upgrade-122219)     <bootmenu enable='no'/>
	I0806 08:16:13.608942   60490 main.go:141] libmachine: (kubernetes-upgrade-122219)   </os>
	I0806 08:16:13.608950   60490 main.go:141] libmachine: (kubernetes-upgrade-122219)   <devices>
	I0806 08:16:13.608959   60490 main.go:141] libmachine: (kubernetes-upgrade-122219)     <disk type='file' device='cdrom'>
	I0806 08:16:13.608973   60490 main.go:141] libmachine: (kubernetes-upgrade-122219)       <source file='/home/jenkins/minikube-integration/19370-13057/.minikube/machines/kubernetes-upgrade-122219/boot2docker.iso'/>
	I0806 08:16:13.608985   60490 main.go:141] libmachine: (kubernetes-upgrade-122219)       <target dev='hdc' bus='scsi'/>
	I0806 08:16:13.608998   60490 main.go:141] libmachine: (kubernetes-upgrade-122219)       <readonly/>
	I0806 08:16:13.609011   60490 main.go:141] libmachine: (kubernetes-upgrade-122219)     </disk>
	I0806 08:16:13.609025   60490 main.go:141] libmachine: (kubernetes-upgrade-122219)     <disk type='file' device='disk'>
	I0806 08:16:13.609034   60490 main.go:141] libmachine: (kubernetes-upgrade-122219)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0806 08:16:13.609053   60490 main.go:141] libmachine: (kubernetes-upgrade-122219)       <source file='/home/jenkins/minikube-integration/19370-13057/.minikube/machines/kubernetes-upgrade-122219/kubernetes-upgrade-122219.rawdisk'/>
	I0806 08:16:13.609065   60490 main.go:141] libmachine: (kubernetes-upgrade-122219)       <target dev='hda' bus='virtio'/>
	I0806 08:16:13.609086   60490 main.go:141] libmachine: (kubernetes-upgrade-122219)     </disk>
	I0806 08:16:13.609098   60490 main.go:141] libmachine: (kubernetes-upgrade-122219)     <interface type='network'>
	I0806 08:16:13.609110   60490 main.go:141] libmachine: (kubernetes-upgrade-122219)       <source network='mk-kubernetes-upgrade-122219'/>
	I0806 08:16:13.609122   60490 main.go:141] libmachine: (kubernetes-upgrade-122219)       <model type='virtio'/>
	I0806 08:16:13.609131   60490 main.go:141] libmachine: (kubernetes-upgrade-122219)     </interface>
	I0806 08:16:13.609139   60490 main.go:141] libmachine: (kubernetes-upgrade-122219)     <interface type='network'>
	I0806 08:16:13.609148   60490 main.go:141] libmachine: (kubernetes-upgrade-122219)       <source network='default'/>
	I0806 08:16:13.609160   60490 main.go:141] libmachine: (kubernetes-upgrade-122219)       <model type='virtio'/>
	I0806 08:16:13.609172   60490 main.go:141] libmachine: (kubernetes-upgrade-122219)     </interface>
	I0806 08:16:13.609182   60490 main.go:141] libmachine: (kubernetes-upgrade-122219)     <serial type='pty'>
	I0806 08:16:13.609191   60490 main.go:141] libmachine: (kubernetes-upgrade-122219)       <target port='0'/>
	I0806 08:16:13.609201   60490 main.go:141] libmachine: (kubernetes-upgrade-122219)     </serial>
	I0806 08:16:13.609209   60490 main.go:141] libmachine: (kubernetes-upgrade-122219)     <console type='pty'>
	I0806 08:16:13.609287   60490 main.go:141] libmachine: (kubernetes-upgrade-122219)       <target type='serial' port='0'/>
	I0806 08:16:13.609311   60490 main.go:141] libmachine: (kubernetes-upgrade-122219)     </console>
	I0806 08:16:13.609320   60490 main.go:141] libmachine: (kubernetes-upgrade-122219)     <rng model='virtio'>
	I0806 08:16:13.609331   60490 main.go:141] libmachine: (kubernetes-upgrade-122219)       <backend model='random'>/dev/random</backend>
	I0806 08:16:13.609341   60490 main.go:141] libmachine: (kubernetes-upgrade-122219)     </rng>
	I0806 08:16:13.609350   60490 main.go:141] libmachine: (kubernetes-upgrade-122219)     
	I0806 08:16:13.609361   60490 main.go:141] libmachine: (kubernetes-upgrade-122219)     
	I0806 08:16:13.609372   60490 main.go:141] libmachine: (kubernetes-upgrade-122219)   </devices>
	I0806 08:16:13.609382   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) </domain>
	I0806 08:16:13.609392   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) 
	I0806 08:16:13.613985   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | domain kubernetes-upgrade-122219 has defined MAC address 52:54:00:3e:90:bd in network default
	I0806 08:16:13.614569   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) Ensuring networks are active...
	I0806 08:16:13.614594   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | domain kubernetes-upgrade-122219 has defined MAC address 52:54:00:1e:81:b8 in network mk-kubernetes-upgrade-122219
	I0806 08:16:13.615329   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) Ensuring network default is active
	I0806 08:16:13.615688   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) Ensuring network mk-kubernetes-upgrade-122219 is active
	I0806 08:16:13.616288   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) Getting domain xml...
	I0806 08:16:13.617096   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) Creating domain...
	I0806 08:16:14.879166   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) Waiting to get IP...
	I0806 08:16:14.880008   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | domain kubernetes-upgrade-122219 has defined MAC address 52:54:00:1e:81:b8 in network mk-kubernetes-upgrade-122219
	I0806 08:16:14.880492   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | unable to find current IP address of domain kubernetes-upgrade-122219 in network mk-kubernetes-upgrade-122219
	I0806 08:16:14.880516   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | I0806 08:16:14.880475   60651 retry.go:31] will retry after 252.875261ms: waiting for machine to come up
	I0806 08:16:15.134980   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | domain kubernetes-upgrade-122219 has defined MAC address 52:54:00:1e:81:b8 in network mk-kubernetes-upgrade-122219
	I0806 08:16:15.135533   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | unable to find current IP address of domain kubernetes-upgrade-122219 in network mk-kubernetes-upgrade-122219
	I0806 08:16:15.135559   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | I0806 08:16:15.135491   60651 retry.go:31] will retry after 342.086152ms: waiting for machine to come up
	I0806 08:16:15.479169   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | domain kubernetes-upgrade-122219 has defined MAC address 52:54:00:1e:81:b8 in network mk-kubernetes-upgrade-122219
	I0806 08:16:15.479614   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | unable to find current IP address of domain kubernetes-upgrade-122219 in network mk-kubernetes-upgrade-122219
	I0806 08:16:15.479662   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | I0806 08:16:15.479551   60651 retry.go:31] will retry after 461.943672ms: waiting for machine to come up
	I0806 08:16:15.943074   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | domain kubernetes-upgrade-122219 has defined MAC address 52:54:00:1e:81:b8 in network mk-kubernetes-upgrade-122219
	I0806 08:16:15.943483   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | unable to find current IP address of domain kubernetes-upgrade-122219 in network mk-kubernetes-upgrade-122219
	I0806 08:16:15.943511   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | I0806 08:16:15.943433   60651 retry.go:31] will retry after 440.821268ms: waiting for machine to come up
	I0806 08:16:16.386025   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | domain kubernetes-upgrade-122219 has defined MAC address 52:54:00:1e:81:b8 in network mk-kubernetes-upgrade-122219
	I0806 08:16:16.386572   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | unable to find current IP address of domain kubernetes-upgrade-122219 in network mk-kubernetes-upgrade-122219
	I0806 08:16:16.386603   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | I0806 08:16:16.386518   60651 retry.go:31] will retry after 642.860335ms: waiting for machine to come up
	I0806 08:16:17.031550   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | domain kubernetes-upgrade-122219 has defined MAC address 52:54:00:1e:81:b8 in network mk-kubernetes-upgrade-122219
	I0806 08:16:17.031938   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | unable to find current IP address of domain kubernetes-upgrade-122219 in network mk-kubernetes-upgrade-122219
	I0806 08:16:17.031966   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | I0806 08:16:17.031889   60651 retry.go:31] will retry after 921.490994ms: waiting for machine to come up
	I0806 08:16:17.955013   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | domain kubernetes-upgrade-122219 has defined MAC address 52:54:00:1e:81:b8 in network mk-kubernetes-upgrade-122219
	I0806 08:16:17.955569   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | unable to find current IP address of domain kubernetes-upgrade-122219 in network mk-kubernetes-upgrade-122219
	I0806 08:16:17.955596   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | I0806 08:16:17.955508   60651 retry.go:31] will retry after 846.467529ms: waiting for machine to come up
	I0806 08:16:18.803899   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | domain kubernetes-upgrade-122219 has defined MAC address 52:54:00:1e:81:b8 in network mk-kubernetes-upgrade-122219
	I0806 08:16:18.804373   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | unable to find current IP address of domain kubernetes-upgrade-122219 in network mk-kubernetes-upgrade-122219
	I0806 08:16:18.804403   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | I0806 08:16:18.804312   60651 retry.go:31] will retry after 907.706283ms: waiting for machine to come up
	I0806 08:16:19.713444   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | domain kubernetes-upgrade-122219 has defined MAC address 52:54:00:1e:81:b8 in network mk-kubernetes-upgrade-122219
	I0806 08:16:19.714057   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | unable to find current IP address of domain kubernetes-upgrade-122219 in network mk-kubernetes-upgrade-122219
	I0806 08:16:19.714090   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | I0806 08:16:19.714004   60651 retry.go:31] will retry after 1.421010085s: waiting for machine to come up
	I0806 08:16:21.136265   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | domain kubernetes-upgrade-122219 has defined MAC address 52:54:00:1e:81:b8 in network mk-kubernetes-upgrade-122219
	I0806 08:16:21.136877   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | unable to find current IP address of domain kubernetes-upgrade-122219 in network mk-kubernetes-upgrade-122219
	I0806 08:16:21.136906   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | I0806 08:16:21.136806   60651 retry.go:31] will retry after 1.722583852s: waiting for machine to come up
	I0806 08:16:22.861497   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | domain kubernetes-upgrade-122219 has defined MAC address 52:54:00:1e:81:b8 in network mk-kubernetes-upgrade-122219
	I0806 08:16:22.861928   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | unable to find current IP address of domain kubernetes-upgrade-122219 in network mk-kubernetes-upgrade-122219
	I0806 08:16:22.861963   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | I0806 08:16:22.861861   60651 retry.go:31] will retry after 2.896506473s: waiting for machine to come up
	I0806 08:16:25.761271   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | domain kubernetes-upgrade-122219 has defined MAC address 52:54:00:1e:81:b8 in network mk-kubernetes-upgrade-122219
	I0806 08:16:25.761665   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | unable to find current IP address of domain kubernetes-upgrade-122219 in network mk-kubernetes-upgrade-122219
	I0806 08:16:25.761688   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | I0806 08:16:25.761616   60651 retry.go:31] will retry after 2.231289352s: waiting for machine to come up
	I0806 08:16:27.995647   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | domain kubernetes-upgrade-122219 has defined MAC address 52:54:00:1e:81:b8 in network mk-kubernetes-upgrade-122219
	I0806 08:16:27.996177   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | unable to find current IP address of domain kubernetes-upgrade-122219 in network mk-kubernetes-upgrade-122219
	I0806 08:16:27.996207   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | I0806 08:16:27.996125   60651 retry.go:31] will retry after 3.474732965s: waiting for machine to come up
	I0806 08:16:31.473406   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | domain kubernetes-upgrade-122219 has defined MAC address 52:54:00:1e:81:b8 in network mk-kubernetes-upgrade-122219
	I0806 08:16:31.473791   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | unable to find current IP address of domain kubernetes-upgrade-122219 in network mk-kubernetes-upgrade-122219
	I0806 08:16:31.473812   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | I0806 08:16:31.473747   60651 retry.go:31] will retry after 3.909321647s: waiting for machine to come up
	I0806 08:16:35.386214   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | domain kubernetes-upgrade-122219 has defined MAC address 52:54:00:1e:81:b8 in network mk-kubernetes-upgrade-122219
	I0806 08:16:35.386706   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) Found IP for machine: 192.168.50.53
	I0806 08:16:35.386738   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | domain kubernetes-upgrade-122219 has current primary IP address 192.168.50.53 and MAC address 52:54:00:1e:81:b8 in network mk-kubernetes-upgrade-122219
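
	Note: the retry.go lines above show the growing, jittered delays (252 ms up to ~3.9 s) between DHCP-lease lookups until the new domain reports an address. A rough stdlib-only Go sketch of that wait loop; lookupIP is a hypothetical placeholder, and the backoff only mirrors the logged progression:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// lookupIP stands in for matching the domain's MAC against the host
	// DHCP leases of the private libvirt network.
	func lookupIP(domain string) (string, error) {
		return "", errors.New("no lease yet")
	}

	func waitForIP(domain string, timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		delay := 250 * time.Millisecond
		for time.Now().Before(deadline) {
			if ip, err := lookupIP(domain); err == nil {
				return ip, nil
			}
			// Jitter and grow the delay, like the 252ms -> 3.9s run above.
			wait := delay + time.Duration(rand.Int63n(int64(delay)))
			fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
			time.Sleep(wait)
			delay = delay * 3 / 2
		}
		return "", fmt.Errorf("timed out waiting for IP of %q", domain)
	}

	func main() {
		if ip, err := waitForIP("kubernetes-upgrade-122219", 30*time.Second); err == nil {
			fmt.Println("Found IP for machine:", ip)
		}
	}
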
	I0806 08:16:35.386749   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) Reserving static IP address...
	I0806 08:16:35.387177   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-122219", mac: "52:54:00:1e:81:b8", ip: "192.168.50.53"} in network mk-kubernetes-upgrade-122219
	I0806 08:16:35.462925   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) Reserved static IP address: 192.168.50.53
	I0806 08:16:35.462951   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) Waiting for SSH to be available...
	I0806 08:16:35.462984   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | Getting to WaitForSSH function...
	I0806 08:16:35.465526   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | domain kubernetes-upgrade-122219 has defined MAC address 52:54:00:1e:81:b8 in network mk-kubernetes-upgrade-122219
	I0806 08:16:35.465926   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:81:b8", ip: ""} in network mk-kubernetes-upgrade-122219: {Iface:virbr3 ExpiryTime:2024-08-06 09:16:28 +0000 UTC Type:0 Mac:52:54:00:1e:81:b8 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:minikube Clientid:01:52:54:00:1e:81:b8}
	I0806 08:16:35.465958   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | domain kubernetes-upgrade-122219 has defined IP address 192.168.50.53 and MAC address 52:54:00:1e:81:b8 in network mk-kubernetes-upgrade-122219
	I0806 08:16:35.466072   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | Using SSH client type: external
	I0806 08:16:35.466099   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | Using SSH private key: /home/jenkins/minikube-integration/19370-13057/.minikube/machines/kubernetes-upgrade-122219/id_rsa (-rw-------)
	I0806 08:16:35.466147   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.53 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19370-13057/.minikube/machines/kubernetes-upgrade-122219/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0806 08:16:35.466168   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | About to run SSH command:
	I0806 08:16:35.466186   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | exit 0
	I0806 08:16:35.589020   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | SSH cmd err, output: <nil>: 
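
	Note: the "Using SSH client type: external" block above is literally /usr/bin/ssh invoked with the per-machine key to run `exit 0` as a liveness probe. An illustrative stdlib-only equivalent, with the paths and address copied from this run:

	package main

	import (
		"log"
		"os/exec"
	)

	func main() {
		// Same options as the logged external SSH probe.
		args := []string{
			"-F", "/dev/null",
			"-o", "ConnectionAttempts=3",
			"-o", "ConnectTimeout=10",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "IdentitiesOnly=yes",
			"-i", "/home/jenkins/minikube-integration/19370-13057/.minikube/machines/kubernetes-upgrade-122219/id_rsa",
			"-p", "22",
			"docker@192.168.50.53",
			"exit 0",
		}
		if out, err := exec.Command("/usr/bin/ssh", args...).CombinedOutput(); err != nil {
			log.Fatalf("ssh probe failed: %v\n%s", err, out)
		}
		log.Println("SSH is available")
	}
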
	I0806 08:16:35.589289   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) KVM machine creation complete!
	I0806 08:16:35.589601   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) Calling .GetConfigRaw
	I0806 08:16:35.590122   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) Calling .DriverName
	I0806 08:16:35.590324   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) Calling .DriverName
	I0806 08:16:35.590489   60490 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0806 08:16:35.590503   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) Calling .GetState
	I0806 08:16:35.591704   60490 main.go:141] libmachine: Detecting operating system of created instance...
	I0806 08:16:35.591720   60490 main.go:141] libmachine: Waiting for SSH to be available...
	I0806 08:16:35.591727   60490 main.go:141] libmachine: Getting to WaitForSSH function...
	I0806 08:16:35.591736   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) Calling .GetSSHHostname
	I0806 08:16:35.594074   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | domain kubernetes-upgrade-122219 has defined MAC address 52:54:00:1e:81:b8 in network mk-kubernetes-upgrade-122219
	I0806 08:16:35.594476   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:81:b8", ip: ""} in network mk-kubernetes-upgrade-122219: {Iface:virbr3 ExpiryTime:2024-08-06 09:16:28 +0000 UTC Type:0 Mac:52:54:00:1e:81:b8 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:kubernetes-upgrade-122219 Clientid:01:52:54:00:1e:81:b8}
	I0806 08:16:35.594504   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | domain kubernetes-upgrade-122219 has defined IP address 192.168.50.53 and MAC address 52:54:00:1e:81:b8 in network mk-kubernetes-upgrade-122219
	I0806 08:16:35.594732   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) Calling .GetSSHPort
	I0806 08:16:35.594920   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) Calling .GetSSHKeyPath
	I0806 08:16:35.595255   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) Calling .GetSSHKeyPath
	I0806 08:16:35.595439   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) Calling .GetSSHUsername
	I0806 08:16:35.595635   60490 main.go:141] libmachine: Using SSH client type: native
	I0806 08:16:35.595816   60490 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.53 22 <nil> <nil>}
	I0806 08:16:35.595828   60490 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0806 08:16:35.699839   60490 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0806 08:16:35.699875   60490 main.go:141] libmachine: Detecting the provisioner...
	I0806 08:16:35.699885   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) Calling .GetSSHHostname
	I0806 08:16:35.702713   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | domain kubernetes-upgrade-122219 has defined MAC address 52:54:00:1e:81:b8 in network mk-kubernetes-upgrade-122219
	I0806 08:16:35.703087   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:81:b8", ip: ""} in network mk-kubernetes-upgrade-122219: {Iface:virbr3 ExpiryTime:2024-08-06 09:16:28 +0000 UTC Type:0 Mac:52:54:00:1e:81:b8 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:kubernetes-upgrade-122219 Clientid:01:52:54:00:1e:81:b8}
	I0806 08:16:35.703108   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | domain kubernetes-upgrade-122219 has defined IP address 192.168.50.53 and MAC address 52:54:00:1e:81:b8 in network mk-kubernetes-upgrade-122219
	I0806 08:16:35.703306   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) Calling .GetSSHPort
	I0806 08:16:35.703499   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) Calling .GetSSHKeyPath
	I0806 08:16:35.703694   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) Calling .GetSSHKeyPath
	I0806 08:16:35.703885   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) Calling .GetSSHUsername
	I0806 08:16:35.704074   60490 main.go:141] libmachine: Using SSH client type: native
	I0806 08:16:35.704250   60490 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.53 22 <nil> <nil>}
	I0806 08:16:35.704262   60490 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0806 08:16:35.809430   60490 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0806 08:16:35.809490   60490 main.go:141] libmachine: found compatible host: buildroot
	I0806 08:16:35.809496   60490 main.go:141] libmachine: Provisioning with buildroot...
	I0806 08:16:35.809505   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) Calling .GetMachineName
	I0806 08:16:35.809732   60490 buildroot.go:166] provisioning hostname "kubernetes-upgrade-122219"
	I0806 08:16:35.809757   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) Calling .GetMachineName
	I0806 08:16:35.809944   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) Calling .GetSSHHostname
	I0806 08:16:35.812810   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | domain kubernetes-upgrade-122219 has defined MAC address 52:54:00:1e:81:b8 in network mk-kubernetes-upgrade-122219
	I0806 08:16:35.813168   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:81:b8", ip: ""} in network mk-kubernetes-upgrade-122219: {Iface:virbr3 ExpiryTime:2024-08-06 09:16:28 +0000 UTC Type:0 Mac:52:54:00:1e:81:b8 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:kubernetes-upgrade-122219 Clientid:01:52:54:00:1e:81:b8}
	I0806 08:16:35.813205   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | domain kubernetes-upgrade-122219 has defined IP address 192.168.50.53 and MAC address 52:54:00:1e:81:b8 in network mk-kubernetes-upgrade-122219
	I0806 08:16:35.813364   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) Calling .GetSSHPort
	I0806 08:16:35.813558   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) Calling .GetSSHKeyPath
	I0806 08:16:35.813719   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) Calling .GetSSHKeyPath
	I0806 08:16:35.813945   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) Calling .GetSSHUsername
	I0806 08:16:35.814139   60490 main.go:141] libmachine: Using SSH client type: native
	I0806 08:16:35.814308   60490 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.53 22 <nil> <nil>}
	I0806 08:16:35.814325   60490 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-122219 && echo "kubernetes-upgrade-122219" | sudo tee /etc/hostname
	I0806 08:16:35.931723   60490 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-122219
	
	I0806 08:16:35.931770   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) Calling .GetSSHHostname
	I0806 08:16:35.934926   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | domain kubernetes-upgrade-122219 has defined MAC address 52:54:00:1e:81:b8 in network mk-kubernetes-upgrade-122219
	I0806 08:16:35.935287   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:81:b8", ip: ""} in network mk-kubernetes-upgrade-122219: {Iface:virbr3 ExpiryTime:2024-08-06 09:16:28 +0000 UTC Type:0 Mac:52:54:00:1e:81:b8 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:kubernetes-upgrade-122219 Clientid:01:52:54:00:1e:81:b8}
	I0806 08:16:35.935315   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | domain kubernetes-upgrade-122219 has defined IP address 192.168.50.53 and MAC address 52:54:00:1e:81:b8 in network mk-kubernetes-upgrade-122219
	I0806 08:16:35.935519   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) Calling .GetSSHPort
	I0806 08:16:35.935697   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) Calling .GetSSHKeyPath
	I0806 08:16:35.935859   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) Calling .GetSSHKeyPath
	I0806 08:16:35.936003   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) Calling .GetSSHUsername
	I0806 08:16:35.936211   60490 main.go:141] libmachine: Using SSH client type: native
	I0806 08:16:35.936404   60490 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.53 22 <nil> <nil>}
	I0806 08:16:35.936422   60490 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-122219' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-122219/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-122219' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0806 08:16:36.049942   60490 main.go:141] libmachine: SSH cmd err, output: <nil>: 
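
	Note: the shell snippet above registers the hostname idempotently: leave /etc/hosts alone if a line already ends in the hostname, rewrite an existing 127.0.1.1 entry if present, otherwise append one. The same logic as a small illustrative Go helper operating on the file contents (not minikube code):

	package main

	import (
		"fmt"
		"strings"
	)

	// ensureHostname mirrors the grep/sed/tee logic from the SSH command above.
	func ensureHostname(hostsFile, hostname string) string {
		lines := strings.Split(hostsFile, "\n")
		for _, l := range lines {
			fields := strings.Fields(l)
			if len(fields) >= 2 && fields[len(fields)-1] == hostname {
				return hostsFile // already registered, nothing to do
			}
		}
		for i, l := range lines {
			if strings.HasPrefix(l, "127.0.1.1") {
				lines[i] = "127.0.1.1 " + hostname // rewrite existing entry
				return strings.Join(lines, "\n")
			}
		}
		return hostsFile + "\n127.0.1.1 " + hostname // append a new entry
	}

	func main() {
		hosts := "127.0.0.1 localhost\n127.0.1.1 minikube"
		fmt.Println(ensureHostname(hosts, "kubernetes-upgrade-122219"))
	}
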
	I0806 08:16:36.049972   60490 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19370-13057/.minikube CaCertPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19370-13057/.minikube}
	I0806 08:16:36.050014   60490 buildroot.go:174] setting up certificates
	I0806 08:16:36.050027   60490 provision.go:84] configureAuth start
	I0806 08:16:36.050035   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) Calling .GetMachineName
	I0806 08:16:36.050309   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) Calling .GetIP
	I0806 08:16:36.052938   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | domain kubernetes-upgrade-122219 has defined MAC address 52:54:00:1e:81:b8 in network mk-kubernetes-upgrade-122219
	I0806 08:16:36.053301   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:81:b8", ip: ""} in network mk-kubernetes-upgrade-122219: {Iface:virbr3 ExpiryTime:2024-08-06 09:16:28 +0000 UTC Type:0 Mac:52:54:00:1e:81:b8 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:kubernetes-upgrade-122219 Clientid:01:52:54:00:1e:81:b8}
	I0806 08:16:36.053329   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | domain kubernetes-upgrade-122219 has defined IP address 192.168.50.53 and MAC address 52:54:00:1e:81:b8 in network mk-kubernetes-upgrade-122219
	I0806 08:16:36.053462   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) Calling .GetSSHHostname
	I0806 08:16:36.055580   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | domain kubernetes-upgrade-122219 has defined MAC address 52:54:00:1e:81:b8 in network mk-kubernetes-upgrade-122219
	I0806 08:16:36.056011   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:81:b8", ip: ""} in network mk-kubernetes-upgrade-122219: {Iface:virbr3 ExpiryTime:2024-08-06 09:16:28 +0000 UTC Type:0 Mac:52:54:00:1e:81:b8 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:kubernetes-upgrade-122219 Clientid:01:52:54:00:1e:81:b8}
	I0806 08:16:36.056047   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | domain kubernetes-upgrade-122219 has defined IP address 192.168.50.53 and MAC address 52:54:00:1e:81:b8 in network mk-kubernetes-upgrade-122219
	I0806 08:16:36.056186   60490 provision.go:143] copyHostCerts
	I0806 08:16:36.056242   60490 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-13057/.minikube/ca.pem, removing ...
	I0806 08:16:36.056261   60490 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-13057/.minikube/ca.pem
	I0806 08:16:36.056316   60490 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19370-13057/.minikube/ca.pem (1078 bytes)
	I0806 08:16:36.056452   60490 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-13057/.minikube/cert.pem, removing ...
	I0806 08:16:36.056462   60490 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-13057/.minikube/cert.pem
	I0806 08:16:36.056484   60490 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19370-13057/.minikube/cert.pem (1123 bytes)
	I0806 08:16:36.056552   60490 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-13057/.minikube/key.pem, removing ...
	I0806 08:16:36.056559   60490 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-13057/.minikube/key.pem
	I0806 08:16:36.056577   60490 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19370-13057/.minikube/key.pem (1675 bytes)
	I0806 08:16:36.056648   60490 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19370-13057/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-122219 san=[127.0.0.1 192.168.50.53 kubernetes-upgrade-122219 localhost minikube]
	I0806 08:16:36.248243   60490 provision.go:177] copyRemoteCerts
	I0806 08:16:36.248303   60490 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0806 08:16:36.248325   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) Calling .GetSSHHostname
	I0806 08:16:36.251271   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | domain kubernetes-upgrade-122219 has defined MAC address 52:54:00:1e:81:b8 in network mk-kubernetes-upgrade-122219
	I0806 08:16:36.251606   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:81:b8", ip: ""} in network mk-kubernetes-upgrade-122219: {Iface:virbr3 ExpiryTime:2024-08-06 09:16:28 +0000 UTC Type:0 Mac:52:54:00:1e:81:b8 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:kubernetes-upgrade-122219 Clientid:01:52:54:00:1e:81:b8}
	I0806 08:16:36.251630   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | domain kubernetes-upgrade-122219 has defined IP address 192.168.50.53 and MAC address 52:54:00:1e:81:b8 in network mk-kubernetes-upgrade-122219
	I0806 08:16:36.251810   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) Calling .GetSSHPort
	I0806 08:16:36.251969   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) Calling .GetSSHKeyPath
	I0806 08:16:36.252132   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) Calling .GetSSHUsername
	I0806 08:16:36.252264   60490 sshutil.go:53] new ssh client: &{IP:192.168.50.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/kubernetes-upgrade-122219/id_rsa Username:docker}
	I0806 08:16:36.336220   60490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0806 08:16:36.362053   60490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0806 08:16:36.387427   60490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0806 08:16:36.410395   60490 provision.go:87] duration metric: took 360.356836ms to configureAuth
	I0806 08:16:36.410422   60490 buildroot.go:189] setting minikube options for container-runtime
	I0806 08:16:36.410585   60490 config.go:182] Loaded profile config "kubernetes-upgrade-122219": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0806 08:16:36.410650   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) Calling .GetSSHHostname
	I0806 08:16:36.413416   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | domain kubernetes-upgrade-122219 has defined MAC address 52:54:00:1e:81:b8 in network mk-kubernetes-upgrade-122219
	I0806 08:16:36.413773   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:81:b8", ip: ""} in network mk-kubernetes-upgrade-122219: {Iface:virbr3 ExpiryTime:2024-08-06 09:16:28 +0000 UTC Type:0 Mac:52:54:00:1e:81:b8 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:kubernetes-upgrade-122219 Clientid:01:52:54:00:1e:81:b8}
	I0806 08:16:36.413832   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | domain kubernetes-upgrade-122219 has defined IP address 192.168.50.53 and MAC address 52:54:00:1e:81:b8 in network mk-kubernetes-upgrade-122219
	I0806 08:16:36.413933   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) Calling .GetSSHPort
	I0806 08:16:36.414116   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) Calling .GetSSHKeyPath
	I0806 08:16:36.414289   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) Calling .GetSSHKeyPath
	I0806 08:16:36.414477   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) Calling .GetSSHUsername
	I0806 08:16:36.414648   60490 main.go:141] libmachine: Using SSH client type: native
	I0806 08:16:36.414799   60490 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.53 22 <nil> <nil>}
	I0806 08:16:36.414816   60490 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0806 08:16:36.681636   60490 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0806 08:16:36.681663   60490 main.go:141] libmachine: Checking connection to Docker...
	I0806 08:16:36.681675   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) Calling .GetURL
	I0806 08:16:36.683132   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | Using libvirt version 6000000
	I0806 08:16:36.686106   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | domain kubernetes-upgrade-122219 has defined MAC address 52:54:00:1e:81:b8 in network mk-kubernetes-upgrade-122219
	I0806 08:16:36.686472   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:81:b8", ip: ""} in network mk-kubernetes-upgrade-122219: {Iface:virbr3 ExpiryTime:2024-08-06 09:16:28 +0000 UTC Type:0 Mac:52:54:00:1e:81:b8 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:kubernetes-upgrade-122219 Clientid:01:52:54:00:1e:81:b8}
	I0806 08:16:36.686502   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | domain kubernetes-upgrade-122219 has defined IP address 192.168.50.53 and MAC address 52:54:00:1e:81:b8 in network mk-kubernetes-upgrade-122219
	I0806 08:16:36.686707   60490 main.go:141] libmachine: Docker is up and running!
	I0806 08:16:36.686727   60490 main.go:141] libmachine: Reticulating splines...
	I0806 08:16:36.686735   60490 client.go:171] duration metric: took 23.61422578s to LocalClient.Create
	I0806 08:16:36.686756   60490 start.go:167] duration metric: took 23.614306962s to libmachine.API.Create "kubernetes-upgrade-122219"
	I0806 08:16:36.686764   60490 start.go:293] postStartSetup for "kubernetes-upgrade-122219" (driver="kvm2")
	I0806 08:16:36.686773   60490 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0806 08:16:36.686790   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) Calling .DriverName
	I0806 08:16:36.687116   60490 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0806 08:16:36.687142   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) Calling .GetSSHHostname
	I0806 08:16:36.689539   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | domain kubernetes-upgrade-122219 has defined MAC address 52:54:00:1e:81:b8 in network mk-kubernetes-upgrade-122219
	I0806 08:16:36.689946   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:81:b8", ip: ""} in network mk-kubernetes-upgrade-122219: {Iface:virbr3 ExpiryTime:2024-08-06 09:16:28 +0000 UTC Type:0 Mac:52:54:00:1e:81:b8 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:kubernetes-upgrade-122219 Clientid:01:52:54:00:1e:81:b8}
	I0806 08:16:36.689981   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | domain kubernetes-upgrade-122219 has defined IP address 192.168.50.53 and MAC address 52:54:00:1e:81:b8 in network mk-kubernetes-upgrade-122219
	I0806 08:16:36.690127   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) Calling .GetSSHPort
	I0806 08:16:36.690318   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) Calling .GetSSHKeyPath
	I0806 08:16:36.690539   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) Calling .GetSSHUsername
	I0806 08:16:36.690687   60490 sshutil.go:53] new ssh client: &{IP:192.168.50.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/kubernetes-upgrade-122219/id_rsa Username:docker}
	I0806 08:16:36.771221   60490 ssh_runner.go:195] Run: cat /etc/os-release
	I0806 08:16:36.775394   60490 info.go:137] Remote host: Buildroot 2023.02.9
	I0806 08:16:36.775434   60490 filesync.go:126] Scanning /home/jenkins/minikube-integration/19370-13057/.minikube/addons for local assets ...
	I0806 08:16:36.775494   60490 filesync.go:126] Scanning /home/jenkins/minikube-integration/19370-13057/.minikube/files for local assets ...
	I0806 08:16:36.775563   60490 filesync.go:149] local asset: /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem -> 202462.pem in /etc/ssl/certs
	I0806 08:16:36.775647   60490 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0806 08:16:36.785031   60490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem --> /etc/ssl/certs/202462.pem (1708 bytes)
	I0806 08:16:36.809692   60490 start.go:296] duration metric: took 122.917166ms for postStartSetup
	I0806 08:16:36.809736   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) Calling .GetConfigRaw
	I0806 08:16:36.810321   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) Calling .GetIP
	I0806 08:16:36.813042   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | domain kubernetes-upgrade-122219 has defined MAC address 52:54:00:1e:81:b8 in network mk-kubernetes-upgrade-122219
	I0806 08:16:36.813366   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:81:b8", ip: ""} in network mk-kubernetes-upgrade-122219: {Iface:virbr3 ExpiryTime:2024-08-06 09:16:28 +0000 UTC Type:0 Mac:52:54:00:1e:81:b8 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:kubernetes-upgrade-122219 Clientid:01:52:54:00:1e:81:b8}
	I0806 08:16:36.813407   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | domain kubernetes-upgrade-122219 has defined IP address 192.168.50.53 and MAC address 52:54:00:1e:81:b8 in network mk-kubernetes-upgrade-122219
	I0806 08:16:36.813696   60490 profile.go:143] Saving config to /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/kubernetes-upgrade-122219/config.json ...
	I0806 08:16:36.813912   60490 start.go:128] duration metric: took 23.768186594s to createHost
	I0806 08:16:36.813936   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) Calling .GetSSHHostname
	I0806 08:16:36.816168   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | domain kubernetes-upgrade-122219 has defined MAC address 52:54:00:1e:81:b8 in network mk-kubernetes-upgrade-122219
	I0806 08:16:36.816466   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:81:b8", ip: ""} in network mk-kubernetes-upgrade-122219: {Iface:virbr3 ExpiryTime:2024-08-06 09:16:28 +0000 UTC Type:0 Mac:52:54:00:1e:81:b8 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:kubernetes-upgrade-122219 Clientid:01:52:54:00:1e:81:b8}
	I0806 08:16:36.816495   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | domain kubernetes-upgrade-122219 has defined IP address 192.168.50.53 and MAC address 52:54:00:1e:81:b8 in network mk-kubernetes-upgrade-122219
	I0806 08:16:36.816641   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) Calling .GetSSHPort
	I0806 08:16:36.816812   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) Calling .GetSSHKeyPath
	I0806 08:16:36.817088   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) Calling .GetSSHKeyPath
	I0806 08:16:36.817255   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) Calling .GetSSHUsername
	I0806 08:16:36.817461   60490 main.go:141] libmachine: Using SSH client type: native
	I0806 08:16:36.817686   60490 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.53 22 <nil> <nil>}
	I0806 08:16:36.817698   60490 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0806 08:16:36.920978   60490 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722932196.887220672
	
	I0806 08:16:36.921012   60490 fix.go:216] guest clock: 1722932196.887220672
	I0806 08:16:36.921019   60490 fix.go:229] Guest: 2024-08-06 08:16:36.887220672 +0000 UTC Remote: 2024-08-06 08:16:36.813924938 +0000 UTC m=+45.275684481 (delta=73.295734ms)
	I0806 08:16:36.921043   60490 fix.go:200] guest clock delta is within tolerance: 73.295734ms
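
	Note: the fix.go lines above compare the guest's `date +%s.%N` output against the host clock and only resync when the skew exceeds a tolerance; this run measured ~73 ms. A minimal Go sketch of that check, using the numbers from the log (the tolerance constant here is an assumption for illustration, not minikube's actual threshold):

	package main

	import (
		"fmt"
		"time"
	)

	const tolerance = 2 * time.Second // assumed threshold for this sketch

	// clockDelta converts the guest's fractional Unix timestamp and subtracts
	// the host reference time.
	func clockDelta(guestUnix float64, host time.Time) time.Duration {
		sec := int64(guestUnix)
		nsec := int64((guestUnix - float64(sec)) * 1e9)
		return time.Unix(sec, nsec).Sub(host)
	}

	func main() {
		host := time.Date(2024, 8, 6, 8, 16, 36, 813924938, time.UTC)
		delta := clockDelta(1722932196.887220672, host)
		if delta < 0 {
			delta = -delta
		}
		if delta > tolerance {
			fmt.Printf("guest clock skew %v exceeds tolerance; would sync the guest clock\n", delta)
			return
		}
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	}
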
	I0806 08:16:36.921053   60490 start.go:83] releasing machines lock for "kubernetes-upgrade-122219", held for 23.875489988s
	I0806 08:16:36.921082   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) Calling .DriverName
	I0806 08:16:36.921371   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) Calling .GetIP
	I0806 08:16:36.923992   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | domain kubernetes-upgrade-122219 has defined MAC address 52:54:00:1e:81:b8 in network mk-kubernetes-upgrade-122219
	I0806 08:16:36.924452   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:81:b8", ip: ""} in network mk-kubernetes-upgrade-122219: {Iface:virbr3 ExpiryTime:2024-08-06 09:16:28 +0000 UTC Type:0 Mac:52:54:00:1e:81:b8 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:kubernetes-upgrade-122219 Clientid:01:52:54:00:1e:81:b8}
	I0806 08:16:36.924484   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | domain kubernetes-upgrade-122219 has defined IP address 192.168.50.53 and MAC address 52:54:00:1e:81:b8 in network mk-kubernetes-upgrade-122219
	I0806 08:16:36.924670   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) Calling .DriverName
	I0806 08:16:36.925182   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) Calling .DriverName
	I0806 08:16:36.925375   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) Calling .DriverName
	I0806 08:16:36.925472   60490 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0806 08:16:36.925507   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) Calling .GetSSHHostname
	I0806 08:16:36.925596   60490 ssh_runner.go:195] Run: cat /version.json
	I0806 08:16:36.925617   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) Calling .GetSSHHostname
	I0806 08:16:36.928378   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | domain kubernetes-upgrade-122219 has defined MAC address 52:54:00:1e:81:b8 in network mk-kubernetes-upgrade-122219
	I0806 08:16:36.928631   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | domain kubernetes-upgrade-122219 has defined MAC address 52:54:00:1e:81:b8 in network mk-kubernetes-upgrade-122219
	I0806 08:16:36.928724   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:81:b8", ip: ""} in network mk-kubernetes-upgrade-122219: {Iface:virbr3 ExpiryTime:2024-08-06 09:16:28 +0000 UTC Type:0 Mac:52:54:00:1e:81:b8 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:kubernetes-upgrade-122219 Clientid:01:52:54:00:1e:81:b8}
	I0806 08:16:36.928752   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | domain kubernetes-upgrade-122219 has defined IP address 192.168.50.53 and MAC address 52:54:00:1e:81:b8 in network mk-kubernetes-upgrade-122219
	I0806 08:16:36.928875   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) Calling .GetSSHPort
	I0806 08:16:36.929055   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) Calling .GetSSHKeyPath
	I0806 08:16:36.929105   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:81:b8", ip: ""} in network mk-kubernetes-upgrade-122219: {Iface:virbr3 ExpiryTime:2024-08-06 09:16:28 +0000 UTC Type:0 Mac:52:54:00:1e:81:b8 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:kubernetes-upgrade-122219 Clientid:01:52:54:00:1e:81:b8}
	I0806 08:16:36.929134   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | domain kubernetes-upgrade-122219 has defined IP address 192.168.50.53 and MAC address 52:54:00:1e:81:b8 in network mk-kubernetes-upgrade-122219
	I0806 08:16:36.929228   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) Calling .GetSSHUsername
	I0806 08:16:36.929274   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) Calling .GetSSHPort
	I0806 08:16:36.929357   60490 sshutil.go:53] new ssh client: &{IP:192.168.50.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/kubernetes-upgrade-122219/id_rsa Username:docker}
	I0806 08:16:36.929437   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) Calling .GetSSHKeyPath
	I0806 08:16:36.929572   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) Calling .GetSSHUsername
	I0806 08:16:36.929694   60490 sshutil.go:53] new ssh client: &{IP:192.168.50.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/kubernetes-upgrade-122219/id_rsa Username:docker}
	I0806 08:16:37.034428   60490 ssh_runner.go:195] Run: systemctl --version
	I0806 08:16:37.041133   60490 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0806 08:16:37.205865   60490 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0806 08:16:37.212636   60490 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0806 08:16:37.212710   60490 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0806 08:16:37.230384   60490 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0806 08:16:37.230409   60490 start.go:495] detecting cgroup driver to use...
	I0806 08:16:37.230491   60490 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0806 08:16:37.251066   60490 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0806 08:16:37.265986   60490 docker.go:217] disabling cri-docker service (if available) ...
	I0806 08:16:37.266057   60490 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0806 08:16:37.280479   60490 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0806 08:16:37.296073   60490 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0806 08:16:37.425321   60490 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0806 08:16:37.566683   60490 docker.go:233] disabling docker service ...
	I0806 08:16:37.566750   60490 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0806 08:16:37.581385   60490 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0806 08:16:37.594887   60490 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0806 08:16:37.733438   60490 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0806 08:16:37.857227   60490 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0806 08:16:37.873609   60490 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0806 08:16:37.892567   60490 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0806 08:16:37.892630   60490 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:16:37.904304   60490 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0806 08:16:37.904397   60490 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:16:37.923630   60490 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:16:37.934011   60490 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
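For reference, a minimal sketch of how the CRI-O drop-in could be inspected after the sed edits above; the actual file contents are not captured in this log, and the expected values shown are an assumption based on the pause-image, cgroup-manager, and conmon settings being applied here:
	# hypothetical check, not part of the captured run
	grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
	# expected output (assumed, stock CRI-O drop-in layout):
	#   pause_image = "registry.k8s.io/pause:3.2"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"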
	I0806 08:16:37.945394   60490 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0806 08:16:37.956863   60490 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0806 08:16:37.966480   60490 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0806 08:16:37.966533   60490 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0806 08:16:37.980444   60490 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
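The sysctl failure above simply means the br_netfilter module was not yet loaded, so /proc/sys/net/bridge/ did not exist; loading the module (as done above) creates those keys. A hypothetical follow-up check, not part of the captured run:
	# confirm the bridge sysctls exist once br_netfilter is loaded
	lsmod | grep br_netfilter
	sysctl net.bridge.bridge-nf-call-iptables
	# expected: "net.bridge.bridge-nf-call-iptables = 1" (value may be 0 or 1 depending on defaults)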
	I0806 08:16:37.990831   60490 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 08:16:38.117442   60490 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0806 08:16:38.289091   60490 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0806 08:16:38.289193   60490 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0806 08:16:38.294108   60490 start.go:563] Will wait 60s for crictl version
	I0806 08:16:38.294183   60490 ssh_runner.go:195] Run: which crictl
	I0806 08:16:38.298100   60490 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0806 08:16:38.347093   60490 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0806 08:16:38.347179   60490 ssh_runner.go:195] Run: crio --version
	I0806 08:16:38.380183   60490 ssh_runner.go:195] Run: crio --version
	I0806 08:16:38.416332   60490 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0806 08:16:38.417557   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) Calling .GetIP
	I0806 08:16:38.420411   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | domain kubernetes-upgrade-122219 has defined MAC address 52:54:00:1e:81:b8 in network mk-kubernetes-upgrade-122219
	I0806 08:16:38.420891   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:81:b8", ip: ""} in network mk-kubernetes-upgrade-122219: {Iface:virbr3 ExpiryTime:2024-08-06 09:16:28 +0000 UTC Type:0 Mac:52:54:00:1e:81:b8 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:kubernetes-upgrade-122219 Clientid:01:52:54:00:1e:81:b8}
	I0806 08:16:38.420925   60490 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | domain kubernetes-upgrade-122219 has defined IP address 192.168.50.53 and MAC address 52:54:00:1e:81:b8 in network mk-kubernetes-upgrade-122219
	I0806 08:16:38.421089   60490 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0806 08:16:38.425374   60490 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0806 08:16:38.439341   60490 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-122219 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.20.0 ClusterName:kubernetes-upgrade-122219 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.53 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimiz
ations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0806 08:16:38.439431   60490 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0806 08:16:38.439468   60490 ssh_runner.go:195] Run: sudo crictl images --output json
	I0806 08:16:38.473793   60490 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0806 08:16:38.473871   60490 ssh_runner.go:195] Run: which lz4
	I0806 08:16:38.478059   60490 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0806 08:16:38.482315   60490 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0806 08:16:38.482349   60490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0806 08:16:40.213581   60490 crio.go:462] duration metric: took 1.735551543s to copy over tarball
	I0806 08:16:40.213696   60490 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0806 08:16:42.885335   60490 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.671610896s)
	I0806 08:16:42.885379   60490 crio.go:469] duration metric: took 2.671749155s to extract the tarball
	I0806 08:16:42.885390   60490 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0806 08:16:42.927502   60490 ssh_runner.go:195] Run: sudo crictl images --output json
	I0806 08:16:42.971956   60490 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0806 08:16:42.971981   60490 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0806 08:16:42.972051   60490 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0806 08:16:42.972104   60490 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0806 08:16:42.972118   60490 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0806 08:16:42.972128   60490 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0806 08:16:42.972106   60490 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0806 08:16:42.972250   60490 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0806 08:16:42.972068   60490 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0806 08:16:42.972329   60490 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0806 08:16:42.975165   60490 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0806 08:16:42.975233   60490 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0806 08:16:42.975242   60490 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0806 08:16:42.975170   60490 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0806 08:16:42.975170   60490 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0806 08:16:42.975180   60490 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0806 08:16:42.975184   60490 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0806 08:16:42.975204   60490 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0806 08:16:43.154656   60490 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0806 08:16:43.191679   60490 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0806 08:16:43.197477   60490 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0806 08:16:43.197525   60490 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0806 08:16:43.197571   60490 ssh_runner.go:195] Run: which crictl
	I0806 08:16:43.242781   60490 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0806 08:16:43.242819   60490 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0806 08:16:43.242854   60490 ssh_runner.go:195] Run: which crictl
	I0806 08:16:43.242855   60490 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0806 08:16:43.281123   60490 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0806 08:16:43.281124   60490 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0806 08:16:43.315382   60490 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0806 08:16:43.316087   60490 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0806 08:16:43.316550   60490 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0806 08:16:43.324624   60490 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0806 08:16:43.327186   60490 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0806 08:16:43.329716   60490 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0806 08:16:43.393173   60490 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0806 08:16:43.393223   60490 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0806 08:16:43.393283   60490 ssh_runner.go:195] Run: which crictl
	I0806 08:16:43.428225   60490 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0806 08:16:43.428258   60490 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0806 08:16:43.428307   60490 ssh_runner.go:195] Run: which crictl
	I0806 08:16:43.450191   60490 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0806 08:16:43.450240   60490 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0806 08:16:43.450289   60490 ssh_runner.go:195] Run: which crictl
	I0806 08:16:43.457025   60490 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0806 08:16:43.457077   60490 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0806 08:16:43.457111   60490 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0806 08:16:43.457114   60490 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0806 08:16:43.457144   60490 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0806 08:16:43.457149   60490 ssh_runner.go:195] Run: which crictl
	I0806 08:16:43.457247   60490 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0806 08:16:43.457150   60490 ssh_runner.go:195] Run: which crictl
	I0806 08:16:43.457217   60490 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0806 08:16:43.471235   60490 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0806 08:16:43.471271   60490 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0806 08:16:43.563420   60490 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0806 08:16:43.563426   60490 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0806 08:16:43.563505   60490 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0806 08:16:43.574707   60490 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0806 08:16:43.574731   60490 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0806 08:16:43.915783   60490 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0806 08:16:44.058364   60490 cache_images.go:92] duration metric: took 1.086367587s to LoadCachedImages
	W0806 08:16:44.058466   60490 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I0806 08:16:44.058480   60490 kubeadm.go:934] updating node { 192.168.50.53 8443 v1.20.0 crio true true} ...
	I0806 08:16:44.058602   60490 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-122219 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.53
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-122219 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0806 08:16:44.058683   60490 ssh_runner.go:195] Run: crio config
	I0806 08:16:44.107309   60490 cni.go:84] Creating CNI manager for ""
	I0806 08:16:44.107333   60490 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0806 08:16:44.107341   60490 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0806 08:16:44.107359   60490 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.53 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-122219 NodeName:kubernetes-upgrade-122219 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.53"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.53 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt
StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0806 08:16:44.107483   60490 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.53
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-122219"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.53
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.53"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0806 08:16:44.107543   60490 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0806 08:16:44.118631   60490 binaries.go:44] Found k8s binaries, skipping transfer
	I0806 08:16:44.118705   60490 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0806 08:16:44.128672   60490 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (432 bytes)
	I0806 08:16:44.146854   60490 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0806 08:16:44.164474   60490 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
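With the kubelet unit and 10-kubeadm.conf drop-in written above, the effective unit could be confirmed with systemctl; a hypothetical check, not part of the captured run:
	systemctl cat kubelet            # shows /lib/systemd/system/kubelet.service plus the 10-kubeadm.conf drop-in
	sudo systemctl daemon-reload && systemctl status kubelet --no-pager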
	I0806 08:16:44.182619   60490 ssh_runner.go:195] Run: grep 192.168.50.53	control-plane.minikube.internal$ /etc/hosts
	I0806 08:16:44.186838   60490 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.53	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0806 08:16:44.201142   60490 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 08:16:44.348695   60490 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0806 08:16:44.380022   60490 certs.go:68] Setting up /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/kubernetes-upgrade-122219 for IP: 192.168.50.53
	I0806 08:16:44.380046   60490 certs.go:194] generating shared ca certs ...
	I0806 08:16:44.380068   60490 certs.go:226] acquiring lock for ca certs: {Name:mkf5c5aed91f4108054d9f2c742cad66c86afbed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 08:16:44.380231   60490 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19370-13057/.minikube/ca.key
	I0806 08:16:44.380296   60490 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.key
	I0806 08:16:44.380309   60490 certs.go:256] generating profile certs ...
	I0806 08:16:44.380422   60490 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/kubernetes-upgrade-122219/client.key
	I0806 08:16:44.380441   60490 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/kubernetes-upgrade-122219/client.crt with IP's: []
	I0806 08:16:44.586269   60490 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/kubernetes-upgrade-122219/client.crt ...
	I0806 08:16:44.586305   60490 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/kubernetes-upgrade-122219/client.crt: {Name:mk1c3cff7ba14dcbbb18efa518593d1677ba17b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 08:16:44.586503   60490 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/kubernetes-upgrade-122219/client.key ...
	I0806 08:16:44.586523   60490 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/kubernetes-upgrade-122219/client.key: {Name:mk58f756d14cbed851fd1b067b0194af20f99f1f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 08:16:44.586632   60490 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/kubernetes-upgrade-122219/apiserver.key.4ecc0233
	I0806 08:16:44.586656   60490 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/kubernetes-upgrade-122219/apiserver.crt.4ecc0233 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.53]
	I0806 08:16:44.687014   60490 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/kubernetes-upgrade-122219/apiserver.crt.4ecc0233 ...
	I0806 08:16:44.687044   60490 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/kubernetes-upgrade-122219/apiserver.crt.4ecc0233: {Name:mk065ce755e745d0231bdd39fa70cf96191145fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 08:16:44.687201   60490 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/kubernetes-upgrade-122219/apiserver.key.4ecc0233 ...
	I0806 08:16:44.687214   60490 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/kubernetes-upgrade-122219/apiserver.key.4ecc0233: {Name:mk5237cfc9f3663cdf872faa8e1ef32a795873a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 08:16:44.687279   60490 certs.go:381] copying /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/kubernetes-upgrade-122219/apiserver.crt.4ecc0233 -> /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/kubernetes-upgrade-122219/apiserver.crt
	I0806 08:16:44.687348   60490 certs.go:385] copying /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/kubernetes-upgrade-122219/apiserver.key.4ecc0233 -> /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/kubernetes-upgrade-122219/apiserver.key
	I0806 08:16:44.687401   60490 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/kubernetes-upgrade-122219/proxy-client.key
	I0806 08:16:44.687418   60490 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/kubernetes-upgrade-122219/proxy-client.crt with IP's: []
	I0806 08:16:44.992719   60490 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/kubernetes-upgrade-122219/proxy-client.crt ...
	I0806 08:16:44.992750   60490 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/kubernetes-upgrade-122219/proxy-client.crt: {Name:mk6f3c3419b0f87aed1fce6be9260d920fe749fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 08:16:44.992907   60490 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/kubernetes-upgrade-122219/proxy-client.key ...
	I0806 08:16:44.992922   60490 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/kubernetes-upgrade-122219/proxy-client.key: {Name:mk307da79d6d7f3335c903fe8f8e87ff4d5d0d76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 08:16:44.993149   60490 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/20246.pem (1338 bytes)
	W0806 08:16:44.993202   60490 certs.go:480] ignoring /home/jenkins/minikube-integration/19370-13057/.minikube/certs/20246_empty.pem, impossibly tiny 0 bytes
	I0806 08:16:44.993213   60490 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca-key.pem (1675 bytes)
	I0806 08:16:44.993233   60490 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem (1078 bytes)
	I0806 08:16:44.993260   60490 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/cert.pem (1123 bytes)
	I0806 08:16:44.993281   60490 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/key.pem (1675 bytes)
	I0806 08:16:44.993323   60490 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem (1708 bytes)
	I0806 08:16:44.994333   60490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0806 08:16:45.021828   60490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0806 08:16:45.049675   60490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0806 08:16:45.079435   60490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0806 08:16:45.104448   60490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/kubernetes-upgrade-122219/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0806 08:16:45.133950   60490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/kubernetes-upgrade-122219/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0806 08:16:45.165379   60490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/kubernetes-upgrade-122219/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0806 08:16:45.191541   60490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/kubernetes-upgrade-122219/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0806 08:16:45.222258   60490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0806 08:16:45.254381   60490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/certs/20246.pem --> /usr/share/ca-certificates/20246.pem (1338 bytes)
	I0806 08:16:45.282016   60490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem --> /usr/share/ca-certificates/202462.pem (1708 bytes)
	I0806 08:16:45.319801   60490 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0806 08:16:45.338458   60490 ssh_runner.go:195] Run: openssl version
	I0806 08:16:45.344455   60490 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0806 08:16:45.356603   60490 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0806 08:16:45.361619   60490 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  6 07:07 /usr/share/ca-certificates/minikubeCA.pem
	I0806 08:16:45.361690   60490 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0806 08:16:45.368457   60490 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0806 08:16:45.380870   60490 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20246.pem && ln -fs /usr/share/ca-certificates/20246.pem /etc/ssl/certs/20246.pem"
	I0806 08:16:45.394014   60490 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20246.pem
	I0806 08:16:45.398694   60490 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  6 07:17 /usr/share/ca-certificates/20246.pem
	I0806 08:16:45.398760   60490 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20246.pem
	I0806 08:16:45.405337   60490 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20246.pem /etc/ssl/certs/51391683.0"
	I0806 08:16:45.419104   60490 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202462.pem && ln -fs /usr/share/ca-certificates/202462.pem /etc/ssl/certs/202462.pem"
	I0806 08:16:45.431436   60490 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202462.pem
	I0806 08:16:45.436088   60490 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  6 07:17 /usr/share/ca-certificates/202462.pem
	I0806 08:16:45.436149   60490 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202462.pem
	I0806 08:16:45.442954   60490 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202462.pem /etc/ssl/certs/3ec20f2e.0"
	I0806 08:16:45.458401   60490 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0806 08:16:45.464132   60490 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0806 08:16:45.464186   60490 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-122219 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersi
on:v1.20.0 ClusterName:kubernetes-upgrade-122219 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.53 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizati
ons:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 08:16:45.464274   60490 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0806 08:16:45.464332   60490 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0806 08:16:45.509563   60490 cri.go:89] found id: ""
	I0806 08:16:45.509638   60490 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0806 08:16:45.523830   60490 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0806 08:16:45.536030   60490 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0806 08:16:45.547343   60490 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0806 08:16:45.547369   60490 kubeadm.go:157] found existing configuration files:
	
	I0806 08:16:45.547419   60490 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0806 08:16:45.557957   60490 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0806 08:16:45.558014   60490 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0806 08:16:45.569966   60490 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0806 08:16:45.580229   60490 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0806 08:16:45.580295   60490 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0806 08:16:45.591697   60490 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0806 08:16:45.602391   60490 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0806 08:16:45.602450   60490 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0806 08:16:45.614284   60490 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0806 08:16:45.625004   60490 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0806 08:16:45.625078   60490 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0806 08:16:45.636600   60490 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0806 08:16:45.789552   60490 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0806 08:16:45.789646   60490 kubeadm.go:310] [preflight] Running pre-flight checks
	I0806 08:16:45.964705   60490 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0806 08:16:45.964955   60490 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0806 08:16:45.965096   60490 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0806 08:16:46.199094   60490 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0806 08:16:46.414755   60490 out.go:204]   - Generating certificates and keys ...
	I0806 08:16:46.415006   60490 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0806 08:16:46.415111   60490 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0806 08:16:46.415236   60490 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0806 08:16:46.460878   60490 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0806 08:16:46.650160   60490 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0806 08:16:46.787183   60490 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0806 08:16:46.917697   60490 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0806 08:16:46.917951   60490 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-122219 localhost] and IPs [192.168.50.53 127.0.0.1 ::1]
	I0806 08:16:47.193293   60490 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0806 08:16:47.193636   60490 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-122219 localhost] and IPs [192.168.50.53 127.0.0.1 ::1]
	I0806 08:16:47.335511   60490 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0806 08:16:47.475365   60490 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0806 08:16:47.883919   60490 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0806 08:16:47.883985   60490 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0806 08:16:47.985377   60490 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0806 08:16:48.083086   60490 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0806 08:16:48.366977   60490 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0806 08:16:48.852594   60490 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0806 08:16:48.869370   60490 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0806 08:16:48.871411   60490 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0806 08:16:48.871484   60490 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0806 08:16:49.011667   60490 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0806 08:16:49.013487   60490 out.go:204]   - Booting up control plane ...
	I0806 08:16:49.013646   60490 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0806 08:16:49.018401   60490 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0806 08:16:49.019749   60490 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0806 08:16:49.020758   60490 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0806 08:16:49.025497   60490 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0806 08:17:29.013416   60490 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0806 08:17:29.013898   60490 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0806 08:17:29.014260   60490 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0806 08:17:34.014622   60490 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0806 08:17:34.014833   60490 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0806 08:17:44.014311   60490 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0806 08:17:44.014500   60490 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0806 08:18:04.014617   60490 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0806 08:18:04.014867   60490 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0806 08:18:44.015985   60490 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0806 08:18:44.016182   60490 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0806 08:18:44.016192   60490 kubeadm.go:310] 
	I0806 08:18:44.016223   60490 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0806 08:18:44.016301   60490 kubeadm.go:310] 		timed out waiting for the condition
	I0806 08:18:44.016330   60490 kubeadm.go:310] 
	I0806 08:18:44.016402   60490 kubeadm.go:310] 	This error is likely caused by:
	I0806 08:18:44.016451   60490 kubeadm.go:310] 		- The kubelet is not running
	I0806 08:18:44.016573   60490 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0806 08:18:44.016590   60490 kubeadm.go:310] 
	I0806 08:18:44.016717   60490 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0806 08:18:44.016755   60490 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0806 08:18:44.016781   60490 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0806 08:18:44.016787   60490 kubeadm.go:310] 
	I0806 08:18:44.016933   60490 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0806 08:18:44.017018   60490 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0806 08:18:44.017026   60490 kubeadm.go:310] 
	I0806 08:18:44.017128   60490 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0806 08:18:44.017225   60490 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0806 08:18:44.017352   60490 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0806 08:18:44.017455   60490 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0806 08:18:44.017472   60490 kubeadm.go:310] 
	I0806 08:18:44.018162   60490 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0806 08:18:44.018277   60490 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0806 08:18:44.018376   60490 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0806 08:18:44.018503   60490 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-122219 localhost] and IPs [192.168.50.53 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-122219 localhost] and IPs [192.168.50.53 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-122219 localhost] and IPs [192.168.50.53 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-122219 localhost] and IPs [192.168.50.53 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0806 08:18:44.018553   60490 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0806 08:18:45.653527   60490 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.63492182s)
	I0806 08:18:45.653630   60490 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0806 08:18:45.670532   60490 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0806 08:18:45.681886   60490 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0806 08:18:45.681906   60490 kubeadm.go:157] found existing configuration files:
	
	I0806 08:18:45.681960   60490 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0806 08:18:45.691543   60490 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0806 08:18:45.691608   60490 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0806 08:18:45.701739   60490 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0806 08:18:45.712393   60490 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0806 08:18:45.712457   60490 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0806 08:18:45.722249   60490 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0806 08:18:45.734339   60490 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0806 08:18:45.734404   60490 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0806 08:18:45.746454   60490 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0806 08:18:45.757585   60490 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0806 08:18:45.757646   60490 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0806 08:18:45.767440   60490 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0806 08:18:45.841921   60490 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0806 08:18:45.842003   60490 kubeadm.go:310] [preflight] Running pre-flight checks
	I0806 08:18:46.012512   60490 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0806 08:18:46.012661   60490 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0806 08:18:46.012810   60490 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0806 08:18:46.248248   60490 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0806 08:18:46.251294   60490 out.go:204]   - Generating certificates and keys ...
	I0806 08:18:46.251402   60490 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0806 08:18:46.251477   60490 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0806 08:18:46.251570   60490 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0806 08:18:46.251641   60490 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0806 08:18:46.251729   60490 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0806 08:18:46.251793   60490 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0806 08:18:46.251878   60490 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0806 08:18:46.251957   60490 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0806 08:18:46.252015   60490 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0806 08:18:46.252093   60490 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0806 08:18:46.252141   60490 kubeadm.go:310] [certs] Using the existing "sa" key
	I0806 08:18:46.252216   60490 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0806 08:18:46.320742   60490 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0806 08:18:46.658801   60490 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0806 08:18:46.988599   60490 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0806 08:18:47.132087   60490 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0806 08:18:47.155163   60490 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0806 08:18:47.155289   60490 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0806 08:18:47.155340   60490 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0806 08:18:47.327204   60490 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0806 08:18:47.329027   60490 out.go:204]   - Booting up control plane ...
	I0806 08:18:47.329185   60490 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0806 08:18:47.349853   60490 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0806 08:18:47.352115   60490 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0806 08:18:47.353096   60490 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0806 08:18:47.356813   60490 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0806 08:19:27.355777   60490 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0806 08:19:27.355889   60490 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0806 08:19:27.356225   60490 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0806 08:19:32.356449   60490 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0806 08:19:32.356714   60490 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0806 08:19:42.356934   60490 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0806 08:19:42.357198   60490 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0806 08:20:02.357370   60490 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0806 08:20:02.357635   60490 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0806 08:20:42.359027   60490 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0806 08:20:42.359284   60490 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0806 08:20:42.359307   60490 kubeadm.go:310] 
	I0806 08:20:42.359366   60490 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0806 08:20:42.359443   60490 kubeadm.go:310] 		timed out waiting for the condition
	I0806 08:20:42.359456   60490 kubeadm.go:310] 
	I0806 08:20:42.359499   60490 kubeadm.go:310] 	This error is likely caused by:
	I0806 08:20:42.359540   60490 kubeadm.go:310] 		- The kubelet is not running
	I0806 08:20:42.359692   60490 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0806 08:20:42.359707   60490 kubeadm.go:310] 
	I0806 08:20:42.359854   60490 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0806 08:20:42.359901   60490 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0806 08:20:42.359949   60490 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0806 08:20:42.359958   60490 kubeadm.go:310] 
	I0806 08:20:42.360111   60490 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0806 08:20:42.360218   60490 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0806 08:20:42.360228   60490 kubeadm.go:310] 
	I0806 08:20:42.360350   60490 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0806 08:20:42.360473   60490 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0806 08:20:42.360570   60490 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0806 08:20:42.360662   60490 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0806 08:20:42.360672   60490 kubeadm.go:310] 
	I0806 08:20:42.361809   60490 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0806 08:20:42.361947   60490 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0806 08:20:42.362050   60490 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0806 08:20:42.362155   60490 kubeadm.go:394] duration metric: took 3m56.897968952s to StartCluster
	I0806 08:20:42.362203   60490 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:20:42.362264   60490 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:20:42.423395   60490 cri.go:89] found id: ""
	I0806 08:20:42.423421   60490 logs.go:276] 0 containers: []
	W0806 08:20:42.423431   60490 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:20:42.423438   60490 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:20:42.423497   60490 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:20:42.472747   60490 cri.go:89] found id: ""
	I0806 08:20:42.472775   60490 logs.go:276] 0 containers: []
	W0806 08:20:42.472785   60490 logs.go:278] No container was found matching "etcd"
	I0806 08:20:42.472792   60490 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:20:42.472854   60490 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:20:42.525254   60490 cri.go:89] found id: ""
	I0806 08:20:42.525280   60490 logs.go:276] 0 containers: []
	W0806 08:20:42.525291   60490 logs.go:278] No container was found matching "coredns"
	I0806 08:20:42.525298   60490 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:20:42.525343   60490 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:20:42.569424   60490 cri.go:89] found id: ""
	I0806 08:20:42.569456   60490 logs.go:276] 0 containers: []
	W0806 08:20:42.569467   60490 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:20:42.569475   60490 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:20:42.569544   60490 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:20:42.620570   60490 cri.go:89] found id: ""
	I0806 08:20:42.620597   60490 logs.go:276] 0 containers: []
	W0806 08:20:42.620607   60490 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:20:42.620615   60490 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:20:42.620675   60490 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:20:42.663083   60490 cri.go:89] found id: ""
	I0806 08:20:42.663111   60490 logs.go:276] 0 containers: []
	W0806 08:20:42.663121   60490 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:20:42.663130   60490 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:20:42.663182   60490 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:20:42.710695   60490 cri.go:89] found id: ""
	I0806 08:20:42.710723   60490 logs.go:276] 0 containers: []
	W0806 08:20:42.710732   60490 logs.go:278] No container was found matching "kindnet"
	I0806 08:20:42.710740   60490 logs.go:123] Gathering logs for dmesg ...
	I0806 08:20:42.710754   60490 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:20:42.726477   60490 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:20:42.726506   60490 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:20:42.895721   60490 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:20:42.895743   60490 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:20:42.895758   60490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:20:42.997982   60490 logs.go:123] Gathering logs for container status ...
	I0806 08:20:42.998014   60490 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:20:43.045651   60490 logs.go:123] Gathering logs for kubelet ...
	I0806 08:20:43.045683   60490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0806 08:20:43.112627   60490 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0806 08:20:43.112696   60490 out.go:239] * 
	* 
	W0806 08:20:43.112750   60490 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0806 08:20:43.112794   60490 out.go:239] * 
	* 
	W0806 08:20:43.113810   60490 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0806 08:20:43.117322   60490 out.go:177] 
	W0806 08:20:43.118651   60490 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0806 08:20:43.118979   60490 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0806 08:20:43.119101   60490 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0806 08:20:43.120668   60490 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-122219 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
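For reference, a minimal sketch of the diagnosis steps the failure output above suggests. The first block assumes shell access to the node (for example via `minikube ssh -p kubernetes-upgrade-122219`); the final command re-runs the same start from the host with the cgroup-driver hint printed in the log. All commands and flags are taken from the output above except the `tail` filter, which is illustrative.

	# on the node: probe the kubelet health endpoint that kubeadm's kubelet-check polls
	curl -sSL http://localhost:10248/healthz
	# inspect the kubelet service and its recent logs
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet | tail -n 100
	# list any control-plane containers CRI-O managed to start
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause

	# from the host: retry the failing start with the suggestion from the log
	out/minikube-linux-amd64 start -p kubernetes-upgrade-122219 --memory=2200 \
	  --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio \
	  --extra-config=kubelet.cgroup-driver=systemd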
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-122219
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-122219: (1.5190951s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-122219 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-122219 status --format={{.Host}}: exit status 7 (76.52346ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-122219 --memory=2200 --kubernetes-version=v1.31.0-rc.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-122219 --memory=2200 --kubernetes-version=v1.31.0-rc.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m24.213255214s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-122219 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-122219 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-122219 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (80.76088ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-122219] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19370
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19370-13057/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19370-13057/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.0-rc.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-122219
	    minikube start -p kubernetes-upgrade-122219 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-1222192 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.0-rc.0, by running:
	    
	    minikube start -p kubernetes-upgrade-122219 --kubernetes-version=v1.31.0-rc.0
	    

                                                
                                                
** /stderr **
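If the older version really were needed at this point, option 1 from the suggestion above is the scripted path, roughly (sketch; the memory/driver/runtime flags are copied from the attempted downgrade command):

	out/minikube-linux-amd64 delete -p kubernetes-upgrade-122219
	out/minikube-linux-amd64 start -p kubernetes-upgrade-122219 --kubernetes-version=v1.20.0 \
	  --memory=2200 --driver=kvm2 --container-runtime=crio
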
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-122219 --memory=2200 --kubernetes-version=v1.31.0-rc.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-122219 --memory=2200 --kubernetes-version=v1.31.0-rc.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m22.596893561s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-08-06 08:23:31.74696019 +0000 UTC m=+4734.971770629
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-122219 -n kubernetes-upgrade-122219
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-122219 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-122219 logs -n 25: (2.332038399s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p kindnet-405163 sudo                               | kindnet-405163            | jenkins | v1.33.1 | 06 Aug 24 08:21 UTC |                     |
	|         | systemctl status cri-docker                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p kindnet-405163 sudo                               | kindnet-405163            | jenkins | v1.33.1 | 06 Aug 24 08:21 UTC | 06 Aug 24 08:21 UTC |
	|         | systemctl cat cri-docker                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p kindnet-405163 sudo cat                           | kindnet-405163            | jenkins | v1.33.1 | 06 Aug 24 08:21 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p kindnet-405163 sudo cat                           | kindnet-405163            | jenkins | v1.33.1 | 06 Aug 24 08:21 UTC | 06 Aug 24 08:21 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p kindnet-405163 sudo                               | kindnet-405163            | jenkins | v1.33.1 | 06 Aug 24 08:21 UTC | 06 Aug 24 08:21 UTC |
	|         | cri-dockerd --version                                |                           |         |         |                     |                     |
	| ssh     | -p kindnet-405163 sudo                               | kindnet-405163            | jenkins | v1.33.1 | 06 Aug 24 08:21 UTC |                     |
	|         | systemctl status containerd                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p kindnet-405163 sudo                               | kindnet-405163            | jenkins | v1.33.1 | 06 Aug 24 08:21 UTC | 06 Aug 24 08:21 UTC |
	|         | systemctl cat containerd                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p kindnet-405163 sudo cat                           | kindnet-405163            | jenkins | v1.33.1 | 06 Aug 24 08:21 UTC | 06 Aug 24 08:21 UTC |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p kindnet-405163 sudo cat                           | kindnet-405163            | jenkins | v1.33.1 | 06 Aug 24 08:21 UTC | 06 Aug 24 08:21 UTC |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p kindnet-405163 sudo                               | kindnet-405163            | jenkins | v1.33.1 | 06 Aug 24 08:21 UTC | 06 Aug 24 08:21 UTC |
	|         | containerd config dump                               |                           |         |         |                     |                     |
	| ssh     | -p kindnet-405163 sudo                               | kindnet-405163            | jenkins | v1.33.1 | 06 Aug 24 08:21 UTC | 06 Aug 24 08:21 UTC |
	|         | systemctl status crio --all                          |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p kindnet-405163 sudo                               | kindnet-405163            | jenkins | v1.33.1 | 06 Aug 24 08:21 UTC | 06 Aug 24 08:21 UTC |
	|         | systemctl cat crio --no-pager                        |                           |         |         |                     |                     |
	| ssh     | -p kindnet-405163 sudo find                          | kindnet-405163            | jenkins | v1.33.1 | 06 Aug 24 08:21 UTC | 06 Aug 24 08:21 UTC |
	|         | /etc/crio -type f -exec sh -c                        |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |                     |
	| ssh     | -p kindnet-405163 sudo crio                          | kindnet-405163            | jenkins | v1.33.1 | 06 Aug 24 08:21 UTC | 06 Aug 24 08:21 UTC |
	|         | config                                               |                           |         |         |                     |                     |
	| delete  | -p kindnet-405163                                    | kindnet-405163            | jenkins | v1.33.1 | 06 Aug 24 08:21 UTC | 06 Aug 24 08:21 UTC |
	| start   | -p custom-flannel-405163                             | custom-flannel-405163     | jenkins | v1.33.1 | 06 Aug 24 08:21 UTC |                     |
	|         | --memory=3072 --alsologtostderr                      |                           |         |         |                     |                     |
	|         | --wait=true --wait-timeout=15m                       |                           |         |         |                     |                     |
	|         | --cni=testdata/kube-flannel.yaml                     |                           |         |         |                     |                     |
	|         | --driver=kvm2                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| pause   | -p pause-728784                                      | pause-728784              | jenkins | v1.33.1 | 06 Aug 24 08:21 UTC | 06 Aug 24 08:21 UTC |
	|         | --alsologtostderr -v=5                               |                           |         |         |                     |                     |
	| unpause | -p pause-728784                                      | pause-728784              | jenkins | v1.33.1 | 06 Aug 24 08:21 UTC | 06 Aug 24 08:21 UTC |
	|         | --alsologtostderr -v=5                               |                           |         |         |                     |                     |
	| pause   | -p pause-728784                                      | pause-728784              | jenkins | v1.33.1 | 06 Aug 24 08:21 UTC | 06 Aug 24 08:22 UTC |
	|         | --alsologtostderr -v=5                               |                           |         |         |                     |                     |
	| delete  | -p pause-728784                                      | pause-728784              | jenkins | v1.33.1 | 06 Aug 24 08:22 UTC | 06 Aug 24 08:22 UTC |
	|         | --alsologtostderr -v=5                               |                           |         |         |                     |                     |
	| delete  | -p pause-728784                                      | pause-728784              | jenkins | v1.33.1 | 06 Aug 24 08:22 UTC | 06 Aug 24 08:22 UTC |
	| start   | -p enable-default-cni-405163                         | enable-default-cni-405163 | jenkins | v1.33.1 | 06 Aug 24 08:22 UTC |                     |
	|         | --memory=3072                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                                   |                           |         |         |                     |                     |
	|         | --enable-default-cni=true                            |                           |         |         |                     |                     |
	|         | --driver=kvm2                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-122219                         | kubernetes-upgrade-122219 | jenkins | v1.33.1 | 06 Aug 24 08:22 UTC |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                         |                           |         |         |                     |                     |
	|         | --driver=kvm2                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-122219                         | kubernetes-upgrade-122219 | jenkins | v1.33.1 | 06 Aug 24 08:22 UTC | 06 Aug 24 08:23 UTC |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0                    |                           |         |         |                     |                     |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| ssh     | -p calico-405163 pgrep -a                            | calico-405163             | jenkins | v1.33.1 | 06 Aug 24 08:23 UTC | 06 Aug 24 08:23 UTC |
	|         | kubelet                                              |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/06 08:22:09
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0806 08:22:09.189024   68099 out.go:291] Setting OutFile to fd 1 ...
	I0806 08:22:09.189166   68099 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 08:22:09.189181   68099 out.go:304] Setting ErrFile to fd 2...
	I0806 08:22:09.189190   68099 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 08:22:09.189411   68099 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19370-13057/.minikube/bin
	I0806 08:22:09.189955   68099 out.go:298] Setting JSON to false
	I0806 08:22:09.190959   68099 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":7475,"bootTime":1722925054,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0806 08:22:09.191019   68099 start.go:139] virtualization: kvm guest
	I0806 08:22:09.192946   68099 out.go:177] * [kubernetes-upgrade-122219] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0806 08:22:09.195016   68099 notify.go:220] Checking for updates...
	I0806 08:22:09.195023   68099 out.go:177]   - MINIKUBE_LOCATION=19370
	I0806 08:22:09.196555   68099 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0806 08:22:09.197900   68099 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19370-13057/kubeconfig
	I0806 08:22:09.199414   68099 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19370-13057/.minikube
	I0806 08:22:09.201005   68099 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0806 08:22:09.202344   68099 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0806 08:22:09.204284   68099 config.go:182] Loaded profile config "kubernetes-upgrade-122219": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-rc.0
	I0806 08:22:09.204928   68099 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:22:09.205007   68099 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:22:09.220486   68099 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39943
	I0806 08:22:09.220941   68099 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:22:09.221558   68099 main.go:141] libmachine: Using API Version  1
	I0806 08:22:09.221579   68099 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:22:09.221929   68099 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:22:09.222119   68099 main.go:141] libmachine: (kubernetes-upgrade-122219) Calling .DriverName
	I0806 08:22:09.222407   68099 driver.go:392] Setting default libvirt URI to qemu:///system
	I0806 08:22:09.222817   68099 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:22:09.222863   68099 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:22:09.240813   68099 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35123
	I0806 08:22:09.241334   68099 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:22:09.241785   68099 main.go:141] libmachine: Using API Version  1
	I0806 08:22:09.241808   68099 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:22:09.242165   68099 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:22:09.242335   68099 main.go:141] libmachine: (kubernetes-upgrade-122219) Calling .DriverName
	I0806 08:22:09.277511   68099 out.go:177] * Using the kvm2 driver based on existing profile
	I0806 08:22:09.279006   68099 start.go:297] selected driver: kvm2
	I0806 08:22:09.279022   68099 start.go:901] validating driver "kvm2" against &{Name:kubernetes-upgrade-122219 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.31.0-rc.0 ClusterName:kubernetes-upgrade-122219 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.53 Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptio
ns:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 08:22:09.279150   68099 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0806 08:22:09.280150   68099 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 08:22:09.280233   68099 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19370-13057/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0806 08:22:09.296653   68099 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0806 08:22:09.297080   68099 cni.go:84] Creating CNI manager for ""
	I0806 08:22:09.297105   68099 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0806 08:22:09.297143   68099 start.go:340] cluster config:
	{Name:kubernetes-upgrade-122219 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:kubernetes-upgrade-122219 Namesp
ace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.53 Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizati
ons:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 08:22:09.297240   68099 iso.go:125] acquiring lock: {Name:mke58d71de28babe305c5eb0c31c274e9713bb3f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 08:22:09.299838   68099 out.go:177] * Starting "kubernetes-upgrade-122219" primary control-plane node in "kubernetes-upgrade-122219" cluster
	I0806 08:22:08.518361   65825 main.go:141] libmachine: (calico-405163) DBG | domain calico-405163 has defined MAC address 52:54:00:40:bf:b8 in network mk-calico-405163
	I0806 08:22:08.518964   65825 main.go:141] libmachine: (calico-405163) DBG | unable to find current IP address of domain calico-405163 in network mk-calico-405163
	I0806 08:22:08.518985   65825 main.go:141] libmachine: (calico-405163) DBG | I0806 08:22:08.518926   67511 retry.go:31] will retry after 4.277840305s: waiting for machine to come up
	I0806 08:22:09.301302   68099 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime crio
	I0806 08:22:09.301341   68099 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19370-13057/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-rc.0-cri-o-overlay-amd64.tar.lz4
	I0806 08:22:09.301355   68099 cache.go:56] Caching tarball of preloaded images
	I0806 08:22:09.301437   68099 preload.go:172] Found /home/jenkins/minikube-integration/19370-13057/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-rc.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0806 08:22:09.301449   68099 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-rc.0 on crio
	I0806 08:22:09.301552   68099 profile.go:143] Saving config to /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/kubernetes-upgrade-122219/config.json ...
	I0806 08:22:09.301779   68099 start.go:360] acquireMachinesLock for kubernetes-upgrade-122219: {Name:mkc602ac9c8cc3360dcc0fcb8104303837aaab61 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0806 08:22:12.800705   65825 main.go:141] libmachine: (calico-405163) DBG | domain calico-405163 has defined MAC address 52:54:00:40:bf:b8 in network mk-calico-405163
	I0806 08:22:12.801228   65825 main.go:141] libmachine: (calico-405163) Found IP for machine: 192.168.39.37
	I0806 08:22:12.801246   65825 main.go:141] libmachine: (calico-405163) Reserving static IP address...
	I0806 08:22:12.801270   65825 main.go:141] libmachine: (calico-405163) DBG | domain calico-405163 has current primary IP address 192.168.39.37 and MAC address 52:54:00:40:bf:b8 in network mk-calico-405163
	I0806 08:22:12.801588   65825 main.go:141] libmachine: (calico-405163) DBG | unable to find host DHCP lease matching {name: "calico-405163", mac: "52:54:00:40:bf:b8", ip: "192.168.39.37"} in network mk-calico-405163
	I0806 08:22:12.880150   65825 main.go:141] libmachine: (calico-405163) DBG | Getting to WaitForSSH function...
	I0806 08:22:12.880194   65825 main.go:141] libmachine: (calico-405163) Reserved static IP address: 192.168.39.37
	I0806 08:22:12.880208   65825 main.go:141] libmachine: (calico-405163) Waiting for SSH to be available...
	I0806 08:22:12.882966   65825 main.go:141] libmachine: (calico-405163) DBG | domain calico-405163 has defined MAC address 52:54:00:40:bf:b8 in network mk-calico-405163
	I0806 08:22:12.883510   65825 main.go:141] libmachine: (calico-405163) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:40:bf:b8", ip: ""} in network mk-calico-405163
	I0806 08:22:12.883531   65825 main.go:141] libmachine: (calico-405163) DBG | unable to find defined IP address of network mk-calico-405163 interface with MAC address 52:54:00:40:bf:b8
	I0806 08:22:12.883701   65825 main.go:141] libmachine: (calico-405163) DBG | Using SSH client type: external
	I0806 08:22:12.883727   65825 main.go:141] libmachine: (calico-405163) DBG | Using SSH private key: /home/jenkins/minikube-integration/19370-13057/.minikube/machines/calico-405163/id_rsa (-rw-------)
	I0806 08:22:12.883768   65825 main.go:141] libmachine: (calico-405163) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19370-13057/.minikube/machines/calico-405163/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0806 08:22:12.883787   65825 main.go:141] libmachine: (calico-405163) DBG | About to run SSH command:
	I0806 08:22:12.883803   65825 main.go:141] libmachine: (calico-405163) DBG | exit 0
	I0806 08:22:12.887396   65825 main.go:141] libmachine: (calico-405163) DBG | SSH cmd err, output: exit status 255: 
	I0806 08:22:12.887425   65825 main.go:141] libmachine: (calico-405163) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0806 08:22:12.887432   65825 main.go:141] libmachine: (calico-405163) DBG | command : exit 0
	I0806 08:22:12.887437   65825 main.go:141] libmachine: (calico-405163) DBG | err     : exit status 255
	I0806 08:22:12.887470   65825 main.go:141] libmachine: (calico-405163) DBG | output  : 
	I0806 08:22:15.888456   65825 main.go:141] libmachine: (calico-405163) DBG | Getting to WaitForSSH function...
	I0806 08:22:15.891011   65825 main.go:141] libmachine: (calico-405163) DBG | domain calico-405163 has defined MAC address 52:54:00:40:bf:b8 in network mk-calico-405163
	I0806 08:22:15.891440   65825 main.go:141] libmachine: (calico-405163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:bf:b8", ip: ""} in network mk-calico-405163: {Iface:virbr2 ExpiryTime:2024-08-06 09:22:04 +0000 UTC Type:0 Mac:52:54:00:40:bf:b8 Iaid: IPaddr:192.168.39.37 Prefix:24 Hostname:calico-405163 Clientid:01:52:54:00:40:bf:b8}
	I0806 08:22:15.891467   65825 main.go:141] libmachine: (calico-405163) DBG | domain calico-405163 has defined IP address 192.168.39.37 and MAC address 52:54:00:40:bf:b8 in network mk-calico-405163
	I0806 08:22:15.891570   65825 main.go:141] libmachine: (calico-405163) DBG | Using SSH client type: external
	I0806 08:22:15.891625   65825 main.go:141] libmachine: (calico-405163) DBG | Using SSH private key: /home/jenkins/minikube-integration/19370-13057/.minikube/machines/calico-405163/id_rsa (-rw-------)
	I0806 08:22:15.891650   65825 main.go:141] libmachine: (calico-405163) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.37 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19370-13057/.minikube/machines/calico-405163/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0806 08:22:15.891658   65825 main.go:141] libmachine: (calico-405163) DBG | About to run SSH command:
	I0806 08:22:15.891667   65825 main.go:141] libmachine: (calico-405163) DBG | exit 0
	I0806 08:22:16.020803   65825 main.go:141] libmachine: (calico-405163) DBG | SSH cmd err, output: <nil>: 
	I0806 08:22:16.021113   65825 main.go:141] libmachine: (calico-405163) KVM machine creation complete!
	I0806 08:22:16.021402   65825 main.go:141] libmachine: (calico-405163) Calling .GetConfigRaw
	I0806 08:22:16.021878   65825 main.go:141] libmachine: (calico-405163) Calling .DriverName
	I0806 08:22:16.022076   65825 main.go:141] libmachine: (calico-405163) Calling .DriverName
	I0806 08:22:16.022195   65825 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0806 08:22:16.022209   65825 main.go:141] libmachine: (calico-405163) Calling .GetState
	I0806 08:22:16.023794   65825 main.go:141] libmachine: Detecting operating system of created instance...
	I0806 08:22:16.023806   65825 main.go:141] libmachine: Waiting for SSH to be available...
	I0806 08:22:16.023810   65825 main.go:141] libmachine: Getting to WaitForSSH function...
	I0806 08:22:16.023815   65825 main.go:141] libmachine: (calico-405163) Calling .GetSSHHostname
	I0806 08:22:16.026346   65825 main.go:141] libmachine: (calico-405163) DBG | domain calico-405163 has defined MAC address 52:54:00:40:bf:b8 in network mk-calico-405163
	I0806 08:22:16.026681   65825 main.go:141] libmachine: (calico-405163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:bf:b8", ip: ""} in network mk-calico-405163: {Iface:virbr2 ExpiryTime:2024-08-06 09:22:04 +0000 UTC Type:0 Mac:52:54:00:40:bf:b8 Iaid: IPaddr:192.168.39.37 Prefix:24 Hostname:calico-405163 Clientid:01:52:54:00:40:bf:b8}
	I0806 08:22:16.026711   65825 main.go:141] libmachine: (calico-405163) DBG | domain calico-405163 has defined IP address 192.168.39.37 and MAC address 52:54:00:40:bf:b8 in network mk-calico-405163
	I0806 08:22:16.026860   65825 main.go:141] libmachine: (calico-405163) Calling .GetSSHPort
	I0806 08:22:16.027012   65825 main.go:141] libmachine: (calico-405163) Calling .GetSSHKeyPath
	I0806 08:22:16.027151   65825 main.go:141] libmachine: (calico-405163) Calling .GetSSHKeyPath
	I0806 08:22:16.027267   65825 main.go:141] libmachine: (calico-405163) Calling .GetSSHUsername
	I0806 08:22:16.027473   65825 main.go:141] libmachine: Using SSH client type: native
	I0806 08:22:16.027746   65825 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.37 22 <nil> <nil>}
	I0806 08:22:16.027760   65825 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0806 08:22:17.577328   67412 start.go:364] duration metric: took 47.177784212s to acquireMachinesLock for "custom-flannel-405163"
	I0806 08:22:17.577418   67412 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-405163 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.30.3 ClusterName:custom-flannel-405163 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Moun
tMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0806 08:22:17.577533   67412 start.go:125] createHost starting for "" (driver="kvm2")
	I0806 08:22:16.139911   65825 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0806 08:22:16.139934   65825 main.go:141] libmachine: Detecting the provisioner...
	I0806 08:22:16.139945   65825 main.go:141] libmachine: (calico-405163) Calling .GetSSHHostname
	I0806 08:22:16.143006   65825 main.go:141] libmachine: (calico-405163) DBG | domain calico-405163 has defined MAC address 52:54:00:40:bf:b8 in network mk-calico-405163
	I0806 08:22:16.143388   65825 main.go:141] libmachine: (calico-405163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:bf:b8", ip: ""} in network mk-calico-405163: {Iface:virbr2 ExpiryTime:2024-08-06 09:22:04 +0000 UTC Type:0 Mac:52:54:00:40:bf:b8 Iaid: IPaddr:192.168.39.37 Prefix:24 Hostname:calico-405163 Clientid:01:52:54:00:40:bf:b8}
	I0806 08:22:16.143418   65825 main.go:141] libmachine: (calico-405163) DBG | domain calico-405163 has defined IP address 192.168.39.37 and MAC address 52:54:00:40:bf:b8 in network mk-calico-405163
	I0806 08:22:16.143599   65825 main.go:141] libmachine: (calico-405163) Calling .GetSSHPort
	I0806 08:22:16.143768   65825 main.go:141] libmachine: (calico-405163) Calling .GetSSHKeyPath
	I0806 08:22:16.143891   65825 main.go:141] libmachine: (calico-405163) Calling .GetSSHKeyPath
	I0806 08:22:16.143995   65825 main.go:141] libmachine: (calico-405163) Calling .GetSSHUsername
	I0806 08:22:16.144145   65825 main.go:141] libmachine: Using SSH client type: native
	I0806 08:22:16.144407   65825 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.37 22 <nil> <nil>}
	I0806 08:22:16.144424   65825 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0806 08:22:16.261475   65825 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0806 08:22:16.261536   65825 main.go:141] libmachine: found compatible host: buildroot
	I0806 08:22:16.261544   65825 main.go:141] libmachine: Provisioning with buildroot...
	I0806 08:22:16.261663   65825 main.go:141] libmachine: (calico-405163) Calling .GetMachineName
	I0806 08:22:16.261927   65825 buildroot.go:166] provisioning hostname "calico-405163"
	I0806 08:22:16.261963   65825 main.go:141] libmachine: (calico-405163) Calling .GetMachineName
	I0806 08:22:16.262282   65825 main.go:141] libmachine: (calico-405163) Calling .GetSSHHostname
	I0806 08:22:16.264959   65825 main.go:141] libmachine: (calico-405163) DBG | domain calico-405163 has defined MAC address 52:54:00:40:bf:b8 in network mk-calico-405163
	I0806 08:22:16.265381   65825 main.go:141] libmachine: (calico-405163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:bf:b8", ip: ""} in network mk-calico-405163: {Iface:virbr2 ExpiryTime:2024-08-06 09:22:04 +0000 UTC Type:0 Mac:52:54:00:40:bf:b8 Iaid: IPaddr:192.168.39.37 Prefix:24 Hostname:calico-405163 Clientid:01:52:54:00:40:bf:b8}
	I0806 08:22:16.265410   65825 main.go:141] libmachine: (calico-405163) DBG | domain calico-405163 has defined IP address 192.168.39.37 and MAC address 52:54:00:40:bf:b8 in network mk-calico-405163
	I0806 08:22:16.265580   65825 main.go:141] libmachine: (calico-405163) Calling .GetSSHPort
	I0806 08:22:16.265774   65825 main.go:141] libmachine: (calico-405163) Calling .GetSSHKeyPath
	I0806 08:22:16.265956   65825 main.go:141] libmachine: (calico-405163) Calling .GetSSHKeyPath
	I0806 08:22:16.266069   65825 main.go:141] libmachine: (calico-405163) Calling .GetSSHUsername
	I0806 08:22:16.266247   65825 main.go:141] libmachine: Using SSH client type: native
	I0806 08:22:16.266456   65825 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.37 22 <nil> <nil>}
	I0806 08:22:16.266478   65825 main.go:141] libmachine: About to run SSH command:
	sudo hostname calico-405163 && echo "calico-405163" | sudo tee /etc/hostname
	I0806 08:22:16.395544   65825 main.go:141] libmachine: SSH cmd err, output: <nil>: calico-405163
	
	I0806 08:22:16.395571   65825 main.go:141] libmachine: (calico-405163) Calling .GetSSHHostname
	I0806 08:22:16.399774   65825 main.go:141] libmachine: (calico-405163) DBG | domain calico-405163 has defined MAC address 52:54:00:40:bf:b8 in network mk-calico-405163
	I0806 08:22:16.400199   65825 main.go:141] libmachine: (calico-405163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:bf:b8", ip: ""} in network mk-calico-405163: {Iface:virbr2 ExpiryTime:2024-08-06 09:22:04 +0000 UTC Type:0 Mac:52:54:00:40:bf:b8 Iaid: IPaddr:192.168.39.37 Prefix:24 Hostname:calico-405163 Clientid:01:52:54:00:40:bf:b8}
	I0806 08:22:16.400233   65825 main.go:141] libmachine: (calico-405163) DBG | domain calico-405163 has defined IP address 192.168.39.37 and MAC address 52:54:00:40:bf:b8 in network mk-calico-405163
	I0806 08:22:16.400469   65825 main.go:141] libmachine: (calico-405163) Calling .GetSSHPort
	I0806 08:22:16.400636   65825 main.go:141] libmachine: (calico-405163) Calling .GetSSHKeyPath
	I0806 08:22:16.400824   65825 main.go:141] libmachine: (calico-405163) Calling .GetSSHKeyPath
	I0806 08:22:16.401013   65825 main.go:141] libmachine: (calico-405163) Calling .GetSSHUsername
	I0806 08:22:16.401200   65825 main.go:141] libmachine: Using SSH client type: native
	I0806 08:22:16.401380   65825 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.37 22 <nil> <nil>}
	I0806 08:22:16.401406   65825 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-405163' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-405163/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-405163' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0806 08:22:16.525430   65825 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0806 08:22:16.525457   65825 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19370-13057/.minikube CaCertPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19370-13057/.minikube}
	I0806 08:22:16.525485   65825 buildroot.go:174] setting up certificates
	I0806 08:22:16.525493   65825 provision.go:84] configureAuth start
	I0806 08:22:16.525502   65825 main.go:141] libmachine: (calico-405163) Calling .GetMachineName
	I0806 08:22:16.525774   65825 main.go:141] libmachine: (calico-405163) Calling .GetIP
	I0806 08:22:16.528522   65825 main.go:141] libmachine: (calico-405163) DBG | domain calico-405163 has defined MAC address 52:54:00:40:bf:b8 in network mk-calico-405163
	I0806 08:22:16.528937   65825 main.go:141] libmachine: (calico-405163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:bf:b8", ip: ""} in network mk-calico-405163: {Iface:virbr2 ExpiryTime:2024-08-06 09:22:04 +0000 UTC Type:0 Mac:52:54:00:40:bf:b8 Iaid: IPaddr:192.168.39.37 Prefix:24 Hostname:calico-405163 Clientid:01:52:54:00:40:bf:b8}
	I0806 08:22:16.528964   65825 main.go:141] libmachine: (calico-405163) DBG | domain calico-405163 has defined IP address 192.168.39.37 and MAC address 52:54:00:40:bf:b8 in network mk-calico-405163
	I0806 08:22:16.529116   65825 main.go:141] libmachine: (calico-405163) Calling .GetSSHHostname
	I0806 08:22:16.531662   65825 main.go:141] libmachine: (calico-405163) DBG | domain calico-405163 has defined MAC address 52:54:00:40:bf:b8 in network mk-calico-405163
	I0806 08:22:16.532031   65825 main.go:141] libmachine: (calico-405163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:bf:b8", ip: ""} in network mk-calico-405163: {Iface:virbr2 ExpiryTime:2024-08-06 09:22:04 +0000 UTC Type:0 Mac:52:54:00:40:bf:b8 Iaid: IPaddr:192.168.39.37 Prefix:24 Hostname:calico-405163 Clientid:01:52:54:00:40:bf:b8}
	I0806 08:22:16.532063   65825 main.go:141] libmachine: (calico-405163) DBG | domain calico-405163 has defined IP address 192.168.39.37 and MAC address 52:54:00:40:bf:b8 in network mk-calico-405163
	I0806 08:22:16.532225   65825 provision.go:143] copyHostCerts
	I0806 08:22:16.532296   65825 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-13057/.minikube/ca.pem, removing ...
	I0806 08:22:16.532308   65825 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-13057/.minikube/ca.pem
	I0806 08:22:16.532406   65825 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19370-13057/.minikube/ca.pem (1078 bytes)
	I0806 08:22:16.532528   65825 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-13057/.minikube/cert.pem, removing ...
	I0806 08:22:16.532539   65825 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-13057/.minikube/cert.pem
	I0806 08:22:16.532565   65825 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19370-13057/.minikube/cert.pem (1123 bytes)
	I0806 08:22:16.532643   65825 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-13057/.minikube/key.pem, removing ...
	I0806 08:22:16.532651   65825 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-13057/.minikube/key.pem
	I0806 08:22:16.532671   65825 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19370-13057/.minikube/key.pem (1675 bytes)
	I0806 08:22:16.532760   65825 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19370-13057/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca-key.pem org=jenkins.calico-405163 san=[127.0.0.1 192.168.39.37 calico-405163 localhost minikube]
	I0806 08:22:16.860818   65825 provision.go:177] copyRemoteCerts
	I0806 08:22:16.860878   65825 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0806 08:22:16.860923   65825 main.go:141] libmachine: (calico-405163) Calling .GetSSHHostname
	I0806 08:22:16.863830   65825 main.go:141] libmachine: (calico-405163) DBG | domain calico-405163 has defined MAC address 52:54:00:40:bf:b8 in network mk-calico-405163
	I0806 08:22:16.864132   65825 main.go:141] libmachine: (calico-405163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:bf:b8", ip: ""} in network mk-calico-405163: {Iface:virbr2 ExpiryTime:2024-08-06 09:22:04 +0000 UTC Type:0 Mac:52:54:00:40:bf:b8 Iaid: IPaddr:192.168.39.37 Prefix:24 Hostname:calico-405163 Clientid:01:52:54:00:40:bf:b8}
	I0806 08:22:16.864159   65825 main.go:141] libmachine: (calico-405163) DBG | domain calico-405163 has defined IP address 192.168.39.37 and MAC address 52:54:00:40:bf:b8 in network mk-calico-405163
	I0806 08:22:16.864408   65825 main.go:141] libmachine: (calico-405163) Calling .GetSSHPort
	I0806 08:22:16.864623   65825 main.go:141] libmachine: (calico-405163) Calling .GetSSHKeyPath
	I0806 08:22:16.864769   65825 main.go:141] libmachine: (calico-405163) Calling .GetSSHUsername
	I0806 08:22:16.864987   65825 sshutil.go:53] new ssh client: &{IP:192.168.39.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/calico-405163/id_rsa Username:docker}
	I0806 08:22:16.954783   65825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0806 08:22:16.981164   65825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0806 08:22:17.005058   65825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0806 08:22:17.029207   65825 provision.go:87] duration metric: took 503.702118ms to configureAuth
	I0806 08:22:17.029241   65825 buildroot.go:189] setting minikube options for container-runtime
	I0806 08:22:17.029399   65825 config.go:182] Loaded profile config "calico-405163": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0806 08:22:17.029500   65825 main.go:141] libmachine: (calico-405163) Calling .GetSSHHostname
	I0806 08:22:17.032206   65825 main.go:141] libmachine: (calico-405163) DBG | domain calico-405163 has defined MAC address 52:54:00:40:bf:b8 in network mk-calico-405163
	I0806 08:22:17.032660   65825 main.go:141] libmachine: (calico-405163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:bf:b8", ip: ""} in network mk-calico-405163: {Iface:virbr2 ExpiryTime:2024-08-06 09:22:04 +0000 UTC Type:0 Mac:52:54:00:40:bf:b8 Iaid: IPaddr:192.168.39.37 Prefix:24 Hostname:calico-405163 Clientid:01:52:54:00:40:bf:b8}
	I0806 08:22:17.032683   65825 main.go:141] libmachine: (calico-405163) DBG | domain calico-405163 has defined IP address 192.168.39.37 and MAC address 52:54:00:40:bf:b8 in network mk-calico-405163
	I0806 08:22:17.032944   65825 main.go:141] libmachine: (calico-405163) Calling .GetSSHPort
	I0806 08:22:17.033169   65825 main.go:141] libmachine: (calico-405163) Calling .GetSSHKeyPath
	I0806 08:22:17.033338   65825 main.go:141] libmachine: (calico-405163) Calling .GetSSHKeyPath
	I0806 08:22:17.033467   65825 main.go:141] libmachine: (calico-405163) Calling .GetSSHUsername
	I0806 08:22:17.033603   65825 main.go:141] libmachine: Using SSH client type: native
	I0806 08:22:17.033766   65825 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.37 22 <nil> <nil>}
	I0806 08:22:17.033784   65825 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0806 08:22:17.319991   65825 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0806 08:22:17.320030   65825 main.go:141] libmachine: Checking connection to Docker...
	I0806 08:22:17.320040   65825 main.go:141] libmachine: (calico-405163) Calling .GetURL
	I0806 08:22:17.321443   65825 main.go:141] libmachine: (calico-405163) DBG | Using libvirt version 6000000
	I0806 08:22:17.323694   65825 main.go:141] libmachine: (calico-405163) DBG | domain calico-405163 has defined MAC address 52:54:00:40:bf:b8 in network mk-calico-405163
	I0806 08:22:17.324032   65825 main.go:141] libmachine: (calico-405163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:bf:b8", ip: ""} in network mk-calico-405163: {Iface:virbr2 ExpiryTime:2024-08-06 09:22:04 +0000 UTC Type:0 Mac:52:54:00:40:bf:b8 Iaid: IPaddr:192.168.39.37 Prefix:24 Hostname:calico-405163 Clientid:01:52:54:00:40:bf:b8}
	I0806 08:22:17.324071   65825 main.go:141] libmachine: (calico-405163) DBG | domain calico-405163 has defined IP address 192.168.39.37 and MAC address 52:54:00:40:bf:b8 in network mk-calico-405163
	I0806 08:22:17.324266   65825 main.go:141] libmachine: Docker is up and running!
	I0806 08:22:17.324277   65825 main.go:141] libmachine: Reticulating splines...
	I0806 08:22:17.324283   65825 client.go:171] duration metric: took 28.123822473s to LocalClient.Create
	I0806 08:22:17.324313   65825 start.go:167] duration metric: took 28.123918522s to libmachine.API.Create "calico-405163"
	I0806 08:22:17.324325   65825 start.go:293] postStartSetup for "calico-405163" (driver="kvm2")
	I0806 08:22:17.324341   65825 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0806 08:22:17.324380   65825 main.go:141] libmachine: (calico-405163) Calling .DriverName
	I0806 08:22:17.324605   65825 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0806 08:22:17.324635   65825 main.go:141] libmachine: (calico-405163) Calling .GetSSHHostname
	I0806 08:22:17.326769   65825 main.go:141] libmachine: (calico-405163) DBG | domain calico-405163 has defined MAC address 52:54:00:40:bf:b8 in network mk-calico-405163
	I0806 08:22:17.327097   65825 main.go:141] libmachine: (calico-405163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:bf:b8", ip: ""} in network mk-calico-405163: {Iface:virbr2 ExpiryTime:2024-08-06 09:22:04 +0000 UTC Type:0 Mac:52:54:00:40:bf:b8 Iaid: IPaddr:192.168.39.37 Prefix:24 Hostname:calico-405163 Clientid:01:52:54:00:40:bf:b8}
	I0806 08:22:17.327122   65825 main.go:141] libmachine: (calico-405163) DBG | domain calico-405163 has defined IP address 192.168.39.37 and MAC address 52:54:00:40:bf:b8 in network mk-calico-405163
	I0806 08:22:17.327287   65825 main.go:141] libmachine: (calico-405163) Calling .GetSSHPort
	I0806 08:22:17.327462   65825 main.go:141] libmachine: (calico-405163) Calling .GetSSHKeyPath
	I0806 08:22:17.327633   65825 main.go:141] libmachine: (calico-405163) Calling .GetSSHUsername
	I0806 08:22:17.327806   65825 sshutil.go:53] new ssh client: &{IP:192.168.39.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/calico-405163/id_rsa Username:docker}
	I0806 08:22:17.415343   65825 ssh_runner.go:195] Run: cat /etc/os-release
	I0806 08:22:17.419552   65825 info.go:137] Remote host: Buildroot 2023.02.9
	I0806 08:22:17.419575   65825 filesync.go:126] Scanning /home/jenkins/minikube-integration/19370-13057/.minikube/addons for local assets ...
	I0806 08:22:17.419653   65825 filesync.go:126] Scanning /home/jenkins/minikube-integration/19370-13057/.minikube/files for local assets ...
	I0806 08:22:17.419766   65825 filesync.go:149] local asset: /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem -> 202462.pem in /etc/ssl/certs
	I0806 08:22:17.419868   65825 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0806 08:22:17.429544   65825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem --> /etc/ssl/certs/202462.pem (1708 bytes)
	I0806 08:22:17.453110   65825 start.go:296] duration metric: took 128.768515ms for postStartSetup
	I0806 08:22:17.453182   65825 main.go:141] libmachine: (calico-405163) Calling .GetConfigRaw
	I0806 08:22:17.453738   65825 main.go:141] libmachine: (calico-405163) Calling .GetIP
	I0806 08:22:17.456154   65825 main.go:141] libmachine: (calico-405163) DBG | domain calico-405163 has defined MAC address 52:54:00:40:bf:b8 in network mk-calico-405163
	I0806 08:22:17.456485   65825 main.go:141] libmachine: (calico-405163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:bf:b8", ip: ""} in network mk-calico-405163: {Iface:virbr2 ExpiryTime:2024-08-06 09:22:04 +0000 UTC Type:0 Mac:52:54:00:40:bf:b8 Iaid: IPaddr:192.168.39.37 Prefix:24 Hostname:calico-405163 Clientid:01:52:54:00:40:bf:b8}
	I0806 08:22:17.456518   65825 main.go:141] libmachine: (calico-405163) DBG | domain calico-405163 has defined IP address 192.168.39.37 and MAC address 52:54:00:40:bf:b8 in network mk-calico-405163
	I0806 08:22:17.456870   65825 profile.go:143] Saving config to /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/calico-405163/config.json ...
	I0806 08:22:17.457118   65825 start.go:128] duration metric: took 28.279276097s to createHost
	I0806 08:22:17.457146   65825 main.go:141] libmachine: (calico-405163) Calling .GetSSHHostname
	I0806 08:22:17.459579   65825 main.go:141] libmachine: (calico-405163) DBG | domain calico-405163 has defined MAC address 52:54:00:40:bf:b8 in network mk-calico-405163
	I0806 08:22:17.460079   65825 main.go:141] libmachine: (calico-405163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:bf:b8", ip: ""} in network mk-calico-405163: {Iface:virbr2 ExpiryTime:2024-08-06 09:22:04 +0000 UTC Type:0 Mac:52:54:00:40:bf:b8 Iaid: IPaddr:192.168.39.37 Prefix:24 Hostname:calico-405163 Clientid:01:52:54:00:40:bf:b8}
	I0806 08:22:17.460106   65825 main.go:141] libmachine: (calico-405163) DBG | domain calico-405163 has defined IP address 192.168.39.37 and MAC address 52:54:00:40:bf:b8 in network mk-calico-405163
	I0806 08:22:17.460282   65825 main.go:141] libmachine: (calico-405163) Calling .GetSSHPort
	I0806 08:22:17.460521   65825 main.go:141] libmachine: (calico-405163) Calling .GetSSHKeyPath
	I0806 08:22:17.460743   65825 main.go:141] libmachine: (calico-405163) Calling .GetSSHKeyPath
	I0806 08:22:17.460951   65825 main.go:141] libmachine: (calico-405163) Calling .GetSSHUsername
	I0806 08:22:17.461125   65825 main.go:141] libmachine: Using SSH client type: native
	I0806 08:22:17.461312   65825 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.37 22 <nil> <nil>}
	I0806 08:22:17.461326   65825 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0806 08:22:17.577172   65825 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722932537.557180024
	
	I0806 08:22:17.577190   65825 fix.go:216] guest clock: 1722932537.557180024
	I0806 08:22:17.577197   65825 fix.go:229] Guest: 2024-08-06 08:22:17.557180024 +0000 UTC Remote: 2024-08-06 08:22:17.457132886 +0000 UTC m=+91.409493967 (delta=100.047138ms)
	I0806 08:22:17.577241   65825 fix.go:200] guest clock delta is within tolerance: 100.047138ms
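	(The fix.go lines above compare the guest's `date +%s.%N` output against the host clock and accept the ~100ms difference. A minimal Go sketch of that comparison, for illustration only; the 2-second tolerance used here is an assumption, not the value minikube actually applies.)

	package main

	import (
		"fmt"
		"time"
	)

	// clockDeltaWithinTolerance reports the absolute guest/host clock difference
	// and whether it falls inside the given tolerance.
	func clockDeltaWithinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
		delta := guest.Sub(host)
		if delta < 0 {
			delta = -delta
		}
		return delta, delta <= tolerance
	}

	func main() {
		host := time.Now()
		guest := host.Add(100 * time.Millisecond) // roughly the ~100ms delta seen in this log
		delta, ok := clockDeltaWithinTolerance(guest, host, 2*time.Second) // tolerance value is assumed
		fmt.Printf("delta=%v withinTolerance=%v\n", delta, ok)
	}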
	I0806 08:22:17.577248   65825 start.go:83] releasing machines lock for "calico-405163", held for 28.399600406s
	I0806 08:22:17.577278   65825 main.go:141] libmachine: (calico-405163) Calling .DriverName
	I0806 08:22:17.577536   65825 main.go:141] libmachine: (calico-405163) Calling .GetIP
	I0806 08:22:17.580620   65825 main.go:141] libmachine: (calico-405163) DBG | domain calico-405163 has defined MAC address 52:54:00:40:bf:b8 in network mk-calico-405163
	I0806 08:22:17.580991   65825 main.go:141] libmachine: (calico-405163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:bf:b8", ip: ""} in network mk-calico-405163: {Iface:virbr2 ExpiryTime:2024-08-06 09:22:04 +0000 UTC Type:0 Mac:52:54:00:40:bf:b8 Iaid: IPaddr:192.168.39.37 Prefix:24 Hostname:calico-405163 Clientid:01:52:54:00:40:bf:b8}
	I0806 08:22:17.581014   65825 main.go:141] libmachine: (calico-405163) DBG | domain calico-405163 has defined IP address 192.168.39.37 and MAC address 52:54:00:40:bf:b8 in network mk-calico-405163
	I0806 08:22:17.581178   65825 main.go:141] libmachine: (calico-405163) Calling .DriverName
	I0806 08:22:17.581763   65825 main.go:141] libmachine: (calico-405163) Calling .DriverName
	I0806 08:22:17.581971   65825 main.go:141] libmachine: (calico-405163) Calling .DriverName
	I0806 08:22:17.582083   65825 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0806 08:22:17.582125   65825 main.go:141] libmachine: (calico-405163) Calling .GetSSHHostname
	I0806 08:22:17.582237   65825 ssh_runner.go:195] Run: cat /version.json
	I0806 08:22:17.582261   65825 main.go:141] libmachine: (calico-405163) Calling .GetSSHHostname
	I0806 08:22:17.584828   65825 main.go:141] libmachine: (calico-405163) DBG | domain calico-405163 has defined MAC address 52:54:00:40:bf:b8 in network mk-calico-405163
	I0806 08:22:17.585073   65825 main.go:141] libmachine: (calico-405163) DBG | domain calico-405163 has defined MAC address 52:54:00:40:bf:b8 in network mk-calico-405163
	I0806 08:22:17.585219   65825 main.go:141] libmachine: (calico-405163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:bf:b8", ip: ""} in network mk-calico-405163: {Iface:virbr2 ExpiryTime:2024-08-06 09:22:04 +0000 UTC Type:0 Mac:52:54:00:40:bf:b8 Iaid: IPaddr:192.168.39.37 Prefix:24 Hostname:calico-405163 Clientid:01:52:54:00:40:bf:b8}
	I0806 08:22:17.585242   65825 main.go:141] libmachine: (calico-405163) DBG | domain calico-405163 has defined IP address 192.168.39.37 and MAC address 52:54:00:40:bf:b8 in network mk-calico-405163
	I0806 08:22:17.585488   65825 main.go:141] libmachine: (calico-405163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:bf:b8", ip: ""} in network mk-calico-405163: {Iface:virbr2 ExpiryTime:2024-08-06 09:22:04 +0000 UTC Type:0 Mac:52:54:00:40:bf:b8 Iaid: IPaddr:192.168.39.37 Prefix:24 Hostname:calico-405163 Clientid:01:52:54:00:40:bf:b8}
	I0806 08:22:17.585507   65825 main.go:141] libmachine: (calico-405163) Calling .GetSSHPort
	I0806 08:22:17.585518   65825 main.go:141] libmachine: (calico-405163) DBG | domain calico-405163 has defined IP address 192.168.39.37 and MAC address 52:54:00:40:bf:b8 in network mk-calico-405163
	I0806 08:22:17.585632   65825 main.go:141] libmachine: (calico-405163) Calling .GetSSHPort
	I0806 08:22:17.585737   65825 main.go:141] libmachine: (calico-405163) Calling .GetSSHKeyPath
	I0806 08:22:17.585809   65825 main.go:141] libmachine: (calico-405163) Calling .GetSSHKeyPath
	I0806 08:22:17.585881   65825 main.go:141] libmachine: (calico-405163) Calling .GetSSHUsername
	I0806 08:22:17.585940   65825 main.go:141] libmachine: (calico-405163) Calling .GetSSHUsername
	I0806 08:22:17.586027   65825 sshutil.go:53] new ssh client: &{IP:192.168.39.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/calico-405163/id_rsa Username:docker}
	I0806 08:22:17.586102   65825 sshutil.go:53] new ssh client: &{IP:192.168.39.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/calico-405163/id_rsa Username:docker}
	I0806 08:22:17.694070   65825 ssh_runner.go:195] Run: systemctl --version
	I0806 08:22:17.700978   65825 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0806 08:22:17.863931   65825 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0806 08:22:17.870225   65825 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0806 08:22:17.870302   65825 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0806 08:22:17.887245   65825 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
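	(The find/mv step above renames any bridge or podman CNI configs so they do not conflict with the cluster's own CNI. A rough Go equivalent, purely for illustration — the run itself issues the shell command shown in the log.)

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
		"strings"
	)

	// disableBridgeCNIConfigs renames bridge/podman CNI config files in dir to
	// <name>.mk_disabled, skipping files that are already disabled.
	func disableBridgeCNIConfigs(dir string) ([]string, error) {
		entries, err := os.ReadDir(dir)
		if err != nil {
			return nil, err
		}
		var disabled []string
		for _, e := range entries {
			name := e.Name()
			if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
				continue
			}
			if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
				src := filepath.Join(dir, name)
				if err := os.Rename(src, src+".mk_disabled"); err != nil {
					return disabled, err
				}
				disabled = append(disabled, src)
			}
		}
		return disabled, nil
	}

	func main() {
		disabled, err := disableBridgeCNIConfigs("/etc/cni/net.d")
		if err != nil {
			fmt.Println("error:", err)
		}
		fmt.Println("disabled:", disabled)
	}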
	I0806 08:22:17.887270   65825 start.go:495] detecting cgroup driver to use...
	I0806 08:22:17.887341   65825 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0806 08:22:17.906511   65825 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0806 08:22:17.921344   65825 docker.go:217] disabling cri-docker service (if available) ...
	I0806 08:22:17.921411   65825 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0806 08:22:17.938044   65825 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0806 08:22:17.952618   65825 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0806 08:22:18.093970   65825 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0806 08:22:18.237986   65825 docker.go:233] disabling docker service ...
	I0806 08:22:18.238064   65825 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0806 08:22:18.252538   65825 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0806 08:22:18.266690   65825 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0806 08:22:18.406776   65825 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0806 08:22:18.524150   65825 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0806 08:22:18.539063   65825 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0806 08:22:18.557511   65825 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0806 08:22:18.557585   65825 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:22:18.568071   65825 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0806 08:22:18.568132   65825 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:22:18.578749   65825 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:22:18.589604   65825 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:22:18.600078   65825 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0806 08:22:18.610687   65825 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:22:18.621293   65825 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:22:18.640525   65825 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
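	(The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image to registry.k8s.io/pause:3.9 and switch cgroup_manager to cgroupfs. A small Go sketch of the same substitutions on an in-memory copy of the drop-in; the starting file contents below are assumed for illustration.)

	package main

	import (
		"fmt"
		"regexp"
	)

	func main() {
		// Hypothetical starting contents of the 02-crio.conf drop-in.
		conf := `pause_image = "registry.k8s.io/pause:3.8"
	cgroup_manager = "systemd"
	`
		pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
		cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)

		// Same effect as the two sed invocations logged above.
		conf = pause.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
		conf = cgroup.ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)

		fmt.Print(conf)
	}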
	I0806 08:22:18.652721   65825 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0806 08:22:18.663384   65825 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0806 08:22:18.663454   65825 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0806 08:22:18.677802   65825 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0806 08:22:18.688338   65825 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 08:22:18.837530   65825 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0806 08:22:19.002438   65825 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0806 08:22:19.002529   65825 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0806 08:22:19.007750   65825 start.go:563] Will wait 60s for crictl version
	I0806 08:22:19.007813   65825 ssh_runner.go:195] Run: which crictl
	I0806 08:22:19.012064   65825 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0806 08:22:19.052542   65825 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0806 08:22:19.052616   65825 ssh_runner.go:195] Run: crio --version
	I0806 08:22:19.087474   65825 ssh_runner.go:195] Run: crio --version
	I0806 08:22:19.126215   65825 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0806 08:22:17.579790   67412 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0806 08:22:17.579993   67412 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:22:17.580056   67412 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:22:17.597538   67412 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39553
	I0806 08:22:17.597943   67412 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:22:17.598587   67412 main.go:141] libmachine: Using API Version  1
	I0806 08:22:17.598611   67412 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:22:17.598998   67412 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:22:17.599205   67412 main.go:141] libmachine: (custom-flannel-405163) Calling .GetMachineName
	I0806 08:22:17.599362   67412 main.go:141] libmachine: (custom-flannel-405163) Calling .DriverName
	I0806 08:22:17.599538   67412 start.go:159] libmachine.API.Create for "custom-flannel-405163" (driver="kvm2")
	I0806 08:22:17.599568   67412 client.go:168] LocalClient.Create starting
	I0806 08:22:17.599604   67412 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem
	I0806 08:22:17.599642   67412 main.go:141] libmachine: Decoding PEM data...
	I0806 08:22:17.599668   67412 main.go:141] libmachine: Parsing certificate...
	I0806 08:22:17.599738   67412 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19370-13057/.minikube/certs/cert.pem
	I0806 08:22:17.599764   67412 main.go:141] libmachine: Decoding PEM data...
	I0806 08:22:17.599779   67412 main.go:141] libmachine: Parsing certificate...
	I0806 08:22:17.599810   67412 main.go:141] libmachine: Running pre-create checks...
	I0806 08:22:17.599823   67412 main.go:141] libmachine: (custom-flannel-405163) Calling .PreCreateCheck
	I0806 08:22:17.600167   67412 main.go:141] libmachine: (custom-flannel-405163) Calling .GetConfigRaw
	I0806 08:22:17.600600   67412 main.go:141] libmachine: Creating machine...
	I0806 08:22:17.600616   67412 main.go:141] libmachine: (custom-flannel-405163) Calling .Create
	I0806 08:22:17.600735   67412 main.go:141] libmachine: (custom-flannel-405163) Creating KVM machine...
	I0806 08:22:17.602055   67412 main.go:141] libmachine: (custom-flannel-405163) DBG | found existing default KVM network
	I0806 08:22:17.603216   67412 main.go:141] libmachine: (custom-flannel-405163) DBG | I0806 08:22:17.603074   68195 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:f9:59:d1} reservation:<nil>}
	I0806 08:22:17.603955   67412 main.go:141] libmachine: (custom-flannel-405163) DBG | I0806 08:22:17.603878   68195 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:94:01:fc} reservation:<nil>}
	I0806 08:22:17.604875   67412 main.go:141] libmachine: (custom-flannel-405163) DBG | I0806 08:22:17.604786   68195 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000262dc0}
	I0806 08:22:17.604909   67412 main.go:141] libmachine: (custom-flannel-405163) DBG | created network xml: 
	I0806 08:22:17.604920   67412 main.go:141] libmachine: (custom-flannel-405163) DBG | <network>
	I0806 08:22:17.604929   67412 main.go:141] libmachine: (custom-flannel-405163) DBG |   <name>mk-custom-flannel-405163</name>
	I0806 08:22:17.604947   67412 main.go:141] libmachine: (custom-flannel-405163) DBG |   <dns enable='no'/>
	I0806 08:22:17.604956   67412 main.go:141] libmachine: (custom-flannel-405163) DBG |   
	I0806 08:22:17.604965   67412 main.go:141] libmachine: (custom-flannel-405163) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I0806 08:22:17.604983   67412 main.go:141] libmachine: (custom-flannel-405163) DBG |     <dhcp>
	I0806 08:22:17.605011   67412 main.go:141] libmachine: (custom-flannel-405163) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I0806 08:22:17.605033   67412 main.go:141] libmachine: (custom-flannel-405163) DBG |     </dhcp>
	I0806 08:22:17.605046   67412 main.go:141] libmachine: (custom-flannel-405163) DBG |   </ip>
	I0806 08:22:17.605057   67412 main.go:141] libmachine: (custom-flannel-405163) DBG |   
	I0806 08:22:17.605067   67412 main.go:141] libmachine: (custom-flannel-405163) DBG | </network>
	I0806 08:22:17.605077   67412 main.go:141] libmachine: (custom-flannel-405163) DBG | 
	I0806 08:22:17.610662   67412 main.go:141] libmachine: (custom-flannel-405163) DBG | trying to create private KVM network mk-custom-flannel-405163 192.168.61.0/24...
	I0806 08:22:17.683397   67412 main.go:141] libmachine: (custom-flannel-405163) DBG | private KVM network mk-custom-flannel-405163 192.168.61.0/24 created
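	(The network.go entries a few lines above show how the private subnet was chosen: 192.168.39.0/24 and 192.168.50.0/24 are already taken by existing libvirt networks, so 192.168.61.0/24 is used. A toy Go sketch of that first-free-subnet scan; the candidate list and taken set are hard-coded here for illustration.)

	package main

	import "fmt"

	// firstFreeSubnet returns the first candidate subnet that is not already in use.
	func firstFreeSubnet(candidates []string, taken map[string]bool) (string, bool) {
		for _, c := range candidates {
			if !taken[c] {
				return c, true
			}
		}
		return "", false
	}

	func main() {
		candidates := []string{"192.168.39.0/24", "192.168.50.0/24", "192.168.61.0/24", "192.168.72.0/24"}
		taken := map[string]bool{"192.168.39.0/24": true, "192.168.50.0/24": true}
		if subnet, ok := firstFreeSubnet(candidates, taken); ok {
			fmt.Println("using free private subnet", subnet) // 192.168.61.0/24, as in the log
		}
	}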
	I0806 08:22:17.683446   67412 main.go:141] libmachine: (custom-flannel-405163) DBG | I0806 08:22:17.683356   68195 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19370-13057/.minikube
	I0806 08:22:17.683462   67412 main.go:141] libmachine: (custom-flannel-405163) Setting up store path in /home/jenkins/minikube-integration/19370-13057/.minikube/machines/custom-flannel-405163 ...
	I0806 08:22:17.683491   67412 main.go:141] libmachine: (custom-flannel-405163) Building disk image from file:///home/jenkins/minikube-integration/19370-13057/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
	I0806 08:22:17.683506   67412 main.go:141] libmachine: (custom-flannel-405163) Downloading /home/jenkins/minikube-integration/19370-13057/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19370-13057/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0806 08:22:17.929461   67412 main.go:141] libmachine: (custom-flannel-405163) DBG | I0806 08:22:17.929355   68195 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19370-13057/.minikube/machines/custom-flannel-405163/id_rsa...
	I0806 08:22:18.001888   67412 main.go:141] libmachine: (custom-flannel-405163) DBG | I0806 08:22:18.001767   68195 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19370-13057/.minikube/machines/custom-flannel-405163/custom-flannel-405163.rawdisk...
	I0806 08:22:18.001920   67412 main.go:141] libmachine: (custom-flannel-405163) DBG | Writing magic tar header
	I0806 08:22:18.002003   67412 main.go:141] libmachine: (custom-flannel-405163) DBG | Writing SSH key tar header
	I0806 08:22:18.002035   67412 main.go:141] libmachine: (custom-flannel-405163) DBG | I0806 08:22:18.001909   68195 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19370-13057/.minikube/machines/custom-flannel-405163 ...
	I0806 08:22:18.002068   67412 main.go:141] libmachine: (custom-flannel-405163) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19370-13057/.minikube/machines/custom-flannel-405163
	I0806 08:22:18.002083   67412 main.go:141] libmachine: (custom-flannel-405163) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19370-13057/.minikube/machines
	I0806 08:22:18.002097   67412 main.go:141] libmachine: (custom-flannel-405163) Setting executable bit set on /home/jenkins/minikube-integration/19370-13057/.minikube/machines/custom-flannel-405163 (perms=drwx------)
	I0806 08:22:18.002115   67412 main.go:141] libmachine: (custom-flannel-405163) Setting executable bit set on /home/jenkins/minikube-integration/19370-13057/.minikube/machines (perms=drwxr-xr-x)
	I0806 08:22:18.002128   67412 main.go:141] libmachine: (custom-flannel-405163) Setting executable bit set on /home/jenkins/minikube-integration/19370-13057/.minikube (perms=drwxr-xr-x)
	I0806 08:22:18.002136   67412 main.go:141] libmachine: (custom-flannel-405163) Setting executable bit set on /home/jenkins/minikube-integration/19370-13057 (perms=drwxrwxr-x)
	I0806 08:22:18.002145   67412 main.go:141] libmachine: (custom-flannel-405163) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0806 08:22:18.002156   67412 main.go:141] libmachine: (custom-flannel-405163) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19370-13057/.minikube
	I0806 08:22:18.002168   67412 main.go:141] libmachine: (custom-flannel-405163) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19370-13057
	I0806 08:22:18.002182   67412 main.go:141] libmachine: (custom-flannel-405163) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0806 08:22:18.002192   67412 main.go:141] libmachine: (custom-flannel-405163) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0806 08:22:18.002204   67412 main.go:141] libmachine: (custom-flannel-405163) DBG | Checking permissions on dir: /home/jenkins
	I0806 08:22:18.002214   67412 main.go:141] libmachine: (custom-flannel-405163) DBG | Checking permissions on dir: /home
	I0806 08:22:18.002224   67412 main.go:141] libmachine: (custom-flannel-405163) DBG | Skipping /home - not owner
	I0806 08:22:18.002235   67412 main.go:141] libmachine: (custom-flannel-405163) Creating domain...
	I0806 08:22:18.003428   67412 main.go:141] libmachine: (custom-flannel-405163) define libvirt domain using xml: 
	I0806 08:22:18.003451   67412 main.go:141] libmachine: (custom-flannel-405163) <domain type='kvm'>
	I0806 08:22:18.003474   67412 main.go:141] libmachine: (custom-flannel-405163)   <name>custom-flannel-405163</name>
	I0806 08:22:18.003490   67412 main.go:141] libmachine: (custom-flannel-405163)   <memory unit='MiB'>3072</memory>
	I0806 08:22:18.003502   67412 main.go:141] libmachine: (custom-flannel-405163)   <vcpu>2</vcpu>
	I0806 08:22:18.003517   67412 main.go:141] libmachine: (custom-flannel-405163)   <features>
	I0806 08:22:18.003529   67412 main.go:141] libmachine: (custom-flannel-405163)     <acpi/>
	I0806 08:22:18.003540   67412 main.go:141] libmachine: (custom-flannel-405163)     <apic/>
	I0806 08:22:18.003549   67412 main.go:141] libmachine: (custom-flannel-405163)     <pae/>
	I0806 08:22:18.003557   67412 main.go:141] libmachine: (custom-flannel-405163)     
	I0806 08:22:18.003568   67412 main.go:141] libmachine: (custom-flannel-405163)   </features>
	I0806 08:22:18.003578   67412 main.go:141] libmachine: (custom-flannel-405163)   <cpu mode='host-passthrough'>
	I0806 08:22:18.003586   67412 main.go:141] libmachine: (custom-flannel-405163)   
	I0806 08:22:18.003593   67412 main.go:141] libmachine: (custom-flannel-405163)   </cpu>
	I0806 08:22:18.003612   67412 main.go:141] libmachine: (custom-flannel-405163)   <os>
	I0806 08:22:18.003627   67412 main.go:141] libmachine: (custom-flannel-405163)     <type>hvm</type>
	I0806 08:22:18.003639   67412 main.go:141] libmachine: (custom-flannel-405163)     <boot dev='cdrom'/>
	I0806 08:22:18.003650   67412 main.go:141] libmachine: (custom-flannel-405163)     <boot dev='hd'/>
	I0806 08:22:18.003659   67412 main.go:141] libmachine: (custom-flannel-405163)     <bootmenu enable='no'/>
	I0806 08:22:18.003668   67412 main.go:141] libmachine: (custom-flannel-405163)   </os>
	I0806 08:22:18.003676   67412 main.go:141] libmachine: (custom-flannel-405163)   <devices>
	I0806 08:22:18.003687   67412 main.go:141] libmachine: (custom-flannel-405163)     <disk type='file' device='cdrom'>
	I0806 08:22:18.003701   67412 main.go:141] libmachine: (custom-flannel-405163)       <source file='/home/jenkins/minikube-integration/19370-13057/.minikube/machines/custom-flannel-405163/boot2docker.iso'/>
	I0806 08:22:18.003717   67412 main.go:141] libmachine: (custom-flannel-405163)       <target dev='hdc' bus='scsi'/>
	I0806 08:22:18.003731   67412 main.go:141] libmachine: (custom-flannel-405163)       <readonly/>
	I0806 08:22:18.003741   67412 main.go:141] libmachine: (custom-flannel-405163)     </disk>
	I0806 08:22:18.003750   67412 main.go:141] libmachine: (custom-flannel-405163)     <disk type='file' device='disk'>
	I0806 08:22:18.003769   67412 main.go:141] libmachine: (custom-flannel-405163)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0806 08:22:18.003788   67412 main.go:141] libmachine: (custom-flannel-405163)       <source file='/home/jenkins/minikube-integration/19370-13057/.minikube/machines/custom-flannel-405163/custom-flannel-405163.rawdisk'/>
	I0806 08:22:18.003804   67412 main.go:141] libmachine: (custom-flannel-405163)       <target dev='hda' bus='virtio'/>
	I0806 08:22:18.003814   67412 main.go:141] libmachine: (custom-flannel-405163)     </disk>
	I0806 08:22:18.003825   67412 main.go:141] libmachine: (custom-flannel-405163)     <interface type='network'>
	I0806 08:22:18.003834   67412 main.go:141] libmachine: (custom-flannel-405163)       <source network='mk-custom-flannel-405163'/>
	I0806 08:22:18.003844   67412 main.go:141] libmachine: (custom-flannel-405163)       <model type='virtio'/>
	I0806 08:22:18.003853   67412 main.go:141] libmachine: (custom-flannel-405163)     </interface>
	I0806 08:22:18.003864   67412 main.go:141] libmachine: (custom-flannel-405163)     <interface type='network'>
	I0806 08:22:18.003873   67412 main.go:141] libmachine: (custom-flannel-405163)       <source network='default'/>
	I0806 08:22:18.003885   67412 main.go:141] libmachine: (custom-flannel-405163)       <model type='virtio'/>
	I0806 08:22:18.003924   67412 main.go:141] libmachine: (custom-flannel-405163)     </interface>
	I0806 08:22:18.003946   67412 main.go:141] libmachine: (custom-flannel-405163)     <serial type='pty'>
	I0806 08:22:18.003957   67412 main.go:141] libmachine: (custom-flannel-405163)       <target port='0'/>
	I0806 08:22:18.003967   67412 main.go:141] libmachine: (custom-flannel-405163)     </serial>
	I0806 08:22:18.003979   67412 main.go:141] libmachine: (custom-flannel-405163)     <console type='pty'>
	I0806 08:22:18.003991   67412 main.go:141] libmachine: (custom-flannel-405163)       <target type='serial' port='0'/>
	I0806 08:22:18.004022   67412 main.go:141] libmachine: (custom-flannel-405163)     </console>
	I0806 08:22:18.004044   67412 main.go:141] libmachine: (custom-flannel-405163)     <rng model='virtio'>
	I0806 08:22:18.004060   67412 main.go:141] libmachine: (custom-flannel-405163)       <backend model='random'>/dev/random</backend>
	I0806 08:22:18.004071   67412 main.go:141] libmachine: (custom-flannel-405163)     </rng>
	I0806 08:22:18.004083   67412 main.go:141] libmachine: (custom-flannel-405163)     
	I0806 08:22:18.004090   67412 main.go:141] libmachine: (custom-flannel-405163)     
	I0806 08:22:18.004111   67412 main.go:141] libmachine: (custom-flannel-405163)   </devices>
	I0806 08:22:18.004122   67412 main.go:141] libmachine: (custom-flannel-405163) </domain>
	I0806 08:22:18.004152   67412 main.go:141] libmachine: (custom-flannel-405163) 
	I0806 08:22:18.008498   67412 main.go:141] libmachine: (custom-flannel-405163) DBG | domain custom-flannel-405163 has defined MAC address 52:54:00:f4:8a:32 in network default
	I0806 08:22:18.009155   67412 main.go:141] libmachine: (custom-flannel-405163) Ensuring networks are active...
	I0806 08:22:18.009183   67412 main.go:141] libmachine: (custom-flannel-405163) DBG | domain custom-flannel-405163 has defined MAC address 52:54:00:53:15:a4 in network mk-custom-flannel-405163
	I0806 08:22:18.009889   67412 main.go:141] libmachine: (custom-flannel-405163) Ensuring network default is active
	I0806 08:22:18.010272   67412 main.go:141] libmachine: (custom-flannel-405163) Ensuring network mk-custom-flannel-405163 is active
	I0806 08:22:18.010848   67412 main.go:141] libmachine: (custom-flannel-405163) Getting domain xml...
	I0806 08:22:18.011641   67412 main.go:141] libmachine: (custom-flannel-405163) Creating domain...
	I0806 08:22:19.314016   67412 main.go:141] libmachine: (custom-flannel-405163) Waiting to get IP...
	I0806 08:22:19.314851   67412 main.go:141] libmachine: (custom-flannel-405163) DBG | domain custom-flannel-405163 has defined MAC address 52:54:00:53:15:a4 in network mk-custom-flannel-405163
	I0806 08:22:19.315392   67412 main.go:141] libmachine: (custom-flannel-405163) DBG | unable to find current IP address of domain custom-flannel-405163 in network mk-custom-flannel-405163
	I0806 08:22:19.315430   67412 main.go:141] libmachine: (custom-flannel-405163) DBG | I0806 08:22:19.315368   68195 retry.go:31] will retry after 254.477788ms: waiting for machine to come up
	I0806 08:22:19.572038   67412 main.go:141] libmachine: (custom-flannel-405163) DBG | domain custom-flannel-405163 has defined MAC address 52:54:00:53:15:a4 in network mk-custom-flannel-405163
	I0806 08:22:19.572561   67412 main.go:141] libmachine: (custom-flannel-405163) DBG | unable to find current IP address of domain custom-flannel-405163 in network mk-custom-flannel-405163
	I0806 08:22:19.572590   67412 main.go:141] libmachine: (custom-flannel-405163) DBG | I0806 08:22:19.572515   68195 retry.go:31] will retry after 339.102136ms: waiting for machine to come up
	I0806 08:22:19.913106   67412 main.go:141] libmachine: (custom-flannel-405163) DBG | domain custom-flannel-405163 has defined MAC address 52:54:00:53:15:a4 in network mk-custom-flannel-405163
	I0806 08:22:19.913732   67412 main.go:141] libmachine: (custom-flannel-405163) DBG | unable to find current IP address of domain custom-flannel-405163 in network mk-custom-flannel-405163
	I0806 08:22:19.913762   67412 main.go:141] libmachine: (custom-flannel-405163) DBG | I0806 08:22:19.913696   68195 retry.go:31] will retry after 362.15851ms: waiting for machine to come up
	I0806 08:22:20.277277   67412 main.go:141] libmachine: (custom-flannel-405163) DBG | domain custom-flannel-405163 has defined MAC address 52:54:00:53:15:a4 in network mk-custom-flannel-405163
	I0806 08:22:20.277877   67412 main.go:141] libmachine: (custom-flannel-405163) DBG | unable to find current IP address of domain custom-flannel-405163 in network mk-custom-flannel-405163
	I0806 08:22:20.277911   67412 main.go:141] libmachine: (custom-flannel-405163) DBG | I0806 08:22:20.277845   68195 retry.go:31] will retry after 598.860535ms: waiting for machine to come up
	I0806 08:22:19.127510   65825 main.go:141] libmachine: (calico-405163) Calling .GetIP
	I0806 08:22:19.130807   65825 main.go:141] libmachine: (calico-405163) DBG | domain calico-405163 has defined MAC address 52:54:00:40:bf:b8 in network mk-calico-405163
	I0806 08:22:19.131307   65825 main.go:141] libmachine: (calico-405163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:bf:b8", ip: ""} in network mk-calico-405163: {Iface:virbr2 ExpiryTime:2024-08-06 09:22:04 +0000 UTC Type:0 Mac:52:54:00:40:bf:b8 Iaid: IPaddr:192.168.39.37 Prefix:24 Hostname:calico-405163 Clientid:01:52:54:00:40:bf:b8}
	I0806 08:22:19.131339   65825 main.go:141] libmachine: (calico-405163) DBG | domain calico-405163 has defined IP address 192.168.39.37 and MAC address 52:54:00:40:bf:b8 in network mk-calico-405163
	I0806 08:22:19.131557   65825 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0806 08:22:19.136488   65825 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0806 08:22:19.152108   65825 kubeadm.go:883] updating cluster {Name:calico-405163 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
3 ClusterName:calico-405163 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.39.37 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0
MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0806 08:22:19.152240   65825 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0806 08:22:19.152297   65825 ssh_runner.go:195] Run: sudo crictl images --output json
	I0806 08:22:19.191012   65825 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0806 08:22:19.191095   65825 ssh_runner.go:195] Run: which lz4
	I0806 08:22:19.196052   65825 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0806 08:22:19.200770   65825 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0806 08:22:19.200825   65825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0806 08:22:20.804164   65825 crio.go:462] duration metric: took 1.608137744s to copy over tarball
	I0806 08:22:20.804256   65825 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0806 08:22:23.238520   65825 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.434232671s)
	I0806 08:22:23.238607   65825 crio.go:469] duration metric: took 2.434396917s to extract the tarball
	I0806 08:22:23.238629   65825 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0806 08:22:23.282675   65825 ssh_runner.go:195] Run: sudo crictl images --output json
	I0806 08:22:23.327056   65825 crio.go:514] all images are preloaded for cri-o runtime.
	I0806 08:22:23.327080   65825 cache_images.go:84] Images are preloaded, skipping loading
	I0806 08:22:23.327089   65825 kubeadm.go:934] updating node { 192.168.39.37 8443 v1.30.3 crio true true} ...
	I0806 08:22:23.327214   65825 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=calico-405163 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.37
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:calico-405163 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico}
	I0806 08:22:23.327284   65825 ssh_runner.go:195] Run: crio config
	I0806 08:22:23.378777   65825 cni.go:84] Creating CNI manager for "calico"
	I0806 08:22:23.378800   65825 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0806 08:22:23.378820   65825 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.37 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-405163 NodeName:calico-405163 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.37"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.37 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kube
rnetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0806 08:22:23.378953   65825 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.37
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "calico-405163"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.37
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.37"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0806 08:22:23.379011   65825 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0806 08:22:23.389080   65825 binaries.go:44] Found k8s binaries, skipping transfer
	I0806 08:22:23.389154   65825 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0806 08:22:23.398648   65825 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0806 08:22:23.415751   65825 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0806 08:22:23.432434   65825 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
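	(The kubeadm.yaml.new written above is the config dumped earlier in this log, with node-specific values such as the node name and node-ip filled in. A hypothetical text/template sketch of that substitution; the template fragment and field names are illustrative, not minikube's own code.)

	package main

	import (
		"os"
		"text/template"
	)

	// A trimmed InitConfiguration fragment with placeholders for node-specific values.
	const initCfg = `apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.NodeIP}}
	  bindPort: 8443
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "{{.NodeName}}"
	  kubeletExtraArgs:
	    node-ip: {{.NodeIP}}
	`

	func main() {
		t := template.Must(template.New("kubeadm").Parse(initCfg))
		data := struct {
			NodeName string
			NodeIP   string
		}{NodeName: "calico-405163", NodeIP: "192.168.39.37"}
		if err := t.Execute(os.Stdout, data); err != nil {
			panic(err)
		}
	}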
	I0806 08:22:23.449343   65825 ssh_runner.go:195] Run: grep 192.168.39.37	control-plane.minikube.internal$ /etc/hosts
	I0806 08:22:23.453244   65825 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.37	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0806 08:22:23.466027   65825 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 08:22:23.598821   65825 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0806 08:22:23.619581   65825 certs.go:68] Setting up /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/calico-405163 for IP: 192.168.39.37
	I0806 08:22:23.619606   65825 certs.go:194] generating shared ca certs ...
	I0806 08:22:23.619623   65825 certs.go:226] acquiring lock for ca certs: {Name:mkf5c5aed91f4108054d9f2c742cad66c86afbed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 08:22:23.619793   65825 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19370-13057/.minikube/ca.key
	I0806 08:22:23.619848   65825 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.key
	I0806 08:22:23.619861   65825 certs.go:256] generating profile certs ...
	I0806 08:22:23.619921   65825 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/calico-405163/client.key
	I0806 08:22:23.619939   65825 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/calico-405163/client.crt with IP's: []
	I0806 08:22:23.747460   65825 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/calico-405163/client.crt ...
	I0806 08:22:23.747485   65825 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/calico-405163/client.crt: {Name:mk113ad668a1187ccacebfade4ff911ef3187f6f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 08:22:23.747671   65825 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/calico-405163/client.key ...
	I0806 08:22:23.747685   65825 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/calico-405163/client.key: {Name:mk8996e0547e25e2508d0773c7d80f50eab6c139 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 08:22:23.747780   65825 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/calico-405163/apiserver.key.5ecfaba4
	I0806 08:22:23.747795   65825 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/calico-405163/apiserver.crt.5ecfaba4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.37]
	I0806 08:22:23.919015   65825 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/calico-405163/apiserver.crt.5ecfaba4 ...
	I0806 08:22:23.919042   65825 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/calico-405163/apiserver.crt.5ecfaba4: {Name:mk6e9fd698f208086a4411198059b1ebea9e2a0d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 08:22:23.919217   65825 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/calico-405163/apiserver.key.5ecfaba4 ...
	I0806 08:22:23.919233   65825 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/calico-405163/apiserver.key.5ecfaba4: {Name:mka2ddfe4649d4f114970958777aa03a7dacc690 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 08:22:23.919329   65825 certs.go:381] copying /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/calico-405163/apiserver.crt.5ecfaba4 -> /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/calico-405163/apiserver.crt
	I0806 08:22:23.919400   65825 certs.go:385] copying /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/calico-405163/apiserver.key.5ecfaba4 -> /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/calico-405163/apiserver.key
	I0806 08:22:23.919451   65825 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/calico-405163/proxy-client.key
	I0806 08:22:23.919465   65825 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/calico-405163/proxy-client.crt with IP's: []
	I0806 08:22:24.169374   65825 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/calico-405163/proxy-client.crt ...
	I0806 08:22:24.169400   65825 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/calico-405163/proxy-client.crt: {Name:mk1b0c1e04e615a9d650741b2b4e2c6e8207f097 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 08:22:24.169574   65825 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/calico-405163/proxy-client.key ...
	I0806 08:22:24.169587   65825 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/calico-405163/proxy-client.key: {Name:mk14cf720f47bcbef923b96f4c49c614e75d2123 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
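	(The certs.go/crypto.go lines above generate profile certs such as the apiserver serving cert with IP SANs 10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.39.37, signed by the shared minikubeCA. A self-contained crypto/x509 sketch of that kind of issuance, using a throwaway in-memory CA with error handling elided; this is not minikube's actual helper code.)

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"fmt"
		"math/big"
		"net"
		"time"
	)

	func main() {
		// Throwaway CA key and self-signed CA certificate.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		// Serving cert with the IP SANs from the log, signed by the CA above.
		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses: []net.IP{
				net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
				net.ParseIP("10.0.0.1"), net.ParseIP("192.168.39.37"),
			},
		}
		srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: srvDER})))
	}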
	I0806 08:22:24.169788   65825 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/20246.pem (1338 bytes)
	W0806 08:22:24.169824   65825 certs.go:480] ignoring /home/jenkins/minikube-integration/19370-13057/.minikube/certs/20246_empty.pem, impossibly tiny 0 bytes
	I0806 08:22:24.169843   65825 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca-key.pem (1675 bytes)
	I0806 08:22:24.169866   65825 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem (1078 bytes)
	I0806 08:22:24.169889   65825 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/cert.pem (1123 bytes)
	I0806 08:22:24.169922   65825 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/key.pem (1675 bytes)
	I0806 08:22:24.169961   65825 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem (1708 bytes)
	I0806 08:22:24.170636   65825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0806 08:22:24.200806   65825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0806 08:22:24.232246   65825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0806 08:22:24.261892   65825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0806 08:22:24.294886   65825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/calico-405163/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0806 08:22:24.327659   65825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/calico-405163/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0806 08:22:24.364903   65825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/calico-405163/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0806 08:22:24.397221   65825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/calico-405163/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0806 08:22:24.420960   65825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem --> /usr/share/ca-certificates/202462.pem (1708 bytes)
	I0806 08:22:24.444654   65825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0806 08:22:24.468716   65825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/certs/20246.pem --> /usr/share/ca-certificates/20246.pem (1338 bytes)
	I0806 08:22:24.493117   65825 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0806 08:22:24.510329   65825 ssh_runner.go:195] Run: openssl version
	I0806 08:22:24.516195   65825 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202462.pem && ln -fs /usr/share/ca-certificates/202462.pem /etc/ssl/certs/202462.pem"
	I0806 08:22:24.527947   65825 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202462.pem
	I0806 08:22:24.532423   65825 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  6 07:17 /usr/share/ca-certificates/202462.pem
	I0806 08:22:24.532480   65825 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202462.pem
	I0806 08:22:24.538703   65825 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202462.pem /etc/ssl/certs/3ec20f2e.0"
	I0806 08:22:24.550479   65825 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0806 08:22:24.562333   65825 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0806 08:22:24.567188   65825 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  6 07:07 /usr/share/ca-certificates/minikubeCA.pem
	I0806 08:22:24.567252   65825 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0806 08:22:24.573028   65825 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0806 08:22:24.584443   65825 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20246.pem && ln -fs /usr/share/ca-certificates/20246.pem /etc/ssl/certs/20246.pem"
	I0806 08:22:24.595698   65825 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20246.pem
	I0806 08:22:24.600647   65825 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  6 07:17 /usr/share/ca-certificates/20246.pem
	I0806 08:22:24.600715   65825 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20246.pem
	I0806 08:22:24.606643   65825 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20246.pem /etc/ssl/certs/51391683.0"
	I0806 08:22:24.617946   65825 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0806 08:22:24.622035   65825 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0806 08:22:24.622092   65825 kubeadm.go:392] StartCluster: {Name:calico-405163 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 C
lusterName:calico-405163 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.39.37 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mou
ntType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 08:22:24.622176   65825 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0806 08:22:24.622227   65825 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0806 08:22:24.660542   65825 cri.go:89] found id: ""
	I0806 08:22:24.660623   65825 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0806 08:22:24.672120   65825 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0806 08:22:24.682871   65825 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0806 08:22:24.694640   65825 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0806 08:22:24.694664   65825 kubeadm.go:157] found existing configuration files:
	
	I0806 08:22:24.694717   65825 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0806 08:22:24.704698   65825 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0806 08:22:24.704766   65825 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0806 08:22:24.715335   65825 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0806 08:22:24.724834   65825 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0806 08:22:24.724896   65825 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0806 08:22:24.734944   65825 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0806 08:22:24.747935   65825 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0806 08:22:24.747999   65825 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0806 08:22:24.761524   65825 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0806 08:22:24.774554   65825 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0806 08:22:24.774624   65825 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0806 08:22:24.788430   65825 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0806 08:22:24.856126   65825 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0806 08:22:24.856196   65825 kubeadm.go:310] [preflight] Running pre-flight checks
	I0806 08:22:24.978754   65825 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0806 08:22:24.978934   65825 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0806 08:22:24.979097   65825 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0806 08:22:25.216273   65825 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0806 08:22:20.878597   67412 main.go:141] libmachine: (custom-flannel-405163) DBG | domain custom-flannel-405163 has defined MAC address 52:54:00:53:15:a4 in network mk-custom-flannel-405163
	I0806 08:22:20.879054   67412 main.go:141] libmachine: (custom-flannel-405163) DBG | unable to find current IP address of domain custom-flannel-405163 in network mk-custom-flannel-405163
	I0806 08:22:20.879090   67412 main.go:141] libmachine: (custom-flannel-405163) DBG | I0806 08:22:20.879001   68195 retry.go:31] will retry after 729.646504ms: waiting for machine to come up
	I0806 08:22:21.610640   67412 main.go:141] libmachine: (custom-flannel-405163) DBG | domain custom-flannel-405163 has defined MAC address 52:54:00:53:15:a4 in network mk-custom-flannel-405163
	I0806 08:22:21.611150   67412 main.go:141] libmachine: (custom-flannel-405163) DBG | unable to find current IP address of domain custom-flannel-405163 in network mk-custom-flannel-405163
	I0806 08:22:21.611175   67412 main.go:141] libmachine: (custom-flannel-405163) DBG | I0806 08:22:21.611099   68195 retry.go:31] will retry after 783.225285ms: waiting for machine to come up
	I0806 08:22:22.396078   67412 main.go:141] libmachine: (custom-flannel-405163) DBG | domain custom-flannel-405163 has defined MAC address 52:54:00:53:15:a4 in network mk-custom-flannel-405163
	I0806 08:22:22.396587   67412 main.go:141] libmachine: (custom-flannel-405163) DBG | unable to find current IP address of domain custom-flannel-405163 in network mk-custom-flannel-405163
	I0806 08:22:22.396614   67412 main.go:141] libmachine: (custom-flannel-405163) DBG | I0806 08:22:22.396530   68195 retry.go:31] will retry after 1.160723791s: waiting for machine to come up
	I0806 08:22:23.559257   67412 main.go:141] libmachine: (custom-flannel-405163) DBG | domain custom-flannel-405163 has defined MAC address 52:54:00:53:15:a4 in network mk-custom-flannel-405163
	I0806 08:22:23.559641   67412 main.go:141] libmachine: (custom-flannel-405163) DBG | unable to find current IP address of domain custom-flannel-405163 in network mk-custom-flannel-405163
	I0806 08:22:23.559663   67412 main.go:141] libmachine: (custom-flannel-405163) DBG | I0806 08:22:23.559591   68195 retry.go:31] will retry after 1.47632297s: waiting for machine to come up
	I0806 08:22:25.038277   67412 main.go:141] libmachine: (custom-flannel-405163) DBG | domain custom-flannel-405163 has defined MAC address 52:54:00:53:15:a4 in network mk-custom-flannel-405163
	I0806 08:22:25.038831   67412 main.go:141] libmachine: (custom-flannel-405163) DBG | unable to find current IP address of domain custom-flannel-405163 in network mk-custom-flannel-405163
	I0806 08:22:25.038859   67412 main.go:141] libmachine: (custom-flannel-405163) DBG | I0806 08:22:25.038777   68195 retry.go:31] will retry after 1.787454249s: waiting for machine to come up
	I0806 08:22:25.467139   65825 out.go:204]   - Generating certificates and keys ...
	I0806 08:22:25.467250   65825 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0806 08:22:25.467324   65825 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0806 08:22:25.467409   65825 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0806 08:22:25.576998   65825 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0806 08:22:25.778452   65825 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0806 08:22:26.063740   65825 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0806 08:22:26.187345   65825 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0806 08:22:26.187500   65825 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [calico-405163 localhost] and IPs [192.168.39.37 127.0.0.1 ::1]
	I0806 08:22:26.342938   65825 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0806 08:22:26.343117   65825 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [calico-405163 localhost] and IPs [192.168.39.37 127.0.0.1 ::1]
	I0806 08:22:26.536925   65825 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0806 08:22:26.623800   65825 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0806 08:22:26.991171   65825 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0806 08:22:26.991288   65825 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0806 08:22:27.098590   65825 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0806 08:22:27.264522   65825 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0806 08:22:27.386992   65825 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0806 08:22:27.490803   65825 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0806 08:22:27.571985   65825 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0806 08:22:27.572434   65825 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0806 08:22:27.576490   65825 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0806 08:22:26.828745   67412 main.go:141] libmachine: (custom-flannel-405163) DBG | domain custom-flannel-405163 has defined MAC address 52:54:00:53:15:a4 in network mk-custom-flannel-405163
	I0806 08:22:26.829329   67412 main.go:141] libmachine: (custom-flannel-405163) DBG | unable to find current IP address of domain custom-flannel-405163 in network mk-custom-flannel-405163
	I0806 08:22:26.829352   67412 main.go:141] libmachine: (custom-flannel-405163) DBG | I0806 08:22:26.829285   68195 retry.go:31] will retry after 1.572137244s: waiting for machine to come up
	I0806 08:22:28.403795   67412 main.go:141] libmachine: (custom-flannel-405163) DBG | domain custom-flannel-405163 has defined MAC address 52:54:00:53:15:a4 in network mk-custom-flannel-405163
	I0806 08:22:28.404308   67412 main.go:141] libmachine: (custom-flannel-405163) DBG | unable to find current IP address of domain custom-flannel-405163 in network mk-custom-flannel-405163
	I0806 08:22:28.404384   67412 main.go:141] libmachine: (custom-flannel-405163) DBG | I0806 08:22:28.404283   68195 retry.go:31] will retry after 1.853819805s: waiting for machine to come up
	I0806 08:22:30.261604   67412 main.go:141] libmachine: (custom-flannel-405163) DBG | domain custom-flannel-405163 has defined MAC address 52:54:00:53:15:a4 in network mk-custom-flannel-405163
	I0806 08:22:30.262111   67412 main.go:141] libmachine: (custom-flannel-405163) DBG | unable to find current IP address of domain custom-flannel-405163 in network mk-custom-flannel-405163
	I0806 08:22:30.262136   67412 main.go:141] libmachine: (custom-flannel-405163) DBG | I0806 08:22:30.262062   68195 retry.go:31] will retry after 2.458349125s: waiting for machine to come up
	I0806 08:22:27.578369   65825 out.go:204]   - Booting up control plane ...
	I0806 08:22:27.578514   65825 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0806 08:22:27.578635   65825 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0806 08:22:27.578730   65825 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0806 08:22:27.601456   65825 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0806 08:22:27.602938   65825 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0806 08:22:27.603001   65825 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0806 08:22:27.745533   65825 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0806 08:22:27.745646   65825 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0806 08:22:28.251226   65825 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 505.990477ms
	I0806 08:22:28.251366   65825 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0806 08:22:33.250893   65825 kubeadm.go:310] [api-check] The API server is healthy after 5.001641224s
	I0806 08:22:33.266754   65825 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0806 08:22:33.283667   65825 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0806 08:22:33.313765   65825 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0806 08:22:33.314038   65825 kubeadm.go:310] [mark-control-plane] Marking the node calico-405163 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0806 08:22:33.326903   65825 kubeadm.go:310] [bootstrap-token] Using token: pl8c5c.gijzjuxdx5p07ugf
	I0806 08:22:33.328429   65825 out.go:204]   - Configuring RBAC rules ...
	I0806 08:22:33.328567   65825 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0806 08:22:33.339783   65825 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0806 08:22:33.350602   65825 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0806 08:22:33.355213   65825 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0806 08:22:33.359750   65825 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0806 08:22:33.364027   65825 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0806 08:22:33.658518   65825 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0806 08:22:34.105567   65825 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0806 08:22:34.657262   65825 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0806 08:22:34.658026   65825 kubeadm.go:310] 
	I0806 08:22:34.658141   65825 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0806 08:22:34.658158   65825 kubeadm.go:310] 
	I0806 08:22:34.658242   65825 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0806 08:22:34.658253   65825 kubeadm.go:310] 
	I0806 08:22:34.658295   65825 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0806 08:22:34.658396   65825 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0806 08:22:34.658487   65825 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0806 08:22:34.658496   65825 kubeadm.go:310] 
	I0806 08:22:34.658581   65825 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0806 08:22:34.658594   65825 kubeadm.go:310] 
	I0806 08:22:34.658645   65825 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0806 08:22:34.658659   65825 kubeadm.go:310] 
	I0806 08:22:34.658739   65825 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0806 08:22:34.658838   65825 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0806 08:22:34.658942   65825 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0806 08:22:34.658956   65825 kubeadm.go:310] 
	I0806 08:22:34.659056   65825 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0806 08:22:34.659171   65825 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0806 08:22:34.659188   65825 kubeadm.go:310] 
	I0806 08:22:34.659305   65825 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token pl8c5c.gijzjuxdx5p07ugf \
	I0806 08:22:34.659463   65825 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d36e5c1dfe989c7d561c809d7c06e0fc13926ac8f6d664cc5d6865ea5f79931b \
	I0806 08:22:34.659512   65825 kubeadm.go:310] 	--control-plane 
	I0806 08:22:34.659522   65825 kubeadm.go:310] 
	I0806 08:22:34.659619   65825 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0806 08:22:34.659628   65825 kubeadm.go:310] 
	I0806 08:22:34.659735   65825 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token pl8c5c.gijzjuxdx5p07ugf \
	I0806 08:22:34.659851   65825 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d36e5c1dfe989c7d561c809d7c06e0fc13926ac8f6d664cc5d6865ea5f79931b 
	I0806 08:22:34.660169   65825 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0806 08:22:34.660190   65825 cni.go:84] Creating CNI manager for "calico"
	I0806 08:22:34.662226   65825 out.go:177] * Configuring Calico (Container Networking Interface) ...
	I0806 08:22:32.721870   67412 main.go:141] libmachine: (custom-flannel-405163) DBG | domain custom-flannel-405163 has defined MAC address 52:54:00:53:15:a4 in network mk-custom-flannel-405163
	I0806 08:22:32.722441   67412 main.go:141] libmachine: (custom-flannel-405163) DBG | unable to find current IP address of domain custom-flannel-405163 in network mk-custom-flannel-405163
	I0806 08:22:32.722472   67412 main.go:141] libmachine: (custom-flannel-405163) DBG | I0806 08:22:32.722387   68195 retry.go:31] will retry after 4.074517023s: waiting for machine to come up
	I0806 08:22:34.663792   65825 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
	I0806 08:22:34.663808   65825 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (253923 bytes)
	I0806 08:22:34.686156   65825 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0806 08:22:36.057632   65825 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.371438539s)
	I0806 08:22:36.057684   65825 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0806 08:22:36.057776   65825 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 08:22:36.057811   65825 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes calico-405163 minikube.k8s.io/updated_at=2024_08_06T08_22_36_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=e92cb06692f5ea1ba801d10d148e5e92e807f9c8 minikube.k8s.io/name=calico-405163 minikube.k8s.io/primary=true
	I0806 08:22:36.801503   67412 main.go:141] libmachine: (custom-flannel-405163) DBG | domain custom-flannel-405163 has defined MAC address 52:54:00:53:15:a4 in network mk-custom-flannel-405163
	I0806 08:22:36.801921   67412 main.go:141] libmachine: (custom-flannel-405163) DBG | unable to find current IP address of domain custom-flannel-405163 in network mk-custom-flannel-405163
	I0806 08:22:36.801944   67412 main.go:141] libmachine: (custom-flannel-405163) DBG | I0806 08:22:36.801877   68195 retry.go:31] will retry after 3.641779495s: waiting for machine to come up
	I0806 08:22:36.098284   65825 ops.go:34] apiserver oom_adj: -16
	I0806 08:22:36.198595   65825 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 08:22:36.699539   65825 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 08:22:37.199601   65825 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 08:22:37.699119   65825 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 08:22:38.198696   65825 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 08:22:38.698597   65825 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 08:22:39.198615   65825 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 08:22:39.699040   65825 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 08:22:40.199322   65825 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 08:22:40.698609   65825 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 08:22:44.893550   67989 start.go:364] duration metric: took 40.936433241s to acquireMachinesLock for "enable-default-cni-405163"
	I0806 08:22:44.893635   67989 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-405163 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:enable-default-cni-405163 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0806 08:22:44.893749   67989 start.go:125] createHost starting for "" (driver="kvm2")
	I0806 08:22:40.447075   67412 main.go:141] libmachine: (custom-flannel-405163) DBG | domain custom-flannel-405163 has defined MAC address 52:54:00:53:15:a4 in network mk-custom-flannel-405163
	I0806 08:22:40.447585   67412 main.go:141] libmachine: (custom-flannel-405163) DBG | domain custom-flannel-405163 has current primary IP address 192.168.61.6 and MAC address 52:54:00:53:15:a4 in network mk-custom-flannel-405163
	I0806 08:22:40.447609   67412 main.go:141] libmachine: (custom-flannel-405163) Found IP for machine: 192.168.61.6
	I0806 08:22:40.447632   67412 main.go:141] libmachine: (custom-flannel-405163) Reserving static IP address...
	I0806 08:22:40.447956   67412 main.go:141] libmachine: (custom-flannel-405163) DBG | unable to find host DHCP lease matching {name: "custom-flannel-405163", mac: "52:54:00:53:15:a4", ip: "192.168.61.6"} in network mk-custom-flannel-405163
	I0806 08:22:40.521395   67412 main.go:141] libmachine: (custom-flannel-405163) DBG | Getting to WaitForSSH function...
	I0806 08:22:40.521430   67412 main.go:141] libmachine: (custom-flannel-405163) Reserved static IP address: 192.168.61.6
	I0806 08:22:40.521443   67412 main.go:141] libmachine: (custom-flannel-405163) Waiting for SSH to be available...
	I0806 08:22:40.523977   67412 main.go:141] libmachine: (custom-flannel-405163) DBG | domain custom-flannel-405163 has defined MAC address 52:54:00:53:15:a4 in network mk-custom-flannel-405163
	I0806 08:22:40.524246   67412 main.go:141] libmachine: (custom-flannel-405163) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:53:15:a4", ip: ""} in network mk-custom-flannel-405163
	I0806 08:22:40.524279   67412 main.go:141] libmachine: (custom-flannel-405163) DBG | unable to find defined IP address of network mk-custom-flannel-405163 interface with MAC address 52:54:00:53:15:a4
	I0806 08:22:40.524392   67412 main.go:141] libmachine: (custom-flannel-405163) DBG | Using SSH client type: external
	I0806 08:22:40.524429   67412 main.go:141] libmachine: (custom-flannel-405163) DBG | Using SSH private key: /home/jenkins/minikube-integration/19370-13057/.minikube/machines/custom-flannel-405163/id_rsa (-rw-------)
	I0806 08:22:40.524472   67412 main.go:141] libmachine: (custom-flannel-405163) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19370-13057/.minikube/machines/custom-flannel-405163/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0806 08:22:40.524491   67412 main.go:141] libmachine: (custom-flannel-405163) DBG | About to run SSH command:
	I0806 08:22:40.524508   67412 main.go:141] libmachine: (custom-flannel-405163) DBG | exit 0
	I0806 08:22:40.527927   67412 main.go:141] libmachine: (custom-flannel-405163) DBG | SSH cmd err, output: exit status 255: 
	I0806 08:22:40.527951   67412 main.go:141] libmachine: (custom-flannel-405163) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0806 08:22:40.527959   67412 main.go:141] libmachine: (custom-flannel-405163) DBG | command : exit 0
	I0806 08:22:40.527965   67412 main.go:141] libmachine: (custom-flannel-405163) DBG | err     : exit status 255
	I0806 08:22:40.527974   67412 main.go:141] libmachine: (custom-flannel-405163) DBG | output  : 
	I0806 08:22:43.528690   67412 main.go:141] libmachine: (custom-flannel-405163) DBG | Getting to WaitForSSH function...
	I0806 08:22:43.531221   67412 main.go:141] libmachine: (custom-flannel-405163) DBG | domain custom-flannel-405163 has defined MAC address 52:54:00:53:15:a4 in network mk-custom-flannel-405163
	I0806 08:22:43.531630   67412 main.go:141] libmachine: (custom-flannel-405163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:15:a4", ip: ""} in network mk-custom-flannel-405163: {Iface:virbr1 ExpiryTime:2024-08-06 09:22:33 +0000 UTC Type:0 Mac:52:54:00:53:15:a4 Iaid: IPaddr:192.168.61.6 Prefix:24 Hostname:custom-flannel-405163 Clientid:01:52:54:00:53:15:a4}
	I0806 08:22:43.531661   67412 main.go:141] libmachine: (custom-flannel-405163) DBG | domain custom-flannel-405163 has defined IP address 192.168.61.6 and MAC address 52:54:00:53:15:a4 in network mk-custom-flannel-405163
	I0806 08:22:43.531785   67412 main.go:141] libmachine: (custom-flannel-405163) DBG | Using SSH client type: external
	I0806 08:22:43.531810   67412 main.go:141] libmachine: (custom-flannel-405163) DBG | Using SSH private key: /home/jenkins/minikube-integration/19370-13057/.minikube/machines/custom-flannel-405163/id_rsa (-rw-------)
	I0806 08:22:43.531855   67412 main.go:141] libmachine: (custom-flannel-405163) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.6 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19370-13057/.minikube/machines/custom-flannel-405163/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0806 08:22:43.531868   67412 main.go:141] libmachine: (custom-flannel-405163) DBG | About to run SSH command:
	I0806 08:22:43.531881   67412 main.go:141] libmachine: (custom-flannel-405163) DBG | exit 0
	I0806 08:22:43.657265   67412 main.go:141] libmachine: (custom-flannel-405163) DBG | SSH cmd err, output: <nil>: 
	I0806 08:22:43.657586   67412 main.go:141] libmachine: (custom-flannel-405163) KVM machine creation complete!
	I0806 08:22:43.657908   67412 main.go:141] libmachine: (custom-flannel-405163) Calling .GetConfigRaw
	I0806 08:22:43.658448   67412 main.go:141] libmachine: (custom-flannel-405163) Calling .DriverName
	I0806 08:22:43.658622   67412 main.go:141] libmachine: (custom-flannel-405163) Calling .DriverName
	I0806 08:22:43.658802   67412 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0806 08:22:43.658824   67412 main.go:141] libmachine: (custom-flannel-405163) Calling .GetState
	I0806 08:22:43.660167   67412 main.go:141] libmachine: Detecting operating system of created instance...
	I0806 08:22:43.660184   67412 main.go:141] libmachine: Waiting for SSH to be available...
	I0806 08:22:43.660192   67412 main.go:141] libmachine: Getting to WaitForSSH function...
	I0806 08:22:43.660201   67412 main.go:141] libmachine: (custom-flannel-405163) Calling .GetSSHHostname
	I0806 08:22:43.662531   67412 main.go:141] libmachine: (custom-flannel-405163) DBG | domain custom-flannel-405163 has defined MAC address 52:54:00:53:15:a4 in network mk-custom-flannel-405163
	I0806 08:22:43.662901   67412 main.go:141] libmachine: (custom-flannel-405163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:15:a4", ip: ""} in network mk-custom-flannel-405163: {Iface:virbr1 ExpiryTime:2024-08-06 09:22:33 +0000 UTC Type:0 Mac:52:54:00:53:15:a4 Iaid: IPaddr:192.168.61.6 Prefix:24 Hostname:custom-flannel-405163 Clientid:01:52:54:00:53:15:a4}
	I0806 08:22:43.662922   67412 main.go:141] libmachine: (custom-flannel-405163) DBG | domain custom-flannel-405163 has defined IP address 192.168.61.6 and MAC address 52:54:00:53:15:a4 in network mk-custom-flannel-405163
	I0806 08:22:43.663168   67412 main.go:141] libmachine: (custom-flannel-405163) Calling .GetSSHPort
	I0806 08:22:43.663319   67412 main.go:141] libmachine: (custom-flannel-405163) Calling .GetSSHKeyPath
	I0806 08:22:43.663484   67412 main.go:141] libmachine: (custom-flannel-405163) Calling .GetSSHKeyPath
	I0806 08:22:43.663614   67412 main.go:141] libmachine: (custom-flannel-405163) Calling .GetSSHUsername
	I0806 08:22:43.663742   67412 main.go:141] libmachine: Using SSH client type: native
	I0806 08:22:43.663926   67412 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.6 22 <nil> <nil>}
	I0806 08:22:43.663938   67412 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0806 08:22:43.771782   67412 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0806 08:22:43.771819   67412 main.go:141] libmachine: Detecting the provisioner...
	I0806 08:22:43.771831   67412 main.go:141] libmachine: (custom-flannel-405163) Calling .GetSSHHostname
	I0806 08:22:43.774485   67412 main.go:141] libmachine: (custom-flannel-405163) DBG | domain custom-flannel-405163 has defined MAC address 52:54:00:53:15:a4 in network mk-custom-flannel-405163
	I0806 08:22:43.775005   67412 main.go:141] libmachine: (custom-flannel-405163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:15:a4", ip: ""} in network mk-custom-flannel-405163: {Iface:virbr1 ExpiryTime:2024-08-06 09:22:33 +0000 UTC Type:0 Mac:52:54:00:53:15:a4 Iaid: IPaddr:192.168.61.6 Prefix:24 Hostname:custom-flannel-405163 Clientid:01:52:54:00:53:15:a4}
	I0806 08:22:43.775032   67412 main.go:141] libmachine: (custom-flannel-405163) DBG | domain custom-flannel-405163 has defined IP address 192.168.61.6 and MAC address 52:54:00:53:15:a4 in network mk-custom-flannel-405163
	I0806 08:22:43.775212   67412 main.go:141] libmachine: (custom-flannel-405163) Calling .GetSSHPort
	I0806 08:22:43.775394   67412 main.go:141] libmachine: (custom-flannel-405163) Calling .GetSSHKeyPath
	I0806 08:22:43.775591   67412 main.go:141] libmachine: (custom-flannel-405163) Calling .GetSSHKeyPath
	I0806 08:22:43.775715   67412 main.go:141] libmachine: (custom-flannel-405163) Calling .GetSSHUsername
	I0806 08:22:43.775933   67412 main.go:141] libmachine: Using SSH client type: native
	I0806 08:22:43.776107   67412 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.6 22 <nil> <nil>}
	I0806 08:22:43.776118   67412 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0806 08:22:43.881364   67412 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0806 08:22:43.881469   67412 main.go:141] libmachine: found compatible host: buildroot
	I0806 08:22:43.881489   67412 main.go:141] libmachine: Provisioning with buildroot...
	I0806 08:22:43.881501   67412 main.go:141] libmachine: (custom-flannel-405163) Calling .GetMachineName
	I0806 08:22:43.881747   67412 buildroot.go:166] provisioning hostname "custom-flannel-405163"
	I0806 08:22:43.881777   67412 main.go:141] libmachine: (custom-flannel-405163) Calling .GetMachineName
	I0806 08:22:43.881974   67412 main.go:141] libmachine: (custom-flannel-405163) Calling .GetSSHHostname
	I0806 08:22:43.884500   67412 main.go:141] libmachine: (custom-flannel-405163) DBG | domain custom-flannel-405163 has defined MAC address 52:54:00:53:15:a4 in network mk-custom-flannel-405163
	I0806 08:22:43.884859   67412 main.go:141] libmachine: (custom-flannel-405163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:15:a4", ip: ""} in network mk-custom-flannel-405163: {Iface:virbr1 ExpiryTime:2024-08-06 09:22:33 +0000 UTC Type:0 Mac:52:54:00:53:15:a4 Iaid: IPaddr:192.168.61.6 Prefix:24 Hostname:custom-flannel-405163 Clientid:01:52:54:00:53:15:a4}
	I0806 08:22:43.884897   67412 main.go:141] libmachine: (custom-flannel-405163) DBG | domain custom-flannel-405163 has defined IP address 192.168.61.6 and MAC address 52:54:00:53:15:a4 in network mk-custom-flannel-405163
	I0806 08:22:43.885149   67412 main.go:141] libmachine: (custom-flannel-405163) Calling .GetSSHPort
	I0806 08:22:43.885340   67412 main.go:141] libmachine: (custom-flannel-405163) Calling .GetSSHKeyPath
	I0806 08:22:43.885480   67412 main.go:141] libmachine: (custom-flannel-405163) Calling .GetSSHKeyPath
	I0806 08:22:43.885627   67412 main.go:141] libmachine: (custom-flannel-405163) Calling .GetSSHUsername
	I0806 08:22:43.885790   67412 main.go:141] libmachine: Using SSH client type: native
	I0806 08:22:43.885948   67412 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.6 22 <nil> <nil>}
	I0806 08:22:43.885962   67412 main.go:141] libmachine: About to run SSH command:
	sudo hostname custom-flannel-405163 && echo "custom-flannel-405163" | sudo tee /etc/hostname
	I0806 08:22:44.003061   67412 main.go:141] libmachine: SSH cmd err, output: <nil>: custom-flannel-405163
	
	I0806 08:22:44.003095   67412 main.go:141] libmachine: (custom-flannel-405163) Calling .GetSSHHostname
	I0806 08:22:44.006166   67412 main.go:141] libmachine: (custom-flannel-405163) DBG | domain custom-flannel-405163 has defined MAC address 52:54:00:53:15:a4 in network mk-custom-flannel-405163
	I0806 08:22:44.006470   67412 main.go:141] libmachine: (custom-flannel-405163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:15:a4", ip: ""} in network mk-custom-flannel-405163: {Iface:virbr1 ExpiryTime:2024-08-06 09:22:33 +0000 UTC Type:0 Mac:52:54:00:53:15:a4 Iaid: IPaddr:192.168.61.6 Prefix:24 Hostname:custom-flannel-405163 Clientid:01:52:54:00:53:15:a4}
	I0806 08:22:44.006504   67412 main.go:141] libmachine: (custom-flannel-405163) DBG | domain custom-flannel-405163 has defined IP address 192.168.61.6 and MAC address 52:54:00:53:15:a4 in network mk-custom-flannel-405163
	I0806 08:22:44.006629   67412 main.go:141] libmachine: (custom-flannel-405163) Calling .GetSSHPort
	I0806 08:22:44.006809   67412 main.go:141] libmachine: (custom-flannel-405163) Calling .GetSSHKeyPath
	I0806 08:22:44.006976   67412 main.go:141] libmachine: (custom-flannel-405163) Calling .GetSSHKeyPath
	I0806 08:22:44.007107   67412 main.go:141] libmachine: (custom-flannel-405163) Calling .GetSSHUsername
	I0806 08:22:44.007256   67412 main.go:141] libmachine: Using SSH client type: native
	I0806 08:22:44.007469   67412 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.6 22 <nil> <nil>}
	I0806 08:22:44.007493   67412 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scustom-flannel-405163' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 custom-flannel-405163/g' /etc/hosts;
				else 
					echo '127.0.1.1 custom-flannel-405163' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0806 08:22:44.122038   67412 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0806 08:22:44.122072   67412 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19370-13057/.minikube CaCertPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19370-13057/.minikube}
	I0806 08:22:44.122119   67412 buildroot.go:174] setting up certificates
	I0806 08:22:44.122132   67412 provision.go:84] configureAuth start
	I0806 08:22:44.122145   67412 main.go:141] libmachine: (custom-flannel-405163) Calling .GetMachineName
	I0806 08:22:44.122433   67412 main.go:141] libmachine: (custom-flannel-405163) Calling .GetIP
	I0806 08:22:44.125142   67412 main.go:141] libmachine: (custom-flannel-405163) DBG | domain custom-flannel-405163 has defined MAC address 52:54:00:53:15:a4 in network mk-custom-flannel-405163
	I0806 08:22:44.125535   67412 main.go:141] libmachine: (custom-flannel-405163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:15:a4", ip: ""} in network mk-custom-flannel-405163: {Iface:virbr1 ExpiryTime:2024-08-06 09:22:33 +0000 UTC Type:0 Mac:52:54:00:53:15:a4 Iaid: IPaddr:192.168.61.6 Prefix:24 Hostname:custom-flannel-405163 Clientid:01:52:54:00:53:15:a4}
	I0806 08:22:44.125563   67412 main.go:141] libmachine: (custom-flannel-405163) DBG | domain custom-flannel-405163 has defined IP address 192.168.61.6 and MAC address 52:54:00:53:15:a4 in network mk-custom-flannel-405163
	I0806 08:22:44.125760   67412 main.go:141] libmachine: (custom-flannel-405163) Calling .GetSSHHostname
	I0806 08:22:44.128228   67412 main.go:141] libmachine: (custom-flannel-405163) DBG | domain custom-flannel-405163 has defined MAC address 52:54:00:53:15:a4 in network mk-custom-flannel-405163
	I0806 08:22:44.128597   67412 main.go:141] libmachine: (custom-flannel-405163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:15:a4", ip: ""} in network mk-custom-flannel-405163: {Iface:virbr1 ExpiryTime:2024-08-06 09:22:33 +0000 UTC Type:0 Mac:52:54:00:53:15:a4 Iaid: IPaddr:192.168.61.6 Prefix:24 Hostname:custom-flannel-405163 Clientid:01:52:54:00:53:15:a4}
	I0806 08:22:44.128622   67412 main.go:141] libmachine: (custom-flannel-405163) DBG | domain custom-flannel-405163 has defined IP address 192.168.61.6 and MAC address 52:54:00:53:15:a4 in network mk-custom-flannel-405163
	I0806 08:22:44.128776   67412 provision.go:143] copyHostCerts
	I0806 08:22:44.128861   67412 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-13057/.minikube/ca.pem, removing ...
	I0806 08:22:44.128871   67412 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-13057/.minikube/ca.pem
	I0806 08:22:44.128940   67412 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19370-13057/.minikube/ca.pem (1078 bytes)
	I0806 08:22:44.129093   67412 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-13057/.minikube/cert.pem, removing ...
	I0806 08:22:44.129102   67412 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-13057/.minikube/cert.pem
	I0806 08:22:44.129127   67412 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19370-13057/.minikube/cert.pem (1123 bytes)
	I0806 08:22:44.129194   67412 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-13057/.minikube/key.pem, removing ...
	I0806 08:22:44.129201   67412 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-13057/.minikube/key.pem
	I0806 08:22:44.129218   67412 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19370-13057/.minikube/key.pem (1675 bytes)
	I0806 08:22:44.129277   67412 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19370-13057/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca-key.pem org=jenkins.custom-flannel-405163 san=[127.0.0.1 192.168.61.6 custom-flannel-405163 localhost minikube]
	I0806 08:22:44.190082   67412 provision.go:177] copyRemoteCerts
	I0806 08:22:44.190138   67412 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0806 08:22:44.190160   67412 main.go:141] libmachine: (custom-flannel-405163) Calling .GetSSHHostname
	I0806 08:22:44.192848   67412 main.go:141] libmachine: (custom-flannel-405163) DBG | domain custom-flannel-405163 has defined MAC address 52:54:00:53:15:a4 in network mk-custom-flannel-405163
	I0806 08:22:44.193263   67412 main.go:141] libmachine: (custom-flannel-405163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:15:a4", ip: ""} in network mk-custom-flannel-405163: {Iface:virbr1 ExpiryTime:2024-08-06 09:22:33 +0000 UTC Type:0 Mac:52:54:00:53:15:a4 Iaid: IPaddr:192.168.61.6 Prefix:24 Hostname:custom-flannel-405163 Clientid:01:52:54:00:53:15:a4}
	I0806 08:22:44.193293   67412 main.go:141] libmachine: (custom-flannel-405163) DBG | domain custom-flannel-405163 has defined IP address 192.168.61.6 and MAC address 52:54:00:53:15:a4 in network mk-custom-flannel-405163
	I0806 08:22:44.193514   67412 main.go:141] libmachine: (custom-flannel-405163) Calling .GetSSHPort
	I0806 08:22:44.193713   67412 main.go:141] libmachine: (custom-flannel-405163) Calling .GetSSHKeyPath
	I0806 08:22:44.193878   67412 main.go:141] libmachine: (custom-flannel-405163) Calling .GetSSHUsername
	I0806 08:22:44.194031   67412 sshutil.go:53] new ssh client: &{IP:192.168.61.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/custom-flannel-405163/id_rsa Username:docker}
	I0806 08:22:44.278657   67412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0806 08:22:44.304249   67412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0806 08:22:44.328654   67412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0806 08:22:44.352410   67412 provision.go:87] duration metric: took 230.264376ms to configureAuth
	I0806 08:22:44.352441   67412 buildroot.go:189] setting minikube options for container-runtime
	I0806 08:22:44.352634   67412 config.go:182] Loaded profile config "custom-flannel-405163": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0806 08:22:44.352733   67412 main.go:141] libmachine: (custom-flannel-405163) Calling .GetSSHHostname
	I0806 08:22:44.355264   67412 main.go:141] libmachine: (custom-flannel-405163) DBG | domain custom-flannel-405163 has defined MAC address 52:54:00:53:15:a4 in network mk-custom-flannel-405163
	I0806 08:22:44.355667   67412 main.go:141] libmachine: (custom-flannel-405163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:15:a4", ip: ""} in network mk-custom-flannel-405163: {Iface:virbr1 ExpiryTime:2024-08-06 09:22:33 +0000 UTC Type:0 Mac:52:54:00:53:15:a4 Iaid: IPaddr:192.168.61.6 Prefix:24 Hostname:custom-flannel-405163 Clientid:01:52:54:00:53:15:a4}
	I0806 08:22:44.355693   67412 main.go:141] libmachine: (custom-flannel-405163) DBG | domain custom-flannel-405163 has defined IP address 192.168.61.6 and MAC address 52:54:00:53:15:a4 in network mk-custom-flannel-405163
	I0806 08:22:44.355908   67412 main.go:141] libmachine: (custom-flannel-405163) Calling .GetSSHPort
	I0806 08:22:44.356188   67412 main.go:141] libmachine: (custom-flannel-405163) Calling .GetSSHKeyPath
	I0806 08:22:44.356406   67412 main.go:141] libmachine: (custom-flannel-405163) Calling .GetSSHKeyPath
	I0806 08:22:44.356564   67412 main.go:141] libmachine: (custom-flannel-405163) Calling .GetSSHUsername
	I0806 08:22:44.356725   67412 main.go:141] libmachine: Using SSH client type: native
	I0806 08:22:44.356903   67412 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.6 22 <nil> <nil>}
	I0806 08:22:44.356917   67412 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0806 08:22:44.644554   67412 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0806 08:22:44.644580   67412 main.go:141] libmachine: Checking connection to Docker...
	I0806 08:22:44.644590   67412 main.go:141] libmachine: (custom-flannel-405163) Calling .GetURL
	I0806 08:22:44.645885   67412 main.go:141] libmachine: (custom-flannel-405163) DBG | Using libvirt version 6000000
	I0806 08:22:44.648424   67412 main.go:141] libmachine: (custom-flannel-405163) DBG | domain custom-flannel-405163 has defined MAC address 52:54:00:53:15:a4 in network mk-custom-flannel-405163
	I0806 08:22:44.648809   67412 main.go:141] libmachine: (custom-flannel-405163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:15:a4", ip: ""} in network mk-custom-flannel-405163: {Iface:virbr1 ExpiryTime:2024-08-06 09:22:33 +0000 UTC Type:0 Mac:52:54:00:53:15:a4 Iaid: IPaddr:192.168.61.6 Prefix:24 Hostname:custom-flannel-405163 Clientid:01:52:54:00:53:15:a4}
	I0806 08:22:44.648836   67412 main.go:141] libmachine: (custom-flannel-405163) DBG | domain custom-flannel-405163 has defined IP address 192.168.61.6 and MAC address 52:54:00:53:15:a4 in network mk-custom-flannel-405163
	I0806 08:22:44.649016   67412 main.go:141] libmachine: Docker is up and running!
	I0806 08:22:44.649034   67412 main.go:141] libmachine: Reticulating splines...
	I0806 08:22:44.649050   67412 client.go:171] duration metric: took 27.049471917s to LocalClient.Create
	I0806 08:22:44.649076   67412 start.go:167] duration metric: took 27.049540092s to libmachine.API.Create "custom-flannel-405163"
	I0806 08:22:44.649086   67412 start.go:293] postStartSetup for "custom-flannel-405163" (driver="kvm2")
	I0806 08:22:44.649096   67412 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0806 08:22:44.649112   67412 main.go:141] libmachine: (custom-flannel-405163) Calling .DriverName
	I0806 08:22:44.649334   67412 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0806 08:22:44.649358   67412 main.go:141] libmachine: (custom-flannel-405163) Calling .GetSSHHostname
	I0806 08:22:44.651900   67412 main.go:141] libmachine: (custom-flannel-405163) DBG | domain custom-flannel-405163 has defined MAC address 52:54:00:53:15:a4 in network mk-custom-flannel-405163
	I0806 08:22:44.652239   67412 main.go:141] libmachine: (custom-flannel-405163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:15:a4", ip: ""} in network mk-custom-flannel-405163: {Iface:virbr1 ExpiryTime:2024-08-06 09:22:33 +0000 UTC Type:0 Mac:52:54:00:53:15:a4 Iaid: IPaddr:192.168.61.6 Prefix:24 Hostname:custom-flannel-405163 Clientid:01:52:54:00:53:15:a4}
	I0806 08:22:44.652269   67412 main.go:141] libmachine: (custom-flannel-405163) DBG | domain custom-flannel-405163 has defined IP address 192.168.61.6 and MAC address 52:54:00:53:15:a4 in network mk-custom-flannel-405163
	I0806 08:22:44.652419   67412 main.go:141] libmachine: (custom-flannel-405163) Calling .GetSSHPort
	I0806 08:22:44.652588   67412 main.go:141] libmachine: (custom-flannel-405163) Calling .GetSSHKeyPath
	I0806 08:22:44.652737   67412 main.go:141] libmachine: (custom-flannel-405163) Calling .GetSSHUsername
	I0806 08:22:44.652918   67412 sshutil.go:53] new ssh client: &{IP:192.168.61.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/custom-flannel-405163/id_rsa Username:docker}
	I0806 08:22:44.734671   67412 ssh_runner.go:195] Run: cat /etc/os-release
	I0806 08:22:44.739098   67412 info.go:137] Remote host: Buildroot 2023.02.9
	I0806 08:22:44.739127   67412 filesync.go:126] Scanning /home/jenkins/minikube-integration/19370-13057/.minikube/addons for local assets ...
	I0806 08:22:44.739207   67412 filesync.go:126] Scanning /home/jenkins/minikube-integration/19370-13057/.minikube/files for local assets ...
	I0806 08:22:44.739327   67412 filesync.go:149] local asset: /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem -> 202462.pem in /etc/ssl/certs
	I0806 08:22:44.739496   67412 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0806 08:22:44.750313   67412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem --> /etc/ssl/certs/202462.pem (1708 bytes)
	I0806 08:22:44.776707   67412 start.go:296] duration metric: took 127.606196ms for postStartSetup
	I0806 08:22:44.776769   67412 main.go:141] libmachine: (custom-flannel-405163) Calling .GetConfigRaw
	I0806 08:22:44.777392   67412 main.go:141] libmachine: (custom-flannel-405163) Calling .GetIP
	I0806 08:22:44.780166   67412 main.go:141] libmachine: (custom-flannel-405163) DBG | domain custom-flannel-405163 has defined MAC address 52:54:00:53:15:a4 in network mk-custom-flannel-405163
	I0806 08:22:44.780504   67412 main.go:141] libmachine: (custom-flannel-405163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:15:a4", ip: ""} in network mk-custom-flannel-405163: {Iface:virbr1 ExpiryTime:2024-08-06 09:22:33 +0000 UTC Type:0 Mac:52:54:00:53:15:a4 Iaid: IPaddr:192.168.61.6 Prefix:24 Hostname:custom-flannel-405163 Clientid:01:52:54:00:53:15:a4}
	I0806 08:22:44.780557   67412 main.go:141] libmachine: (custom-flannel-405163) DBG | domain custom-flannel-405163 has defined IP address 192.168.61.6 and MAC address 52:54:00:53:15:a4 in network mk-custom-flannel-405163
	I0806 08:22:44.780852   67412 profile.go:143] Saving config to /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/custom-flannel-405163/config.json ...
	I0806 08:22:44.781067   67412 start.go:128] duration metric: took 27.20351429s to createHost
	I0806 08:22:44.781096   67412 main.go:141] libmachine: (custom-flannel-405163) Calling .GetSSHHostname
	I0806 08:22:44.783690   67412 main.go:141] libmachine: (custom-flannel-405163) DBG | domain custom-flannel-405163 has defined MAC address 52:54:00:53:15:a4 in network mk-custom-flannel-405163
	I0806 08:22:44.784119   67412 main.go:141] libmachine: (custom-flannel-405163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:15:a4", ip: ""} in network mk-custom-flannel-405163: {Iface:virbr1 ExpiryTime:2024-08-06 09:22:33 +0000 UTC Type:0 Mac:52:54:00:53:15:a4 Iaid: IPaddr:192.168.61.6 Prefix:24 Hostname:custom-flannel-405163 Clientid:01:52:54:00:53:15:a4}
	I0806 08:22:44.784146   67412 main.go:141] libmachine: (custom-flannel-405163) DBG | domain custom-flannel-405163 has defined IP address 192.168.61.6 and MAC address 52:54:00:53:15:a4 in network mk-custom-flannel-405163
	I0806 08:22:44.784309   67412 main.go:141] libmachine: (custom-flannel-405163) Calling .GetSSHPort
	I0806 08:22:44.784508   67412 main.go:141] libmachine: (custom-flannel-405163) Calling .GetSSHKeyPath
	I0806 08:22:44.784662   67412 main.go:141] libmachine: (custom-flannel-405163) Calling .GetSSHKeyPath
	I0806 08:22:44.784825   67412 main.go:141] libmachine: (custom-flannel-405163) Calling .GetSSHUsername
	I0806 08:22:44.784988   67412 main.go:141] libmachine: Using SSH client type: native
	I0806 08:22:44.785206   67412 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.6 22 <nil> <nil>}
	I0806 08:22:44.785222   67412 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0806 08:22:44.893379   67412 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722932564.871454444
	
	I0806 08:22:44.893402   67412 fix.go:216] guest clock: 1722932564.871454444
	I0806 08:22:44.893412   67412 fix.go:229] Guest: 2024-08-06 08:22:44.871454444 +0000 UTC Remote: 2024-08-06 08:22:44.781080944 +0000 UTC m=+74.497710376 (delta=90.3735ms)
	I0806 08:22:44.893453   67412 fix.go:200] guest clock delta is within tolerance: 90.3735ms
	I0806 08:22:44.893461   67412 start.go:83] releasing machines lock for "custom-flannel-405163", held for 27.316064795s
	I0806 08:22:44.893494   67412 main.go:141] libmachine: (custom-flannel-405163) Calling .DriverName
	I0806 08:22:44.893783   67412 main.go:141] libmachine: (custom-flannel-405163) Calling .GetIP
	I0806 08:22:44.896712   67412 main.go:141] libmachine: (custom-flannel-405163) DBG | domain custom-flannel-405163 has defined MAC address 52:54:00:53:15:a4 in network mk-custom-flannel-405163
	I0806 08:22:44.897126   67412 main.go:141] libmachine: (custom-flannel-405163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:15:a4", ip: ""} in network mk-custom-flannel-405163: {Iface:virbr1 ExpiryTime:2024-08-06 09:22:33 +0000 UTC Type:0 Mac:52:54:00:53:15:a4 Iaid: IPaddr:192.168.61.6 Prefix:24 Hostname:custom-flannel-405163 Clientid:01:52:54:00:53:15:a4}
	I0806 08:22:44.897157   67412 main.go:141] libmachine: (custom-flannel-405163) DBG | domain custom-flannel-405163 has defined IP address 192.168.61.6 and MAC address 52:54:00:53:15:a4 in network mk-custom-flannel-405163
	I0806 08:22:44.897353   67412 main.go:141] libmachine: (custom-flannel-405163) Calling .DriverName
	I0806 08:22:44.897811   67412 main.go:141] libmachine: (custom-flannel-405163) Calling .DriverName
	I0806 08:22:44.897955   67412 main.go:141] libmachine: (custom-flannel-405163) Calling .DriverName
	I0806 08:22:44.898039   67412 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0806 08:22:44.898089   67412 main.go:141] libmachine: (custom-flannel-405163) Calling .GetSSHHostname
	I0806 08:22:44.898146   67412 ssh_runner.go:195] Run: cat /version.json
	I0806 08:22:44.898265   67412 main.go:141] libmachine: (custom-flannel-405163) Calling .GetSSHHostname
	I0806 08:22:44.901141   67412 main.go:141] libmachine: (custom-flannel-405163) DBG | domain custom-flannel-405163 has defined MAC address 52:54:00:53:15:a4 in network mk-custom-flannel-405163
	I0806 08:22:44.901174   67412 main.go:141] libmachine: (custom-flannel-405163) DBG | domain custom-flannel-405163 has defined MAC address 52:54:00:53:15:a4 in network mk-custom-flannel-405163
	I0806 08:22:44.901465   67412 main.go:141] libmachine: (custom-flannel-405163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:15:a4", ip: ""} in network mk-custom-flannel-405163: {Iface:virbr1 ExpiryTime:2024-08-06 09:22:33 +0000 UTC Type:0 Mac:52:54:00:53:15:a4 Iaid: IPaddr:192.168.61.6 Prefix:24 Hostname:custom-flannel-405163 Clientid:01:52:54:00:53:15:a4}
	I0806 08:22:44.901487   67412 main.go:141] libmachine: (custom-flannel-405163) DBG | domain custom-flannel-405163 has defined IP address 192.168.61.6 and MAC address 52:54:00:53:15:a4 in network mk-custom-flannel-405163
	I0806 08:22:44.901564   67412 main.go:141] libmachine: (custom-flannel-405163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:15:a4", ip: ""} in network mk-custom-flannel-405163: {Iface:virbr1 ExpiryTime:2024-08-06 09:22:33 +0000 UTC Type:0 Mac:52:54:00:53:15:a4 Iaid: IPaddr:192.168.61.6 Prefix:24 Hostname:custom-flannel-405163 Clientid:01:52:54:00:53:15:a4}
	I0806 08:22:44.901628   67412 main.go:141] libmachine: (custom-flannel-405163) Calling .GetSSHPort
	I0806 08:22:44.901688   67412 main.go:141] libmachine: (custom-flannel-405163) DBG | domain custom-flannel-405163 has defined IP address 192.168.61.6 and MAC address 52:54:00:53:15:a4 in network mk-custom-flannel-405163
	I0806 08:22:44.901835   67412 main.go:141] libmachine: (custom-flannel-405163) Calling .GetSSHKeyPath
	I0806 08:22:44.901923   67412 main.go:141] libmachine: (custom-flannel-405163) Calling .GetSSHPort
	I0806 08:22:44.902099   67412 main.go:141] libmachine: (custom-flannel-405163) Calling .GetSSHUsername
	I0806 08:22:44.902102   67412 main.go:141] libmachine: (custom-flannel-405163) Calling .GetSSHKeyPath
	I0806 08:22:44.902264   67412 main.go:141] libmachine: (custom-flannel-405163) Calling .GetSSHUsername
	I0806 08:22:44.902279   67412 sshutil.go:53] new ssh client: &{IP:192.168.61.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/custom-flannel-405163/id_rsa Username:docker}
	I0806 08:22:44.902394   67412 sshutil.go:53] new ssh client: &{IP:192.168.61.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/custom-flannel-405163/id_rsa Username:docker}
	I0806 08:22:45.004273   67412 ssh_runner.go:195] Run: systemctl --version
	I0806 08:22:45.013150   67412 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0806 08:22:45.176324   67412 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0806 08:22:45.183755   67412 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0806 08:22:45.183841   67412 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0806 08:22:45.202574   67412 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
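
The find/mv step above moves any bridge or podman CNI definition in /etc/cni/net.d out of the way by renaming it with a .mk_disabled suffix, which is what the "disabled [...] bridge cni config(s)" line reports. A rough Go equivalent of that rename pass (the function name and error handling are illustrative, not the code minikube runs):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    // disableCNIConfigs renames bridge/podman CNI config files in dir by
    // appending ".mk_disabled", mirroring the find -exec mv pipeline above.
    func disableCNIConfigs(dir string) ([]string, error) {
        entries, err := os.ReadDir(dir)
        if err != nil {
            return nil, err
        }
        var disabled []string
        for _, e := range entries {
            name := e.Name()
            if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
                continue
            }
            if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
                src := filepath.Join(dir, name)
                if err := os.Rename(src, src+".mk_disabled"); err != nil {
                    return disabled, err
                }
                disabled = append(disabled, src)
            }
        }
        return disabled, nil
    }

    func main() {
        disabled, err := disableCNIConfigs("/etc/cni/net.d")
        fmt.Println(disabled, err)
    }
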
	I0806 08:22:45.202599   67412 start.go:495] detecting cgroup driver to use...
	I0806 08:22:45.202680   67412 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0806 08:22:45.225763   67412 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0806 08:22:45.241981   67412 docker.go:217] disabling cri-docker service (if available) ...
	I0806 08:22:45.242048   67412 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0806 08:22:45.256399   67412 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0806 08:22:45.270992   67412 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0806 08:22:41.199024   65825 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 08:22:41.698730   65825 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 08:22:42.199383   65825 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 08:22:42.699606   65825 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 08:22:43.199294   65825 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 08:22:43.698788   65825 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 08:22:44.198920   65825 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 08:22:44.699059   65825 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 08:22:45.198939   65825 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 08:22:45.699564   65825 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 08:22:45.398205   67412 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0806 08:22:45.550411   67412 docker.go:233] disabling docker service ...
	I0806 08:22:45.550479   67412 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0806 08:22:45.565808   67412 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0806 08:22:45.580262   67412 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0806 08:22:45.745218   67412 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0806 08:22:45.888429   67412 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0806 08:22:45.903876   67412 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0806 08:22:45.923442   67412 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0806 08:22:45.923510   67412 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:22:45.934823   67412 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0806 08:22:45.934893   67412 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:22:45.946907   67412 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:22:45.958151   67412 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:22:45.971502   67412 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0806 08:22:45.987531   67412 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:22:46.002178   67412 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:22:46.025380   67412 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
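
Taken together, the sed edits from 08:22:45.92 onward leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following settings (the drop-in is TOML; section headers and any unrelated keys already present in the file are omitted here):

    pause_image = "registry.k8s.io/pause:3.9"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
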
	I0806 08:22:46.036322   67412 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0806 08:22:46.046959   67412 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0806 08:22:46.047028   67412 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0806 08:22:46.060089   67412 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
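
The failed sysctl read is expected until br_netfilter is loaded; once it is, bridged pod traffic becomes visible to iptables, and IPv4 forwarding is switched on directly via /proc. For reference, the conventional persistent form of these two ad-hoc steps on a Kubernetes node looks something like the following (the file paths are the usual conventions, not taken from this run):

    # /etc/modules-load.d/k8s.conf
    br_netfilter

    # /etc/sysctl.d/k8s.conf
    net.bridge.bridge-nf-call-iptables = 1
    net.ipv4.ip_forward = 1
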
	I0806 08:22:46.069581   67412 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 08:22:46.209482   67412 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0806 08:22:46.392517   67412 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0806 08:22:46.392585   67412 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0806 08:22:46.399242   67412 start.go:563] Will wait 60s for crictl version
	I0806 08:22:46.399311   67412 ssh_runner.go:195] Run: which crictl
	I0806 08:22:46.403852   67412 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0806 08:22:46.449530   67412 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0806 08:22:46.449621   67412 ssh_runner.go:195] Run: crio --version
	I0806 08:22:46.479295   67412 ssh_runner.go:195] Run: crio --version
	I0806 08:22:46.508518   67412 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0806 08:22:46.199679   65825 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 08:22:46.699228   65825 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 08:22:47.199546   65825 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 08:22:47.698689   65825 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 08:22:47.848436   65825 kubeadm.go:1113] duration metric: took 11.790725853s to wait for elevateKubeSystemPrivileges
	I0806 08:22:47.848470   65825 kubeadm.go:394] duration metric: took 23.226382049s to StartCluster
	I0806 08:22:47.848491   65825 settings.go:142] acquiring lock: {Name:mkb0bfbcae3b4205ef43487a9de84cfd8afd2e5e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 08:22:47.848575   65825 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19370-13057/kubeconfig
	I0806 08:22:47.849885   65825 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-13057/kubeconfig: {Name:mk5c066914acf8e36556b29792f75b98076ca34a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 08:22:47.850169   65825 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.39.37 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0806 08:22:47.850318   65825 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0806 08:22:47.850391   65825 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0806 08:22:47.850465   65825 addons.go:69] Setting storage-provisioner=true in profile "calico-405163"
	I0806 08:22:47.850499   65825 addons.go:234] Setting addon storage-provisioner=true in "calico-405163"
	I0806 08:22:47.850531   65825 host.go:66] Checking if "calico-405163" exists ...
	I0806 08:22:47.850526   65825 addons.go:69] Setting default-storageclass=true in profile "calico-405163"
	I0806 08:22:47.850555   65825 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "calico-405163"
	I0806 08:22:47.850565   65825 config.go:182] Loaded profile config "calico-405163": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0806 08:22:47.851006   65825 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:22:47.851032   65825 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:22:47.851175   65825 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:22:47.851205   65825 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:22:47.851827   65825 out.go:177] * Verifying Kubernetes components...
	I0806 08:22:47.856992   65825 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 08:22:47.873361   65825 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35251
	I0806 08:22:47.873659   65825 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38065
	I0806 08:22:47.874122   65825 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:22:47.874165   65825 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:22:47.874631   65825 main.go:141] libmachine: Using API Version  1
	I0806 08:22:47.874652   65825 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:22:47.874978   65825 main.go:141] libmachine: Using API Version  1
	I0806 08:22:47.874997   65825 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:22:47.875048   65825 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:22:47.875213   65825 main.go:141] libmachine: (calico-405163) Calling .GetState
	I0806 08:22:47.875565   65825 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:22:47.876424   65825 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:22:47.876459   65825 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:22:47.879214   65825 addons.go:234] Setting addon default-storageclass=true in "calico-405163"
	I0806 08:22:47.879258   65825 host.go:66] Checking if "calico-405163" exists ...
	I0806 08:22:47.879621   65825 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:22:47.879648   65825 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:22:47.899075   65825 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40877
	I0806 08:22:47.899103   65825 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36659
	I0806 08:22:47.899610   65825 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:22:47.899773   65825 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:22:47.900136   65825 main.go:141] libmachine: Using API Version  1
	I0806 08:22:47.900164   65825 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:22:47.900482   65825 main.go:141] libmachine: Using API Version  1
	I0806 08:22:47.900502   65825 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:22:47.900554   65825 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:22:47.901107   65825 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:22:47.901124   65825 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:22:47.901403   65825 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:22:47.901567   65825 main.go:141] libmachine: (calico-405163) Calling .GetState
	I0806 08:22:47.903554   65825 main.go:141] libmachine: (calico-405163) Calling .DriverName
	I0806 08:22:47.906636   65825 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0806 08:22:44.896318   67989 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0806 08:22:44.896521   67989 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:22:44.896587   67989 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:22:44.914092   67989 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39101
	I0806 08:22:44.914587   67989 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:22:44.915218   67989 main.go:141] libmachine: Using API Version  1
	I0806 08:22:44.915244   67989 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:22:44.915551   67989 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:22:44.915721   67989 main.go:141] libmachine: (enable-default-cni-405163) Calling .GetMachineName
	I0806 08:22:44.915918   67989 main.go:141] libmachine: (enable-default-cni-405163) Calling .DriverName
	I0806 08:22:44.916101   67989 start.go:159] libmachine.API.Create for "enable-default-cni-405163" (driver="kvm2")
	I0806 08:22:44.916132   67989 client.go:168] LocalClient.Create starting
	I0806 08:22:44.916179   67989 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem
	I0806 08:22:44.916220   67989 main.go:141] libmachine: Decoding PEM data...
	I0806 08:22:44.916240   67989 main.go:141] libmachine: Parsing certificate...
	I0806 08:22:44.916300   67989 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19370-13057/.minikube/certs/cert.pem
	I0806 08:22:44.916327   67989 main.go:141] libmachine: Decoding PEM data...
	I0806 08:22:44.916343   67989 main.go:141] libmachine: Parsing certificate...
	I0806 08:22:44.916386   67989 main.go:141] libmachine: Running pre-create checks...
	I0806 08:22:44.916400   67989 main.go:141] libmachine: (enable-default-cni-405163) Calling .PreCreateCheck
	I0806 08:22:44.916791   67989 main.go:141] libmachine: (enable-default-cni-405163) Calling .GetConfigRaw
	I0806 08:22:44.917241   67989 main.go:141] libmachine: Creating machine...
	I0806 08:22:44.917259   67989 main.go:141] libmachine: (enable-default-cni-405163) Calling .Create
	I0806 08:22:44.917384   67989 main.go:141] libmachine: (enable-default-cni-405163) Creating KVM machine...
	I0806 08:22:44.918779   67989 main.go:141] libmachine: (enable-default-cni-405163) DBG | found existing default KVM network
	I0806 08:22:44.919950   67989 main.go:141] libmachine: (enable-default-cni-405163) DBG | I0806 08:22:44.919809   68467 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:f9:59:d1} reservation:<nil>}
	I0806 08:22:44.920880   67989 main.go:141] libmachine: (enable-default-cni-405163) DBG | I0806 08:22:44.920795   68467 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:94:01:fc} reservation:<nil>}
	I0806 08:22:44.922028   67989 main.go:141] libmachine: (enable-default-cni-405163) DBG | I0806 08:22:44.921934   68467 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:ea:4b:44} reservation:<nil>}
	I0806 08:22:44.923348   67989 main.go:141] libmachine: (enable-default-cni-405163) DBG | I0806 08:22:44.923263   68467 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0003091c0}
	I0806 08:22:44.923373   67989 main.go:141] libmachine: (enable-default-cni-405163) DBG | created network xml: 
	I0806 08:22:44.923402   67989 main.go:141] libmachine: (enable-default-cni-405163) DBG | <network>
	I0806 08:22:44.923440   67989 main.go:141] libmachine: (enable-default-cni-405163) DBG |   <name>mk-enable-default-cni-405163</name>
	I0806 08:22:44.923457   67989 main.go:141] libmachine: (enable-default-cni-405163) DBG |   <dns enable='no'/>
	I0806 08:22:44.923469   67989 main.go:141] libmachine: (enable-default-cni-405163) DBG |   
	I0806 08:22:44.923486   67989 main.go:141] libmachine: (enable-default-cni-405163) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I0806 08:22:44.923498   67989 main.go:141] libmachine: (enable-default-cni-405163) DBG |     <dhcp>
	I0806 08:22:44.923579   67989 main.go:141] libmachine: (enable-default-cni-405163) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I0806 08:22:44.923614   67989 main.go:141] libmachine: (enable-default-cni-405163) DBG |     </dhcp>
	I0806 08:22:44.923631   67989 main.go:141] libmachine: (enable-default-cni-405163) DBG |   </ip>
	I0806 08:22:44.923642   67989 main.go:141] libmachine: (enable-default-cni-405163) DBG |   
	I0806 08:22:44.923654   67989 main.go:141] libmachine: (enable-default-cni-405163) DBG | </network>
	I0806 08:22:44.923661   67989 main.go:141] libmachine: (enable-default-cni-405163) DBG | 
	I0806 08:22:44.929022   67989 main.go:141] libmachine: (enable-default-cni-405163) DBG | trying to create private KVM network mk-enable-default-cni-405163 192.168.72.0/24...
	I0806 08:22:45.001903   67989 main.go:141] libmachine: (enable-default-cni-405163) DBG | private KVM network mk-enable-default-cni-405163 192.168.72.0/24 created
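
The network.go lines above show the driver walking candidate /24 subnets, skipping 192.168.39.0/24, 192.168.50.0/24 and 192.168.61.0/24 because they are already in use, and settling on 192.168.72.0/24 for the new libvirt network. A simplified Go sketch of that selection loop (the candidate list and the taken-check are placeholders for the driver's real interface and lease inspection):

    package main

    import (
        "fmt"
        "net"
    )

    // firstFreeSubnet returns the first candidate /24 that is not marked as
    // taken. In the real driver, "taken" would be derived from existing host
    // interfaces and DHCP leases rather than a static map.
    func firstFreeSubnet(candidates []string, taken map[string]bool) (*net.IPNet, error) {
        for _, c := range candidates {
            _, ipnet, err := net.ParseCIDR(c)
            if err != nil {
                return nil, err
            }
            if taken[c] {
                continue // "skipping subnet ... that is taken"
            }
            return ipnet, nil
        }
        return nil, fmt.Errorf("no free subnet among %v", candidates)
    }

    func main() {
        candidates := []string{"192.168.39.0/24", "192.168.50.0/24", "192.168.61.0/24", "192.168.72.0/24"}
        taken := map[string]bool{"192.168.39.0/24": true, "192.168.50.0/24": true, "192.168.61.0/24": true}
        free, err := firstFreeSubnet(candidates, taken)
        fmt.Println(free, err) // 192.168.72.0/24 <nil>
    }
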
	I0806 08:22:45.001940   67989 main.go:141] libmachine: (enable-default-cni-405163) Setting up store path in /home/jenkins/minikube-integration/19370-13057/.minikube/machines/enable-default-cni-405163 ...
	I0806 08:22:45.001960   67989 main.go:141] libmachine: (enable-default-cni-405163) DBG | I0806 08:22:45.001855   68467 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19370-13057/.minikube
	I0806 08:22:45.001980   67989 main.go:141] libmachine: (enable-default-cni-405163) Building disk image from file:///home/jenkins/minikube-integration/19370-13057/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
	I0806 08:22:45.002185   67989 main.go:141] libmachine: (enable-default-cni-405163) Downloading /home/jenkins/minikube-integration/19370-13057/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19370-13057/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0806 08:22:45.254285   67989 main.go:141] libmachine: (enable-default-cni-405163) DBG | I0806 08:22:45.254158   68467 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19370-13057/.minikube/machines/enable-default-cni-405163/id_rsa...
	I0806 08:22:45.505086   67989 main.go:141] libmachine: (enable-default-cni-405163) DBG | I0806 08:22:45.504950   68467 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19370-13057/.minikube/machines/enable-default-cni-405163/enable-default-cni-405163.rawdisk...
	I0806 08:22:45.505117   67989 main.go:141] libmachine: (enable-default-cni-405163) DBG | Writing magic tar header
	I0806 08:22:45.505134   67989 main.go:141] libmachine: (enable-default-cni-405163) DBG | Writing SSH key tar header
	I0806 08:22:45.505148   67989 main.go:141] libmachine: (enable-default-cni-405163) DBG | I0806 08:22:45.505070   68467 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19370-13057/.minikube/machines/enable-default-cni-405163 ...
	I0806 08:22:45.505253   67989 main.go:141] libmachine: (enable-default-cni-405163) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19370-13057/.minikube/machines/enable-default-cni-405163
	I0806 08:22:45.505280   67989 main.go:141] libmachine: (enable-default-cni-405163) Setting executable bit set on /home/jenkins/minikube-integration/19370-13057/.minikube/machines/enable-default-cni-405163 (perms=drwx------)
	I0806 08:22:45.505293   67989 main.go:141] libmachine: (enable-default-cni-405163) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19370-13057/.minikube/machines
	I0806 08:22:45.505323   67989 main.go:141] libmachine: (enable-default-cni-405163) Setting executable bit set on /home/jenkins/minikube-integration/19370-13057/.minikube/machines (perms=drwxr-xr-x)
	I0806 08:22:45.505346   67989 main.go:141] libmachine: (enable-default-cni-405163) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19370-13057/.minikube
	I0806 08:22:45.505360   67989 main.go:141] libmachine: (enable-default-cni-405163) Setting executable bit set on /home/jenkins/minikube-integration/19370-13057/.minikube (perms=drwxr-xr-x)
	I0806 08:22:45.505378   67989 main.go:141] libmachine: (enable-default-cni-405163) Setting executable bit set on /home/jenkins/minikube-integration/19370-13057 (perms=drwxrwxr-x)
	I0806 08:22:45.505392   67989 main.go:141] libmachine: (enable-default-cni-405163) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0806 08:22:45.505410   67989 main.go:141] libmachine: (enable-default-cni-405163) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0806 08:22:45.505423   67989 main.go:141] libmachine: (enable-default-cni-405163) Creating domain...
	I0806 08:22:45.505462   67989 main.go:141] libmachine: (enable-default-cni-405163) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19370-13057
	I0806 08:22:45.505483   67989 main.go:141] libmachine: (enable-default-cni-405163) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0806 08:22:45.505496   67989 main.go:141] libmachine: (enable-default-cni-405163) DBG | Checking permissions on dir: /home/jenkins
	I0806 08:22:45.505513   67989 main.go:141] libmachine: (enable-default-cni-405163) DBG | Checking permissions on dir: /home
	I0806 08:22:45.505525   67989 main.go:141] libmachine: (enable-default-cni-405163) DBG | Skipping /home - not owner
	I0806 08:22:45.506584   67989 main.go:141] libmachine: (enable-default-cni-405163) define libvirt domain using xml: 
	I0806 08:22:45.506606   67989 main.go:141] libmachine: (enable-default-cni-405163) <domain type='kvm'>
	I0806 08:22:45.506617   67989 main.go:141] libmachine: (enable-default-cni-405163)   <name>enable-default-cni-405163</name>
	I0806 08:22:45.506624   67989 main.go:141] libmachine: (enable-default-cni-405163)   <memory unit='MiB'>3072</memory>
	I0806 08:22:45.506630   67989 main.go:141] libmachine: (enable-default-cni-405163)   <vcpu>2</vcpu>
	I0806 08:22:45.506636   67989 main.go:141] libmachine: (enable-default-cni-405163)   <features>
	I0806 08:22:45.506649   67989 main.go:141] libmachine: (enable-default-cni-405163)     <acpi/>
	I0806 08:22:45.506659   67989 main.go:141] libmachine: (enable-default-cni-405163)     <apic/>
	I0806 08:22:45.506669   67989 main.go:141] libmachine: (enable-default-cni-405163)     <pae/>
	I0806 08:22:45.506679   67989 main.go:141] libmachine: (enable-default-cni-405163)     
	I0806 08:22:45.506688   67989 main.go:141] libmachine: (enable-default-cni-405163)   </features>
	I0806 08:22:45.506699   67989 main.go:141] libmachine: (enable-default-cni-405163)   <cpu mode='host-passthrough'>
	I0806 08:22:45.506728   67989 main.go:141] libmachine: (enable-default-cni-405163)   
	I0806 08:22:45.506751   67989 main.go:141] libmachine: (enable-default-cni-405163)   </cpu>
	I0806 08:22:45.506766   67989 main.go:141] libmachine: (enable-default-cni-405163)   <os>
	I0806 08:22:45.506777   67989 main.go:141] libmachine: (enable-default-cni-405163)     <type>hvm</type>
	I0806 08:22:45.506789   67989 main.go:141] libmachine: (enable-default-cni-405163)     <boot dev='cdrom'/>
	I0806 08:22:45.506800   67989 main.go:141] libmachine: (enable-default-cni-405163)     <boot dev='hd'/>
	I0806 08:22:45.506816   67989 main.go:141] libmachine: (enable-default-cni-405163)     <bootmenu enable='no'/>
	I0806 08:22:45.506832   67989 main.go:141] libmachine: (enable-default-cni-405163)   </os>
	I0806 08:22:45.506841   67989 main.go:141] libmachine: (enable-default-cni-405163)   <devices>
	I0806 08:22:45.506850   67989 main.go:141] libmachine: (enable-default-cni-405163)     <disk type='file' device='cdrom'>
	I0806 08:22:45.506865   67989 main.go:141] libmachine: (enable-default-cni-405163)       <source file='/home/jenkins/minikube-integration/19370-13057/.minikube/machines/enable-default-cni-405163/boot2docker.iso'/>
	I0806 08:22:45.506889   67989 main.go:141] libmachine: (enable-default-cni-405163)       <target dev='hdc' bus='scsi'/>
	I0806 08:22:45.506917   67989 main.go:141] libmachine: (enable-default-cni-405163)       <readonly/>
	I0806 08:22:45.506934   67989 main.go:141] libmachine: (enable-default-cni-405163)     </disk>
	I0806 08:22:45.506943   67989 main.go:141] libmachine: (enable-default-cni-405163)     <disk type='file' device='disk'>
	I0806 08:22:45.506953   67989 main.go:141] libmachine: (enable-default-cni-405163)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0806 08:22:45.506964   67989 main.go:141] libmachine: (enable-default-cni-405163)       <source file='/home/jenkins/minikube-integration/19370-13057/.minikube/machines/enable-default-cni-405163/enable-default-cni-405163.rawdisk'/>
	I0806 08:22:45.506971   67989 main.go:141] libmachine: (enable-default-cni-405163)       <target dev='hda' bus='virtio'/>
	I0806 08:22:45.506977   67989 main.go:141] libmachine: (enable-default-cni-405163)     </disk>
	I0806 08:22:45.506983   67989 main.go:141] libmachine: (enable-default-cni-405163)     <interface type='network'>
	I0806 08:22:45.506990   67989 main.go:141] libmachine: (enable-default-cni-405163)       <source network='mk-enable-default-cni-405163'/>
	I0806 08:22:45.506997   67989 main.go:141] libmachine: (enable-default-cni-405163)       <model type='virtio'/>
	I0806 08:22:45.507002   67989 main.go:141] libmachine: (enable-default-cni-405163)     </interface>
	I0806 08:22:45.507007   67989 main.go:141] libmachine: (enable-default-cni-405163)     <interface type='network'>
	I0806 08:22:45.507014   67989 main.go:141] libmachine: (enable-default-cni-405163)       <source network='default'/>
	I0806 08:22:45.507022   67989 main.go:141] libmachine: (enable-default-cni-405163)       <model type='virtio'/>
	I0806 08:22:45.507033   67989 main.go:141] libmachine: (enable-default-cni-405163)     </interface>
	I0806 08:22:45.507048   67989 main.go:141] libmachine: (enable-default-cni-405163)     <serial type='pty'>
	I0806 08:22:45.507060   67989 main.go:141] libmachine: (enable-default-cni-405163)       <target port='0'/>
	I0806 08:22:45.507068   67989 main.go:141] libmachine: (enable-default-cni-405163)     </serial>
	I0806 08:22:45.507074   67989 main.go:141] libmachine: (enable-default-cni-405163)     <console type='pty'>
	I0806 08:22:45.507081   67989 main.go:141] libmachine: (enable-default-cni-405163)       <target type='serial' port='0'/>
	I0806 08:22:45.507086   67989 main.go:141] libmachine: (enable-default-cni-405163)     </console>
	I0806 08:22:45.507096   67989 main.go:141] libmachine: (enable-default-cni-405163)     <rng model='virtio'>
	I0806 08:22:45.507107   67989 main.go:141] libmachine: (enable-default-cni-405163)       <backend model='random'>/dev/random</backend>
	I0806 08:22:45.507130   67989 main.go:141] libmachine: (enable-default-cni-405163)     </rng>
	I0806 08:22:45.507143   67989 main.go:141] libmachine: (enable-default-cni-405163)     
	I0806 08:22:45.507158   67989 main.go:141] libmachine: (enable-default-cni-405163)     
	I0806 08:22:45.507173   67989 main.go:141] libmachine: (enable-default-cni-405163)   </devices>
	I0806 08:22:45.507186   67989 main.go:141] libmachine: (enable-default-cni-405163) </domain>
	I0806 08:22:45.507200   67989 main.go:141] libmachine: (enable-default-cni-405163) 
	I0806 08:22:45.515155   67989 main.go:141] libmachine: (enable-default-cni-405163) DBG | domain enable-default-cni-405163 has defined MAC address 52:54:00:5b:22:da in network default
	I0806 08:22:45.516011   67989 main.go:141] libmachine: (enable-default-cni-405163) Ensuring networks are active...
	I0806 08:22:45.516039   67989 main.go:141] libmachine: (enable-default-cni-405163) DBG | domain enable-default-cni-405163 has defined MAC address 52:54:00:44:6c:10 in network mk-enable-default-cni-405163
	I0806 08:22:45.516841   67989 main.go:141] libmachine: (enable-default-cni-405163) Ensuring network default is active
	I0806 08:22:45.517219   67989 main.go:141] libmachine: (enable-default-cni-405163) Ensuring network mk-enable-default-cni-405163 is active
	I0806 08:22:45.517957   67989 main.go:141] libmachine: (enable-default-cni-405163) Getting domain xml...
	I0806 08:22:45.518855   67989 main.go:141] libmachine: (enable-default-cni-405163) Creating domain...
	I0806 08:22:46.978426   67989 main.go:141] libmachine: (enable-default-cni-405163) Waiting to get IP...
	I0806 08:22:46.979608   67989 main.go:141] libmachine: (enable-default-cni-405163) DBG | domain enable-default-cni-405163 has defined MAC address 52:54:00:44:6c:10 in network mk-enable-default-cni-405163
	I0806 08:22:46.980117   67989 main.go:141] libmachine: (enable-default-cni-405163) DBG | unable to find current IP address of domain enable-default-cni-405163 in network mk-enable-default-cni-405163
	I0806 08:22:46.980271   67989 main.go:141] libmachine: (enable-default-cni-405163) DBG | I0806 08:22:46.980193   68467 retry.go:31] will retry after 286.375974ms: waiting for machine to come up
	I0806 08:22:47.268890   67989 main.go:141] libmachine: (enable-default-cni-405163) DBG | domain enable-default-cni-405163 has defined MAC address 52:54:00:44:6c:10 in network mk-enable-default-cni-405163
	I0806 08:22:47.269718   67989 main.go:141] libmachine: (enable-default-cni-405163) DBG | unable to find current IP address of domain enable-default-cni-405163 in network mk-enable-default-cni-405163
	I0806 08:22:47.269742   67989 main.go:141] libmachine: (enable-default-cni-405163) DBG | I0806 08:22:47.269673   68467 retry.go:31] will retry after 331.02095ms: waiting for machine to come up
	I0806 08:22:47.602449   67989 main.go:141] libmachine: (enable-default-cni-405163) DBG | domain enable-default-cni-405163 has defined MAC address 52:54:00:44:6c:10 in network mk-enable-default-cni-405163
	I0806 08:22:47.603161   67989 main.go:141] libmachine: (enable-default-cni-405163) DBG | unable to find current IP address of domain enable-default-cni-405163 in network mk-enable-default-cni-405163
	I0806 08:22:47.603208   67989 main.go:141] libmachine: (enable-default-cni-405163) DBG | I0806 08:22:47.603113   68467 retry.go:31] will retry after 433.124904ms: waiting for machine to come up
	I0806 08:22:48.045356   67989 main.go:141] libmachine: (enable-default-cni-405163) DBG | domain enable-default-cni-405163 has defined MAC address 52:54:00:44:6c:10 in network mk-enable-default-cni-405163
	I0806 08:22:48.045936   67989 main.go:141] libmachine: (enable-default-cni-405163) DBG | unable to find current IP address of domain enable-default-cni-405163 in network mk-enable-default-cni-405163
	I0806 08:22:48.045963   67989 main.go:141] libmachine: (enable-default-cni-405163) DBG | I0806 08:22:48.045894   68467 retry.go:31] will retry after 521.83016ms: waiting for machine to come up
	I0806 08:22:48.569745   67989 main.go:141] libmachine: (enable-default-cni-405163) DBG | domain enable-default-cni-405163 has defined MAC address 52:54:00:44:6c:10 in network mk-enable-default-cni-405163
	I0806 08:22:48.570268   67989 main.go:141] libmachine: (enable-default-cni-405163) DBG | unable to find current IP address of domain enable-default-cni-405163 in network mk-enable-default-cni-405163
	I0806 08:22:48.570289   67989 main.go:141] libmachine: (enable-default-cni-405163) DBG | I0806 08:22:48.570217   68467 retry.go:31] will retry after 560.102081ms: waiting for machine to come up
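
While the calico-405163 run continues below, the enable-default-cni-405163 machine is still waiting for its DHCP lease, polling with an increasing randomized delay (286ms, 331ms, 433ms, ...). A small Go sketch of a poll-with-backoff loop in the same spirit (the lookup callback, initial delay, and cap are illustrative placeholders):

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    var errNoLease = errors.New("no DHCP lease yet")

    // waitForIP calls lookup until it returns an address or the deadline
    // passes, roughly doubling the delay between attempts up to a cap.
    func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        delay := 300 * time.Millisecond
        for time.Now().Before(deadline) {
            if ip, err := lookup(); err == nil {
                return ip, nil
            }
            time.Sleep(delay)
            if delay < 5*time.Second {
                delay *= 2
            }
        }
        return "", fmt.Errorf("timed out waiting for machine IP: %w", errNoLease)
    }

    func main() {
        attempts := 0
        ip, err := waitForIP(func() (string, error) {
            attempts++
            if attempts < 4 {
                return "", errNoLease
            }
            return "192.168.72.10", nil // placeholder address
        }, 30*time.Second)
        fmt.Println(ip, err)
    }
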
	I0806 08:22:47.908025   65825 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0806 08:22:47.908037   65825 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0806 08:22:47.908059   65825 main.go:141] libmachine: (calico-405163) Calling .GetSSHHostname
	I0806 08:22:47.914143   65825 main.go:141] libmachine: (calico-405163) DBG | domain calico-405163 has defined MAC address 52:54:00:40:bf:b8 in network mk-calico-405163
	I0806 08:22:47.914178   65825 main.go:141] libmachine: (calico-405163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:bf:b8", ip: ""} in network mk-calico-405163: {Iface:virbr2 ExpiryTime:2024-08-06 09:22:04 +0000 UTC Type:0 Mac:52:54:00:40:bf:b8 Iaid: IPaddr:192.168.39.37 Prefix:24 Hostname:calico-405163 Clientid:01:52:54:00:40:bf:b8}
	I0806 08:22:47.914198   65825 main.go:141] libmachine: (calico-405163) DBG | domain calico-405163 has defined IP address 192.168.39.37 and MAC address 52:54:00:40:bf:b8 in network mk-calico-405163
	I0806 08:22:47.914293   65825 main.go:141] libmachine: (calico-405163) Calling .GetSSHPort
	I0806 08:22:47.918196   65825 main.go:141] libmachine: (calico-405163) Calling .GetSSHKeyPath
	I0806 08:22:47.918519   65825 main.go:141] libmachine: (calico-405163) Calling .GetSSHUsername
	I0806 08:22:47.918799   65825 sshutil.go:53] new ssh client: &{IP:192.168.39.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/calico-405163/id_rsa Username:docker}
	I0806 08:22:47.937286   65825 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42067
	I0806 08:22:47.937728   65825 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:22:47.938272   65825 main.go:141] libmachine: Using API Version  1
	I0806 08:22:47.938291   65825 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:22:47.938662   65825 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:22:47.938849   65825 main.go:141] libmachine: (calico-405163) Calling .GetState
	I0806 08:22:47.940823   65825 main.go:141] libmachine: (calico-405163) Calling .DriverName
	I0806 08:22:47.941067   65825 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0806 08:22:47.941083   65825 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0806 08:22:47.941105   65825 main.go:141] libmachine: (calico-405163) Calling .GetSSHHostname
	I0806 08:22:47.944174   65825 main.go:141] libmachine: (calico-405163) DBG | domain calico-405163 has defined MAC address 52:54:00:40:bf:b8 in network mk-calico-405163
	I0806 08:22:47.944707   65825 main.go:141] libmachine: (calico-405163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:bf:b8", ip: ""} in network mk-calico-405163: {Iface:virbr2 ExpiryTime:2024-08-06 09:22:04 +0000 UTC Type:0 Mac:52:54:00:40:bf:b8 Iaid: IPaddr:192.168.39.37 Prefix:24 Hostname:calico-405163 Clientid:01:52:54:00:40:bf:b8}
	I0806 08:22:47.944726   65825 main.go:141] libmachine: (calico-405163) DBG | domain calico-405163 has defined IP address 192.168.39.37 and MAC address 52:54:00:40:bf:b8 in network mk-calico-405163
	I0806 08:22:47.945060   65825 main.go:141] libmachine: (calico-405163) Calling .GetSSHPort
	I0806 08:22:47.945280   65825 main.go:141] libmachine: (calico-405163) Calling .GetSSHKeyPath
	I0806 08:22:47.945437   65825 main.go:141] libmachine: (calico-405163) Calling .GetSSHUsername
	I0806 08:22:47.945556   65825 sshutil.go:53] new ssh client: &{IP:192.168.39.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/calico-405163/id_rsa Username:docker}
	I0806 08:22:48.314876   65825 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0806 08:22:48.314950   65825 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0806 08:22:48.439931   65825 node_ready.go:35] waiting up to 15m0s for node "calico-405163" to be "Ready" ...
	I0806 08:22:48.449547   65825 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0806 08:22:48.604486   65825 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0806 08:22:49.014552   65825 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
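
The kubectl pipeline run at 08:22:48.31 rewrites the coredns ConfigMap so that host.minikube.internal resolves to the host's address on the VM network (192.168.39.1); the "host record injected" line confirms the replace succeeded. Based on that sed expression, the relevant part of the resulting Corefile would look roughly like this (other plugins and their options are elided):

    .:53 {
        log
        errors
        ...
        hosts {
           192.168.39.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf
        ...
    }
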
	I0806 08:22:49.014626   65825 main.go:141] libmachine: Making call to close driver server
	I0806 08:22:49.014646   65825 main.go:141] libmachine: (calico-405163) Calling .Close
	I0806 08:22:49.014930   65825 main.go:141] libmachine: Successfully made call to close driver server
	I0806 08:22:49.014945   65825 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 08:22:49.014954   65825 main.go:141] libmachine: Making call to close driver server
	I0806 08:22:49.014961   65825 main.go:141] libmachine: (calico-405163) Calling .Close
	I0806 08:22:49.014962   65825 main.go:141] libmachine: (calico-405163) DBG | Closing plugin on server side
	I0806 08:22:49.015231   65825 main.go:141] libmachine: Successfully made call to close driver server
	I0806 08:22:49.015246   65825 main.go:141] libmachine: (calico-405163) DBG | Closing plugin on server side
	I0806 08:22:49.015249   65825 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 08:22:49.031608   65825 main.go:141] libmachine: Making call to close driver server
	I0806 08:22:49.031633   65825 main.go:141] libmachine: (calico-405163) Calling .Close
	I0806 08:22:49.031932   65825 main.go:141] libmachine: (calico-405163) DBG | Closing plugin on server side
	I0806 08:22:49.032427   65825 main.go:141] libmachine: Successfully made call to close driver server
	I0806 08:22:49.032455   65825 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 08:22:49.443914   65825 main.go:141] libmachine: Making call to close driver server
	I0806 08:22:49.443943   65825 main.go:141] libmachine: (calico-405163) Calling .Close
	I0806 08:22:49.444204   65825 main.go:141] libmachine: Successfully made call to close driver server
	I0806 08:22:49.444223   65825 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 08:22:49.444233   65825 main.go:141] libmachine: Making call to close driver server
	I0806 08:22:49.444240   65825 main.go:141] libmachine: (calico-405163) Calling .Close
	I0806 08:22:49.444629   65825 main.go:141] libmachine: (calico-405163) DBG | Closing plugin on server side
	I0806 08:22:49.444639   65825 main.go:141] libmachine: Successfully made call to close driver server
	I0806 08:22:49.444652   65825 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 08:22:49.446534   65825 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0806 08:22:46.509694   67412 main.go:141] libmachine: (custom-flannel-405163) Calling .GetIP
	I0806 08:22:46.513129   67412 main.go:141] libmachine: (custom-flannel-405163) DBG | domain custom-flannel-405163 has defined MAC address 52:54:00:53:15:a4 in network mk-custom-flannel-405163
	I0806 08:22:46.513596   67412 main.go:141] libmachine: (custom-flannel-405163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:15:a4", ip: ""} in network mk-custom-flannel-405163: {Iface:virbr1 ExpiryTime:2024-08-06 09:22:33 +0000 UTC Type:0 Mac:52:54:00:53:15:a4 Iaid: IPaddr:192.168.61.6 Prefix:24 Hostname:custom-flannel-405163 Clientid:01:52:54:00:53:15:a4}
	I0806 08:22:46.513634   67412 main.go:141] libmachine: (custom-flannel-405163) DBG | domain custom-flannel-405163 has defined IP address 192.168.61.6 and MAC address 52:54:00:53:15:a4 in network mk-custom-flannel-405163
	I0806 08:22:46.513847   67412 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0806 08:22:46.518277   67412 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0806 08:22:46.531733   67412 kubeadm.go:883] updating cluster {Name:custom-flannel-405163 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.30.3 ClusterName:custom-flannel-405163 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP:192.168.61.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:
262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0806 08:22:46.531868   67412 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0806 08:22:46.531937   67412 ssh_runner.go:195] Run: sudo crictl images --output json
	I0806 08:22:46.567360   67412 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0806 08:22:46.567433   67412 ssh_runner.go:195] Run: which lz4
	I0806 08:22:46.572025   67412 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0806 08:22:46.576659   67412 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0806 08:22:46.576686   67412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0806 08:22:48.209346   67412 crio.go:462] duration metric: took 1.637346886s to copy over tarball
	I0806 08:22:48.209445   67412 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0806 08:22:49.448060   65825 addons.go:510] duration metric: took 1.597669907s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0806 08:22:49.523722   65825 kapi.go:214] "coredns" deployment in "kube-system" namespace and "calico-405163" context rescaled to 1 replicas
	I0806 08:22:50.444948   65825 node_ready.go:53] node "calico-405163" has status "Ready":"False"
	I0806 08:22:50.892050   67412 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.682554871s)
	I0806 08:22:50.892084   67412 crio.go:469] duration metric: took 2.68270572s to extract the tarball
	I0806 08:22:50.892093   67412 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0806 08:22:50.934193   67412 ssh_runner.go:195] Run: sudo crictl images --output json
	I0806 08:22:50.981804   67412 crio.go:514] all images are preloaded for cri-o runtime.
	I0806 08:22:50.981829   67412 cache_images.go:84] Images are preloaded, skipping loading
	I0806 08:22:50.981838   67412 kubeadm.go:934] updating node { 192.168.61.6 8443 v1.30.3 crio true true} ...
	I0806 08:22:50.981981   67412 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=custom-flannel-405163 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.6
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:custom-flannel-405163 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml}
	I0806 08:22:50.982064   67412 ssh_runner.go:195] Run: crio config
	I0806 08:22:51.032478   67412 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0806 08:22:51.032529   67412 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0806 08:22:51.032563   67412 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.6 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:custom-flannel-405163 NodeName:custom-flannel-405163 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.6"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.6 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0806 08:22:51.032735   67412 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.6
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "custom-flannel-405163"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.6
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.6"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0806 08:22:51.032798   67412 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0806 08:22:51.046824   67412 binaries.go:44] Found k8s binaries, skipping transfer
	I0806 08:22:51.046932   67412 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0806 08:22:51.060618   67412 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (319 bytes)
	I0806 08:22:51.082951   67412 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0806 08:22:51.103844   67412 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
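
At this point the generated kubeadm configuration (the four YAML documents shown above) has been written to /var/tmp/minikube/kubeadm.yaml.new on the guest. A quick way to sanity-check such a multi-document file outside the test run is to split it and confirm each document's apiVersion and kind, for example with the widely used gopkg.in/yaml.v3 package; the sketch below is illustrative only and not part of minikube.

    package main

    import (
        "fmt"
        "os"
        "strings"

        "gopkg.in/yaml.v3"
    )

    // listKinds splits a multi-document config on "---" separators and returns
    // each document's apiVersion/kind, as a basic structural sanity check.
    func listKinds(data []byte) ([]string, error) {
        var kinds []string
        for _, doc := range strings.Split(string(data), "\n---\n") {
            var meta struct {
                APIVersion string `yaml:"apiVersion"`
                Kind       string `yaml:"kind"`
            }
            if err := yaml.Unmarshal([]byte(doc), &meta); err != nil {
                return nil, err
            }
            kinds = append(kinds, meta.APIVersion+"/"+meta.Kind)
        }
        return kinds, nil
    }

    func main() {
        data, err := os.ReadFile("kubeadm.yaml") // placeholder path
        if err != nil {
            fmt.Println(err)
            return
        }
        kinds, err := listKinds(data)
        fmt.Println(kinds, err)
    }
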
	I0806 08:22:51.126497   67412 ssh_runner.go:195] Run: grep 192.168.61.6	control-plane.minikube.internal$ /etc/hosts
	I0806 08:22:51.132019   67412 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.6	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0806 08:22:51.148994   67412 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 08:22:51.286654   67412 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0806 08:22:51.307607   67412 certs.go:68] Setting up /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/custom-flannel-405163 for IP: 192.168.61.6
	I0806 08:22:51.307637   67412 certs.go:194] generating shared ca certs ...
	I0806 08:22:51.307661   67412 certs.go:226] acquiring lock for ca certs: {Name:mkf5c5aed91f4108054d9f2c742cad66c86afbed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 08:22:51.307857   67412 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19370-13057/.minikube/ca.key
	I0806 08:22:51.307919   67412 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.key
	I0806 08:22:51.307930   67412 certs.go:256] generating profile certs ...
	I0806 08:22:51.307999   67412 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/custom-flannel-405163/client.key
	I0806 08:22:51.308016   67412 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/custom-flannel-405163/client.crt with IP's: []
	I0806 08:22:51.728251   67412 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/custom-flannel-405163/client.crt ...
	I0806 08:22:51.728285   67412 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/custom-flannel-405163/client.crt: {Name:mk4c6e783b93b635eeafeb3d553576a061cdde1e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 08:22:51.728487   67412 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/custom-flannel-405163/client.key ...
	I0806 08:22:51.728509   67412 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/custom-flannel-405163/client.key: {Name:mk8e23480e66ffdcdbf700279d8f66add5b59db3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 08:22:51.728618   67412 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/custom-flannel-405163/apiserver.key.15f15ba3
	I0806 08:22:51.728638   67412 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/custom-flannel-405163/apiserver.crt.15f15ba3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.6]
	I0806 08:22:51.875569   67412 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/custom-flannel-405163/apiserver.crt.15f15ba3 ...
	I0806 08:22:51.875600   67412 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/custom-flannel-405163/apiserver.crt.15f15ba3: {Name:mk6fa47ec79a4bdec592e2235ed01006a81a47f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 08:22:51.875763   67412 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/custom-flannel-405163/apiserver.key.15f15ba3 ...
	I0806 08:22:51.875778   67412 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/custom-flannel-405163/apiserver.key.15f15ba3: {Name:mk4506e6fe1e1e3765f8aa3d34ec9db980390ff5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 08:22:51.875870   67412 certs.go:381] copying /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/custom-flannel-405163/apiserver.crt.15f15ba3 -> /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/custom-flannel-405163/apiserver.crt
	I0806 08:22:51.875974   67412 certs.go:385] copying /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/custom-flannel-405163/apiserver.key.15f15ba3 -> /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/custom-flannel-405163/apiserver.key
	I0806 08:22:51.876056   67412 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/custom-flannel-405163/proxy-client.key
	I0806 08:22:51.876079   67412 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/custom-flannel-405163/proxy-client.crt with IP's: []
	I0806 08:22:52.030421   67412 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/custom-flannel-405163/proxy-client.crt ...
	I0806 08:22:52.030453   67412 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/custom-flannel-405163/proxy-client.crt: {Name:mk89b4b29a3a30167ebed2cfbe85df154d6019e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 08:22:52.030619   67412 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/custom-flannel-405163/proxy-client.key ...
	I0806 08:22:52.030635   67412 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/custom-flannel-405163/proxy-client.key: {Name:mkf5299ea5545d1ecd4541893a5039395defe8ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 08:22:52.030823   67412 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/20246.pem (1338 bytes)
	W0806 08:22:52.030883   67412 certs.go:480] ignoring /home/jenkins/minikube-integration/19370-13057/.minikube/certs/20246_empty.pem, impossibly tiny 0 bytes
	I0806 08:22:52.030899   67412 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca-key.pem (1675 bytes)
	I0806 08:22:52.030937   67412 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem (1078 bytes)
	I0806 08:22:52.030976   67412 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/cert.pem (1123 bytes)
	I0806 08:22:52.031014   67412 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/key.pem (1675 bytes)
	I0806 08:22:52.031058   67412 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem (1708 bytes)
	I0806 08:22:52.031710   67412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0806 08:22:52.068212   67412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0806 08:22:52.094125   67412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0806 08:22:52.119623   67412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0806 08:22:52.147097   67412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/custom-flannel-405163/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0806 08:22:52.175088   67412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/custom-flannel-405163/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0806 08:22:52.205917   67412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/custom-flannel-405163/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0806 08:22:52.235985   67412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/custom-flannel-405163/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0806 08:22:52.264578   67412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem --> /usr/share/ca-certificates/202462.pem (1708 bytes)
	I0806 08:22:52.294809   67412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0806 08:22:52.324856   67412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/certs/20246.pem --> /usr/share/ca-certificates/20246.pem (1338 bytes)
	I0806 08:22:52.355633   67412 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0806 08:22:52.380958   67412 ssh_runner.go:195] Run: openssl version
	I0806 08:22:52.388761   67412 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202462.pem && ln -fs /usr/share/ca-certificates/202462.pem /etc/ssl/certs/202462.pem"
	I0806 08:22:52.400841   67412 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202462.pem
	I0806 08:22:52.406887   67412 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  6 07:17 /usr/share/ca-certificates/202462.pem
	I0806 08:22:52.406967   67412 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202462.pem
	I0806 08:22:52.414776   67412 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202462.pem /etc/ssl/certs/3ec20f2e.0"
	I0806 08:22:52.427296   67412 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0806 08:22:52.443165   67412 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0806 08:22:52.449283   67412 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  6 07:07 /usr/share/ca-certificates/minikubeCA.pem
	I0806 08:22:52.449349   67412 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0806 08:22:52.457570   67412 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0806 08:22:52.473003   67412 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20246.pem && ln -fs /usr/share/ca-certificates/20246.pem /etc/ssl/certs/20246.pem"
	I0806 08:22:52.488808   67412 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20246.pem
	I0806 08:22:52.495067   67412 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  6 07:17 /usr/share/ca-certificates/20246.pem
	I0806 08:22:52.495147   67412 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20246.pem
	I0806 08:22:52.503498   67412 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20246.pem /etc/ssl/certs/51391683.0"
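	(The openssl/ln pairs above follow OpenSSL's hashed-symlink convention: "openssl x509 -hash -noout" prints the certificate's subject hash, and a symlink named <hash>.0 under /etc/ssl/certs is what the verifier looks up. The b5213941 value used a few lines up is exactly that hash for minikubeCA.pem; reproduced by hand:)

	# Print the subject hash OpenSSL uses to locate the CA at verification time.
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941 for this CA
	# Expose the CA under that name so TLS verification on the node can find it.
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/b5213941.0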
	I0806 08:22:52.519066   67412 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0806 08:22:52.524864   67412 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0806 08:22:52.524942   67412 kubeadm.go:392] StartCluster: {Name:custom-flannel-405163 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:custom-flannel-405163 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP:192.168.61.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 08:22:52.525050   67412 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0806 08:22:52.525123   67412 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0806 08:22:52.571701   67412 cri.go:89] found id: ""
	I0806 08:22:52.571779   67412 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0806 08:22:52.582792   67412 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0806 08:22:52.593113   67412 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0806 08:22:52.607199   67412 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0806 08:22:52.607221   67412 kubeadm.go:157] found existing configuration files:
	
	I0806 08:22:52.607269   67412 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0806 08:22:52.621799   67412 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0806 08:22:52.621862   67412 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0806 08:22:52.633654   67412 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0806 08:22:52.644604   67412 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0806 08:22:52.644689   67412 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0806 08:22:52.655255   67412 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0806 08:22:52.665314   67412 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0806 08:22:52.665370   67412 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0806 08:22:52.677888   67412 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0806 08:22:52.691936   67412 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0806 08:22:52.692022   67412 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0806 08:22:52.705903   67412 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0806 08:22:52.788380   67412 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0806 08:22:52.788908   67412 kubeadm.go:310] [preflight] Running pre-flight checks
	I0806 08:22:52.966866   67412 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0806 08:22:52.967008   67412 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0806 08:22:52.967162   67412 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0806 08:22:53.211584   67412 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0806 08:22:49.131840   67989 main.go:141] libmachine: (enable-default-cni-405163) DBG | domain enable-default-cni-405163 has defined MAC address 52:54:00:44:6c:10 in network mk-enable-default-cni-405163
	I0806 08:22:49.132444   67989 main.go:141] libmachine: (enable-default-cni-405163) DBG | unable to find current IP address of domain enable-default-cni-405163 in network mk-enable-default-cni-405163
	I0806 08:22:49.132487   67989 main.go:141] libmachine: (enable-default-cni-405163) DBG | I0806 08:22:49.132395   68467 retry.go:31] will retry after 694.171569ms: waiting for machine to come up
	I0806 08:22:49.828389   67989 main.go:141] libmachine: (enable-default-cni-405163) DBG | domain enable-default-cni-405163 has defined MAC address 52:54:00:44:6c:10 in network mk-enable-default-cni-405163
	I0806 08:22:49.828970   67989 main.go:141] libmachine: (enable-default-cni-405163) DBG | unable to find current IP address of domain enable-default-cni-405163 in network mk-enable-default-cni-405163
	I0806 08:22:49.829000   67989 main.go:141] libmachine: (enable-default-cni-405163) DBG | I0806 08:22:49.828909   68467 retry.go:31] will retry after 1.164366067s: waiting for machine to come up
	I0806 08:22:50.994985   67989 main.go:141] libmachine: (enable-default-cni-405163) DBG | domain enable-default-cni-405163 has defined MAC address 52:54:00:44:6c:10 in network mk-enable-default-cni-405163
	I0806 08:22:50.995518   67989 main.go:141] libmachine: (enable-default-cni-405163) DBG | unable to find current IP address of domain enable-default-cni-405163 in network mk-enable-default-cni-405163
	I0806 08:22:50.995547   67989 main.go:141] libmachine: (enable-default-cni-405163) DBG | I0806 08:22:50.995471   68467 retry.go:31] will retry after 1.297533833s: waiting for machine to come up
	I0806 08:22:52.295142   67989 main.go:141] libmachine: (enable-default-cni-405163) DBG | domain enable-default-cni-405163 has defined MAC address 52:54:00:44:6c:10 in network mk-enable-default-cni-405163
	I0806 08:22:52.295746   67989 main.go:141] libmachine: (enable-default-cni-405163) DBG | unable to find current IP address of domain enable-default-cni-405163 in network mk-enable-default-cni-405163
	I0806 08:22:52.295773   67989 main.go:141] libmachine: (enable-default-cni-405163) DBG | I0806 08:22:52.295682   68467 retry.go:31] will retry after 1.662787422s: waiting for machine to come up
	I0806 08:22:53.408605   67412 out.go:204]   - Generating certificates and keys ...
	I0806 08:22:53.408747   67412 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0806 08:22:53.408843   67412 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0806 08:22:53.408966   67412 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0806 08:22:53.542417   67412 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0806 08:22:53.721366   67412 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0806 08:22:53.858207   67412 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0806 08:22:53.986052   67412 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0806 08:22:53.986246   67412 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [custom-flannel-405163 localhost] and IPs [192.168.61.6 127.0.0.1 ::1]
	I0806 08:22:54.165728   67412 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0806 08:22:54.165965   67412 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [custom-flannel-405163 localhost] and IPs [192.168.61.6 127.0.0.1 ::1]
	I0806 08:22:54.235946   67412 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0806 08:22:54.474501   67412 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0806 08:22:54.902803   67412 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0806 08:22:54.903103   67412 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0806 08:22:55.078416   67412 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0806 08:22:55.246959   67412 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0806 08:22:55.668789   67412 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0806 08:22:55.752009   67412 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0806 08:22:56.034544   67412 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0806 08:22:56.035247   67412 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0806 08:22:56.037960   67412 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0806 08:22:52.628330   65825 node_ready.go:53] node "calico-405163" has status "Ready":"False"
	I0806 08:22:54.945911   65825 node_ready.go:53] node "calico-405163" has status "Ready":"False"
	I0806 08:22:53.960601   67989 main.go:141] libmachine: (enable-default-cni-405163) DBG | domain enable-default-cni-405163 has defined MAC address 52:54:00:44:6c:10 in network mk-enable-default-cni-405163
	I0806 08:22:53.961125   67989 main.go:141] libmachine: (enable-default-cni-405163) DBG | unable to find current IP address of domain enable-default-cni-405163 in network mk-enable-default-cni-405163
	I0806 08:22:53.961147   67989 main.go:141] libmachine: (enable-default-cni-405163) DBG | I0806 08:22:53.961093   68467 retry.go:31] will retry after 1.689428199s: waiting for machine to come up
	I0806 08:22:55.652584   67989 main.go:141] libmachine: (enable-default-cni-405163) DBG | domain enable-default-cni-405163 has defined MAC address 52:54:00:44:6c:10 in network mk-enable-default-cni-405163
	I0806 08:22:55.653154   67989 main.go:141] libmachine: (enable-default-cni-405163) DBG | unable to find current IP address of domain enable-default-cni-405163 in network mk-enable-default-cni-405163
	I0806 08:22:55.653184   67989 main.go:141] libmachine: (enable-default-cni-405163) DBG | I0806 08:22:55.653098   68467 retry.go:31] will retry after 2.368953129s: waiting for machine to come up
	I0806 08:22:58.024225   67989 main.go:141] libmachine: (enable-default-cni-405163) DBG | domain enable-default-cni-405163 has defined MAC address 52:54:00:44:6c:10 in network mk-enable-default-cni-405163
	I0806 08:22:58.024851   67989 main.go:141] libmachine: (enable-default-cni-405163) DBG | unable to find current IP address of domain enable-default-cni-405163 in network mk-enable-default-cni-405163
	I0806 08:22:58.024887   67989 main.go:141] libmachine: (enable-default-cni-405163) DBG | I0806 08:22:58.024753   68467 retry.go:31] will retry after 3.18368882s: waiting for machine to come up
	I0806 08:22:56.243361   67412 out.go:204]   - Booting up control plane ...
	I0806 08:22:56.243567   67412 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0806 08:22:56.243729   67412 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0806 08:22:56.243833   67412 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0806 08:22:56.244016   67412 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0806 08:22:56.244189   67412 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0806 08:22:56.244267   67412 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0806 08:22:56.244478   67412 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0806 08:22:56.244613   67412 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0806 08:22:57.225445   67412 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001667019s
	I0806 08:22:57.225653   67412 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0806 08:22:57.444933   65825 node_ready.go:53] node "calico-405163" has status "Ready":"False"
	I0806 08:22:58.944622   65825 node_ready.go:49] node "calico-405163" has status "Ready":"True"
	I0806 08:22:58.944660   65825 node_ready.go:38] duration metric: took 10.50468717s for node "calico-405163" to be "Ready" ...
	I0806 08:22:58.944683   65825 pod_ready.go:35] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0806 08:22:58.956212   65825 pod_ready.go:78] waiting up to 15m0s for pod "calico-kube-controllers-77d59654f4-jlr9z" in "kube-system" namespace to be "Ready" ...
	I0806 08:23:00.962899   65825 pod_ready.go:102] pod "calico-kube-controllers-77d59654f4-jlr9z" in "kube-system" namespace has status "Ready":"False"
	I0806 08:23:03.226583   67412 kubeadm.go:310] [api-check] The API server is healthy after 6.002396843s
	I0806 08:23:03.238547   67412 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0806 08:23:03.259860   67412 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0806 08:23:03.308712   67412 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0806 08:23:03.308980   67412 kubeadm.go:310] [mark-control-plane] Marking the node custom-flannel-405163 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0806 08:23:03.324843   67412 kubeadm.go:310] [bootstrap-token] Using token: kddtln.houswj5hxlxlq5c8
	I0806 08:23:01.209847   67989 main.go:141] libmachine: (enable-default-cni-405163) DBG | domain enable-default-cni-405163 has defined MAC address 52:54:00:44:6c:10 in network mk-enable-default-cni-405163
	I0806 08:23:01.210409   67989 main.go:141] libmachine: (enable-default-cni-405163) DBG | unable to find current IP address of domain enable-default-cni-405163 in network mk-enable-default-cni-405163
	I0806 08:23:01.210439   67989 main.go:141] libmachine: (enable-default-cni-405163) DBG | I0806 08:23:01.210357   68467 retry.go:31] will retry after 4.45140554s: waiting for machine to come up
	I0806 08:23:03.326539   67412 out.go:204]   - Configuring RBAC rules ...
	I0806 08:23:03.326696   67412 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0806 08:23:03.337979   67412 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0806 08:23:03.351659   67412 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0806 08:23:03.357110   67412 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0806 08:23:03.369194   67412 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0806 08:23:03.374736   67412 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0806 08:23:03.636239   67412 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0806 08:23:04.075831   67412 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0806 08:23:04.634230   67412 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0806 08:23:04.634263   67412 kubeadm.go:310] 
	I0806 08:23:04.634360   67412 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0806 08:23:04.634382   67412 kubeadm.go:310] 
	I0806 08:23:04.634489   67412 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0806 08:23:04.634499   67412 kubeadm.go:310] 
	I0806 08:23:04.634533   67412 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0806 08:23:04.634609   67412 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0806 08:23:04.634680   67412 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0806 08:23:04.634698   67412 kubeadm.go:310] 
	I0806 08:23:04.634764   67412 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0806 08:23:04.634775   67412 kubeadm.go:310] 
	I0806 08:23:04.634839   67412 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0806 08:23:04.634849   67412 kubeadm.go:310] 
	I0806 08:23:04.634910   67412 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0806 08:23:04.635027   67412 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0806 08:23:04.635134   67412 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0806 08:23:04.635145   67412 kubeadm.go:310] 
	I0806 08:23:04.635265   67412 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0806 08:23:04.635364   67412 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0806 08:23:04.635372   67412 kubeadm.go:310] 
	I0806 08:23:04.635493   67412 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token kddtln.houswj5hxlxlq5c8 \
	I0806 08:23:04.635631   67412 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d36e5c1dfe989c7d561c809d7c06e0fc13926ac8f6d664cc5d6865ea5f79931b \
	I0806 08:23:04.635669   67412 kubeadm.go:310] 	--control-plane 
	I0806 08:23:04.635676   67412 kubeadm.go:310] 
	I0806 08:23:04.635752   67412 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0806 08:23:04.635762   67412 kubeadm.go:310] 
	I0806 08:23:04.635860   67412 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token kddtln.houswj5hxlxlq5c8 \
	I0806 08:23:04.636011   67412 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d36e5c1dfe989c7d561c809d7c06e0fc13926ac8f6d664cc5d6865ea5f79931b 
	I0806 08:23:04.636396   67412 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0806 08:23:04.636428   67412 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0806 08:23:04.638357   67412 out.go:177] * Configuring testdata/kube-flannel.yaml (Container Networking Interface) ...
	I0806 08:23:04.639661   67412 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
	I0806 08:23:04.639717   67412 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/tmp/minikube/cni.yaml
	I0806 08:23:04.645742   67412 ssh_runner.go:352] existence check for /var/tmp/minikube/cni.yaml: stat -c "%!s(MISSING) %!y(MISSING)" /var/tmp/minikube/cni.yaml: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/tmp/minikube/cni.yaml': No such file or directory
	I0806 08:23:04.645774   67412 ssh_runner.go:362] scp testdata/kube-flannel.yaml --> /var/tmp/minikube/cni.yaml (4591 bytes)
	I0806 08:23:04.678515   67412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0806 08:23:05.097980   67412 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0806 08:23:05.098111   67412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 08:23:05.098211   67412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes custom-flannel-405163 minikube.k8s.io/updated_at=2024_08_06T08_23_05_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=e92cb06692f5ea1ba801d10d148e5e92e807f9c8 minikube.k8s.io/name=custom-flannel-405163 minikube.k8s.io/primary=true
	I0806 08:23:05.185699   67412 ops.go:34] apiserver oom_adj: -16
	I0806 08:23:02.963330   65825 pod_ready.go:102] pod "calico-kube-controllers-77d59654f4-jlr9z" in "kube-system" namespace has status "Ready":"False"
	I0806 08:23:05.462493   65825 pod_ready.go:102] pod "calico-kube-controllers-77d59654f4-jlr9z" in "kube-system" namespace has status "Ready":"False"
	I0806 08:23:05.663963   67989 main.go:141] libmachine: (enable-default-cni-405163) DBG | domain enable-default-cni-405163 has defined MAC address 52:54:00:44:6c:10 in network mk-enable-default-cni-405163
	I0806 08:23:05.664554   67989 main.go:141] libmachine: (enable-default-cni-405163) DBG | unable to find current IP address of domain enable-default-cni-405163 in network mk-enable-default-cni-405163
	I0806 08:23:05.664581   67989 main.go:141] libmachine: (enable-default-cni-405163) DBG | I0806 08:23:05.664509   68467 retry.go:31] will retry after 3.471057798s: waiting for machine to come up
	I0806 08:23:05.328262   67412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 08:23:05.829146   67412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 08:23:06.328769   67412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 08:23:06.828486   67412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 08:23:07.329119   67412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 08:23:07.828868   67412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 08:23:08.328350   67412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 08:23:08.828806   67412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 08:23:09.328897   67412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 08:23:09.829283   67412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 08:23:10.746084   68099 start.go:364] duration metric: took 1m1.44427325s to acquireMachinesLock for "kubernetes-upgrade-122219"
	I0806 08:23:10.746138   68099 start.go:96] Skipping create...Using existing machine configuration
	I0806 08:23:10.746146   68099 fix.go:54] fixHost starting: 
	I0806 08:23:10.746568   68099 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:23:10.746636   68099 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:23:10.768732   68099 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46715
	I0806 08:23:10.769295   68099 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:23:10.769897   68099 main.go:141] libmachine: Using API Version  1
	I0806 08:23:10.769943   68099 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:23:10.770310   68099 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:23:10.770483   68099 main.go:141] libmachine: (kubernetes-upgrade-122219) Calling .DriverName
	I0806 08:23:10.770631   68099 main.go:141] libmachine: (kubernetes-upgrade-122219) Calling .GetState
	I0806 08:23:10.772412   68099 fix.go:112] recreateIfNeeded on kubernetes-upgrade-122219: state=Running err=<nil>
	W0806 08:23:10.772442   68099 fix.go:138] unexpected machine state, will restart: <nil>
	I0806 08:23:10.774762   68099 out.go:177] * Updating the running kvm2 "kubernetes-upgrade-122219" VM ...
	I0806 08:23:07.463311   65825 pod_ready.go:102] pod "calico-kube-controllers-77d59654f4-jlr9z" in "kube-system" namespace has status "Ready":"False"
	I0806 08:23:09.463390   65825 pod_ready.go:102] pod "calico-kube-controllers-77d59654f4-jlr9z" in "kube-system" namespace has status "Ready":"False"
	I0806 08:23:09.137642   67989 main.go:141] libmachine: (enable-default-cni-405163) DBG | domain enable-default-cni-405163 has defined MAC address 52:54:00:44:6c:10 in network mk-enable-default-cni-405163
	I0806 08:23:09.138201   67989 main.go:141] libmachine: (enable-default-cni-405163) Found IP for machine: 192.168.72.116
	I0806 08:23:09.138222   67989 main.go:141] libmachine: (enable-default-cni-405163) Reserving static IP address...
	I0806 08:23:09.138251   67989 main.go:141] libmachine: (enable-default-cni-405163) DBG | domain enable-default-cni-405163 has current primary IP address 192.168.72.116 and MAC address 52:54:00:44:6c:10 in network mk-enable-default-cni-405163
	I0806 08:23:09.138606   67989 main.go:141] libmachine: (enable-default-cni-405163) DBG | unable to find host DHCP lease matching {name: "enable-default-cni-405163", mac: "52:54:00:44:6c:10", ip: "192.168.72.116"} in network mk-enable-default-cni-405163
	I0806 08:23:09.218046   67989 main.go:141] libmachine: (enable-default-cni-405163) DBG | Getting to WaitForSSH function...
	I0806 08:23:09.218073   67989 main.go:141] libmachine: (enable-default-cni-405163) Reserved static IP address: 192.168.72.116
	I0806 08:23:09.218087   67989 main.go:141] libmachine: (enable-default-cni-405163) Waiting for SSH to be available...
	I0806 08:23:09.220789   67989 main.go:141] libmachine: (enable-default-cni-405163) DBG | domain enable-default-cni-405163 has defined MAC address 52:54:00:44:6c:10 in network mk-enable-default-cni-405163
	I0806 08:23:09.221375   67989 main.go:141] libmachine: (enable-default-cni-405163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:6c:10", ip: ""} in network mk-enable-default-cni-405163: {Iface:virbr4 ExpiryTime:2024-08-06 09:23:01 +0000 UTC Type:0 Mac:52:54:00:44:6c:10 Iaid: IPaddr:192.168.72.116 Prefix:24 Hostname:minikube Clientid:01:52:54:00:44:6c:10}
	I0806 08:23:09.221406   67989 main.go:141] libmachine: (enable-default-cni-405163) DBG | domain enable-default-cni-405163 has defined IP address 192.168.72.116 and MAC address 52:54:00:44:6c:10 in network mk-enable-default-cni-405163
	I0806 08:23:09.221623   67989 main.go:141] libmachine: (enable-default-cni-405163) DBG | Using SSH client type: external
	I0806 08:23:09.221663   67989 main.go:141] libmachine: (enable-default-cni-405163) DBG | Using SSH private key: /home/jenkins/minikube-integration/19370-13057/.minikube/machines/enable-default-cni-405163/id_rsa (-rw-------)
	I0806 08:23:09.221710   67989 main.go:141] libmachine: (enable-default-cni-405163) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.116 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19370-13057/.minikube/machines/enable-default-cni-405163/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0806 08:23:09.221731   67989 main.go:141] libmachine: (enable-default-cni-405163) DBG | About to run SSH command:
	I0806 08:23:09.221766   67989 main.go:141] libmachine: (enable-default-cni-405163) DBG | exit 0
	I0806 08:23:09.349643   67989 main.go:141] libmachine: (enable-default-cni-405163) DBG | SSH cmd err, output: <nil>: 
	I0806 08:23:09.349904   67989 main.go:141] libmachine: (enable-default-cni-405163) KVM machine creation complete!
	I0806 08:23:09.350382   67989 main.go:141] libmachine: (enable-default-cni-405163) Calling .GetConfigRaw
	I0806 08:23:09.351131   67989 main.go:141] libmachine: (enable-default-cni-405163) Calling .DriverName
	I0806 08:23:09.351417   67989 main.go:141] libmachine: (enable-default-cni-405163) Calling .DriverName
	I0806 08:23:09.351659   67989 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0806 08:23:09.351679   67989 main.go:141] libmachine: (enable-default-cni-405163) Calling .GetState
	I0806 08:23:09.353543   67989 main.go:141] libmachine: Detecting operating system of created instance...
	I0806 08:23:09.353565   67989 main.go:141] libmachine: Waiting for SSH to be available...
	I0806 08:23:09.353574   67989 main.go:141] libmachine: Getting to WaitForSSH function...
	I0806 08:23:09.353582   67989 main.go:141] libmachine: (enable-default-cni-405163) Calling .GetSSHHostname
	I0806 08:23:09.356634   67989 main.go:141] libmachine: (enable-default-cni-405163) DBG | domain enable-default-cni-405163 has defined MAC address 52:54:00:44:6c:10 in network mk-enable-default-cni-405163
	I0806 08:23:09.357099   67989 main.go:141] libmachine: (enable-default-cni-405163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:6c:10", ip: ""} in network mk-enable-default-cni-405163: {Iface:virbr4 ExpiryTime:2024-08-06 09:23:01 +0000 UTC Type:0 Mac:52:54:00:44:6c:10 Iaid: IPaddr:192.168.72.116 Prefix:24 Hostname:enable-default-cni-405163 Clientid:01:52:54:00:44:6c:10}
	I0806 08:23:09.357143   67989 main.go:141] libmachine: (enable-default-cni-405163) DBG | domain enable-default-cni-405163 has defined IP address 192.168.72.116 and MAC address 52:54:00:44:6c:10 in network mk-enable-default-cni-405163
	I0806 08:23:09.357402   67989 main.go:141] libmachine: (enable-default-cni-405163) Calling .GetSSHPort
	I0806 08:23:09.357612   67989 main.go:141] libmachine: (enable-default-cni-405163) Calling .GetSSHKeyPath
	I0806 08:23:09.357786   67989 main.go:141] libmachine: (enable-default-cni-405163) Calling .GetSSHKeyPath
	I0806 08:23:09.357938   67989 main.go:141] libmachine: (enable-default-cni-405163) Calling .GetSSHUsername
	I0806 08:23:09.358117   67989 main.go:141] libmachine: Using SSH client type: native
	I0806 08:23:09.358362   67989 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.116 22 <nil> <nil>}
	I0806 08:23:09.358380   67989 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0806 08:23:09.475978   67989 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0806 08:23:09.476005   67989 main.go:141] libmachine: Detecting the provisioner...
	I0806 08:23:09.476017   67989 main.go:141] libmachine: (enable-default-cni-405163) Calling .GetSSHHostname
	I0806 08:23:09.478935   67989 main.go:141] libmachine: (enable-default-cni-405163) DBG | domain enable-default-cni-405163 has defined MAC address 52:54:00:44:6c:10 in network mk-enable-default-cni-405163
	I0806 08:23:09.479370   67989 main.go:141] libmachine: (enable-default-cni-405163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:6c:10", ip: ""} in network mk-enable-default-cni-405163: {Iface:virbr4 ExpiryTime:2024-08-06 09:23:01 +0000 UTC Type:0 Mac:52:54:00:44:6c:10 Iaid: IPaddr:192.168.72.116 Prefix:24 Hostname:enable-default-cni-405163 Clientid:01:52:54:00:44:6c:10}
	I0806 08:23:09.479394   67989 main.go:141] libmachine: (enable-default-cni-405163) DBG | domain enable-default-cni-405163 has defined IP address 192.168.72.116 and MAC address 52:54:00:44:6c:10 in network mk-enable-default-cni-405163
	I0806 08:23:09.479588   67989 main.go:141] libmachine: (enable-default-cni-405163) Calling .GetSSHPort
	I0806 08:23:09.479786   67989 main.go:141] libmachine: (enable-default-cni-405163) Calling .GetSSHKeyPath
	I0806 08:23:09.480011   67989 main.go:141] libmachine: (enable-default-cni-405163) Calling .GetSSHKeyPath
	I0806 08:23:09.480136   67989 main.go:141] libmachine: (enable-default-cni-405163) Calling .GetSSHUsername
	I0806 08:23:09.480285   67989 main.go:141] libmachine: Using SSH client type: native
	I0806 08:23:09.480534   67989 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.116 22 <nil> <nil>}
	I0806 08:23:09.480548   67989 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0806 08:23:09.593408   67989 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0806 08:23:09.593488   67989 main.go:141] libmachine: found compatible host: buildroot
	I0806 08:23:09.593495   67989 main.go:141] libmachine: Provisioning with buildroot...
	I0806 08:23:09.593507   67989 main.go:141] libmachine: (enable-default-cni-405163) Calling .GetMachineName
	I0806 08:23:09.593755   67989 buildroot.go:166] provisioning hostname "enable-default-cni-405163"
	I0806 08:23:09.593784   67989 main.go:141] libmachine: (enable-default-cni-405163) Calling .GetMachineName
	I0806 08:23:09.594069   67989 main.go:141] libmachine: (enable-default-cni-405163) Calling .GetSSHHostname
	I0806 08:23:09.596996   67989 main.go:141] libmachine: (enable-default-cni-405163) DBG | domain enable-default-cni-405163 has defined MAC address 52:54:00:44:6c:10 in network mk-enable-default-cni-405163
	I0806 08:23:09.597336   67989 main.go:141] libmachine: (enable-default-cni-405163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:6c:10", ip: ""} in network mk-enable-default-cni-405163: {Iface:virbr4 ExpiryTime:2024-08-06 09:23:01 +0000 UTC Type:0 Mac:52:54:00:44:6c:10 Iaid: IPaddr:192.168.72.116 Prefix:24 Hostname:enable-default-cni-405163 Clientid:01:52:54:00:44:6c:10}
	I0806 08:23:09.597377   67989 main.go:141] libmachine: (enable-default-cni-405163) DBG | domain enable-default-cni-405163 has defined IP address 192.168.72.116 and MAC address 52:54:00:44:6c:10 in network mk-enable-default-cni-405163
	I0806 08:23:09.597499   67989 main.go:141] libmachine: (enable-default-cni-405163) Calling .GetSSHPort
	I0806 08:23:09.597683   67989 main.go:141] libmachine: (enable-default-cni-405163) Calling .GetSSHKeyPath
	I0806 08:23:09.597892   67989 main.go:141] libmachine: (enable-default-cni-405163) Calling .GetSSHKeyPath
	I0806 08:23:09.598045   67989 main.go:141] libmachine: (enable-default-cni-405163) Calling .GetSSHUsername
	I0806 08:23:09.598211   67989 main.go:141] libmachine: Using SSH client type: native
	I0806 08:23:09.598442   67989 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.116 22 <nil> <nil>}
	I0806 08:23:09.598472   67989 main.go:141] libmachine: About to run SSH command:
	sudo hostname enable-default-cni-405163 && echo "enable-default-cni-405163" | sudo tee /etc/hostname
	I0806 08:23:09.725223   67989 main.go:141] libmachine: SSH cmd err, output: <nil>: enable-default-cni-405163
	
	I0806 08:23:09.725258   67989 main.go:141] libmachine: (enable-default-cni-405163) Calling .GetSSHHostname
	I0806 08:23:09.728624   67989 main.go:141] libmachine: (enable-default-cni-405163) DBG | domain enable-default-cni-405163 has defined MAC address 52:54:00:44:6c:10 in network mk-enable-default-cni-405163
	I0806 08:23:09.729181   67989 main.go:141] libmachine: (enable-default-cni-405163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:6c:10", ip: ""} in network mk-enable-default-cni-405163: {Iface:virbr4 ExpiryTime:2024-08-06 09:23:01 +0000 UTC Type:0 Mac:52:54:00:44:6c:10 Iaid: IPaddr:192.168.72.116 Prefix:24 Hostname:enable-default-cni-405163 Clientid:01:52:54:00:44:6c:10}
	I0806 08:23:09.729216   67989 main.go:141] libmachine: (enable-default-cni-405163) DBG | domain enable-default-cni-405163 has defined IP address 192.168.72.116 and MAC address 52:54:00:44:6c:10 in network mk-enable-default-cni-405163
	I0806 08:23:09.729471   67989 main.go:141] libmachine: (enable-default-cni-405163) Calling .GetSSHPort
	I0806 08:23:09.729668   67989 main.go:141] libmachine: (enable-default-cni-405163) Calling .GetSSHKeyPath
	I0806 08:23:09.729892   67989 main.go:141] libmachine: (enable-default-cni-405163) Calling .GetSSHKeyPath
	I0806 08:23:09.730033   67989 main.go:141] libmachine: (enable-default-cni-405163) Calling .GetSSHUsername
	I0806 08:23:09.730199   67989 main.go:141] libmachine: Using SSH client type: native
	I0806 08:23:09.730406   67989 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.116 22 <nil> <nil>}
	I0806 08:23:09.730433   67989 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\senable-default-cni-405163' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 enable-default-cni-405163/g' /etc/hosts;
				else 
					echo '127.0.1.1 enable-default-cni-405163' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0806 08:23:09.853808   67989 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0806 08:23:09.853848   67989 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19370-13057/.minikube CaCertPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19370-13057/.minikube}
	I0806 08:23:09.853887   67989 buildroot.go:174] setting up certificates
	I0806 08:23:09.853898   67989 provision.go:84] configureAuth start
	I0806 08:23:09.853911   67989 main.go:141] libmachine: (enable-default-cni-405163) Calling .GetMachineName
	I0806 08:23:09.854198   67989 main.go:141] libmachine: (enable-default-cni-405163) Calling .GetIP
	I0806 08:23:09.857226   67989 main.go:141] libmachine: (enable-default-cni-405163) DBG | domain enable-default-cni-405163 has defined MAC address 52:54:00:44:6c:10 in network mk-enable-default-cni-405163
	I0806 08:23:09.857612   67989 main.go:141] libmachine: (enable-default-cni-405163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:6c:10", ip: ""} in network mk-enable-default-cni-405163: {Iface:virbr4 ExpiryTime:2024-08-06 09:23:01 +0000 UTC Type:0 Mac:52:54:00:44:6c:10 Iaid: IPaddr:192.168.72.116 Prefix:24 Hostname:enable-default-cni-405163 Clientid:01:52:54:00:44:6c:10}
	I0806 08:23:09.857641   67989 main.go:141] libmachine: (enable-default-cni-405163) DBG | domain enable-default-cni-405163 has defined IP address 192.168.72.116 and MAC address 52:54:00:44:6c:10 in network mk-enable-default-cni-405163
	I0806 08:23:09.857934   67989 main.go:141] libmachine: (enable-default-cni-405163) Calling .GetSSHHostname
	I0806 08:23:09.860533   67989 main.go:141] libmachine: (enable-default-cni-405163) DBG | domain enable-default-cni-405163 has defined MAC address 52:54:00:44:6c:10 in network mk-enable-default-cni-405163
	I0806 08:23:09.860863   67989 main.go:141] libmachine: (enable-default-cni-405163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:6c:10", ip: ""} in network mk-enable-default-cni-405163: {Iface:virbr4 ExpiryTime:2024-08-06 09:23:01 +0000 UTC Type:0 Mac:52:54:00:44:6c:10 Iaid: IPaddr:192.168.72.116 Prefix:24 Hostname:enable-default-cni-405163 Clientid:01:52:54:00:44:6c:10}
	I0806 08:23:09.860895   67989 main.go:141] libmachine: (enable-default-cni-405163) DBG | domain enable-default-cni-405163 has defined IP address 192.168.72.116 and MAC address 52:54:00:44:6c:10 in network mk-enable-default-cni-405163
	I0806 08:23:09.861010   67989 provision.go:143] copyHostCerts
	I0806 08:23:09.861080   67989 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-13057/.minikube/cert.pem, removing ...
	I0806 08:23:09.861094   67989 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-13057/.minikube/cert.pem
	I0806 08:23:09.861169   67989 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19370-13057/.minikube/cert.pem (1123 bytes)
	I0806 08:23:09.861290   67989 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-13057/.minikube/key.pem, removing ...
	I0806 08:23:09.861301   67989 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-13057/.minikube/key.pem
	I0806 08:23:09.861333   67989 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19370-13057/.minikube/key.pem (1675 bytes)
	I0806 08:23:09.861422   67989 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-13057/.minikube/ca.pem, removing ...
	I0806 08:23:09.861431   67989 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-13057/.minikube/ca.pem
	I0806 08:23:09.861464   67989 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19370-13057/.minikube/ca.pem (1078 bytes)
	I0806 08:23:09.861526   67989 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19370-13057/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca-key.pem org=jenkins.enable-default-cni-405163 san=[127.0.0.1 192.168.72.116 enable-default-cni-405163 localhost minikube]
	I0806 08:23:09.973186   67989 provision.go:177] copyRemoteCerts
	I0806 08:23:09.973251   67989 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0806 08:23:09.973280   67989 main.go:141] libmachine: (enable-default-cni-405163) Calling .GetSSHHostname
	I0806 08:23:09.975900   67989 main.go:141] libmachine: (enable-default-cni-405163) DBG | domain enable-default-cni-405163 has defined MAC address 52:54:00:44:6c:10 in network mk-enable-default-cni-405163
	I0806 08:23:09.976263   67989 main.go:141] libmachine: (enable-default-cni-405163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:6c:10", ip: ""} in network mk-enable-default-cni-405163: {Iface:virbr4 ExpiryTime:2024-08-06 09:23:01 +0000 UTC Type:0 Mac:52:54:00:44:6c:10 Iaid: IPaddr:192.168.72.116 Prefix:24 Hostname:enable-default-cni-405163 Clientid:01:52:54:00:44:6c:10}
	I0806 08:23:09.976302   67989 main.go:141] libmachine: (enable-default-cni-405163) DBG | domain enable-default-cni-405163 has defined IP address 192.168.72.116 and MAC address 52:54:00:44:6c:10 in network mk-enable-default-cni-405163
	I0806 08:23:09.976445   67989 main.go:141] libmachine: (enable-default-cni-405163) Calling .GetSSHPort
	I0806 08:23:09.976643   67989 main.go:141] libmachine: (enable-default-cni-405163) Calling .GetSSHKeyPath
	I0806 08:23:09.976794   67989 main.go:141] libmachine: (enable-default-cni-405163) Calling .GetSSHUsername
	I0806 08:23:09.976916   67989 sshutil.go:53] new ssh client: &{IP:192.168.72.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/enable-default-cni-405163/id_rsa Username:docker}
	I0806 08:23:10.064565   67989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0806 08:23:10.095585   67989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0806 08:23:10.123116   67989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0806 08:23:10.151088   67989 provision.go:87] duration metric: took 297.174426ms to configureAuth
	I0806 08:23:10.151127   67989 buildroot.go:189] setting minikube options for container-runtime
	I0806 08:23:10.151395   67989 config.go:182] Loaded profile config "enable-default-cni-405163": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0806 08:23:10.151508   67989 main.go:141] libmachine: (enable-default-cni-405163) Calling .GetSSHHostname
	I0806 08:23:10.154962   67989 main.go:141] libmachine: (enable-default-cni-405163) DBG | domain enable-default-cni-405163 has defined MAC address 52:54:00:44:6c:10 in network mk-enable-default-cni-405163
	I0806 08:23:10.155378   67989 main.go:141] libmachine: (enable-default-cni-405163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:6c:10", ip: ""} in network mk-enable-default-cni-405163: {Iface:virbr4 ExpiryTime:2024-08-06 09:23:01 +0000 UTC Type:0 Mac:52:54:00:44:6c:10 Iaid: IPaddr:192.168.72.116 Prefix:24 Hostname:enable-default-cni-405163 Clientid:01:52:54:00:44:6c:10}
	I0806 08:23:10.155424   67989 main.go:141] libmachine: (enable-default-cni-405163) DBG | domain enable-default-cni-405163 has defined IP address 192.168.72.116 and MAC address 52:54:00:44:6c:10 in network mk-enable-default-cni-405163
	I0806 08:23:10.155663   67989 main.go:141] libmachine: (enable-default-cni-405163) Calling .GetSSHPort
	I0806 08:23:10.155903   67989 main.go:141] libmachine: (enable-default-cni-405163) Calling .GetSSHKeyPath
	I0806 08:23:10.156121   67989 main.go:141] libmachine: (enable-default-cni-405163) Calling .GetSSHKeyPath
	I0806 08:23:10.156286   67989 main.go:141] libmachine: (enable-default-cni-405163) Calling .GetSSHUsername
	I0806 08:23:10.156464   67989 main.go:141] libmachine: Using SSH client type: native
	I0806 08:23:10.156657   67989 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.116 22 <nil> <nil>}
	I0806 08:23:10.156672   67989 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0806 08:23:10.473594   67989 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0806 08:23:10.473624   67989 main.go:141] libmachine: Checking connection to Docker...
	I0806 08:23:10.473678   67989 main.go:141] libmachine: (enable-default-cni-405163) Calling .GetURL
	I0806 08:23:10.475324   67989 main.go:141] libmachine: (enable-default-cni-405163) DBG | Using libvirt version 6000000
	I0806 08:23:10.478178   67989 main.go:141] libmachine: (enable-default-cni-405163) DBG | domain enable-default-cni-405163 has defined MAC address 52:54:00:44:6c:10 in network mk-enable-default-cni-405163
	I0806 08:23:10.478603   67989 main.go:141] libmachine: (enable-default-cni-405163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:6c:10", ip: ""} in network mk-enable-default-cni-405163: {Iface:virbr4 ExpiryTime:2024-08-06 09:23:01 +0000 UTC Type:0 Mac:52:54:00:44:6c:10 Iaid: IPaddr:192.168.72.116 Prefix:24 Hostname:enable-default-cni-405163 Clientid:01:52:54:00:44:6c:10}
	I0806 08:23:10.478643   67989 main.go:141] libmachine: (enable-default-cni-405163) DBG | domain enable-default-cni-405163 has defined IP address 192.168.72.116 and MAC address 52:54:00:44:6c:10 in network mk-enable-default-cni-405163
	I0806 08:23:10.478782   67989 main.go:141] libmachine: Docker is up and running!
	I0806 08:23:10.478800   67989 main.go:141] libmachine: Reticulating splines...
	I0806 08:23:10.478808   67989 client.go:171] duration metric: took 25.562667018s to LocalClient.Create
	I0806 08:23:10.478842   67989 start.go:167] duration metric: took 25.562741896s to libmachine.API.Create "enable-default-cni-405163"
	I0806 08:23:10.478856   67989 start.go:293] postStartSetup for "enable-default-cni-405163" (driver="kvm2")
	I0806 08:23:10.478871   67989 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0806 08:23:10.478892   67989 main.go:141] libmachine: (enable-default-cni-405163) Calling .DriverName
	I0806 08:23:10.479146   67989 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0806 08:23:10.479174   67989 main.go:141] libmachine: (enable-default-cni-405163) Calling .GetSSHHostname
	I0806 08:23:10.481637   67989 main.go:141] libmachine: (enable-default-cni-405163) DBG | domain enable-default-cni-405163 has defined MAC address 52:54:00:44:6c:10 in network mk-enable-default-cni-405163
	I0806 08:23:10.481968   67989 main.go:141] libmachine: (enable-default-cni-405163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:6c:10", ip: ""} in network mk-enable-default-cni-405163: {Iface:virbr4 ExpiryTime:2024-08-06 09:23:01 +0000 UTC Type:0 Mac:52:54:00:44:6c:10 Iaid: IPaddr:192.168.72.116 Prefix:24 Hostname:enable-default-cni-405163 Clientid:01:52:54:00:44:6c:10}
	I0806 08:23:10.481995   67989 main.go:141] libmachine: (enable-default-cni-405163) DBG | domain enable-default-cni-405163 has defined IP address 192.168.72.116 and MAC address 52:54:00:44:6c:10 in network mk-enable-default-cni-405163
	I0806 08:23:10.482155   67989 main.go:141] libmachine: (enable-default-cni-405163) Calling .GetSSHPort
	I0806 08:23:10.482358   67989 main.go:141] libmachine: (enable-default-cni-405163) Calling .GetSSHKeyPath
	I0806 08:23:10.482536   67989 main.go:141] libmachine: (enable-default-cni-405163) Calling .GetSSHUsername
	I0806 08:23:10.482677   67989 sshutil.go:53] new ssh client: &{IP:192.168.72.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/enable-default-cni-405163/id_rsa Username:docker}
	I0806 08:23:10.571443   67989 ssh_runner.go:195] Run: cat /etc/os-release
	I0806 08:23:10.576509   67989 info.go:137] Remote host: Buildroot 2023.02.9
	I0806 08:23:10.576545   67989 filesync.go:126] Scanning /home/jenkins/minikube-integration/19370-13057/.minikube/addons for local assets ...
	I0806 08:23:10.576623   67989 filesync.go:126] Scanning /home/jenkins/minikube-integration/19370-13057/.minikube/files for local assets ...
	I0806 08:23:10.576762   67989 filesync.go:149] local asset: /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem -> 202462.pem in /etc/ssl/certs
	I0806 08:23:10.576936   67989 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0806 08:23:10.588223   67989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem --> /etc/ssl/certs/202462.pem (1708 bytes)
	I0806 08:23:10.616570   67989 start.go:296] duration metric: took 137.698693ms for postStartSetup
	I0806 08:23:10.616647   67989 main.go:141] libmachine: (enable-default-cni-405163) Calling .GetConfigRaw
	I0806 08:23:10.617348   67989 main.go:141] libmachine: (enable-default-cni-405163) Calling .GetIP
	I0806 08:23:10.620278   67989 main.go:141] libmachine: (enable-default-cni-405163) DBG | domain enable-default-cni-405163 has defined MAC address 52:54:00:44:6c:10 in network mk-enable-default-cni-405163
	I0806 08:23:10.620806   67989 main.go:141] libmachine: (enable-default-cni-405163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:6c:10", ip: ""} in network mk-enable-default-cni-405163: {Iface:virbr4 ExpiryTime:2024-08-06 09:23:01 +0000 UTC Type:0 Mac:52:54:00:44:6c:10 Iaid: IPaddr:192.168.72.116 Prefix:24 Hostname:enable-default-cni-405163 Clientid:01:52:54:00:44:6c:10}
	I0806 08:23:10.620837   67989 main.go:141] libmachine: (enable-default-cni-405163) DBG | domain enable-default-cni-405163 has defined IP address 192.168.72.116 and MAC address 52:54:00:44:6c:10 in network mk-enable-default-cni-405163
	I0806 08:23:10.621264   67989 profile.go:143] Saving config to /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/enable-default-cni-405163/config.json ...
	I0806 08:23:10.621470   67989 start.go:128] duration metric: took 25.727709219s to createHost
	I0806 08:23:10.621493   67989 main.go:141] libmachine: (enable-default-cni-405163) Calling .GetSSHHostname
	I0806 08:23:10.624461   67989 main.go:141] libmachine: (enable-default-cni-405163) DBG | domain enable-default-cni-405163 has defined MAC address 52:54:00:44:6c:10 in network mk-enable-default-cni-405163
	I0806 08:23:10.624810   67989 main.go:141] libmachine: (enable-default-cni-405163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:6c:10", ip: ""} in network mk-enable-default-cni-405163: {Iface:virbr4 ExpiryTime:2024-08-06 09:23:01 +0000 UTC Type:0 Mac:52:54:00:44:6c:10 Iaid: IPaddr:192.168.72.116 Prefix:24 Hostname:enable-default-cni-405163 Clientid:01:52:54:00:44:6c:10}
	I0806 08:23:10.624840   67989 main.go:141] libmachine: (enable-default-cni-405163) DBG | domain enable-default-cni-405163 has defined IP address 192.168.72.116 and MAC address 52:54:00:44:6c:10 in network mk-enable-default-cni-405163
	I0806 08:23:10.624980   67989 main.go:141] libmachine: (enable-default-cni-405163) Calling .GetSSHPort
	I0806 08:23:10.625199   67989 main.go:141] libmachine: (enable-default-cni-405163) Calling .GetSSHKeyPath
	I0806 08:23:10.625395   67989 main.go:141] libmachine: (enable-default-cni-405163) Calling .GetSSHKeyPath
	I0806 08:23:10.625571   67989 main.go:141] libmachine: (enable-default-cni-405163) Calling .GetSSHUsername
	I0806 08:23:10.625719   67989 main.go:141] libmachine: Using SSH client type: native
	I0806 08:23:10.625900   67989 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.116 22 <nil> <nil>}
	I0806 08:23:10.625914   67989 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0806 08:23:10.745912   67989 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722932590.728826797
	
	I0806 08:23:10.745938   67989 fix.go:216] guest clock: 1722932590.728826797
	I0806 08:23:10.745949   67989 fix.go:229] Guest: 2024-08-06 08:23:10.728826797 +0000 UTC Remote: 2024-08-06 08:23:10.621482421 +0000 UTC m=+66.777225358 (delta=107.344376ms)
	I0806 08:23:10.745978   67989 fix.go:200] guest clock delta is within tolerance: 107.344376ms
	I0806 08:23:10.745991   67989 start.go:83] releasing machines lock for "enable-default-cni-405163", held for 25.852389282s
	I0806 08:23:10.746026   67989 main.go:141] libmachine: (enable-default-cni-405163) Calling .DriverName
	I0806 08:23:10.746311   67989 main.go:141] libmachine: (enable-default-cni-405163) Calling .GetIP
	I0806 08:23:10.749611   67989 main.go:141] libmachine: (enable-default-cni-405163) DBG | domain enable-default-cni-405163 has defined MAC address 52:54:00:44:6c:10 in network mk-enable-default-cni-405163
	I0806 08:23:10.750070   67989 main.go:141] libmachine: (enable-default-cni-405163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:6c:10", ip: ""} in network mk-enable-default-cni-405163: {Iface:virbr4 ExpiryTime:2024-08-06 09:23:01 +0000 UTC Type:0 Mac:52:54:00:44:6c:10 Iaid: IPaddr:192.168.72.116 Prefix:24 Hostname:enable-default-cni-405163 Clientid:01:52:54:00:44:6c:10}
	I0806 08:23:10.750102   67989 main.go:141] libmachine: (enable-default-cni-405163) DBG | domain enable-default-cni-405163 has defined IP address 192.168.72.116 and MAC address 52:54:00:44:6c:10 in network mk-enable-default-cni-405163
	I0806 08:23:10.750303   67989 main.go:141] libmachine: (enable-default-cni-405163) Calling .DriverName
	I0806 08:23:10.750873   67989 main.go:141] libmachine: (enable-default-cni-405163) Calling .DriverName
	I0806 08:23:10.751074   67989 main.go:141] libmachine: (enable-default-cni-405163) Calling .DriverName
	I0806 08:23:10.751169   67989 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0806 08:23:10.751204   67989 main.go:141] libmachine: (enable-default-cni-405163) Calling .GetSSHHostname
	I0806 08:23:10.751298   67989 ssh_runner.go:195] Run: cat /version.json
	I0806 08:23:10.751320   67989 main.go:141] libmachine: (enable-default-cni-405163) Calling .GetSSHHostname
	I0806 08:23:10.754541   67989 main.go:141] libmachine: (enable-default-cni-405163) DBG | domain enable-default-cni-405163 has defined MAC address 52:54:00:44:6c:10 in network mk-enable-default-cni-405163
	I0806 08:23:10.754896   67989 main.go:141] libmachine: (enable-default-cni-405163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:6c:10", ip: ""} in network mk-enable-default-cni-405163: {Iface:virbr4 ExpiryTime:2024-08-06 09:23:01 +0000 UTC Type:0 Mac:52:54:00:44:6c:10 Iaid: IPaddr:192.168.72.116 Prefix:24 Hostname:enable-default-cni-405163 Clientid:01:52:54:00:44:6c:10}
	I0806 08:23:10.755020   67989 main.go:141] libmachine: (enable-default-cni-405163) DBG | domain enable-default-cni-405163 has defined IP address 192.168.72.116 and MAC address 52:54:00:44:6c:10 in network mk-enable-default-cni-405163
	I0806 08:23:10.755047   67989 main.go:141] libmachine: (enable-default-cni-405163) DBG | domain enable-default-cni-405163 has defined MAC address 52:54:00:44:6c:10 in network mk-enable-default-cni-405163
	I0806 08:23:10.755241   67989 main.go:141] libmachine: (enable-default-cni-405163) Calling .GetSSHPort
	I0806 08:23:10.755454   67989 main.go:141] libmachine: (enable-default-cni-405163) Calling .GetSSHKeyPath
	I0806 08:23:10.755528   67989 main.go:141] libmachine: (enable-default-cni-405163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:6c:10", ip: ""} in network mk-enable-default-cni-405163: {Iface:virbr4 ExpiryTime:2024-08-06 09:23:01 +0000 UTC Type:0 Mac:52:54:00:44:6c:10 Iaid: IPaddr:192.168.72.116 Prefix:24 Hostname:enable-default-cni-405163 Clientid:01:52:54:00:44:6c:10}
	I0806 08:23:10.755551   67989 main.go:141] libmachine: (enable-default-cni-405163) DBG | domain enable-default-cni-405163 has defined IP address 192.168.72.116 and MAC address 52:54:00:44:6c:10 in network mk-enable-default-cni-405163
	I0806 08:23:10.755630   67989 main.go:141] libmachine: (enable-default-cni-405163) Calling .GetSSHUsername
	I0806 08:23:10.755725   67989 main.go:141] libmachine: (enable-default-cni-405163) Calling .GetSSHPort
	I0806 08:23:10.755779   67989 sshutil.go:53] new ssh client: &{IP:192.168.72.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/enable-default-cni-405163/id_rsa Username:docker}
	I0806 08:23:10.755877   67989 main.go:141] libmachine: (enable-default-cni-405163) Calling .GetSSHKeyPath
	I0806 08:23:10.756028   67989 main.go:141] libmachine: (enable-default-cni-405163) Calling .GetSSHUsername
	I0806 08:23:10.756174   67989 sshutil.go:53] new ssh client: &{IP:192.168.72.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/enable-default-cni-405163/id_rsa Username:docker}
	I0806 08:23:10.870017   67989 ssh_runner.go:195] Run: systemctl --version
	I0806 08:23:10.879363   67989 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0806 08:23:11.049255   67989 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0806 08:23:11.057329   67989 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0806 08:23:11.057400   67989 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0806 08:23:11.075230   67989 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0806 08:23:11.075253   67989 start.go:495] detecting cgroup driver to use...
	I0806 08:23:11.075314   67989 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0806 08:23:11.093034   67989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0806 08:23:11.108169   67989 docker.go:217] disabling cri-docker service (if available) ...
	I0806 08:23:11.108229   67989 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0806 08:23:11.123526   67989 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0806 08:23:11.137931   67989 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0806 08:23:11.274388   67989 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0806 08:23:11.470764   67989 docker.go:233] disabling docker service ...
	I0806 08:23:11.470840   67989 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0806 08:23:11.486523   67989 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0806 08:23:11.500466   67989 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0806 08:23:11.633231   67989 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0806 08:23:11.780421   67989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0806 08:23:11.795321   67989 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0806 08:23:11.815573   67989 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0806 08:23:11.815642   67989 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:23:11.826638   67989 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0806 08:23:11.826702   67989 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:23:11.838923   67989 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:23:11.854291   67989 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:23:11.866467   67989 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0806 08:23:11.878507   67989 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:23:11.891636   67989 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:23:11.913253   67989 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:23:11.925838   67989 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0806 08:23:11.936548   67989 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0806 08:23:11.936600   67989 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0806 08:23:11.952151   67989 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0806 08:23:11.964209   67989 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 08:23:12.091271   67989 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0806 08:23:12.239162   67989 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0806 08:23:12.239239   67989 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0806 08:23:12.244538   67989 start.go:563] Will wait 60s for crictl version
	I0806 08:23:12.244595   67989 ssh_runner.go:195] Run: which crictl
	I0806 08:23:12.249023   67989 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0806 08:23:12.289579   67989 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0806 08:23:12.289667   67989 ssh_runner.go:195] Run: crio --version
	I0806 08:23:12.320520   67989 ssh_runner.go:195] Run: crio --version
	I0806 08:23:12.354027   67989 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0806 08:23:12.355456   67989 main.go:141] libmachine: (enable-default-cni-405163) Calling .GetIP
	I0806 08:23:12.358327   67989 main.go:141] libmachine: (enable-default-cni-405163) DBG | domain enable-default-cni-405163 has defined MAC address 52:54:00:44:6c:10 in network mk-enable-default-cni-405163
	I0806 08:23:12.358699   67989 main.go:141] libmachine: (enable-default-cni-405163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:6c:10", ip: ""} in network mk-enable-default-cni-405163: {Iface:virbr4 ExpiryTime:2024-08-06 09:23:01 +0000 UTC Type:0 Mac:52:54:00:44:6c:10 Iaid: IPaddr:192.168.72.116 Prefix:24 Hostname:enable-default-cni-405163 Clientid:01:52:54:00:44:6c:10}
	I0806 08:23:12.358748   67989 main.go:141] libmachine: (enable-default-cni-405163) DBG | domain enable-default-cni-405163 has defined IP address 192.168.72.116 and MAC address 52:54:00:44:6c:10 in network mk-enable-default-cni-405163
	I0806 08:23:12.359022   67989 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0806 08:23:12.363694   67989 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0806 08:23:12.378125   67989 kubeadm.go:883] updating cluster {Name:enable-default-cni-405163 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.30.3 ClusterName:enable-default-cni-405163 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.72.116 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mou
ntOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0806 08:23:12.378225   67989 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0806 08:23:12.378264   67989 ssh_runner.go:195] Run: sudo crictl images --output json
	I0806 08:23:12.415338   67989 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0806 08:23:12.415420   67989 ssh_runner.go:195] Run: which lz4
	I0806 08:23:12.423285   67989 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0806 08:23:12.427868   67989 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0806 08:23:12.427907   67989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0806 08:23:10.776183   68099 machine.go:94] provisionDockerMachine start ...
	I0806 08:23:10.776214   68099 main.go:141] libmachine: (kubernetes-upgrade-122219) Calling .DriverName
	I0806 08:23:10.776498   68099 main.go:141] libmachine: (kubernetes-upgrade-122219) Calling .GetSSHHostname
	I0806 08:23:10.779977   68099 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | domain kubernetes-upgrade-122219 has defined MAC address 52:54:00:1e:81:b8 in network mk-kubernetes-upgrade-122219
	I0806 08:23:10.780440   68099 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:81:b8", ip: ""} in network mk-kubernetes-upgrade-122219: {Iface:virbr3 ExpiryTime:2024-08-06 09:21:40 +0000 UTC Type:0 Mac:52:54:00:1e:81:b8 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:kubernetes-upgrade-122219 Clientid:01:52:54:00:1e:81:b8}
	I0806 08:23:10.780468   68099 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | domain kubernetes-upgrade-122219 has defined IP address 192.168.50.53 and MAC address 52:54:00:1e:81:b8 in network mk-kubernetes-upgrade-122219
	I0806 08:23:10.780654   68099 main.go:141] libmachine: (kubernetes-upgrade-122219) Calling .GetSSHPort
	I0806 08:23:10.780836   68099 main.go:141] libmachine: (kubernetes-upgrade-122219) Calling .GetSSHKeyPath
	I0806 08:23:10.781001   68099 main.go:141] libmachine: (kubernetes-upgrade-122219) Calling .GetSSHKeyPath
	I0806 08:23:10.781109   68099 main.go:141] libmachine: (kubernetes-upgrade-122219) Calling .GetSSHUsername
	I0806 08:23:10.781268   68099 main.go:141] libmachine: Using SSH client type: native
	I0806 08:23:10.781485   68099 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.53 22 <nil> <nil>}
	I0806 08:23:10.781498   68099 main.go:141] libmachine: About to run SSH command:
	hostname
	I0806 08:23:10.904076   68099 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-122219
	
	I0806 08:23:10.904111   68099 main.go:141] libmachine: (kubernetes-upgrade-122219) Calling .GetMachineName
	I0806 08:23:10.904406   68099 buildroot.go:166] provisioning hostname "kubernetes-upgrade-122219"
	I0806 08:23:10.904439   68099 main.go:141] libmachine: (kubernetes-upgrade-122219) Calling .GetMachineName
	I0806 08:23:10.904626   68099 main.go:141] libmachine: (kubernetes-upgrade-122219) Calling .GetSSHHostname
	I0806 08:23:10.907978   68099 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | domain kubernetes-upgrade-122219 has defined MAC address 52:54:00:1e:81:b8 in network mk-kubernetes-upgrade-122219
	I0806 08:23:10.908854   68099 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:81:b8", ip: ""} in network mk-kubernetes-upgrade-122219: {Iface:virbr3 ExpiryTime:2024-08-06 09:21:40 +0000 UTC Type:0 Mac:52:54:00:1e:81:b8 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:kubernetes-upgrade-122219 Clientid:01:52:54:00:1e:81:b8}
	I0806 08:23:10.908903   68099 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | domain kubernetes-upgrade-122219 has defined IP address 192.168.50.53 and MAC address 52:54:00:1e:81:b8 in network mk-kubernetes-upgrade-122219
	I0806 08:23:10.909174   68099 main.go:141] libmachine: (kubernetes-upgrade-122219) Calling .GetSSHPort
	I0806 08:23:10.909411   68099 main.go:141] libmachine: (kubernetes-upgrade-122219) Calling .GetSSHKeyPath
	I0806 08:23:10.909582   68099 main.go:141] libmachine: (kubernetes-upgrade-122219) Calling .GetSSHKeyPath
	I0806 08:23:10.909783   68099 main.go:141] libmachine: (kubernetes-upgrade-122219) Calling .GetSSHUsername
	I0806 08:23:10.909948   68099 main.go:141] libmachine: Using SSH client type: native
	I0806 08:23:10.910186   68099 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.53 22 <nil> <nil>}
	I0806 08:23:10.910206   68099 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-122219 && echo "kubernetes-upgrade-122219" | sudo tee /etc/hostname
	I0806 08:23:11.048858   68099 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-122219
	
	I0806 08:23:11.048907   68099 main.go:141] libmachine: (kubernetes-upgrade-122219) Calling .GetSSHHostname
	I0806 08:23:11.052766   68099 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | domain kubernetes-upgrade-122219 has defined MAC address 52:54:00:1e:81:b8 in network mk-kubernetes-upgrade-122219
	I0806 08:23:11.053202   68099 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:81:b8", ip: ""} in network mk-kubernetes-upgrade-122219: {Iface:virbr3 ExpiryTime:2024-08-06 09:21:40 +0000 UTC Type:0 Mac:52:54:00:1e:81:b8 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:kubernetes-upgrade-122219 Clientid:01:52:54:00:1e:81:b8}
	I0806 08:23:11.053237   68099 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | domain kubernetes-upgrade-122219 has defined IP address 192.168.50.53 and MAC address 52:54:00:1e:81:b8 in network mk-kubernetes-upgrade-122219
	I0806 08:23:11.053437   68099 main.go:141] libmachine: (kubernetes-upgrade-122219) Calling .GetSSHPort
	I0806 08:23:11.053694   68099 main.go:141] libmachine: (kubernetes-upgrade-122219) Calling .GetSSHKeyPath
	I0806 08:23:11.053905   68099 main.go:141] libmachine: (kubernetes-upgrade-122219) Calling .GetSSHKeyPath
	I0806 08:23:11.054081   68099 main.go:141] libmachine: (kubernetes-upgrade-122219) Calling .GetSSHUsername
	I0806 08:23:11.054279   68099 main.go:141] libmachine: Using SSH client type: native
	I0806 08:23:11.054487   68099 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.53 22 <nil> <nil>}
	I0806 08:23:11.054510   68099 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-122219' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-122219/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-122219' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0806 08:23:11.170342   68099 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0806 08:23:11.170377   68099 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19370-13057/.minikube CaCertPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19370-13057/.minikube}
	I0806 08:23:11.170419   68099 buildroot.go:174] setting up certificates
	I0806 08:23:11.170434   68099 provision.go:84] configureAuth start
	I0806 08:23:11.170452   68099 main.go:141] libmachine: (kubernetes-upgrade-122219) Calling .GetMachineName
	I0806 08:23:11.170740   68099 main.go:141] libmachine: (kubernetes-upgrade-122219) Calling .GetIP
	I0806 08:23:11.173977   68099 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | domain kubernetes-upgrade-122219 has defined MAC address 52:54:00:1e:81:b8 in network mk-kubernetes-upgrade-122219
	I0806 08:23:11.174377   68099 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:81:b8", ip: ""} in network mk-kubernetes-upgrade-122219: {Iface:virbr3 ExpiryTime:2024-08-06 09:21:40 +0000 UTC Type:0 Mac:52:54:00:1e:81:b8 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:kubernetes-upgrade-122219 Clientid:01:52:54:00:1e:81:b8}
	I0806 08:23:11.174415   68099 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | domain kubernetes-upgrade-122219 has defined IP address 192.168.50.53 and MAC address 52:54:00:1e:81:b8 in network mk-kubernetes-upgrade-122219
	I0806 08:23:11.174606   68099 main.go:141] libmachine: (kubernetes-upgrade-122219) Calling .GetSSHHostname
	I0806 08:23:11.177221   68099 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | domain kubernetes-upgrade-122219 has defined MAC address 52:54:00:1e:81:b8 in network mk-kubernetes-upgrade-122219
	I0806 08:23:11.177626   68099 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:81:b8", ip: ""} in network mk-kubernetes-upgrade-122219: {Iface:virbr3 ExpiryTime:2024-08-06 09:21:40 +0000 UTC Type:0 Mac:52:54:00:1e:81:b8 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:kubernetes-upgrade-122219 Clientid:01:52:54:00:1e:81:b8}
	I0806 08:23:11.177652   68099 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | domain kubernetes-upgrade-122219 has defined IP address 192.168.50.53 and MAC address 52:54:00:1e:81:b8 in network mk-kubernetes-upgrade-122219
	I0806 08:23:11.177803   68099 provision.go:143] copyHostCerts
	I0806 08:23:11.177886   68099 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-13057/.minikube/ca.pem, removing ...
	I0806 08:23:11.177901   68099 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-13057/.minikube/ca.pem
	I0806 08:23:11.177972   68099 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19370-13057/.minikube/ca.pem (1078 bytes)
	I0806 08:23:11.178091   68099 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-13057/.minikube/cert.pem, removing ...
	I0806 08:23:11.178103   68099 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-13057/.minikube/cert.pem
	I0806 08:23:11.178134   68099 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19370-13057/.minikube/cert.pem (1123 bytes)
	I0806 08:23:11.178218   68099 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-13057/.minikube/key.pem, removing ...
	I0806 08:23:11.178227   68099 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-13057/.minikube/key.pem
	I0806 08:23:11.178251   68099 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19370-13057/.minikube/key.pem (1675 bytes)
	I0806 08:23:11.178314   68099 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19370-13057/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-122219 san=[127.0.0.1 192.168.50.53 kubernetes-upgrade-122219 localhost minikube]
	I0806 08:23:11.529308   68099 provision.go:177] copyRemoteCerts
	I0806 08:23:11.529393   68099 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0806 08:23:11.529428   68099 main.go:141] libmachine: (kubernetes-upgrade-122219) Calling .GetSSHHostname
	I0806 08:23:11.532089   68099 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | domain kubernetes-upgrade-122219 has defined MAC address 52:54:00:1e:81:b8 in network mk-kubernetes-upgrade-122219
	I0806 08:23:11.532488   68099 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:81:b8", ip: ""} in network mk-kubernetes-upgrade-122219: {Iface:virbr3 ExpiryTime:2024-08-06 09:21:40 +0000 UTC Type:0 Mac:52:54:00:1e:81:b8 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:kubernetes-upgrade-122219 Clientid:01:52:54:00:1e:81:b8}
	I0806 08:23:11.532517   68099 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | domain kubernetes-upgrade-122219 has defined IP address 192.168.50.53 and MAC address 52:54:00:1e:81:b8 in network mk-kubernetes-upgrade-122219
	I0806 08:23:11.532676   68099 main.go:141] libmachine: (kubernetes-upgrade-122219) Calling .GetSSHPort
	I0806 08:23:11.532872   68099 main.go:141] libmachine: (kubernetes-upgrade-122219) Calling .GetSSHKeyPath
	I0806 08:23:11.533043   68099 main.go:141] libmachine: (kubernetes-upgrade-122219) Calling .GetSSHUsername
	I0806 08:23:11.533188   68099 sshutil.go:53] new ssh client: &{IP:192.168.50.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/kubernetes-upgrade-122219/id_rsa Username:docker}
	I0806 08:23:11.620006   68099 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0806 08:23:11.651345   68099 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0806 08:23:11.684938   68099 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0806 08:23:11.715842   68099 provision.go:87] duration metric: took 545.392495ms to configureAuth
	I0806 08:23:11.715872   68099 buildroot.go:189] setting minikube options for container-runtime
	I0806 08:23:11.716031   68099 config.go:182] Loaded profile config "kubernetes-upgrade-122219": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-rc.0
	I0806 08:23:11.716116   68099 main.go:141] libmachine: (kubernetes-upgrade-122219) Calling .GetSSHHostname
	I0806 08:23:11.719320   68099 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | domain kubernetes-upgrade-122219 has defined MAC address 52:54:00:1e:81:b8 in network mk-kubernetes-upgrade-122219
	I0806 08:23:11.719686   68099 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:81:b8", ip: ""} in network mk-kubernetes-upgrade-122219: {Iface:virbr3 ExpiryTime:2024-08-06 09:21:40 +0000 UTC Type:0 Mac:52:54:00:1e:81:b8 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:kubernetes-upgrade-122219 Clientid:01:52:54:00:1e:81:b8}
	I0806 08:23:11.719733   68099 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | domain kubernetes-upgrade-122219 has defined IP address 192.168.50.53 and MAC address 52:54:00:1e:81:b8 in network mk-kubernetes-upgrade-122219
	I0806 08:23:11.719866   68099 main.go:141] libmachine: (kubernetes-upgrade-122219) Calling .GetSSHPort
	I0806 08:23:11.720129   68099 main.go:141] libmachine: (kubernetes-upgrade-122219) Calling .GetSSHKeyPath
	I0806 08:23:11.720352   68099 main.go:141] libmachine: (kubernetes-upgrade-122219) Calling .GetSSHKeyPath
	I0806 08:23:11.720547   68099 main.go:141] libmachine: (kubernetes-upgrade-122219) Calling .GetSSHUsername
	I0806 08:23:11.720737   68099 main.go:141] libmachine: Using SSH client type: native
	I0806 08:23:11.720949   68099 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.53 22 <nil> <nil>}
	I0806 08:23:11.720967   68099 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0806 08:23:10.329076   67412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 08:23:10.828966   67412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 08:23:11.328485   67412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 08:23:11.828747   67412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 08:23:12.328913   67412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 08:23:12.829376   67412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 08:23:13.329124   67412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 08:23:13.828879   67412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 08:23:14.329255   67412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 08:23:14.828974   67412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 08:23:11.465197   65825 pod_ready.go:102] pod "calico-kube-controllers-77d59654f4-jlr9z" in "kube-system" namespace has status "Ready":"False"
	I0806 08:23:13.478384   65825 pod_ready.go:102] pod "calico-kube-controllers-77d59654f4-jlr9z" in "kube-system" namespace has status "Ready":"False"
	I0806 08:23:15.607895   65825 pod_ready.go:102] pod "calico-kube-controllers-77d59654f4-jlr9z" in "kube-system" namespace has status "Ready":"False"
	I0806 08:23:14.060491   67989 crio.go:462] duration metric: took 1.637256037s to copy over tarball
	I0806 08:23:14.060600   67989 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0806 08:23:16.671654   67989 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.611023782s)
	I0806 08:23:16.671682   67989 crio.go:469] duration metric: took 2.611150755s to extract the tarball
	I0806 08:23:16.671692   67989 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0806 08:23:16.710038   67989 ssh_runner.go:195] Run: sudo crictl images --output json
	I0806 08:23:16.758868   67989 crio.go:514] all images are preloaded for cri-o runtime.
	I0806 08:23:16.758898   67989 cache_images.go:84] Images are preloaded, skipping loading
	I0806 08:23:16.758909   67989 kubeadm.go:934] updating node { 192.168.72.116 8443 v1.30.3 crio true true} ...
	I0806 08:23:16.759044   67989 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=enable-default-cni-405163 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.116
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:enable-default-cni-405163 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge}
	I0806 08:23:16.759127   67989 ssh_runner.go:195] Run: crio config
	I0806 08:23:16.816620   67989 cni.go:84] Creating CNI manager for "bridge"
	I0806 08:23:16.816652   67989 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0806 08:23:16.816685   67989 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.116 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:enable-default-cni-405163 NodeName:enable-default-cni-405163 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.116"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.116 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.
crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0806 08:23:16.816881   67989 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.116
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "enable-default-cni-405163"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.116
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.116"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0806 08:23:16.816960   67989 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0806 08:23:16.828450   67989 binaries.go:44] Found k8s binaries, skipping transfer
	I0806 08:23:16.828533   67989 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0806 08:23:16.843209   67989 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (325 bytes)
	I0806 08:23:16.871391   67989 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0806 08:23:16.895666   67989 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I0806 08:23:16.923381   67989 ssh_runner.go:195] Run: grep 192.168.72.116	control-plane.minikube.internal$ /etc/hosts
	I0806 08:23:16.929743   67989 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.116	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0806 08:23:16.951093   67989 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 08:23:17.119450   67989 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0806 08:23:17.141105   67989 certs.go:68] Setting up /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/enable-default-cni-405163 for IP: 192.168.72.116
	I0806 08:23:17.141135   67989 certs.go:194] generating shared ca certs ...
	I0806 08:23:17.141157   67989 certs.go:226] acquiring lock for ca certs: {Name:mkf5c5aed91f4108054d9f2c742cad66c86afbed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 08:23:17.141316   67989 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19370-13057/.minikube/ca.key
	I0806 08:23:17.141369   67989 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.key
	I0806 08:23:17.141382   67989 certs.go:256] generating profile certs ...
	I0806 08:23:17.141445   67989 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/enable-default-cni-405163/client.key
	I0806 08:23:17.141458   67989 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/enable-default-cni-405163/client.crt with IP's: []
	I0806 08:23:17.427181   67989 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/enable-default-cni-405163/client.crt ...
	I0806 08:23:17.427216   67989 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/enable-default-cni-405163/client.crt: {Name:mkf3f55f22b7c7a07da8e2ae333bf52c80b11a06 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 08:23:17.427423   67989 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/enable-default-cni-405163/client.key ...
	I0806 08:23:17.427443   67989 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/enable-default-cni-405163/client.key: {Name:mkb87a5383c07c40fc0317f68e844599148c7922 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 08:23:17.427559   67989 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/enable-default-cni-405163/apiserver.key.35a6cb0a
	I0806 08:23:17.427584   67989 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/enable-default-cni-405163/apiserver.crt.35a6cb0a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.116]
	I0806 08:23:17.600002   67989 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/enable-default-cni-405163/apiserver.crt.35a6cb0a ...
	I0806 08:23:17.600039   67989 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/enable-default-cni-405163/apiserver.crt.35a6cb0a: {Name:mk96921c93da035504abbd1bc58084f9bef1f7ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 08:23:17.600259   67989 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/enable-default-cni-405163/apiserver.key.35a6cb0a ...
	I0806 08:23:17.600283   67989 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/enable-default-cni-405163/apiserver.key.35a6cb0a: {Name:mk4b571690218924166301c4d72a52e4a96bd1d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 08:23:17.600414   67989 certs.go:381] copying /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/enable-default-cni-405163/apiserver.crt.35a6cb0a -> /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/enable-default-cni-405163/apiserver.crt
	I0806 08:23:17.600535   67989 certs.go:385] copying /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/enable-default-cni-405163/apiserver.key.35a6cb0a -> /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/enable-default-cni-405163/apiserver.key
	I0806 08:23:17.600642   67989 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/enable-default-cni-405163/proxy-client.key
	I0806 08:23:17.600667   67989 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/enable-default-cni-405163/proxy-client.crt with IP's: []
	I0806 08:23:17.756303   67989 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/enable-default-cni-405163/proxy-client.crt ...
	I0806 08:23:17.756333   67989 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/enable-default-cni-405163/proxy-client.crt: {Name:mk729d04ebde0603dfb8b0842bcb9ec4740f7aef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 08:23:17.756547   67989 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/enable-default-cni-405163/proxy-client.key ...
	I0806 08:23:17.756577   67989 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/enable-default-cni-405163/proxy-client.key: {Name:mk39022eccf679287520468658f12ce5667b3317 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 08:23:17.756818   67989 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/20246.pem (1338 bytes)
	W0806 08:23:17.756871   67989 certs.go:480] ignoring /home/jenkins/minikube-integration/19370-13057/.minikube/certs/20246_empty.pem, impossibly tiny 0 bytes
	I0806 08:23:17.756886   67989 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca-key.pem (1675 bytes)
	I0806 08:23:17.756920   67989 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem (1078 bytes)
	I0806 08:23:17.756955   67989 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/cert.pem (1123 bytes)
	I0806 08:23:17.756987   67989 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/key.pem (1675 bytes)
	I0806 08:23:17.757044   67989 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem (1708 bytes)
	I0806 08:23:17.757705   67989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0806 08:23:17.789515   67989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0806 08:23:17.819298   67989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0806 08:23:17.851485   67989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0806 08:23:17.887946   67989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/enable-default-cni-405163/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0806 08:23:17.939432   67989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/enable-default-cni-405163/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0806 08:23:18.021539   67989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/enable-default-cni-405163/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0806 08:23:18.117459   67989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/enable-default-cni-405163/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0806 08:23:18.146961   67989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/certs/20246.pem --> /usr/share/ca-certificates/20246.pem (1338 bytes)
	I0806 08:23:18.176156   67989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem --> /usr/share/ca-certificates/202462.pem (1708 bytes)
	I0806 08:23:18.203085   67989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0806 08:23:18.239268   67989 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0806 08:23:18.262637   67989 ssh_runner.go:195] Run: openssl version
	I0806 08:23:18.271337   67989 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20246.pem && ln -fs /usr/share/ca-certificates/20246.pem /etc/ssl/certs/20246.pem"
	I0806 08:23:18.286110   67989 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20246.pem
	I0806 08:23:18.292130   67989 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  6 07:17 /usr/share/ca-certificates/20246.pem
	I0806 08:23:18.292204   67989 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20246.pem
	I0806 08:23:18.299589   67989 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20246.pem /etc/ssl/certs/51391683.0"
	I0806 08:23:18.312163   67989 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202462.pem && ln -fs /usr/share/ca-certificates/202462.pem /etc/ssl/certs/202462.pem"
	I0806 08:23:18.325998   67989 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202462.pem
	I0806 08:23:18.330693   67989 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  6 07:17 /usr/share/ca-certificates/202462.pem
	I0806 08:23:18.330760   67989 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202462.pem
	I0806 08:23:18.336997   67989 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202462.pem /etc/ssl/certs/3ec20f2e.0"
	I0806 08:23:18.348901   67989 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0806 08:23:18.363617   67989 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0806 08:23:18.369274   67989 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  6 07:07 /usr/share/ca-certificates/minikubeCA.pem
	I0806 08:23:18.369329   67989 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0806 08:23:18.375752   67989 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0806 08:23:18.389680   67989 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0806 08:23:18.394539   67989 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0806 08:23:18.394599   67989 kubeadm.go:392] StartCluster: {Name:enable-default-cni-405163 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:enable-default-cni-405163 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.72.116 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 08:23:18.394662   67989 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0806 08:23:18.394704   67989 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0806 08:23:18.434698   67989 cri.go:89] found id: ""
	I0806 08:23:18.434781   67989 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0806 08:23:18.446748   67989 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0806 08:23:18.459692   67989 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0806 08:23:18.470204   67989 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0806 08:23:18.470223   67989 kubeadm.go:157] found existing configuration files:
	
	I0806 08:23:18.470263   67989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0806 08:23:18.482558   67989 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0806 08:23:18.482626   67989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0806 08:23:18.493636   67989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0806 08:23:18.506961   67989 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0806 08:23:18.507023   67989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0806 08:23:18.520738   67989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0806 08:23:18.533272   67989 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0806 08:23:18.533339   67989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0806 08:23:18.548015   67989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0806 08:23:18.560437   67989 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0806 08:23:18.560488   67989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0806 08:23:18.580661   67989 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0806 08:23:18.662750   67989 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0806 08:23:18.662828   67989 kubeadm.go:310] [preflight] Running pre-flight checks
	I0806 08:23:18.825541   67989 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0806 08:23:18.825748   67989 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0806 08:23:18.825871   67989 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0806 08:23:15.329078   67412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 08:23:15.828799   67412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 08:23:16.329010   67412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 08:23:16.828956   67412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 08:23:17.328874   67412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 08:23:17.828310   67412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 08:23:18.831773   67412 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig: (1.003421432s)
	I0806 08:23:18.831871   67412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 08:23:19.063354   67412 kubeadm.go:1113] duration metric: took 13.965276091s to wait for elevateKubeSystemPrivileges
	I0806 08:23:19.063397   67412 kubeadm.go:394] duration metric: took 26.538458323s to StartCluster
	I0806 08:23:19.063417   67412 settings.go:142] acquiring lock: {Name:mkb0bfbcae3b4205ef43487a9de84cfd8afd2e5e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 08:23:19.063492   67412 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19370-13057/kubeconfig
	I0806 08:23:19.065890   67412 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-13057/kubeconfig: {Name:mk5c066914acf8e36556b29792f75b98076ca34a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 08:23:19.066159   67412 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0806 08:23:19.066173   67412 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.61.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0806 08:23:19.066468   67412 config.go:182] Loaded profile config "custom-flannel-405163": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0806 08:23:19.066532   67412 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0806 08:23:19.066597   67412 addons.go:69] Setting storage-provisioner=true in profile "custom-flannel-405163"
	I0806 08:23:19.066632   67412 addons.go:234] Setting addon storage-provisioner=true in "custom-flannel-405163"
	I0806 08:23:19.066672   67412 host.go:66] Checking if "custom-flannel-405163" exists ...
	I0806 08:23:19.067126   67412 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:23:19.067156   67412 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:23:19.067221   67412 addons.go:69] Setting default-storageclass=true in profile "custom-flannel-405163"
	I0806 08:23:19.067270   67412 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "custom-flannel-405163"
	I0806 08:23:19.067643   67412 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:23:19.067671   67412 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:23:19.068896   67412 out.go:177] * Verifying Kubernetes components...
	I0806 08:23:19.070227   67412 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 08:23:19.086966   67412 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46593
	I0806 08:23:19.087192   67412 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38851
	I0806 08:23:19.087592   67412 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:23:19.087965   67412 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:23:19.088166   67412 main.go:141] libmachine: Using API Version  1
	I0806 08:23:19.088179   67412 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:23:19.088730   67412 main.go:141] libmachine: Using API Version  1
	I0806 08:23:19.088749   67412 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:23:19.088783   67412 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:23:19.089130   67412 main.go:141] libmachine: (custom-flannel-405163) Calling .GetState
	I0806 08:23:19.089157   67412 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:23:19.089678   67412 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:23:19.089716   67412 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:23:19.093672   67412 addons.go:234] Setting addon default-storageclass=true in "custom-flannel-405163"
	I0806 08:23:19.093712   67412 host.go:66] Checking if "custom-flannel-405163" exists ...
	I0806 08:23:19.094144   67412 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:23:19.099510   67412 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:23:19.114606   67412 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35303
	I0806 08:23:19.115557   67412 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:23:19.116163   67412 main.go:141] libmachine: Using API Version  1
	I0806 08:23:19.116194   67412 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:23:19.120241   67412 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35793
	I0806 08:23:19.120875   67412 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:23:19.120942   67412 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:23:19.121273   67412 main.go:141] libmachine: (custom-flannel-405163) Calling .GetState
	I0806 08:23:19.123287   67412 main.go:141] libmachine: (custom-flannel-405163) Calling .DriverName
	I0806 08:23:19.124998   67412 main.go:141] libmachine: Using API Version  1
	I0806 08:23:19.125025   67412 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:23:19.125310   67412 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:23:19.125665   67412 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0806 08:23:19.148144   67989 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0806 08:23:18.079612   68099 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0806 08:23:18.079661   68099 machine.go:97] duration metric: took 7.303436937s to provisionDockerMachine
	I0806 08:23:18.079677   68099 start.go:293] postStartSetup for "kubernetes-upgrade-122219" (driver="kvm2")
	I0806 08:23:18.079692   68099 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0806 08:23:18.079713   68099 main.go:141] libmachine: (kubernetes-upgrade-122219) Calling .DriverName
	I0806 08:23:18.080046   68099 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0806 08:23:18.080074   68099 main.go:141] libmachine: (kubernetes-upgrade-122219) Calling .GetSSHHostname
	I0806 08:23:18.082704   68099 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | domain kubernetes-upgrade-122219 has defined MAC address 52:54:00:1e:81:b8 in network mk-kubernetes-upgrade-122219
	I0806 08:23:18.083070   68099 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:81:b8", ip: ""} in network mk-kubernetes-upgrade-122219: {Iface:virbr3 ExpiryTime:2024-08-06 09:21:40 +0000 UTC Type:0 Mac:52:54:00:1e:81:b8 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:kubernetes-upgrade-122219 Clientid:01:52:54:00:1e:81:b8}
	I0806 08:23:18.083119   68099 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | domain kubernetes-upgrade-122219 has defined IP address 192.168.50.53 and MAC address 52:54:00:1e:81:b8 in network mk-kubernetes-upgrade-122219
	I0806 08:23:18.083211   68099 main.go:141] libmachine: (kubernetes-upgrade-122219) Calling .GetSSHPort
	I0806 08:23:18.083422   68099 main.go:141] libmachine: (kubernetes-upgrade-122219) Calling .GetSSHKeyPath
	I0806 08:23:18.083639   68099 main.go:141] libmachine: (kubernetes-upgrade-122219) Calling .GetSSHUsername
	I0806 08:23:18.083767   68099 sshutil.go:53] new ssh client: &{IP:192.168.50.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/kubernetes-upgrade-122219/id_rsa Username:docker}
	I0806 08:23:18.175706   68099 ssh_runner.go:195] Run: cat /etc/os-release
	I0806 08:23:18.181445   68099 info.go:137] Remote host: Buildroot 2023.02.9
	I0806 08:23:18.181476   68099 filesync.go:126] Scanning /home/jenkins/minikube-integration/19370-13057/.minikube/addons for local assets ...
	I0806 08:23:18.181547   68099 filesync.go:126] Scanning /home/jenkins/minikube-integration/19370-13057/.minikube/files for local assets ...
	I0806 08:23:18.181647   68099 filesync.go:149] local asset: /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem -> 202462.pem in /etc/ssl/certs
	I0806 08:23:18.181770   68099 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0806 08:23:18.194024   68099 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem --> /etc/ssl/certs/202462.pem (1708 bytes)
	I0806 08:23:18.230404   68099 start.go:296] duration metric: took 150.708895ms for postStartSetup
	I0806 08:23:18.230469   68099 fix.go:56] duration metric: took 7.484322797s for fixHost
	I0806 08:23:18.230495   68099 main.go:141] libmachine: (kubernetes-upgrade-122219) Calling .GetSSHHostname
	I0806 08:23:18.233631   68099 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | domain kubernetes-upgrade-122219 has defined MAC address 52:54:00:1e:81:b8 in network mk-kubernetes-upgrade-122219
	I0806 08:23:18.234102   68099 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:81:b8", ip: ""} in network mk-kubernetes-upgrade-122219: {Iface:virbr3 ExpiryTime:2024-08-06 09:21:40 +0000 UTC Type:0 Mac:52:54:00:1e:81:b8 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:kubernetes-upgrade-122219 Clientid:01:52:54:00:1e:81:b8}
	I0806 08:23:18.234130   68099 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | domain kubernetes-upgrade-122219 has defined IP address 192.168.50.53 and MAC address 52:54:00:1e:81:b8 in network mk-kubernetes-upgrade-122219
	I0806 08:23:18.234428   68099 main.go:141] libmachine: (kubernetes-upgrade-122219) Calling .GetSSHPort
	I0806 08:23:18.234658   68099 main.go:141] libmachine: (kubernetes-upgrade-122219) Calling .GetSSHKeyPath
	I0806 08:23:18.234875   68099 main.go:141] libmachine: (kubernetes-upgrade-122219) Calling .GetSSHKeyPath
	I0806 08:23:18.235027   68099 main.go:141] libmachine: (kubernetes-upgrade-122219) Calling .GetSSHUsername
	I0806 08:23:18.235229   68099 main.go:141] libmachine: Using SSH client type: native
	I0806 08:23:18.235474   68099 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.53 22 <nil> <nil>}
	I0806 08:23:18.235493   68099 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0806 08:23:18.357487   68099 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722932598.326525485
	
	I0806 08:23:18.357515   68099 fix.go:216] guest clock: 1722932598.326525485
	I0806 08:23:18.357524   68099 fix.go:229] Guest: 2024-08-06 08:23:18.326525485 +0000 UTC Remote: 2024-08-06 08:23:18.23047393 +0000 UTC m=+69.077472922 (delta=96.051555ms)
	I0806 08:23:18.357549   68099 fix.go:200] guest clock delta is within tolerance: 96.051555ms
	I0806 08:23:18.357556   68099 start.go:83] releasing machines lock for "kubernetes-upgrade-122219", held for 7.611441768s
	I0806 08:23:18.357582   68099 main.go:141] libmachine: (kubernetes-upgrade-122219) Calling .DriverName
	I0806 08:23:18.357858   68099 main.go:141] libmachine: (kubernetes-upgrade-122219) Calling .GetIP
	I0806 08:23:18.360456   68099 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | domain kubernetes-upgrade-122219 has defined MAC address 52:54:00:1e:81:b8 in network mk-kubernetes-upgrade-122219
	I0806 08:23:18.360882   68099 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:81:b8", ip: ""} in network mk-kubernetes-upgrade-122219: {Iface:virbr3 ExpiryTime:2024-08-06 09:21:40 +0000 UTC Type:0 Mac:52:54:00:1e:81:b8 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:kubernetes-upgrade-122219 Clientid:01:52:54:00:1e:81:b8}
	I0806 08:23:18.360923   68099 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | domain kubernetes-upgrade-122219 has defined IP address 192.168.50.53 and MAC address 52:54:00:1e:81:b8 in network mk-kubernetes-upgrade-122219
	I0806 08:23:18.361060   68099 main.go:141] libmachine: (kubernetes-upgrade-122219) Calling .DriverName
	I0806 08:23:18.361710   68099 main.go:141] libmachine: (kubernetes-upgrade-122219) Calling .DriverName
	I0806 08:23:18.361909   68099 main.go:141] libmachine: (kubernetes-upgrade-122219) Calling .DriverName
	I0806 08:23:18.362005   68099 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0806 08:23:18.362050   68099 main.go:141] libmachine: (kubernetes-upgrade-122219) Calling .GetSSHHostname
	I0806 08:23:18.362090   68099 ssh_runner.go:195] Run: cat /version.json
	I0806 08:23:18.362108   68099 main.go:141] libmachine: (kubernetes-upgrade-122219) Calling .GetSSHHostname
	I0806 08:23:18.364955   68099 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | domain kubernetes-upgrade-122219 has defined MAC address 52:54:00:1e:81:b8 in network mk-kubernetes-upgrade-122219
	I0806 08:23:18.365182   68099 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | domain kubernetes-upgrade-122219 has defined MAC address 52:54:00:1e:81:b8 in network mk-kubernetes-upgrade-122219
	I0806 08:23:18.365359   68099 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:81:b8", ip: ""} in network mk-kubernetes-upgrade-122219: {Iface:virbr3 ExpiryTime:2024-08-06 09:21:40 +0000 UTC Type:0 Mac:52:54:00:1e:81:b8 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:kubernetes-upgrade-122219 Clientid:01:52:54:00:1e:81:b8}
	I0806 08:23:18.365394   68099 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | domain kubernetes-upgrade-122219 has defined IP address 192.168.50.53 and MAC address 52:54:00:1e:81:b8 in network mk-kubernetes-upgrade-122219
	I0806 08:23:18.365545   68099 main.go:141] libmachine: (kubernetes-upgrade-122219) Calling .GetSSHPort
	I0806 08:23:18.365630   68099 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:81:b8", ip: ""} in network mk-kubernetes-upgrade-122219: {Iface:virbr3 ExpiryTime:2024-08-06 09:21:40 +0000 UTC Type:0 Mac:52:54:00:1e:81:b8 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:kubernetes-upgrade-122219 Clientid:01:52:54:00:1e:81:b8}
	I0806 08:23:18.365653   68099 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | domain kubernetes-upgrade-122219 has defined IP address 192.168.50.53 and MAC address 52:54:00:1e:81:b8 in network mk-kubernetes-upgrade-122219
	I0806 08:23:18.365729   68099 main.go:141] libmachine: (kubernetes-upgrade-122219) Calling .GetSSHKeyPath
	I0806 08:23:18.365891   68099 main.go:141] libmachine: (kubernetes-upgrade-122219) Calling .GetSSHUsername
	I0806 08:23:18.365924   68099 main.go:141] libmachine: (kubernetes-upgrade-122219) Calling .GetSSHPort
	I0806 08:23:18.366060   68099 main.go:141] libmachine: (kubernetes-upgrade-122219) Calling .GetSSHKeyPath
	I0806 08:23:18.366123   68099 sshutil.go:53] new ssh client: &{IP:192.168.50.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/kubernetes-upgrade-122219/id_rsa Username:docker}
	I0806 08:23:18.366215   68099 main.go:141] libmachine: (kubernetes-upgrade-122219) Calling .GetSSHUsername
	I0806 08:23:18.366382   68099 sshutil.go:53] new ssh client: &{IP:192.168.50.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/kubernetes-upgrade-122219/id_rsa Username:docker}
	I0806 08:23:18.474333   68099 ssh_runner.go:195] Run: systemctl --version
	I0806 08:23:18.482946   68099 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0806 08:23:18.656242   68099 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0806 08:23:18.667091   68099 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0806 08:23:18.667175   68099 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0806 08:23:18.678110   68099 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0806 08:23:18.678136   68099 start.go:495] detecting cgroup driver to use...
	I0806 08:23:18.678209   68099 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0806 08:23:18.697438   68099 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0806 08:23:18.713755   68099 docker.go:217] disabling cri-docker service (if available) ...
	I0806 08:23:18.713823   68099 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0806 08:23:18.730158   68099 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0806 08:23:18.747320   68099 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0806 08:23:18.966440   68099 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0806 08:23:19.190538   68099 docker.go:233] disabling docker service ...
	I0806 08:23:19.190617   68099 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0806 08:23:19.125799   67412 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:23:19.125846   67412 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:23:19.127252   67412 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0806 08:23:19.127302   67412 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0806 08:23:19.127332   67412 main.go:141] libmachine: (custom-flannel-405163) Calling .GetSSHHostname
	I0806 08:23:19.131596   67412 main.go:141] libmachine: (custom-flannel-405163) DBG | domain custom-flannel-405163 has defined MAC address 52:54:00:53:15:a4 in network mk-custom-flannel-405163
	I0806 08:23:19.141480   67412 main.go:141] libmachine: (custom-flannel-405163) Calling .GetSSHPort
	I0806 08:23:19.141550   67412 main.go:141] libmachine: (custom-flannel-405163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:15:a4", ip: ""} in network mk-custom-flannel-405163: {Iface:virbr1 ExpiryTime:2024-08-06 09:22:33 +0000 UTC Type:0 Mac:52:54:00:53:15:a4 Iaid: IPaddr:192.168.61.6 Prefix:24 Hostname:custom-flannel-405163 Clientid:01:52:54:00:53:15:a4}
	I0806 08:23:19.141581   67412 main.go:141] libmachine: (custom-flannel-405163) DBG | domain custom-flannel-405163 has defined IP address 192.168.61.6 and MAC address 52:54:00:53:15:a4 in network mk-custom-flannel-405163
	I0806 08:23:19.141751   67412 main.go:141] libmachine: (custom-flannel-405163) Calling .GetSSHKeyPath
	I0806 08:23:19.141925   67412 main.go:141] libmachine: (custom-flannel-405163) Calling .GetSSHUsername
	I0806 08:23:19.142076   67412 sshutil.go:53] new ssh client: &{IP:192.168.61.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/custom-flannel-405163/id_rsa Username:docker}
	I0806 08:23:19.147707   67412 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45285
	I0806 08:23:19.148270   67412 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:23:19.148863   67412 main.go:141] libmachine: Using API Version  1
	I0806 08:23:19.148884   67412 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:23:19.152926   67412 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:23:19.153123   67412 main.go:141] libmachine: (custom-flannel-405163) Calling .GetState
	I0806 08:23:19.159011   67412 main.go:141] libmachine: (custom-flannel-405163) Calling .DriverName
	I0806 08:23:19.159179   67412 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0806 08:23:19.159187   67412 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0806 08:23:19.159200   67412 main.go:141] libmachine: (custom-flannel-405163) Calling .GetSSHHostname
	I0806 08:23:19.162080   67412 main.go:141] libmachine: (custom-flannel-405163) DBG | domain custom-flannel-405163 has defined MAC address 52:54:00:53:15:a4 in network mk-custom-flannel-405163
	I0806 08:23:19.162557   67412 main.go:141] libmachine: (custom-flannel-405163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:15:a4", ip: ""} in network mk-custom-flannel-405163: {Iface:virbr1 ExpiryTime:2024-08-06 09:22:33 +0000 UTC Type:0 Mac:52:54:00:53:15:a4 Iaid: IPaddr:192.168.61.6 Prefix:24 Hostname:custom-flannel-405163 Clientid:01:52:54:00:53:15:a4}
	I0806 08:23:19.162579   67412 main.go:141] libmachine: (custom-flannel-405163) DBG | domain custom-flannel-405163 has defined IP address 192.168.61.6 and MAC address 52:54:00:53:15:a4 in network mk-custom-flannel-405163
	I0806 08:23:19.162858   67412 main.go:141] libmachine: (custom-flannel-405163) Calling .GetSSHPort
	I0806 08:23:19.163053   67412 main.go:141] libmachine: (custom-flannel-405163) Calling .GetSSHKeyPath
	I0806 08:23:19.163149   67412 main.go:141] libmachine: (custom-flannel-405163) Calling .GetSSHUsername
	I0806 08:23:19.163217   67412 sshutil.go:53] new ssh client: &{IP:192.168.61.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/custom-flannel-405163/id_rsa Username:docker}
	I0806 08:23:19.385833   67412 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0806 08:23:19.385954   67412 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0806 08:23:19.442241   67412 node_ready.go:35] waiting up to 15m0s for node "custom-flannel-405163" to be "Ready" ...
	I0806 08:23:19.484369   67412 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0806 08:23:19.601782   67412 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0806 08:23:19.852843   67412 start.go:971] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I0806 08:23:20.300472   67412 main.go:141] libmachine: Making call to close driver server
	I0806 08:23:20.300505   67412 main.go:141] libmachine: (custom-flannel-405163) Calling .Close
	I0806 08:23:20.300478   67412 main.go:141] libmachine: Making call to close driver server
	I0806 08:23:20.300571   67412 main.go:141] libmachine: (custom-flannel-405163) Calling .Close
	I0806 08:23:20.303847   67412 main.go:141] libmachine: (custom-flannel-405163) DBG | Closing plugin on server side
	I0806 08:23:20.303926   67412 main.go:141] libmachine: Successfully made call to close driver server
	I0806 08:23:20.303935   67412 main.go:141] libmachine: (custom-flannel-405163) DBG | Closing plugin on server side
	I0806 08:23:20.303926   67412 main.go:141] libmachine: Successfully made call to close driver server
	I0806 08:23:20.303994   67412 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 08:23:20.304007   67412 main.go:141] libmachine: Making call to close driver server
	I0806 08:23:20.304020   67412 main.go:141] libmachine: (custom-flannel-405163) Calling .Close
	I0806 08:23:20.304086   67412 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 08:23:20.304137   67412 main.go:141] libmachine: Making call to close driver server
	I0806 08:23:20.304154   67412 main.go:141] libmachine: (custom-flannel-405163) Calling .Close
	I0806 08:23:20.304435   67412 main.go:141] libmachine: Successfully made call to close driver server
	I0806 08:23:20.304452   67412 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 08:23:20.304474   67412 main.go:141] libmachine: (custom-flannel-405163) DBG | Closing plugin on server side
	I0806 08:23:20.304711   67412 main.go:141] libmachine: Successfully made call to close driver server
	I0806 08:23:20.304751   67412 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 08:23:20.317372   67412 main.go:141] libmachine: Making call to close driver server
	I0806 08:23:20.317393   67412 main.go:141] libmachine: (custom-flannel-405163) Calling .Close
	I0806 08:23:20.319190   67412 main.go:141] libmachine: Successfully made call to close driver server
	I0806 08:23:20.319207   67412 main.go:141] libmachine: (custom-flannel-405163) DBG | Closing plugin on server side
	I0806 08:23:20.319214   67412 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 08:23:20.320975   67412 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0806 08:23:19.220593   68099 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0806 08:23:19.241591   68099 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0806 08:23:19.449845   68099 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0806 08:23:19.641427   68099 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0806 08:23:19.661926   68099 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0806 08:23:19.689972   68099 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0806 08:23:19.690063   68099 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:23:19.705896   68099 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0806 08:23:19.705972   68099 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:23:19.721919   68099 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:23:19.736593   68099 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:23:19.752472   68099 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0806 08:23:19.768858   68099 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:23:19.786341   68099 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:23:19.804333   68099 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:23:19.819321   68099 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0806 08:23:19.832828   68099 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0806 08:23:19.846027   68099 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 08:23:20.043895   68099 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0806 08:23:20.397939   68099 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0806 08:23:20.398024   68099 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0806 08:23:20.404147   68099 start.go:563] Will wait 60s for crictl version
	I0806 08:23:20.404215   68099 ssh_runner.go:195] Run: which crictl
	I0806 08:23:20.409333   68099 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0806 08:23:20.467024   68099 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0806 08:23:20.467112   68099 ssh_runner.go:195] Run: crio --version
	I0806 08:23:20.509620   68099 ssh_runner.go:195] Run: crio --version
	I0806 08:23:20.553398   68099 out.go:177] * Preparing Kubernetes v1.31.0-rc.0 on CRI-O 1.29.1 ...
	I0806 08:23:18.787646   65825 pod_ready.go:102] pod "calico-kube-controllers-77d59654f4-jlr9z" in "kube-system" namespace has status "Ready":"False"
	I0806 08:23:20.966012   65825 pod_ready.go:92] pod "calico-kube-controllers-77d59654f4-jlr9z" in "kube-system" namespace has status "Ready":"True"
	I0806 08:23:20.966040   65825 pod_ready.go:81] duration metric: took 22.009686281s for pod "calico-kube-controllers-77d59654f4-jlr9z" in "kube-system" namespace to be "Ready" ...
	I0806 08:23:20.966053   65825 pod_ready.go:78] waiting up to 15m0s for pod "calico-node-b2mgn" in "kube-system" namespace to be "Ready" ...
	I0806 08:23:20.974540   65825 pod_ready.go:92] pod "calico-node-b2mgn" in "kube-system" namespace has status "Ready":"True"
	I0806 08:23:20.974562   65825 pod_ready.go:81] duration metric: took 8.501039ms for pod "calico-node-b2mgn" in "kube-system" namespace to be "Ready" ...
	I0806 08:23:20.974574   65825 pod_ready.go:78] waiting up to 15m0s for pod "coredns-7db6d8ff4d-b7wpx" in "kube-system" namespace to be "Ready" ...
	I0806 08:23:20.980351   65825 pod_ready.go:92] pod "coredns-7db6d8ff4d-b7wpx" in "kube-system" namespace has status "Ready":"True"
	I0806 08:23:20.980395   65825 pod_ready.go:81] duration metric: took 5.812925ms for pod "coredns-7db6d8ff4d-b7wpx" in "kube-system" namespace to be "Ready" ...
	I0806 08:23:20.980407   65825 pod_ready.go:78] waiting up to 15m0s for pod "etcd-calico-405163" in "kube-system" namespace to be "Ready" ...
	I0806 08:23:20.986465   65825 pod_ready.go:92] pod "etcd-calico-405163" in "kube-system" namespace has status "Ready":"True"
	I0806 08:23:20.986492   65825 pod_ready.go:81] duration metric: took 6.076608ms for pod "etcd-calico-405163" in "kube-system" namespace to be "Ready" ...
	I0806 08:23:20.986503   65825 pod_ready.go:78] waiting up to 15m0s for pod "kube-apiserver-calico-405163" in "kube-system" namespace to be "Ready" ...
	I0806 08:23:20.991602   65825 pod_ready.go:92] pod "kube-apiserver-calico-405163" in "kube-system" namespace has status "Ready":"True"
	I0806 08:23:20.991624   65825 pod_ready.go:81] duration metric: took 5.113341ms for pod "kube-apiserver-calico-405163" in "kube-system" namespace to be "Ready" ...
	I0806 08:23:20.991635   65825 pod_ready.go:78] waiting up to 15m0s for pod "kube-controller-manager-calico-405163" in "kube-system" namespace to be "Ready" ...
	I0806 08:23:19.150212   67989 out.go:204]   - Generating certificates and keys ...
	I0806 08:23:19.150376   67989 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0806 08:23:19.150502   67989 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0806 08:23:19.452977   67989 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0806 08:23:19.884901   67989 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0806 08:23:20.019607   67989 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0806 08:23:20.234285   67989 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0806 08:23:20.393881   67989 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0806 08:23:20.394481   67989 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [enable-default-cni-405163 localhost] and IPs [192.168.72.116 127.0.0.1 ::1]
	I0806 08:23:20.630270   67989 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0806 08:23:20.630635   67989 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [enable-default-cni-405163 localhost] and IPs [192.168.72.116 127.0.0.1 ::1]
	I0806 08:23:20.919586   67989 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0806 08:23:21.319931   67989 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0806 08:23:21.441996   67989 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0806 08:23:21.442196   67989 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0806 08:23:21.751775   67989 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0806 08:23:21.833638   67989 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0806 08:23:22.195315   67989 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0806 08:23:22.424289   67989 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0806 08:23:22.613003   67989 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0806 08:23:22.613780   67989 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0806 08:23:22.616456   67989 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0806 08:23:21.362109   65825 pod_ready.go:92] pod "kube-controller-manager-calico-405163" in "kube-system" namespace has status "Ready":"True"
	I0806 08:23:21.362134   65825 pod_ready.go:81] duration metric: took 370.491272ms for pod "kube-controller-manager-calico-405163" in "kube-system" namespace to be "Ready" ...
	I0806 08:23:21.362145   65825 pod_ready.go:78] waiting up to 15m0s for pod "kube-proxy-2frwq" in "kube-system" namespace to be "Ready" ...
	I0806 08:23:21.760608   65825 pod_ready.go:92] pod "kube-proxy-2frwq" in "kube-system" namespace has status "Ready":"True"
	I0806 08:23:21.760630   65825 pod_ready.go:81] duration metric: took 398.477347ms for pod "kube-proxy-2frwq" in "kube-system" namespace to be "Ready" ...
	I0806 08:23:21.760639   65825 pod_ready.go:78] waiting up to 15m0s for pod "kube-scheduler-calico-405163" in "kube-system" namespace to be "Ready" ...
	I0806 08:23:22.161773   65825 pod_ready.go:92] pod "kube-scheduler-calico-405163" in "kube-system" namespace has status "Ready":"True"
	I0806 08:23:22.161806   65825 pod_ready.go:81] duration metric: took 401.160304ms for pod "kube-scheduler-calico-405163" in "kube-system" namespace to be "Ready" ...
	I0806 08:23:22.161821   65825 pod_ready.go:38] duration metric: took 23.21712397s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0806 08:23:22.161838   65825 api_server.go:52] waiting for apiserver process to appear ...
	I0806 08:23:22.161912   65825 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:23:22.185888   65825 api_server.go:72] duration metric: took 34.335685559s to wait for apiserver process to appear ...
	I0806 08:23:22.185916   65825 api_server.go:88] waiting for apiserver healthz status ...
	I0806 08:23:22.185936   65825 api_server.go:253] Checking apiserver healthz at https://192.168.39.37:8443/healthz ...
	I0806 08:23:22.193450   65825 api_server.go:279] https://192.168.39.37:8443/healthz returned 200:
	ok
	I0806 08:23:22.194785   65825 api_server.go:141] control plane version: v1.30.3
	I0806 08:23:22.194855   65825 api_server.go:131] duration metric: took 8.930884ms to wait for apiserver health ...
	I0806 08:23:22.194876   65825 system_pods.go:43] waiting for kube-system pods to appear ...
	I0806 08:23:22.370438   65825 system_pods.go:59] 9 kube-system pods found
	I0806 08:23:22.370481   65825 system_pods.go:61] "calico-kube-controllers-77d59654f4-jlr9z" [2d43ee07-7a9a-42c9-8cb1-b2d1e10b0473] Running
	I0806 08:23:22.370488   65825 system_pods.go:61] "calico-node-b2mgn" [5064842a-a460-4023-83ab-138a9298bfb6] Running
	I0806 08:23:22.370492   65825 system_pods.go:61] "coredns-7db6d8ff4d-b7wpx" [ce11df34-250c-4a9b-b45c-724b6566a68f] Running
	I0806 08:23:22.370496   65825 system_pods.go:61] "etcd-calico-405163" [970ab1b6-983c-4347-9a45-2ad42745e887] Running
	I0806 08:23:22.370499   65825 system_pods.go:61] "kube-apiserver-calico-405163" [d1460ffe-bf35-4245-befb-d4fd12573c82] Running
	I0806 08:23:22.370502   65825 system_pods.go:61] "kube-controller-manager-calico-405163" [0dab0f2b-893a-4644-ae6e-7a0ccf09ea04] Running
	I0806 08:23:22.370505   65825 system_pods.go:61] "kube-proxy-2frwq" [7ed21b91-f539-43d7-a603-55a8af9933c4] Running
	I0806 08:23:22.370509   65825 system_pods.go:61] "kube-scheduler-calico-405163" [ad6911cb-bd13-465a-9f7c-28f5b9456ce0] Running
	I0806 08:23:22.370515   65825 system_pods.go:61] "storage-provisioner" [c55a83fe-05f5-4009-9368-087a847a0560] Running
	I0806 08:23:22.370522   65825 system_pods.go:74] duration metric: took 175.630555ms to wait for pod list to return data ...
	I0806 08:23:22.370532   65825 default_sa.go:34] waiting for default service account to be created ...
	I0806 08:23:22.559550   65825 default_sa.go:45] found service account: "default"
	I0806 08:23:22.559584   65825 default_sa.go:55] duration metric: took 189.035862ms for default service account to be created ...
	I0806 08:23:22.559595   65825 system_pods.go:116] waiting for k8s-apps to be running ...
	I0806 08:23:22.765984   65825 system_pods.go:86] 9 kube-system pods found
	I0806 08:23:22.766018   65825 system_pods.go:89] "calico-kube-controllers-77d59654f4-jlr9z" [2d43ee07-7a9a-42c9-8cb1-b2d1e10b0473] Running
	I0806 08:23:22.766027   65825 system_pods.go:89] "calico-node-b2mgn" [5064842a-a460-4023-83ab-138a9298bfb6] Running
	I0806 08:23:22.766034   65825 system_pods.go:89] "coredns-7db6d8ff4d-b7wpx" [ce11df34-250c-4a9b-b45c-724b6566a68f] Running
	I0806 08:23:22.766041   65825 system_pods.go:89] "etcd-calico-405163" [970ab1b6-983c-4347-9a45-2ad42745e887] Running
	I0806 08:23:22.766047   65825 system_pods.go:89] "kube-apiserver-calico-405163" [d1460ffe-bf35-4245-befb-d4fd12573c82] Running
	I0806 08:23:22.766054   65825 system_pods.go:89] "kube-controller-manager-calico-405163" [0dab0f2b-893a-4644-ae6e-7a0ccf09ea04] Running
	I0806 08:23:22.766060   65825 system_pods.go:89] "kube-proxy-2frwq" [7ed21b91-f539-43d7-a603-55a8af9933c4] Running
	I0806 08:23:22.766070   65825 system_pods.go:89] "kube-scheduler-calico-405163" [ad6911cb-bd13-465a-9f7c-28f5b9456ce0] Running
	I0806 08:23:22.766077   65825 system_pods.go:89] "storage-provisioner" [c55a83fe-05f5-4009-9368-087a847a0560] Running
	I0806 08:23:22.766086   65825 system_pods.go:126] duration metric: took 206.483367ms to wait for k8s-apps to be running ...
	I0806 08:23:22.766098   65825 system_svc.go:44] waiting for kubelet service to be running ....
	I0806 08:23:22.766151   65825 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0806 08:23:22.785469   65825 system_svc.go:56] duration metric: took 19.362794ms WaitForService to wait for kubelet
	I0806 08:23:22.785502   65825 kubeadm.go:582] duration metric: took 34.935304255s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0806 08:23:22.785526   65825 node_conditions.go:102] verifying NodePressure condition ...
	I0806 08:23:22.961755   65825 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0806 08:23:22.961787   65825 node_conditions.go:123] node cpu capacity is 2
	I0806 08:23:22.961802   65825 node_conditions.go:105] duration metric: took 176.268827ms to run NodePressure ...
	I0806 08:23:22.961816   65825 start.go:241] waiting for startup goroutines ...
	I0806 08:23:22.961827   65825 start.go:246] waiting for cluster config update ...
	I0806 08:23:22.961840   65825 start.go:255] writing updated cluster config ...
	I0806 08:23:22.962175   65825 ssh_runner.go:195] Run: rm -f paused
	I0806 08:23:23.022068   65825 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0806 08:23:23.024115   65825 out.go:177] * Done! kubectl is now configured to use "calico-405163" cluster and "default" namespace by default
	I0806 08:23:22.618209   67989 out.go:204]   - Booting up control plane ...
	I0806 08:23:22.618324   67989 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0806 08:23:22.618412   67989 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0806 08:23:22.618767   67989 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0806 08:23:22.635028   67989 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0806 08:23:22.637887   67989 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0806 08:23:22.638297   67989 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0806 08:23:22.787953   67989 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0806 08:23:22.788110   67989 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0806 08:23:23.289435   67989 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.036269ms
	I0806 08:23:23.289555   67989 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0806 08:23:20.554717   68099 main.go:141] libmachine: (kubernetes-upgrade-122219) Calling .GetIP
	I0806 08:23:20.557577   68099 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | domain kubernetes-upgrade-122219 has defined MAC address 52:54:00:1e:81:b8 in network mk-kubernetes-upgrade-122219
	I0806 08:23:20.557985   68099 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:81:b8", ip: ""} in network mk-kubernetes-upgrade-122219: {Iface:virbr3 ExpiryTime:2024-08-06 09:21:40 +0000 UTC Type:0 Mac:52:54:00:1e:81:b8 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:kubernetes-upgrade-122219 Clientid:01:52:54:00:1e:81:b8}
	I0806 08:23:20.558011   68099 main.go:141] libmachine: (kubernetes-upgrade-122219) DBG | domain kubernetes-upgrade-122219 has defined IP address 192.168.50.53 and MAC address 52:54:00:1e:81:b8 in network mk-kubernetes-upgrade-122219
	I0806 08:23:20.558267   68099 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0806 08:23:20.562704   68099 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-122219 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:kubernetes-upgrade-122219 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.53 Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0806 08:23:20.562820   68099 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime crio
	I0806 08:23:20.562874   68099 ssh_runner.go:195] Run: sudo crictl images --output json
	I0806 08:23:20.606494   68099 crio.go:514] all images are preloaded for cri-o runtime.
	I0806 08:23:20.606518   68099 crio.go:433] Images already preloaded, skipping extraction
	I0806 08:23:20.606575   68099 ssh_runner.go:195] Run: sudo crictl images --output json
	I0806 08:23:20.644983   68099 crio.go:514] all images are preloaded for cri-o runtime.
	I0806 08:23:20.645024   68099 cache_images.go:84] Images are preloaded, skipping loading
	I0806 08:23:20.645045   68099 kubeadm.go:934] updating node { 192.168.50.53 8443 v1.31.0-rc.0 crio true true} ...
	I0806 08:23:20.645192   68099 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-rc.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-122219 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.53
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-rc.0 ClusterName:kubernetes-upgrade-122219 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0806 08:23:20.645302   68099 ssh_runner.go:195] Run: crio config
	I0806 08:23:20.712145   68099 cni.go:84] Creating CNI manager for ""
	I0806 08:23:20.712187   68099 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0806 08:23:20.712205   68099 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0806 08:23:20.712246   68099 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.53 APIServerPort:8443 KubernetesVersion:v1.31.0-rc.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-122219 NodeName:kubernetes-upgrade-122219 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.53"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.53 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/c
a.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0806 08:23:20.712446   68099 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.53
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-122219"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.53
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.53"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-rc.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0806 08:23:20.712519   68099 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-rc.0
	I0806 08:23:20.723564   68099 binaries.go:44] Found k8s binaries, skipping transfer
	I0806 08:23:20.723639   68099 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0806 08:23:20.734069   68099 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (329 bytes)
	I0806 08:23:20.756288   68099 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0806 08:23:20.774811   68099 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2171 bytes)
	I0806 08:23:20.792835   68099 ssh_runner.go:195] Run: grep 192.168.50.53	control-plane.minikube.internal$ /etc/hosts
	I0806 08:23:20.797211   68099 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 08:23:20.951211   68099 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0806 08:23:20.966705   68099 certs.go:68] Setting up /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/kubernetes-upgrade-122219 for IP: 192.168.50.53
	I0806 08:23:20.966724   68099 certs.go:194] generating shared ca certs ...
	I0806 08:23:20.966742   68099 certs.go:226] acquiring lock for ca certs: {Name:mkf5c5aed91f4108054d9f2c742cad66c86afbed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 08:23:20.966899   68099 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19370-13057/.minikube/ca.key
	I0806 08:23:20.966939   68099 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.key
	I0806 08:23:20.966948   68099 certs.go:256] generating profile certs ...
	I0806 08:23:20.967025   68099 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/kubernetes-upgrade-122219/client.key
	I0806 08:23:20.967072   68099 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/kubernetes-upgrade-122219/apiserver.key.4ecc0233
	I0806 08:23:20.967111   68099 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/kubernetes-upgrade-122219/proxy-client.key
	I0806 08:23:20.967253   68099 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/20246.pem (1338 bytes)
	W0806 08:23:20.967283   68099 certs.go:480] ignoring /home/jenkins/minikube-integration/19370-13057/.minikube/certs/20246_empty.pem, impossibly tiny 0 bytes
	I0806 08:23:20.967293   68099 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca-key.pem (1675 bytes)
	I0806 08:23:20.967322   68099 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem (1078 bytes)
	I0806 08:23:20.967352   68099 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/cert.pem (1123 bytes)
	I0806 08:23:20.967376   68099 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/key.pem (1675 bytes)
	I0806 08:23:20.967424   68099 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem (1708 bytes)
	I0806 08:23:20.968107   68099 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0806 08:23:20.998035   68099 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0806 08:23:21.023803   68099 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0806 08:23:21.051714   68099 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0806 08:23:21.078742   68099 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/kubernetes-upgrade-122219/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0806 08:23:21.104803   68099 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/kubernetes-upgrade-122219/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0806 08:23:21.129302   68099 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/kubernetes-upgrade-122219/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0806 08:23:21.156783   68099 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/kubernetes-upgrade-122219/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0806 08:23:21.185573   68099 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0806 08:23:21.211717   68099 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/certs/20246.pem --> /usr/share/ca-certificates/20246.pem (1338 bytes)
	I0806 08:23:21.239804   68099 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem --> /usr/share/ca-certificates/202462.pem (1708 bytes)
	I0806 08:23:21.266631   68099 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0806 08:23:21.285633   68099 ssh_runner.go:195] Run: openssl version
	I0806 08:23:21.291844   68099 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0806 08:23:21.304305   68099 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0806 08:23:21.309094   68099 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  6 07:07 /usr/share/ca-certificates/minikubeCA.pem
	I0806 08:23:21.309163   68099 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0806 08:23:21.315628   68099 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0806 08:23:21.325697   68099 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20246.pem && ln -fs /usr/share/ca-certificates/20246.pem /etc/ssl/certs/20246.pem"
	I0806 08:23:21.338429   68099 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20246.pem
	I0806 08:23:21.344613   68099 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  6 07:17 /usr/share/ca-certificates/20246.pem
	I0806 08:23:21.344683   68099 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20246.pem
	I0806 08:23:21.350744   68099 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20246.pem /etc/ssl/certs/51391683.0"
	I0806 08:23:21.361538   68099 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202462.pem && ln -fs /usr/share/ca-certificates/202462.pem /etc/ssl/certs/202462.pem"
	I0806 08:23:21.373166   68099 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202462.pem
	I0806 08:23:21.377838   68099 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  6 07:17 /usr/share/ca-certificates/202462.pem
	I0806 08:23:21.377906   68099 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202462.pem
	I0806 08:23:21.383418   68099 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202462.pem /etc/ssl/certs/3ec20f2e.0"
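The openssl hash/symlink sequence above installs each CA into the system trust directory by pointing /etc/ssl/certs/<subject-hash>.0 at the certificate. A minimal Go sketch of just the linking step, using the path and hash value seen in the log (computing the OpenSSL subject hash itself is out of scope here):

// Minimal sketch of the "link CA into /etc/ssl/certs" step logged above.
// Paths and the hash-based link name are the ones from the log; this is
// only an illustration of the ln -fs behaviour, not minikube's code.
package main

import "os"

func main() {
	target := "/etc/ssl/certs/minikubeCA.pem" // certificate, from the log
	link := "/etc/ssl/certs/b5213941.0"       // <subject-hash>.0, from the log

	// Mirror `ln -fs`: drop any existing link, then recreate it.
	_ = os.Remove(link)
	if err := os.Symlink(target, link); err != nil {
		panic(err)
	}
}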
	I0806 08:23:21.393672   68099 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0806 08:23:21.398814   68099 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0806 08:23:21.407160   68099 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0806 08:23:21.414798   68099 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0806 08:23:21.421515   68099 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0806 08:23:21.427917   68099 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0806 08:23:21.434524   68099 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
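Each `openssl x509 -noout -checkend 86400` run above asks whether the certificate will still be valid 24 hours from now. A minimal Go equivalent of that check, assuming a PEM-encoded certificate on disk (the path below is one of the certificates named in the log):

// Minimal sketch: report whether a PEM certificate expires within the
// next 24 hours, roughly what `openssl x509 -noout -checkend 86400` does.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, d time.Duration) (bool, error) {
	raw, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// True if NotAfter falls inside the next d.
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}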
	I0806 08:23:21.442746   68099 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-122219 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersi
on:v1.31.0-rc.0 ClusterName:kubernetes-upgrade-122219 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.53 Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0
MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 08:23:21.442837   68099 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0806 08:23:21.442892   68099 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0806 08:23:21.495261   68099 cri.go:89] found id: "93ef0572a40cb9d1f5243aac54be5411cdeb1892cd88e9caece779acdc878aa1"
	I0806 08:23:21.495282   68099 cri.go:89] found id: "43d15a95c9bca157bc9f485a00b3835aeded080d99038b6b8c747427e79095e6"
	I0806 08:23:21.495286   68099 cri.go:89] found id: "6f61427af558462a8130e17d96d9342f7adeb37a37c0f28134c3358adbf96667"
	I0806 08:23:21.495294   68099 cri.go:89] found id: "f54d0bd883b037a4146b0f8ee52df02594b6e48da7903189c3a797c5a3231d27"
	I0806 08:23:21.495297   68099 cri.go:89] found id: "384eb8568b735417bf3f41cd800920e76bd2a32195b30d8de91a2c0599aef726"
	I0806 08:23:21.495300   68099 cri.go:89] found id: "6c03b6333726cb071cb8704222e9e4301fd48fa4ddc11214daa34813ec0897e6"
	I0806 08:23:21.495303   68099 cri.go:89] found id: "222a5b2dd8d175adbd1c6a4a49a8146caa9ec479be13829cfe8c11d78016bb03"
	I0806 08:23:21.495306   68099 cri.go:89] found id: "643a767aaafe9fb7493700c6d72a93221c782c57af1c4f494758971a425fbd2c"
	I0806 08:23:21.495308   68099 cri.go:89] found id: "8637af2c91315186249c624574144deaccdfe082fa46ded9846e732ae413bd18"
	I0806 08:23:21.495315   68099 cri.go:89] found id: ""
	I0806 08:23:21.495365   68099 ssh_runner.go:195] Run: sudo runc list -f json
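The `crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system` call above is what yields the "found id" list of kube-system container IDs. A minimal Go sketch that issues the same query and splits the output, assuming crictl is on PATH and the caller can sudo:

// Minimal sketch: run the same crictl query seen in the log and print
// the container IDs it returns, one per line.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	for _, id := range strings.Fields(string(out)) {
		fmt.Println("found id:", id)
	}
}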
	
	
	==> CRI-O <==
	Aug 06 08:23:32 kubernetes-upgrade-122219 crio[2365]: time="2024-08-06 08:23:32.670647439Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722932612670613667,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125243,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=686319a7-7e80-4659-8cfb-5ebecffd4d10 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 06 08:23:32 kubernetes-upgrade-122219 crio[2365]: time="2024-08-06 08:23:32.671430789Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=36efcbc5-afb8-41c2-93b1-8c076d6e1e31 name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 08:23:32 kubernetes-upgrade-122219 crio[2365]: time="2024-08-06 08:23:32.671507950Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=36efcbc5-afb8-41c2-93b1-8c076d6e1e31 name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 08:23:32 kubernetes-upgrade-122219 crio[2365]: time="2024-08-06 08:23:32.672150638Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5c32256384239655e196bc882c7a78fd7e3ad91458e46bad8987710f27b547ca,PodSandboxId:622c5866d79b02a54464a0a39dd38a94f6c77767985f913f3d57afbdd4cee8bc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722932609808608892,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-25nrp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 181437e7-d3e4-476c-8b18-3005f216f8bb,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4466a216d57fb778dc56bd89892fd2fdd301be479c3a51f7614b011b64c5fc78,PodSandboxId:413f59251d931c2b5479446ad7753ad1c2f908210bb77801032644003243597b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722932609701406457,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-qv9lm,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: da668d29-55c2-4aad-b0f2-cecb0657186f,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6305f4821ebc9d649bbd7aab01852a8c12423e3502d126cb767cf54931846abe,PodSandboxId:d54c701f3c2ff4a2e10895bb87cc697a9ca4835357b56912c64ccb893d4de518,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Stat
e:CONTAINER_RUNNING,CreatedAt:1722932608826280561,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92b8c20c-ee6b-4b26-97a6-6da541d58cbd,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5def9e85c4ad6ba610b0c60a59f128a7f918566ddb12ec6fb8b60ab9873c68be,PodSandboxId:0092ff3511e5aab646a1aedb2b8462da42cf36e422b07e45e2033600364deb43,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,State:CONTAINER_RUNNING,C
reatedAt:1722932608794419985,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bj4bw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a3b5ea9-7f4c-411e-906e-40cb7a30ac59,},Annotations:map[string]string{io.kubernetes.container.hash: 16faf14d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43167bae3f0cf7af41c4e839000ba2e0a6fb866bb6d6c39336df5ee50d63b91a,PodSandboxId:2959119f4f6ae4581cf1778651e94cf982f5c50da8749edca2631f6a1b7db346,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,State:CONTAINER_RUNNING,CreatedAt:17229326039
99537243,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-122219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b1828b38d3136a900fd81fce6ebdc41,},Annotations:map[string]string{io.kubernetes.container.hash: e1cacf85,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0434edcf261f42b2fa93a0dcd2baa5a2e258493f4fe2ed28efeda778357810d,PodSandboxId:6b12dcb207d0ec21ca2dd88c1d319ff129909c0afb5f48d7b0e00173c634598f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,State:CONTAINER_RUNNING,CreatedA
t:1722932603955869072,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-122219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93666924b8a20f36dc2bdfbaf6c4574c,},Annotations:map[string]string{io.kubernetes.container.hash: 380ae747,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94e74402e906ef816746b915aa320b0be25187014dd0cce77a93bfb5062bf5bb,PodSandboxId:1267e7a8cd784ec7d0aff261b482863d2e2f7227fa461dc3b276ac9abce3f7c7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,State:CONTAINER_RUNNING,CreatedAt:172
2932603900210653,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-122219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aec938766e39c1544b595bd7e4358f52,},Annotations:map[string]string{io.kubernetes.container.hash: f48bc30b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71506fa4e701bc474ee0bd735f743fc0c66a17df88463da1597830fa8b2d641c,PodSandboxId:6a44936baba26aed4e6d65555e04192583a15711b9eca1c961299a09b72ffe51,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:172293260384859091
5,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-122219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17b108ff09aa0e97357e9bbbb93c8af4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93ef0572a40cb9d1f5243aac54be5411cdeb1892cd88e9caece779acdc878aa1,PodSandboxId:afe41321b612974990fd43e6c99f72e2b2f8e416e1f09da9295a757b026ea84d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722932560555123690,Labels:map[string]s
tring{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92b8c20c-ee6b-4b26-97a6-6da541d58cbd,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43d15a95c9bca157bc9f485a00b3835aeded080d99038b6b8c747427e79095e6,PodSandboxId:8a7eb575fa5d8ef96589604d0e63def16a6014e6190d27ef51cf41a9d355da3b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,State:CONTAINER_EXITED,CreatedAt:1722932530150018498,Labels:map[string]string{io.kubernetes.co
ntainer.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bj4bw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a3b5ea9-7f4c-411e-906e-40cb7a30ac59,},Annotations:map[string]string{io.kubernetes.container.hash: 16faf14d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f61427af558462a8130e17d96d9342f7adeb37a37c0f28134c3358adbf96667,PodSandboxId:c76eede73ed72cacb7449a054cd7535d261b5eb3a3f6828699d7c05317e08bf1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722932530101351690,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.p
od.name: coredns-6f6b679f8f-25nrp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 181437e7-d3e4-476c-8b18-3005f216f8bb,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f54d0bd883b037a4146b0f8ee52df02594b6e48da7903189c3a797c5a3231d27,PodSandboxId:0fd7f71fcac34109b06ee8273ce33c4d72a4fa0b81bfd09323d21c33b1cb776b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cb
b01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722932530049886867,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-qv9lm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da668d29-55c2-4aad-b0f2-cecb0657186f,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c03b6333726cb071cb8704222e9e4301fd48fa4ddc11214daa34813ec0897e6,PodSandboxId:43d6a002df4407f9d64138bd5091b52cfe9d6e31e8749eb2575371706c8d70c0,Metadata:&ContainerMetadata{
Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1722932518694960058,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-122219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17b108ff09aa0e97357e9bbbb93c8af4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:222a5b2dd8d175adbd1c6a4a49a8146caa9ec479be13829cfe8c11d78016bb03,PodSandboxId:a3ea3b70c80a5bbb25cc4934584cf5f9f73b3a981a7fd4c0fa515f83e6c3a71a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt
:0,},Image:&ImageSpec{Image:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,State:CONTAINER_EXITED,CreatedAt:1722932518593591100,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-122219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b1828b38d3136a900fd81fce6ebdc41,},Annotations:map[string]string{io.kubernetes.container.hash: e1cacf85,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:643a767aaafe9fb7493700c6d72a93221c782c57af1c4f494758971a425fbd2c,PodSandboxId:f2e47d54493216386704ec0dc1d06ccbe830af93d193f21e7c89e65541222239,Metadata:&ContainerMetadata{Name:kube-sched
uler,Attempt:0,},Image:&ImageSpec{Image:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,State:CONTAINER_EXITED,CreatedAt:1722932518548380417,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-122219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aec938766e39c1544b595bd7e4358f52,},Annotations:map[string]string{io.kubernetes.container.hash: f48bc30b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8637af2c91315186249c624574144deaccdfe082fa46ded9846e732ae413bd18,PodSandboxId:2e8e38107257857a5375aecea41c3dc6cadb1c162260ab936c713e25af03cd9f,Metadata:&ContainerMetadata{Name:kube-apiserver,A
ttempt:0,},Image:&ImageSpec{Image:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,State:CONTAINER_EXITED,CreatedAt:1722932518544146485,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-122219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93666924b8a20f36dc2bdfbaf6c4574c,},Annotations:map[string]string{io.kubernetes.container.hash: 380ae747,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=36efcbc5-afb8-41c2-93b1-8c076d6e1e31 name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 08:23:32 kubernetes-upgrade-122219 crio[2365]: time="2024-08-06 08:23:32.722841765Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0745633b-42e3-4640-9ec4-e9bd3d7c8815 name=/runtime.v1.RuntimeService/Version
	Aug 06 08:23:32 kubernetes-upgrade-122219 crio[2365]: time="2024-08-06 08:23:32.722948552Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0745633b-42e3-4640-9ec4-e9bd3d7c8815 name=/runtime.v1.RuntimeService/Version
	Aug 06 08:23:32 kubernetes-upgrade-122219 crio[2365]: time="2024-08-06 08:23:32.725606102Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fe51b687-7901-4233-a424-1e7c8f5a40ac name=/runtime.v1.ImageService/ImageFsInfo
	Aug 06 08:23:32 kubernetes-upgrade-122219 crio[2365]: time="2024-08-06 08:23:32.726288996Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722932612726254780,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125243,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fe51b687-7901-4233-a424-1e7c8f5a40ac name=/runtime.v1.ImageService/ImageFsInfo
	Aug 06 08:23:32 kubernetes-upgrade-122219 crio[2365]: time="2024-08-06 08:23:32.727104711Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=046935d7-4289-40c1-88f0-045c4cabf3f9 name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 08:23:32 kubernetes-upgrade-122219 crio[2365]: time="2024-08-06 08:23:32.727260944Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=046935d7-4289-40c1-88f0-045c4cabf3f9 name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 08:23:32 kubernetes-upgrade-122219 crio[2365]: time="2024-08-06 08:23:32.727797727Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5c32256384239655e196bc882c7a78fd7e3ad91458e46bad8987710f27b547ca,PodSandboxId:622c5866d79b02a54464a0a39dd38a94f6c77767985f913f3d57afbdd4cee8bc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722932609808608892,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-25nrp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 181437e7-d3e4-476c-8b18-3005f216f8bb,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4466a216d57fb778dc56bd89892fd2fdd301be479c3a51f7614b011b64c5fc78,PodSandboxId:413f59251d931c2b5479446ad7753ad1c2f908210bb77801032644003243597b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722932609701406457,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-qv9lm,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: da668d29-55c2-4aad-b0f2-cecb0657186f,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6305f4821ebc9d649bbd7aab01852a8c12423e3502d126cb767cf54931846abe,PodSandboxId:d54c701f3c2ff4a2e10895bb87cc697a9ca4835357b56912c64ccb893d4de518,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Stat
e:CONTAINER_RUNNING,CreatedAt:1722932608826280561,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92b8c20c-ee6b-4b26-97a6-6da541d58cbd,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5def9e85c4ad6ba610b0c60a59f128a7f918566ddb12ec6fb8b60ab9873c68be,PodSandboxId:0092ff3511e5aab646a1aedb2b8462da42cf36e422b07e45e2033600364deb43,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,State:CONTAINER_RUNNING,C
reatedAt:1722932608794419985,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bj4bw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a3b5ea9-7f4c-411e-906e-40cb7a30ac59,},Annotations:map[string]string{io.kubernetes.container.hash: 16faf14d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43167bae3f0cf7af41c4e839000ba2e0a6fb866bb6d6c39336df5ee50d63b91a,PodSandboxId:2959119f4f6ae4581cf1778651e94cf982f5c50da8749edca2631f6a1b7db346,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,State:CONTAINER_RUNNING,CreatedAt:17229326039
99537243,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-122219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b1828b38d3136a900fd81fce6ebdc41,},Annotations:map[string]string{io.kubernetes.container.hash: e1cacf85,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0434edcf261f42b2fa93a0dcd2baa5a2e258493f4fe2ed28efeda778357810d,PodSandboxId:6b12dcb207d0ec21ca2dd88c1d319ff129909c0afb5f48d7b0e00173c634598f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,State:CONTAINER_RUNNING,CreatedA
t:1722932603955869072,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-122219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93666924b8a20f36dc2bdfbaf6c4574c,},Annotations:map[string]string{io.kubernetes.container.hash: 380ae747,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94e74402e906ef816746b915aa320b0be25187014dd0cce77a93bfb5062bf5bb,PodSandboxId:1267e7a8cd784ec7d0aff261b482863d2e2f7227fa461dc3b276ac9abce3f7c7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,State:CONTAINER_RUNNING,CreatedAt:172
2932603900210653,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-122219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aec938766e39c1544b595bd7e4358f52,},Annotations:map[string]string{io.kubernetes.container.hash: f48bc30b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71506fa4e701bc474ee0bd735f743fc0c66a17df88463da1597830fa8b2d641c,PodSandboxId:6a44936baba26aed4e6d65555e04192583a15711b9eca1c961299a09b72ffe51,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:172293260384859091
5,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-122219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17b108ff09aa0e97357e9bbbb93c8af4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93ef0572a40cb9d1f5243aac54be5411cdeb1892cd88e9caece779acdc878aa1,PodSandboxId:afe41321b612974990fd43e6c99f72e2b2f8e416e1f09da9295a757b026ea84d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722932560555123690,Labels:map[string]s
tring{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92b8c20c-ee6b-4b26-97a6-6da541d58cbd,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43d15a95c9bca157bc9f485a00b3835aeded080d99038b6b8c747427e79095e6,PodSandboxId:8a7eb575fa5d8ef96589604d0e63def16a6014e6190d27ef51cf41a9d355da3b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,State:CONTAINER_EXITED,CreatedAt:1722932530150018498,Labels:map[string]string{io.kubernetes.co
ntainer.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bj4bw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a3b5ea9-7f4c-411e-906e-40cb7a30ac59,},Annotations:map[string]string{io.kubernetes.container.hash: 16faf14d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f61427af558462a8130e17d96d9342f7adeb37a37c0f28134c3358adbf96667,PodSandboxId:c76eede73ed72cacb7449a054cd7535d261b5eb3a3f6828699d7c05317e08bf1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722932530101351690,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.p
od.name: coredns-6f6b679f8f-25nrp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 181437e7-d3e4-476c-8b18-3005f216f8bb,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f54d0bd883b037a4146b0f8ee52df02594b6e48da7903189c3a797c5a3231d27,PodSandboxId:0fd7f71fcac34109b06ee8273ce33c4d72a4fa0b81bfd09323d21c33b1cb776b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cb
b01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722932530049886867,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-qv9lm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da668d29-55c2-4aad-b0f2-cecb0657186f,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c03b6333726cb071cb8704222e9e4301fd48fa4ddc11214daa34813ec0897e6,PodSandboxId:43d6a002df4407f9d64138bd5091b52cfe9d6e31e8749eb2575371706c8d70c0,Metadata:&ContainerMetadata{
Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1722932518694960058,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-122219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17b108ff09aa0e97357e9bbbb93c8af4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:222a5b2dd8d175adbd1c6a4a49a8146caa9ec479be13829cfe8c11d78016bb03,PodSandboxId:a3ea3b70c80a5bbb25cc4934584cf5f9f73b3a981a7fd4c0fa515f83e6c3a71a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt
:0,},Image:&ImageSpec{Image:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,State:CONTAINER_EXITED,CreatedAt:1722932518593591100,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-122219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b1828b38d3136a900fd81fce6ebdc41,},Annotations:map[string]string{io.kubernetes.container.hash: e1cacf85,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:643a767aaafe9fb7493700c6d72a93221c782c57af1c4f494758971a425fbd2c,PodSandboxId:f2e47d54493216386704ec0dc1d06ccbe830af93d193f21e7c89e65541222239,Metadata:&ContainerMetadata{Name:kube-sched
uler,Attempt:0,},Image:&ImageSpec{Image:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,State:CONTAINER_EXITED,CreatedAt:1722932518548380417,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-122219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aec938766e39c1544b595bd7e4358f52,},Annotations:map[string]string{io.kubernetes.container.hash: f48bc30b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8637af2c91315186249c624574144deaccdfe082fa46ded9846e732ae413bd18,PodSandboxId:2e8e38107257857a5375aecea41c3dc6cadb1c162260ab936c713e25af03cd9f,Metadata:&ContainerMetadata{Name:kube-apiserver,A
ttempt:0,},Image:&ImageSpec{Image:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,State:CONTAINER_EXITED,CreatedAt:1722932518544146485,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-122219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93666924b8a20f36dc2bdfbaf6c4574c,},Annotations:map[string]string{io.kubernetes.container.hash: 380ae747,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=046935d7-4289-40c1-88f0-045c4cabf3f9 name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 08:23:32 kubernetes-upgrade-122219 crio[2365]: time="2024-08-06 08:23:32.775362710Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=98f68e2e-cfc6-476c-85a0-8a2df6d853c4 name=/runtime.v1.RuntimeService/Version
	Aug 06 08:23:32 kubernetes-upgrade-122219 crio[2365]: time="2024-08-06 08:23:32.775451868Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=98f68e2e-cfc6-476c-85a0-8a2df6d853c4 name=/runtime.v1.RuntimeService/Version
	Aug 06 08:23:32 kubernetes-upgrade-122219 crio[2365]: time="2024-08-06 08:23:32.776868828Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=35f35109-4873-4b9f-b61c-0c14487991c7 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 06 08:23:32 kubernetes-upgrade-122219 crio[2365]: time="2024-08-06 08:23:32.777830092Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722932612777797675,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125243,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=35f35109-4873-4b9f-b61c-0c14487991c7 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 06 08:23:32 kubernetes-upgrade-122219 crio[2365]: time="2024-08-06 08:23:32.778699422Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2fee1542-3848-4a30-aac1-1a94a9afb6ea name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 08:23:32 kubernetes-upgrade-122219 crio[2365]: time="2024-08-06 08:23:32.778907200Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2fee1542-3848-4a30-aac1-1a94a9afb6ea name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 08:23:32 kubernetes-upgrade-122219 crio[2365]: time="2024-08-06 08:23:32.779490390Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5c32256384239655e196bc882c7a78fd7e3ad91458e46bad8987710f27b547ca,PodSandboxId:622c5866d79b02a54464a0a39dd38a94f6c77767985f913f3d57afbdd4cee8bc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722932609808608892,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-25nrp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 181437e7-d3e4-476c-8b18-3005f216f8bb,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4466a216d57fb778dc56bd89892fd2fdd301be479c3a51f7614b011b64c5fc78,PodSandboxId:413f59251d931c2b5479446ad7753ad1c2f908210bb77801032644003243597b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722932609701406457,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-qv9lm,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: da668d29-55c2-4aad-b0f2-cecb0657186f,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6305f4821ebc9d649bbd7aab01852a8c12423e3502d126cb767cf54931846abe,PodSandboxId:d54c701f3c2ff4a2e10895bb87cc697a9ca4835357b56912c64ccb893d4de518,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Stat
e:CONTAINER_RUNNING,CreatedAt:1722932608826280561,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92b8c20c-ee6b-4b26-97a6-6da541d58cbd,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5def9e85c4ad6ba610b0c60a59f128a7f918566ddb12ec6fb8b60ab9873c68be,PodSandboxId:0092ff3511e5aab646a1aedb2b8462da42cf36e422b07e45e2033600364deb43,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,State:CONTAINER_RUNNING,C
reatedAt:1722932608794419985,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bj4bw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a3b5ea9-7f4c-411e-906e-40cb7a30ac59,},Annotations:map[string]string{io.kubernetes.container.hash: 16faf14d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43167bae3f0cf7af41c4e839000ba2e0a6fb866bb6d6c39336df5ee50d63b91a,PodSandboxId:2959119f4f6ae4581cf1778651e94cf982f5c50da8749edca2631f6a1b7db346,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,State:CONTAINER_RUNNING,CreatedAt:17229326039
99537243,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-122219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b1828b38d3136a900fd81fce6ebdc41,},Annotations:map[string]string{io.kubernetes.container.hash: e1cacf85,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0434edcf261f42b2fa93a0dcd2baa5a2e258493f4fe2ed28efeda778357810d,PodSandboxId:6b12dcb207d0ec21ca2dd88c1d319ff129909c0afb5f48d7b0e00173c634598f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,State:CONTAINER_RUNNING,CreatedA
t:1722932603955869072,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-122219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93666924b8a20f36dc2bdfbaf6c4574c,},Annotations:map[string]string{io.kubernetes.container.hash: 380ae747,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94e74402e906ef816746b915aa320b0be25187014dd0cce77a93bfb5062bf5bb,PodSandboxId:1267e7a8cd784ec7d0aff261b482863d2e2f7227fa461dc3b276ac9abce3f7c7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,State:CONTAINER_RUNNING,CreatedAt:172
2932603900210653,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-122219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aec938766e39c1544b595bd7e4358f52,},Annotations:map[string]string{io.kubernetes.container.hash: f48bc30b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71506fa4e701bc474ee0bd735f743fc0c66a17df88463da1597830fa8b2d641c,PodSandboxId:6a44936baba26aed4e6d65555e04192583a15711b9eca1c961299a09b72ffe51,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:172293260384859091
5,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-122219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17b108ff09aa0e97357e9bbbb93c8af4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93ef0572a40cb9d1f5243aac54be5411cdeb1892cd88e9caece779acdc878aa1,PodSandboxId:afe41321b612974990fd43e6c99f72e2b2f8e416e1f09da9295a757b026ea84d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722932560555123690,Labels:map[string]s
tring{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92b8c20c-ee6b-4b26-97a6-6da541d58cbd,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43d15a95c9bca157bc9f485a00b3835aeded080d99038b6b8c747427e79095e6,PodSandboxId:8a7eb575fa5d8ef96589604d0e63def16a6014e6190d27ef51cf41a9d355da3b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,State:CONTAINER_EXITED,CreatedAt:1722932530150018498,Labels:map[string]string{io.kubernetes.co
ntainer.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bj4bw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a3b5ea9-7f4c-411e-906e-40cb7a30ac59,},Annotations:map[string]string{io.kubernetes.container.hash: 16faf14d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f61427af558462a8130e17d96d9342f7adeb37a37c0f28134c3358adbf96667,PodSandboxId:c76eede73ed72cacb7449a054cd7535d261b5eb3a3f6828699d7c05317e08bf1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722932530101351690,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.p
od.name: coredns-6f6b679f8f-25nrp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 181437e7-d3e4-476c-8b18-3005f216f8bb,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f54d0bd883b037a4146b0f8ee52df02594b6e48da7903189c3a797c5a3231d27,PodSandboxId:0fd7f71fcac34109b06ee8273ce33c4d72a4fa0b81bfd09323d21c33b1cb776b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cb
b01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722932530049886867,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-qv9lm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da668d29-55c2-4aad-b0f2-cecb0657186f,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c03b6333726cb071cb8704222e9e4301fd48fa4ddc11214daa34813ec0897e6,PodSandboxId:43d6a002df4407f9d64138bd5091b52cfe9d6e31e8749eb2575371706c8d70c0,Metadata:&ContainerMetadata{
Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1722932518694960058,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-122219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17b108ff09aa0e97357e9bbbb93c8af4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:222a5b2dd8d175adbd1c6a4a49a8146caa9ec479be13829cfe8c11d78016bb03,PodSandboxId:a3ea3b70c80a5bbb25cc4934584cf5f9f73b3a981a7fd4c0fa515f83e6c3a71a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt
:0,},Image:&ImageSpec{Image:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,State:CONTAINER_EXITED,CreatedAt:1722932518593591100,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-122219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b1828b38d3136a900fd81fce6ebdc41,},Annotations:map[string]string{io.kubernetes.container.hash: e1cacf85,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:643a767aaafe9fb7493700c6d72a93221c782c57af1c4f494758971a425fbd2c,PodSandboxId:f2e47d54493216386704ec0dc1d06ccbe830af93d193f21e7c89e65541222239,Metadata:&ContainerMetadata{Name:kube-sched
uler,Attempt:0,},Image:&ImageSpec{Image:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,State:CONTAINER_EXITED,CreatedAt:1722932518548380417,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-122219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aec938766e39c1544b595bd7e4358f52,},Annotations:map[string]string{io.kubernetes.container.hash: f48bc30b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8637af2c91315186249c624574144deaccdfe082fa46ded9846e732ae413bd18,PodSandboxId:2e8e38107257857a5375aecea41c3dc6cadb1c162260ab936c713e25af03cd9f,Metadata:&ContainerMetadata{Name:kube-apiserver,A
ttempt:0,},Image:&ImageSpec{Image:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,State:CONTAINER_EXITED,CreatedAt:1722932518544146485,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-122219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93666924b8a20f36dc2bdfbaf6c4574c,},Annotations:map[string]string{io.kubernetes.container.hash: 380ae747,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2fee1542-3848-4a30-aac1-1a94a9afb6ea name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 08:23:32 kubernetes-upgrade-122219 crio[2365]: time="2024-08-06 08:23:32.818295191Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f84f7e72-543f-4e9e-bcfa-ad3fc8e86125 name=/runtime.v1.RuntimeService/Version
	Aug 06 08:23:32 kubernetes-upgrade-122219 crio[2365]: time="2024-08-06 08:23:32.818390926Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f84f7e72-543f-4e9e-bcfa-ad3fc8e86125 name=/runtime.v1.RuntimeService/Version
	Aug 06 08:23:32 kubernetes-upgrade-122219 crio[2365]: time="2024-08-06 08:23:32.819599398Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8ded13f2-5d6d-48f8-b158-11b987cb5c77 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 06 08:23:32 kubernetes-upgrade-122219 crio[2365]: time="2024-08-06 08:23:32.819980483Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722932612819956710,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125243,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8ded13f2-5d6d-48f8-b158-11b987cb5c77 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 06 08:23:32 kubernetes-upgrade-122219 crio[2365]: time="2024-08-06 08:23:32.820705959Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f0f0fcf9-c6ab-47b1-8509-61fc4605261e name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 08:23:32 kubernetes-upgrade-122219 crio[2365]: time="2024-08-06 08:23:32.820786634Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f0f0fcf9-c6ab-47b1-8509-61fc4605261e name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 08:23:32 kubernetes-upgrade-122219 crio[2365]: time="2024-08-06 08:23:32.821849247Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5c32256384239655e196bc882c7a78fd7e3ad91458e46bad8987710f27b547ca,PodSandboxId:622c5866d79b02a54464a0a39dd38a94f6c77767985f913f3d57afbdd4cee8bc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722932609808608892,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-25nrp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 181437e7-d3e4-476c-8b18-3005f216f8bb,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4466a216d57fb778dc56bd89892fd2fdd301be479c3a51f7614b011b64c5fc78,PodSandboxId:413f59251d931c2b5479446ad7753ad1c2f908210bb77801032644003243597b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722932609701406457,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-qv9lm,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: da668d29-55c2-4aad-b0f2-cecb0657186f,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6305f4821ebc9d649bbd7aab01852a8c12423e3502d126cb767cf54931846abe,PodSandboxId:d54c701f3c2ff4a2e10895bb87cc697a9ca4835357b56912c64ccb893d4de518,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Stat
e:CONTAINER_RUNNING,CreatedAt:1722932608826280561,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92b8c20c-ee6b-4b26-97a6-6da541d58cbd,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5def9e85c4ad6ba610b0c60a59f128a7f918566ddb12ec6fb8b60ab9873c68be,PodSandboxId:0092ff3511e5aab646a1aedb2b8462da42cf36e422b07e45e2033600364deb43,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,State:CONTAINER_RUNNING,C
reatedAt:1722932608794419985,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bj4bw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a3b5ea9-7f4c-411e-906e-40cb7a30ac59,},Annotations:map[string]string{io.kubernetes.container.hash: 16faf14d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43167bae3f0cf7af41c4e839000ba2e0a6fb866bb6d6c39336df5ee50d63b91a,PodSandboxId:2959119f4f6ae4581cf1778651e94cf982f5c50da8749edca2631f6a1b7db346,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,State:CONTAINER_RUNNING,CreatedAt:17229326039
99537243,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-122219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b1828b38d3136a900fd81fce6ebdc41,},Annotations:map[string]string{io.kubernetes.container.hash: e1cacf85,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0434edcf261f42b2fa93a0dcd2baa5a2e258493f4fe2ed28efeda778357810d,PodSandboxId:6b12dcb207d0ec21ca2dd88c1d319ff129909c0afb5f48d7b0e00173c634598f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,State:CONTAINER_RUNNING,CreatedA
t:1722932603955869072,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-122219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93666924b8a20f36dc2bdfbaf6c4574c,},Annotations:map[string]string{io.kubernetes.container.hash: 380ae747,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94e74402e906ef816746b915aa320b0be25187014dd0cce77a93bfb5062bf5bb,PodSandboxId:1267e7a8cd784ec7d0aff261b482863d2e2f7227fa461dc3b276ac9abce3f7c7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,State:CONTAINER_RUNNING,CreatedAt:172
2932603900210653,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-122219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aec938766e39c1544b595bd7e4358f52,},Annotations:map[string]string{io.kubernetes.container.hash: f48bc30b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71506fa4e701bc474ee0bd735f743fc0c66a17df88463da1597830fa8b2d641c,PodSandboxId:6a44936baba26aed4e6d65555e04192583a15711b9eca1c961299a09b72ffe51,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:172293260384859091
5,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-122219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17b108ff09aa0e97357e9bbbb93c8af4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93ef0572a40cb9d1f5243aac54be5411cdeb1892cd88e9caece779acdc878aa1,PodSandboxId:afe41321b612974990fd43e6c99f72e2b2f8e416e1f09da9295a757b026ea84d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722932560555123690,Labels:map[string]s
tring{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92b8c20c-ee6b-4b26-97a6-6da541d58cbd,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43d15a95c9bca157bc9f485a00b3835aeded080d99038b6b8c747427e79095e6,PodSandboxId:8a7eb575fa5d8ef96589604d0e63def16a6014e6190d27ef51cf41a9d355da3b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,State:CONTAINER_EXITED,CreatedAt:1722932530150018498,Labels:map[string]string{io.kubernetes.co
ntainer.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bj4bw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a3b5ea9-7f4c-411e-906e-40cb7a30ac59,},Annotations:map[string]string{io.kubernetes.container.hash: 16faf14d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f61427af558462a8130e17d96d9342f7adeb37a37c0f28134c3358adbf96667,PodSandboxId:c76eede73ed72cacb7449a054cd7535d261b5eb3a3f6828699d7c05317e08bf1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722932530101351690,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.p
od.name: coredns-6f6b679f8f-25nrp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 181437e7-d3e4-476c-8b18-3005f216f8bb,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f54d0bd883b037a4146b0f8ee52df02594b6e48da7903189c3a797c5a3231d27,PodSandboxId:0fd7f71fcac34109b06ee8273ce33c4d72a4fa0b81bfd09323d21c33b1cb776b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cb
b01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722932530049886867,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-qv9lm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da668d29-55c2-4aad-b0f2-cecb0657186f,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c03b6333726cb071cb8704222e9e4301fd48fa4ddc11214daa34813ec0897e6,PodSandboxId:43d6a002df4407f9d64138bd5091b52cfe9d6e31e8749eb2575371706c8d70c0,Metadata:&ContainerMetadata{
Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1722932518694960058,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-122219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17b108ff09aa0e97357e9bbbb93c8af4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:222a5b2dd8d175adbd1c6a4a49a8146caa9ec479be13829cfe8c11d78016bb03,PodSandboxId:a3ea3b70c80a5bbb25cc4934584cf5f9f73b3a981a7fd4c0fa515f83e6c3a71a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt
:0,},Image:&ImageSpec{Image:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,State:CONTAINER_EXITED,CreatedAt:1722932518593591100,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-122219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b1828b38d3136a900fd81fce6ebdc41,},Annotations:map[string]string{io.kubernetes.container.hash: e1cacf85,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:643a767aaafe9fb7493700c6d72a93221c782c57af1c4f494758971a425fbd2c,PodSandboxId:f2e47d54493216386704ec0dc1d06ccbe830af93d193f21e7c89e65541222239,Metadata:&ContainerMetadata{Name:kube-sched
uler,Attempt:0,},Image:&ImageSpec{Image:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,State:CONTAINER_EXITED,CreatedAt:1722932518548380417,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-122219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aec938766e39c1544b595bd7e4358f52,},Annotations:map[string]string{io.kubernetes.container.hash: f48bc30b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8637af2c91315186249c624574144deaccdfe082fa46ded9846e732ae413bd18,PodSandboxId:2e8e38107257857a5375aecea41c3dc6cadb1c162260ab936c713e25af03cd9f,Metadata:&ContainerMetadata{Name:kube-apiserver,A
ttempt:0,},Image:&ImageSpec{Image:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,State:CONTAINER_EXITED,CreatedAt:1722932518544146485,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-122219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93666924b8a20f36dc2bdfbaf6c4574c,},Annotations:map[string]string{io.kubernetes.container.hash: 380ae747,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f0f0fcf9-c6ab-47b1-8509-61fc4605261e name=/runtime.v1.RuntimeService/ListContainers
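	The CreatedAt fields in the ListContainers responses above are plain Unix timestamps in nanoseconds. A minimal Go sketch (the value is copied from the log above; this is illustrative only and not part of the test harness) converts one into a readable time:
	
	package main
	
	import (
		"fmt"
		"time"
	)
	
	func main() {
		// CreatedAt of the restarted coredns container from the log above,
		// expressed as nanoseconds since the Unix epoch.
		const createdAt int64 = 1722932609808608892
		fmt.Println(time.Unix(0, createdAt).UTC()) // 2024-08-06 08:23:29.808608892 +0000 UTC
	}
	
	That instant is a few seconds before the 08:23:32 log entries, which matches the "3 seconds ago" shown for the same container in the status table below.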
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	5c32256384239       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   3 seconds ago        Running             coredns                   1                   622c5866d79b0       coredns-6f6b679f8f-25nrp
	4466a216d57fb       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   3 seconds ago        Running             coredns                   1                   413f59251d931       coredns-6f6b679f8f-qv9lm
	6305f4821ebc9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   4 seconds ago        Running             storage-provisioner       2                   d54c701f3c2ff       storage-provisioner
	5def9e85c4ad6       41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318   4 seconds ago        Running             kube-proxy                1                   0092ff3511e5a       kube-proxy-bj4bw
	43167bae3f0cf       fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c   8 seconds ago        Running             kube-controller-manager   1                   2959119f4f6ae       kube-controller-manager-kubernetes-upgrade-122219
	d0434edcf261f       c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0   8 seconds ago        Running             kube-apiserver            1                   6b12dcb207d0e       kube-apiserver-kubernetes-upgrade-122219
	94e74402e906e       0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c   9 seconds ago        Running             kube-scheduler            1                   1267e7a8cd784       kube-scheduler-kubernetes-upgrade-122219
	71506fa4e701b       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   9 seconds ago        Running             etcd                      1                   6a44936baba26       etcd-kubernetes-upgrade-122219
	93ef0572a40cb       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   52 seconds ago       Exited              storage-provisioner       1                   afe41321b6129       storage-provisioner
	43d15a95c9bca       41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318   About a minute ago   Exited              kube-proxy                0                   8a7eb575fa5d8       kube-proxy-bj4bw
	6f61427af5584       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   About a minute ago   Exited              coredns                   0                   c76eede73ed72       coredns-6f6b679f8f-25nrp
	f54d0bd883b03       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   About a minute ago   Exited              coredns                   0                   0fd7f71fcac34       coredns-6f6b679f8f-qv9lm
	6c03b6333726c       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   About a minute ago   Exited              etcd                      0                   43d6a002df440       etcd-kubernetes-upgrade-122219
	222a5b2dd8d17       fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c   About a minute ago   Exited              kube-controller-manager   0                   a3ea3b70c80a5       kube-controller-manager-kubernetes-upgrade-122219
	643a767aaafe9       0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c   About a minute ago   Exited              kube-scheduler            0                   f2e47d5449321       kube-scheduler-kubernetes-upgrade-122219
	8637af2c91315       c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0   About a minute ago   Exited              kube-apiserver            0                   2e8e381072578       kube-apiserver-kubernetes-upgrade-122219
	
	
	==> coredns [4466a216d57fb778dc56bd89892fd2fdd301be479c3a51f7614b011b64c5fc78] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [5c32256384239655e196bc882c7a78fd7e3ad91458e46bad8987710f27b547ca] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [6f61427af558462a8130e17d96d9342f7adeb37a37c0f28134c3358adbf96667] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1952480824]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (06-Aug-2024 08:22:10.396) (total time: 30001ms):
	Trace[1952480824]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (08:22:40.397)
	Trace[1952480824]: [30.001071509s] [30.001071509s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[297655894]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (06-Aug-2024 08:22:10.394) (total time: 30003ms):
	Trace[297655894]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (08:22:40.396)
	Trace[297655894]: [30.003276124s] [30.003276124s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[67850187]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (06-Aug-2024 08:22:10.394) (total time: 30003ms):
	Trace[67850187]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (08:22:40.396)
	Trace[67850187]: [30.003753027s] [30.003753027s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [f54d0bd883b037a4146b0f8ee52df02594b6e48da7903189c3a797c5a3231d27] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1248352415]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (06-Aug-2024 08:22:10.396) (total time: 30001ms):
	Trace[1248352415]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (08:22:40.396)
	Trace[1248352415]: [30.001535258s] [30.001535258s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[448060894]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (06-Aug-2024 08:22:10.394) (total time: 30003ms):
	Trace[448060894]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30002ms (08:22:40.396)
	Trace[448060894]: [30.003639343s] [30.003639343s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1929045730]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (06-Aug-2024 08:22:10.394) (total time: 30003ms):
	Trace[1929045730]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30002ms (08:22:40.397)
	Trace[1929045730]: [30.003813088s] [30.003813088s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
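	Both exited coredns instances fail with "dial tcp 10.96.0.1:443: i/o timeout" while the control plane is restarted during the upgrade. A minimal Go sketch of the same kind of TCP probe (the service IP is taken from the log; it only reproduces the symptom when run from inside the cluster network and is not part of the test):
	
	package main
	
	import (
		"fmt"
		"net"
		"time"
	)
	
	func main() {
		// Probe the in-cluster kubernetes service IP that CoreDNS could not reach.
		conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 5*time.Second)
		if err != nil {
			fmt.Println("unreachable:", err) // mirrors the i/o timeout seen above
			return
		}
		defer conn.Close()
		fmt.Println("reachable")
	}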
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-122219
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-122219
	                    kubernetes.io/os=linux
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 06 Aug 2024 08:22:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-122219
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 06 Aug 2024 08:23:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 06 Aug 2024 08:23:27 +0000   Tue, 06 Aug 2024 08:21:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 06 Aug 2024 08:23:27 +0000   Tue, 06 Aug 2024 08:21:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 06 Aug 2024 08:23:27 +0000   Tue, 06 Aug 2024 08:21:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 06 Aug 2024 08:23:27 +0000   Tue, 06 Aug 2024 08:22:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.53
	  Hostname:    kubernetes-upgrade-122219
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 7b4a41b8f9d24d95b9cdb1b0446d942b
	  System UUID:                7b4a41b8-f9d2-4d95-b9cd-b1b0446d942b
	  Boot ID:                    8518ba29-d703-4a6f-abac-72fc6410889d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0-rc.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6f6b679f8f-25nrp                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     84s
	  kube-system                 coredns-6f6b679f8f-qv9lm                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     84s
	  kube-system                 etcd-kubernetes-upgrade-122219                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         86s
	  kube-system                 kube-apiserver-kubernetes-upgrade-122219             250m (12%)    0 (0%)      0 (0%)           0 (0%)         82s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-122219    200m (10%)    0 (0%)      0 (0%)           0 (0%)         82s
	  kube-system                 kube-proxy-bj4bw                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         85s
	  kube-system                 kube-scheduler-kubernetes-upgrade-122219             100m (5%)     0 (0%)      0 (0%)           0 (0%)         86s
	  kube-system                 storage-provisioner                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         85s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 3s                 kube-proxy       
	  Normal  Starting                 82s                kube-proxy       
	  Normal  NodeAllocatableEnforced  96s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    95s (x8 over 96s)  kubelet          Node kubernetes-upgrade-122219 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     95s (x7 over 96s)  kubelet          Node kubernetes-upgrade-122219 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  95s (x8 over 96s)  kubelet          Node kubernetes-upgrade-122219 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           85s                node-controller  Node kubernetes-upgrade-122219 event: Registered Node kubernetes-upgrade-122219 in Controller
	  Normal  Starting                 10s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10s (x7 over 10s)  kubelet          Node kubernetes-upgrade-122219 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10s (x6 over 10s)  kubelet          Node kubernetes-upgrade-122219 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10s (x6 over 10s)  kubelet          Node kubernetes-upgrade-122219 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3s                 node-controller  Node kubernetes-upgrade-122219 event: Registered Node kubernetes-upgrade-122219 in Controller
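	The 42% CPU figure under "Allocated resources" is the sum of the pod CPU requests (850m) over the node's 2-CPU capacity, truncated to a whole percent. A tiny Go sketch of that arithmetic (values copied from the tables above, purely illustrative):
	
	package main
	
	import "fmt"
	
	func main() {
		// CPU requests from the non-terminated pods table above, in millicores.
		requests := []int64{100, 100, 100, 250, 200, 0, 100, 0}
		var total int64
		for _, r := range requests {
			total += r
		}
		capacity := int64(2000) // node capacity: 2 CPUs = 2000m
		fmt.Printf("%dm (%d%%)\n", total, total*100/capacity) // 850m (42%)
	}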
	
	
	==> dmesg <==
	[  +1.697748] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000010] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.112513] systemd-fstab-generator[574]: Ignoring "noauto" option for root device
	[  +0.068740] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.071867] systemd-fstab-generator[586]: Ignoring "noauto" option for root device
	[  +0.183037] systemd-fstab-generator[600]: Ignoring "noauto" option for root device
	[  +0.154026] systemd-fstab-generator[613]: Ignoring "noauto" option for root device
	[  +0.344921] systemd-fstab-generator[642]: Ignoring "noauto" option for root device
	[  +4.472182] systemd-fstab-generator[741]: Ignoring "noauto" option for root device
	[  +0.059493] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.089455] systemd-fstab-generator[864]: Ignoring "noauto" option for root device
	[Aug 6 08:22] systemd-fstab-generator[1264]: Ignoring "noauto" option for root device
	[  +0.086183] kauditd_printk_skb: 97 callbacks suppressed
	[ +32.419097] kauditd_printk_skb: 103 callbacks suppressed
	[Aug 6 08:23] systemd-fstab-generator[2284]: Ignoring "noauto" option for root device
	[  +0.222161] systemd-fstab-generator[2296]: Ignoring "noauto" option for root device
	[  +0.279785] systemd-fstab-generator[2311]: Ignoring "noauto" option for root device
	[  +0.201113] systemd-fstab-generator[2323]: Ignoring "noauto" option for root device
	[  +0.397532] systemd-fstab-generator[2351]: Ignoring "noauto" option for root device
	[  +0.940806] systemd-fstab-generator[2506]: Ignoring "noauto" option for root device
	[  +2.101158] systemd-fstab-generator[2626]: Ignoring "noauto" option for root device
	[  +1.018105] kauditd_printk_skb: 165 callbacks suppressed
	[  +5.004756] kauditd_printk_skb: 42 callbacks suppressed
	[  +1.195048] systemd-fstab-generator[3542]: Ignoring "noauto" option for root device
	
	
	==> etcd [6c03b6333726cb071cb8704222e9e4301fd48fa4ddc11214daa34813ec0897e6] <==
	{"level":"info","ts":"2024-08-06T08:21:59.741751Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-06T08:21:59.749328Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.53:2379"}
	{"level":"info","ts":"2024-08-06T08:21:59.752213Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"1150b84c67dfd974","local-member-id":"40a79b093a7e4780","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-06T08:21:59.752425Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-06T08:21:59.752520Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-06T08:21:59.756975Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-06T08:21:59.760732Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-06T08:22:53.001318Z","caller":"traceutil/trace.go:171","msg":"trace[1351240255] transaction","detail":"{read_only:false; response_revision:419; number_of_response:1; }","duration":"239.969959ms","start":"2024-08-06T08:22:52.761318Z","end":"2024-08-06T08:22:53.001288Z","steps":["trace[1351240255] 'process raft request'  (duration: 239.808255ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-06T08:22:53.281143Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"188.478101ms","expected-duration":"100ms","prefix":"","request":"header:<ID:5152277569460033071 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-g3zoylnjo66khsc6od3rngoqpy\" mod_revision:401 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-g3zoylnjo66khsc6od3rngoqpy\" value_size:617 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-g3zoylnjo66khsc6od3rngoqpy\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-08-06T08:22:53.281631Z","caller":"traceutil/trace.go:171","msg":"trace[845332372] transaction","detail":"{read_only:false; response_revision:421; number_of_response:1; }","duration":"350.194543ms","start":"2024-08-06T08:22:52.931421Z","end":"2024-08-06T08:22:53.281616Z","steps":["trace[845332372] 'process raft request'  (duration: 350.120925ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-06T08:22:53.281831Z","caller":"traceutil/trace.go:171","msg":"trace[569436970] transaction","detail":"{read_only:false; response_revision:420; number_of_response:1; }","duration":"456.827229ms","start":"2024-08-06T08:22:52.824987Z","end":"2024-08-06T08:22:53.281814Z","steps":["trace[569436970] 'process raft request'  (duration: 266.770587ms)","trace[569436970] 'compare'  (duration: 188.391624ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-06T08:22:53.282382Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-06T08:22:52.824967Z","time spent":"457.368752ms","remote":"127.0.0.1:37116","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":690,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-g3zoylnjo66khsc6od3rngoqpy\" mod_revision:401 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-g3zoylnjo66khsc6od3rngoqpy\" value_size:617 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-g3zoylnjo66khsc6od3rngoqpy\" > >"}
	{"level":"warn","ts":"2024-08-06T08:22:53.282754Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-06T08:22:52.931402Z","time spent":"350.848089ms","remote":"127.0.0.1:37116","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":589,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/kubernetes-upgrade-122219\" mod_revision:403 > success:<request_put:<key:\"/registry/leases/kube-node-lease/kubernetes-upgrade-122219\" value_size:523 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/kubernetes-upgrade-122219\" > >"}
	{"level":"info","ts":"2024-08-06T08:22:53.789206Z","caller":"traceutil/trace.go:171","msg":"trace[862253277] transaction","detail":"{read_only:false; response_revision:422; number_of_response:1; }","duration":"136.632223ms","start":"2024-08-06T08:22:53.652556Z","end":"2024-08-06T08:22:53.789189Z","steps":["trace[862253277] 'process raft request'  (duration: 122.053694ms)","trace[862253277] 'compare'  (duration: 14.389539ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-06T08:23:07.210556Z","caller":"traceutil/trace.go:171","msg":"trace[1500092776] transaction","detail":"{read_only:false; response_revision:432; number_of_response:1; }","duration":"125.719589ms","start":"2024-08-06T08:23:07.084816Z","end":"2024-08-06T08:23:07.210536Z","steps":["trace[1500092776] 'process raft request'  (duration: 125.584168ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-06T08:23:11.841902Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-08-06T08:23:11.841975Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"kubernetes-upgrade-122219","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.53:2380"],"advertise-client-urls":["https://192.168.50.53:2379"]}
	{"level":"warn","ts":"2024-08-06T08:23:11.842195Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.50.53:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-06T08:23:11.842229Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.50.53:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-06T08:23:11.842352Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-06T08:23:11.842412Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-06T08:23:11.885356Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"40a79b093a7e4780","current-leader-member-id":"40a79b093a7e4780"}
	{"level":"info","ts":"2024-08-06T08:23:11.893647Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.50.53:2380"}
	{"level":"info","ts":"2024-08-06T08:23:11.893854Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.50.53:2380"}
	{"level":"info","ts":"2024-08-06T08:23:11.893909Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"kubernetes-upgrade-122219","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.53:2380"],"advertise-client-urls":["https://192.168.50.53:2379"]}
	
	
	==> etcd [71506fa4e701bc474ee0bd735f743fc0c66a17df88463da1597830fa8b2d641c] <==
	{"level":"info","ts":"2024-08-06T08:23:24.372155Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"40a79b093a7e4780 switched to configuration voters=(4658862803476432768)"}
	{"level":"info","ts":"2024-08-06T08:23:24.372298Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"1150b84c67dfd974","local-member-id":"40a79b093a7e4780","added-peer-id":"40a79b093a7e4780","added-peer-peer-urls":["https://192.168.50.53:2380"]}
	{"level":"info","ts":"2024-08-06T08:23:24.374236Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"1150b84c67dfd974","local-member-id":"40a79b093a7e4780","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-06T08:23:24.374299Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-06T08:23:24.384086Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.50.53:2380"}
	{"level":"info","ts":"2024-08-06T08:23:24.384124Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.50.53:2380"}
	{"level":"info","ts":"2024-08-06T08:23:24.384149Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-06T08:23:24.389515Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"40a79b093a7e4780","initial-advertise-peer-urls":["https://192.168.50.53:2380"],"listen-peer-urls":["https://192.168.50.53:2380"],"advertise-client-urls":["https://192.168.50.53:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.53:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-06T08:23:24.389604Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-06T08:23:25.330010Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"40a79b093a7e4780 is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-06T08:23:25.330123Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"40a79b093a7e4780 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-06T08:23:25.330161Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"40a79b093a7e4780 received MsgPreVoteResp from 40a79b093a7e4780 at term 2"}
	{"level":"info","ts":"2024-08-06T08:23:25.330176Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"40a79b093a7e4780 became candidate at term 3"}
	{"level":"info","ts":"2024-08-06T08:23:25.330183Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"40a79b093a7e4780 received MsgVoteResp from 40a79b093a7e4780 at term 3"}
	{"level":"info","ts":"2024-08-06T08:23:25.330192Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"40a79b093a7e4780 became leader at term 3"}
	{"level":"info","ts":"2024-08-06T08:23:25.330199Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 40a79b093a7e4780 elected leader 40a79b093a7e4780 at term 3"}
	{"level":"info","ts":"2024-08-06T08:23:25.341645Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"40a79b093a7e4780","local-member-attributes":"{Name:kubernetes-upgrade-122219 ClientURLs:[https://192.168.50.53:2379]}","request-path":"/0/members/40a79b093a7e4780/attributes","cluster-id":"1150b84c67dfd974","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-06T08:23:25.341708Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-06T08:23:25.342114Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-06T08:23:25.342855Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-06T08:23:25.350714Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-06T08:23:25.351729Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-06T08:23:25.353843Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.53:2379"}
	{"level":"info","ts":"2024-08-06T08:23:25.353091Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-06T08:23:25.362007Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 08:23:33 up 2 min,  0 users,  load average: 1.23, 0.52, 0.19
	Linux kubernetes-upgrade-122219 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [8637af2c91315186249c624574144deaccdfe082fa46ded9846e732ae413bd18] <==
	W0806 08:23:11.872896       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 08:23:11.872948       1 logging.go:55] [core] [Channel #19 SubChannel #20]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 08:23:11.873009       1 logging.go:55] [core] [Channel #22 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 08:23:11.873238       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 08:23:11.873385       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 08:23:11.873453       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 08:23:11.873492       1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 08:23:11.873527       1 logging.go:55] [core] [Channel #97 SubChannel #98]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 08:23:11.873584       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 08:23:11.873617       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 08:23:11.873650       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 08:23:11.873684       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 08:23:11.873723       1 logging.go:55] [core] [Channel #10 SubChannel #11]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 08:23:11.874024       1 logging.go:55] [core] [Channel #76 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 08:23:11.874160       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 08:23:11.874346       1 logging.go:55] [core] [Channel #25 SubChannel #26]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 08:23:11.874443       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 08:23:11.874506       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 08:23:11.874559       1 logging.go:55] [core] [Channel #43 SubChannel #44]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 08:23:11.874613       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 08:23:11.874667       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 08:23:11.874731       1 logging.go:55] [core] [Channel #49 SubChannel #50]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 08:23:11.874794       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 08:23:11.874924       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 08:23:11.875121       1 logging.go:55] [core] [Channel #175 SubChannel #176]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [d0434edcf261f42b2fa93a0dcd2baa5a2e258493f4fe2ed28efeda778357810d] <==
	I0806 08:23:27.178183       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0806 08:23:27.181531       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0806 08:23:27.214425       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0806 08:23:27.234278       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0806 08:23:27.234803       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0806 08:23:27.237740       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0806 08:23:27.237992       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0806 08:23:27.238074       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0806 08:23:27.238193       1 shared_informer.go:320] Caches are synced for configmaps
	I0806 08:23:27.238265       1 aggregator.go:171] initial CRD sync complete...
	I0806 08:23:27.238294       1 autoregister_controller.go:144] Starting autoregister controller
	I0806 08:23:27.238302       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0806 08:23:27.238309       1 cache.go:39] Caches are synced for autoregister controller
	I0806 08:23:27.242543       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0806 08:23:27.242638       1 policy_source.go:224] refreshing policies
	I0806 08:23:27.258242       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	E0806 08:23:27.258909       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0806 08:23:28.036210       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0806 08:23:29.498918       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0806 08:23:29.556796       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0806 08:23:29.762674       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0806 08:23:29.887457       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0806 08:23:29.922356       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0806 08:23:30.899564       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0806 08:23:30.946904       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [222a5b2dd8d175adbd1c6a4a49a8146caa9ec479be13829cfe8c11d78016bb03] <==
	I0806 08:22:08.539374       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="kubernetes-upgrade-122219"
	I0806 08:22:08.561023       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="kubernetes-upgrade-122219" podCIDRs=["10.244.0.0/24"]
	I0806 08:22:08.561304       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="kubernetes-upgrade-122219"
	I0806 08:22:08.561346       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="kubernetes-upgrade-122219"
	I0806 08:22:08.601544       1 shared_informer.go:320] Caches are synced for resource quota
	I0806 08:22:08.628277       1 shared_informer.go:320] Caches are synced for HPA
	I0806 08:22:08.633468       1 shared_informer.go:320] Caches are synced for namespace
	I0806 08:22:08.638143       1 shared_informer.go:320] Caches are synced for resource quota
	I0806 08:22:08.680562       1 shared_informer.go:320] Caches are synced for service account
	I0806 08:22:08.728818       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="kubernetes-upgrade-122219"
	I0806 08:22:09.091334       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="kubernetes-upgrade-122219"
	I0806 08:22:09.096845       1 shared_informer.go:320] Caches are synced for garbage collector
	I0806 08:22:09.127444       1 shared_informer.go:320] Caches are synced for garbage collector
	I0806 08:22:09.127483       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0806 08:22:09.371586       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="463.366339ms"
	I0806 08:22:09.405463       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="33.808297ms"
	I0806 08:22:09.406374       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="58.693µs"
	I0806 08:22:09.428854       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="85.145µs"
	I0806 08:22:10.453492       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="57.536µs"
	I0806 08:22:10.519689       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="101.255µs"
	I0806 08:22:12.215212       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="kubernetes-upgrade-122219"
	I0806 08:22:49.700882       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="20.493297ms"
	I0806 08:22:49.701515       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="520.96µs"
	I0806 08:22:49.736180       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="25.420096ms"
	I0806 08:22:49.736358       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="108.638µs"
	
	
	==> kube-controller-manager [43167bae3f0cf7af41c4e839000ba2e0a6fb866bb6d6c39336df5ee50d63b91a] <==
	I0806 08:23:30.513186       1 shared_informer.go:320] Caches are synced for deployment
	I0806 08:23:30.514946       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0806 08:23:30.517866       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0806 08:23:30.520344       1 shared_informer.go:320] Caches are synced for job
	I0806 08:23:30.536146       1 shared_informer.go:320] Caches are synced for stateful set
	I0806 08:23:30.539547       1 shared_informer.go:320] Caches are synced for GC
	I0806 08:23:30.539624       1 shared_informer.go:320] Caches are synced for persistent volume
	I0806 08:23:30.539786       1 shared_informer.go:320] Caches are synced for cronjob
	I0806 08:23:30.539874       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0806 08:23:30.554157       1 shared_informer.go:320] Caches are synced for service account
	I0806 08:23:30.555025       1 shared_informer.go:320] Caches are synced for TTL
	I0806 08:23:30.558003       1 shared_informer.go:320] Caches are synced for disruption
	I0806 08:23:30.592537       1 shared_informer.go:320] Caches are synced for namespace
	I0806 08:23:30.689113       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0806 08:23:30.716470       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="kubernetes-upgrade-122219"
	I0806 08:23:30.718308       1 shared_informer.go:320] Caches are synced for resource quota
	I0806 08:23:30.714015       1 shared_informer.go:320] Caches are synced for endpoint
	I0806 08:23:30.732711       1 shared_informer.go:320] Caches are synced for resource quota
	I0806 08:23:30.738160       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="222.954598ms"
	I0806 08:23:30.740612       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0806 08:23:30.900245       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="162.0221ms"
	I0806 08:23:30.900595       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="141.221µs"
	I0806 08:23:31.133451       1 shared_informer.go:320] Caches are synced for garbage collector
	I0806 08:23:31.133488       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0806 08:23:31.182395       1 shared_informer.go:320] Caches are synced for garbage collector
	
	
	==> kube-proxy [43d15a95c9bca157bc9f485a00b3835aeded080d99038b6b8c747427e79095e6] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0806 08:22:10.548140       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0806 08:22:10.567004       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.53"]
	E0806 08:22:10.567233       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0806 08:22:10.604875       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0806 08:22:10.604967       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0806 08:22:10.605015       1 server_linux.go:169] "Using iptables Proxier"
	I0806 08:22:10.607631       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0806 08:22:10.607934       1 server.go:483] "Version info" version="v1.31.0-rc.0"
	I0806 08:22:10.608203       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0806 08:22:10.609895       1 config.go:197] "Starting service config controller"
	I0806 08:22:10.609957       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0806 08:22:10.609990       1 config.go:104] "Starting endpoint slice config controller"
	I0806 08:22:10.610006       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0806 08:22:10.610584       1 config.go:326] "Starting node config controller"
	I0806 08:22:10.610623       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0806 08:22:10.710424       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0806 08:22:10.710420       1 shared_informer.go:320] Caches are synced for service config
	I0806 08:22:10.711001       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [5def9e85c4ad6ba610b0c60a59f128a7f918566ddb12ec6fb8b60ab9873c68be] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0806 08:23:29.220141       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0806 08:23:29.262695       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.53"]
	E0806 08:23:29.262787       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0806 08:23:29.365659       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0806 08:23:29.365857       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0806 08:23:29.365979       1 server_linux.go:169] "Using iptables Proxier"
	I0806 08:23:29.391906       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0806 08:23:29.392601       1 server.go:483] "Version info" version="v1.31.0-rc.0"
	I0806 08:23:29.393168       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0806 08:23:29.396524       1 config.go:197] "Starting service config controller"
	I0806 08:23:29.396564       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0806 08:23:29.396592       1 config.go:104] "Starting endpoint slice config controller"
	I0806 08:23:29.396598       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0806 08:23:29.397337       1 config.go:326] "Starting node config controller"
	I0806 08:23:29.397347       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0806 08:23:29.503466       1 shared_informer.go:320] Caches are synced for node config
	I0806 08:23:29.503520       1 shared_informer.go:320] Caches are synced for service config
	I0806 08:23:29.503553       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [643a767aaafe9fb7493700c6d72a93221c782c57af1c4f494758971a425fbd2c] <==
	E0806 08:22:01.792773       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0806 08:22:01.792905       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0806 08:22:01.792968       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0806 08:22:01.793470       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0806 08:22:01.793517       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0806 08:22:02.645898       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0806 08:22:02.646713       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0806 08:22:02.763241       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0806 08:22:02.763682       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0806 08:22:02.789136       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0806 08:22:02.789270       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0806 08:22:02.843513       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0806 08:22:02.843826       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0806 08:22:02.852471       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0806 08:22:02.852657       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0806 08:22:02.911385       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0806 08:22:02.911485       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0806 08:22:02.991359       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0806 08:22:02.991414       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0806 08:22:03.065617       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0806 08:22:03.065667       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0806 08:22:03.119903       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0806 08:22:03.119996       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0806 08:22:04.382313       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0806 08:23:11.852644       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [94e74402e906ef816746b915aa320b0be25187014dd0cce77a93bfb5062bf5bb] <==
	I0806 08:23:25.009078       1 serving.go:386] Generated self-signed cert in-memory
	W0806 08:23:27.097237       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0806 08:23:27.097299       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0806 08:23:27.097313       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0806 08:23:27.097334       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0806 08:23:27.189177       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0-rc.0"
	I0806 08:23:27.189293       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0806 08:23:27.199146       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0806 08:23:27.199287       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0806 08:23:27.202541       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0806 08:23:27.202737       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0806 08:23:27.300348       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 06 08:23:24 kubernetes-upgrade-122219 kubelet[2633]: I0806 08:23:24.014519    2633 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-122219"
	Aug 06 08:23:24 kubernetes-upgrade-122219 kubelet[2633]: E0806 08:23:24.016699    2633 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.50.53:8443: connect: connection refused" node="kubernetes-upgrade-122219"
	Aug 06 08:23:24 kubernetes-upgrade-122219 kubelet[2633]: W0806 08:23:24.107772    2633 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.50.53:8443: connect: connection refused
	Aug 06 08:23:24 kubernetes-upgrade-122219 kubelet[2633]: E0806 08:23:24.107931    2633 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.50.53:8443: connect: connection refused" logger="UnhandledError"
	Aug 06 08:23:24 kubernetes-upgrade-122219 kubelet[2633]: W0806 08:23:24.189383    2633 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.50.53:8443: connect: connection refused
	Aug 06 08:23:24 kubernetes-upgrade-122219 kubelet[2633]: E0806 08:23:24.189487    2633 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.50.53:8443: connect: connection refused" logger="UnhandledError"
	Aug 06 08:23:24 kubernetes-upgrade-122219 kubelet[2633]: W0806 08:23:24.492351    2633 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes-upgrade-122219&limit=500&resourceVersion=0": dial tcp 192.168.50.53:8443: connect: connection refused
	Aug 06 08:23:24 kubernetes-upgrade-122219 kubelet[2633]: E0806 08:23:24.492564    2633 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes-upgrade-122219&limit=500&resourceVersion=0\": dial tcp 192.168.50.53:8443: connect: connection refused" logger="UnhandledError"
	Aug 06 08:23:24 kubernetes-upgrade-122219 kubelet[2633]: I0806 08:23:24.817898    2633 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-122219"
	Aug 06 08:23:27 kubernetes-upgrade-122219 kubelet[2633]: I0806 08:23:27.299107    2633 kubelet_node_status.go:111] "Node was previously registered" node="kubernetes-upgrade-122219"
	Aug 06 08:23:27 kubernetes-upgrade-122219 kubelet[2633]: I0806 08:23:27.299645    2633 kubelet_node_status.go:75] "Successfully registered node" node="kubernetes-upgrade-122219"
	Aug 06 08:23:27 kubernetes-upgrade-122219 kubelet[2633]: E0806 08:23:27.299714    2633 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"kubernetes-upgrade-122219\": node \"kubernetes-upgrade-122219\" not found"
	Aug 06 08:23:27 kubernetes-upgrade-122219 kubelet[2633]: I0806 08:23:27.311562    2633 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Aug 06 08:23:27 kubernetes-upgrade-122219 kubelet[2633]: I0806 08:23:27.312915    2633 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Aug 06 08:23:27 kubernetes-upgrade-122219 kubelet[2633]: E0806 08:23:27.327443    2633 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"kubernetes-upgrade-122219\" not found"
	Aug 06 08:23:27 kubernetes-upgrade-122219 kubelet[2633]: E0806 08:23:27.428653    2633 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"kubernetes-upgrade-122219\" not found"
	Aug 06 08:23:27 kubernetes-upgrade-122219 kubelet[2633]: E0806 08:23:27.529906    2633 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"kubernetes-upgrade-122219\" not found"
	Aug 06 08:23:27 kubernetes-upgrade-122219 kubelet[2633]: E0806 08:23:27.631124    2633 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"kubernetes-upgrade-122219\" not found"
	Aug 06 08:23:28 kubernetes-upgrade-122219 kubelet[2633]: I0806 08:23:28.175390    2633 apiserver.go:52] "Watching apiserver"
	Aug 06 08:23:28 kubernetes-upgrade-122219 kubelet[2633]: I0806 08:23:28.209642    2633 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Aug 06 08:23:28 kubernetes-upgrade-122219 kubelet[2633]: I0806 08:23:28.256239    2633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4a3b5ea9-7f4c-411e-906e-40cb7a30ac59-xtables-lock\") pod \"kube-proxy-bj4bw\" (UID: \"4a3b5ea9-7f4c-411e-906e-40cb7a30ac59\") " pod="kube-system/kube-proxy-bj4bw"
	Aug 06 08:23:28 kubernetes-upgrade-122219 kubelet[2633]: I0806 08:23:28.256307    2633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4a3b5ea9-7f4c-411e-906e-40cb7a30ac59-lib-modules\") pod \"kube-proxy-bj4bw\" (UID: \"4a3b5ea9-7f4c-411e-906e-40cb7a30ac59\") " pod="kube-system/kube-proxy-bj4bw"
	Aug 06 08:23:28 kubernetes-upgrade-122219 kubelet[2633]: I0806 08:23:28.256334    2633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/92b8c20c-ee6b-4b26-97a6-6da541d58cbd-tmp\") pod \"storage-provisioner\" (UID: \"92b8c20c-ee6b-4b26-97a6-6da541d58cbd\") " pod="kube-system/storage-provisioner"
	Aug 06 08:23:33 kubernetes-upgrade-122219 kubelet[2633]: E0806 08:23:33.300981    2633 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722932613300609505,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125243,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 06 08:23:33 kubernetes-upgrade-122219 kubelet[2633]: E0806 08:23:33.301102    2633 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722932613300609505,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125243,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [6305f4821ebc9d649bbd7aab01852a8c12423e3502d126cb767cf54931846abe] <==
	I0806 08:23:29.048774       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0806 08:23:29.088298       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0806 08:23:29.088357       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	
	
	==> storage-provisioner [93ef0572a40cb9d1f5243aac54be5411cdeb1892cd88e9caece779acdc878aa1] <==
	I0806 08:22:40.648722       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0806 08:22:40.665513       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0806 08:22:40.665582       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0806 08:22:40.675259       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0806 08:22:40.675561       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-122219_64304cd5-124b-4e3d-ad1a-e907fc9c59c1!
	I0806 08:22:40.679164       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1b6716e8-a217-4d3b-a6b3-03aff3dfb863", APIVersion:"v1", ResourceVersion:"398", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' kubernetes-upgrade-122219_64304cd5-124b-4e3d-ad1a-e907fc9c59c1 became leader
	I0806 08:22:40.776599       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-122219_64304cd5-124b-4e3d-ad1a-e907fc9c59c1!
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0806 08:23:32.288556   69036 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19370-13057/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-122219 -n kubernetes-upgrade-122219
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-122219 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestKubernetesUpgrade FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "kubernetes-upgrade-122219" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-122219
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-122219: (1.110677512s)
--- FAIL: TestKubernetesUpgrade (464.53s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (283.66s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-409903 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-409903 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (4m43.391726425s)

                                                
                                                
-- stdout --
	* [old-k8s-version-409903] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19370
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19370-13057/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19370-13057/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "old-k8s-version-409903" primary control-plane node in "old-k8s-version-409903" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0806 08:24:13.431298   72416 out.go:291] Setting OutFile to fd 1 ...
	I0806 08:24:13.431567   72416 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 08:24:13.431577   72416 out.go:304] Setting ErrFile to fd 2...
	I0806 08:24:13.431582   72416 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 08:24:13.431759   72416 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19370-13057/.minikube/bin
	I0806 08:24:13.432321   72416 out.go:298] Setting JSON to false
	I0806 08:24:13.433371   72416 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":7599,"bootTime":1722925054,"procs":299,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0806 08:24:13.433429   72416 start.go:139] virtualization: kvm guest
	I0806 08:24:13.435777   72416 out.go:177] * [old-k8s-version-409903] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0806 08:24:13.437109   72416 out.go:177]   - MINIKUBE_LOCATION=19370
	I0806 08:24:13.437113   72416 notify.go:220] Checking for updates...
	I0806 08:24:13.439916   72416 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0806 08:24:13.441244   72416 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19370-13057/kubeconfig
	I0806 08:24:13.442553   72416 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19370-13057/.minikube
	I0806 08:24:13.443899   72416 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0806 08:24:13.445183   72416 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0806 08:24:13.446867   72416 config.go:182] Loaded profile config "bridge-405163": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0806 08:24:13.446984   72416 config.go:182] Loaded profile config "enable-default-cni-405163": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0806 08:24:13.447077   72416 config.go:182] Loaded profile config "flannel-405163": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0806 08:24:13.447263   72416 driver.go:392] Setting default libvirt URI to qemu:///system
	I0806 08:24:13.486019   72416 out.go:177] * Using the kvm2 driver based on user configuration
	I0806 08:24:13.487468   72416 start.go:297] selected driver: kvm2
	I0806 08:24:13.487514   72416 start.go:901] validating driver "kvm2" against <nil>
	I0806 08:24:13.487531   72416 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0806 08:24:13.488566   72416 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 08:24:13.488674   72416 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19370-13057/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0806 08:24:13.505741   72416 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0806 08:24:13.505790   72416 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0806 08:24:13.506060   72416 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0806 08:24:13.506090   72416 cni.go:84] Creating CNI manager for ""
	I0806 08:24:13.506098   72416 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0806 08:24:13.506105   72416 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0806 08:24:13.506173   72416 start.go:340] cluster config:
	{Name:old-k8s-version-409903 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-409903 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 08:24:13.506294   72416 iso.go:125] acquiring lock: {Name:mke58d71de28babe305c5eb0c31c274e9713bb3f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 08:24:13.508095   72416 out.go:177] * Starting "old-k8s-version-409903" primary control-plane node in "old-k8s-version-409903" cluster
	I0806 08:24:13.509259   72416 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0806 08:24:13.509316   72416 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19370-13057/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0806 08:24:13.509328   72416 cache.go:56] Caching tarball of preloaded images
	I0806 08:24:13.509423   72416 preload.go:172] Found /home/jenkins/minikube-integration/19370-13057/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0806 08:24:13.509435   72416 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0806 08:24:13.509546   72416 profile.go:143] Saving config to /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/old-k8s-version-409903/config.json ...
	I0806 08:24:13.509570   72416 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/old-k8s-version-409903/config.json: {Name:mkbe760d1b93576cb27d5a6eb2f2f4bafa95a22f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 08:24:13.509716   72416 start.go:360] acquireMachinesLock for old-k8s-version-409903: {Name:mkc602ac9c8cc3360dcc0fcb8104303837aaab61 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0806 08:24:25.541431   72416 start.go:364] duration metric: took 12.031684087s to acquireMachinesLock for "old-k8s-version-409903"
	I0806 08:24:25.541494   72416 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-409903 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-409903 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0806 08:24:25.541614   72416 start.go:125] createHost starting for "" (driver="kvm2")
	I0806 08:24:25.543492   72416 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0806 08:24:25.543677   72416 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:24:25.543733   72416 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:24:25.563268   72416 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39027
	I0806 08:24:25.563704   72416 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:24:25.564339   72416 main.go:141] libmachine: Using API Version  1
	I0806 08:24:25.564385   72416 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:24:25.564839   72416 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:24:25.565060   72416 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetMachineName
	I0806 08:24:25.565234   72416 main.go:141] libmachine: (old-k8s-version-409903) Calling .DriverName
	I0806 08:24:25.565431   72416 start.go:159] libmachine.API.Create for "old-k8s-version-409903" (driver="kvm2")
	I0806 08:24:25.565459   72416 client.go:168] LocalClient.Create starting
	I0806 08:24:25.565497   72416 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem
	I0806 08:24:25.565539   72416 main.go:141] libmachine: Decoding PEM data...
	I0806 08:24:25.565559   72416 main.go:141] libmachine: Parsing certificate...
	I0806 08:24:25.565640   72416 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19370-13057/.minikube/certs/cert.pem
	I0806 08:24:25.565666   72416 main.go:141] libmachine: Decoding PEM data...
	I0806 08:24:25.565680   72416 main.go:141] libmachine: Parsing certificate...
	I0806 08:24:25.565706   72416 main.go:141] libmachine: Running pre-create checks...
	I0806 08:24:25.565718   72416 main.go:141] libmachine: (old-k8s-version-409903) Calling .PreCreateCheck
	I0806 08:24:25.566131   72416 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetConfigRaw
	I0806 08:24:25.566593   72416 main.go:141] libmachine: Creating machine...
	I0806 08:24:25.566608   72416 main.go:141] libmachine: (old-k8s-version-409903) Calling .Create
	I0806 08:24:25.566758   72416 main.go:141] libmachine: (old-k8s-version-409903) Creating KVM machine...
	I0806 08:24:25.568110   72416 main.go:141] libmachine: (old-k8s-version-409903) DBG | found existing default KVM network
	I0806 08:24:25.569799   72416 main.go:141] libmachine: (old-k8s-version-409903) DBG | I0806 08:24:25.569647   72505 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:90:bf:e6} reservation:<nil>}
	I0806 08:24:25.570646   72416 main.go:141] libmachine: (old-k8s-version-409903) DBG | I0806 08:24:25.570562   72505 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:fb:6f:0c} reservation:<nil>}
	I0806 08:24:25.571848   72416 main.go:141] libmachine: (old-k8s-version-409903) DBG | I0806 08:24:25.571782   72505 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00028b330}
	I0806 08:24:25.571868   72416 main.go:141] libmachine: (old-k8s-version-409903) DBG | created network xml: 
	I0806 08:24:25.571878   72416 main.go:141] libmachine: (old-k8s-version-409903) DBG | <network>
	I0806 08:24:25.571889   72416 main.go:141] libmachine: (old-k8s-version-409903) DBG |   <name>mk-old-k8s-version-409903</name>
	I0806 08:24:25.571905   72416 main.go:141] libmachine: (old-k8s-version-409903) DBG |   <dns enable='no'/>
	I0806 08:24:25.571917   72416 main.go:141] libmachine: (old-k8s-version-409903) DBG |   
	I0806 08:24:25.571930   72416 main.go:141] libmachine: (old-k8s-version-409903) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I0806 08:24:25.571945   72416 main.go:141] libmachine: (old-k8s-version-409903) DBG |     <dhcp>
	I0806 08:24:25.571963   72416 main.go:141] libmachine: (old-k8s-version-409903) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I0806 08:24:25.571972   72416 main.go:141] libmachine: (old-k8s-version-409903) DBG |     </dhcp>
	I0806 08:24:25.571978   72416 main.go:141] libmachine: (old-k8s-version-409903) DBG |   </ip>
	I0806 08:24:25.571985   72416 main.go:141] libmachine: (old-k8s-version-409903) DBG |   
	I0806 08:24:25.571994   72416 main.go:141] libmachine: (old-k8s-version-409903) DBG | </network>
	I0806 08:24:25.572003   72416 main.go:141] libmachine: (old-k8s-version-409903) DBG | 
	I0806 08:24:25.577052   72416 main.go:141] libmachine: (old-k8s-version-409903) DBG | trying to create private KVM network mk-old-k8s-version-409903 192.168.61.0/24...
	I0806 08:24:25.650000   72416 main.go:141] libmachine: (old-k8s-version-409903) DBG | private KVM network mk-old-k8s-version-409903 192.168.61.0/24 created
	I0806 08:24:25.650036   72416 main.go:141] libmachine: (old-k8s-version-409903) Setting up store path in /home/jenkins/minikube-integration/19370-13057/.minikube/machines/old-k8s-version-409903 ...
	I0806 08:24:25.650064   72416 main.go:141] libmachine: (old-k8s-version-409903) DBG | I0806 08:24:25.649984   72505 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19370-13057/.minikube
	I0806 08:24:25.650084   72416 main.go:141] libmachine: (old-k8s-version-409903) Building disk image from file:///home/jenkins/minikube-integration/19370-13057/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
	I0806 08:24:25.650197   72416 main.go:141] libmachine: (old-k8s-version-409903) Downloading /home/jenkins/minikube-integration/19370-13057/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19370-13057/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0806 08:24:25.910905   72416 main.go:141] libmachine: (old-k8s-version-409903) DBG | I0806 08:24:25.910753   72505 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19370-13057/.minikube/machines/old-k8s-version-409903/id_rsa...
	I0806 08:24:26.088140   72416 main.go:141] libmachine: (old-k8s-version-409903) DBG | I0806 08:24:26.088016   72505 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19370-13057/.minikube/machines/old-k8s-version-409903/old-k8s-version-409903.rawdisk...
	I0806 08:24:26.088180   72416 main.go:141] libmachine: (old-k8s-version-409903) DBG | Writing magic tar header
	I0806 08:24:26.088197   72416 main.go:141] libmachine: (old-k8s-version-409903) DBG | Writing SSH key tar header
	I0806 08:24:26.088348   72416 main.go:141] libmachine: (old-k8s-version-409903) DBG | I0806 08:24:26.088284   72505 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19370-13057/.minikube/machines/old-k8s-version-409903 ...
	I0806 08:24:26.088484   72416 main.go:141] libmachine: (old-k8s-version-409903) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19370-13057/.minikube/machines/old-k8s-version-409903
	I0806 08:24:26.088507   72416 main.go:141] libmachine: (old-k8s-version-409903) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19370-13057/.minikube/machines
	I0806 08:24:26.088518   72416 main.go:141] libmachine: (old-k8s-version-409903) Setting executable bit set on /home/jenkins/minikube-integration/19370-13057/.minikube/machines/old-k8s-version-409903 (perms=drwx------)
	I0806 08:24:26.088529   72416 main.go:141] libmachine: (old-k8s-version-409903) Setting executable bit set on /home/jenkins/minikube-integration/19370-13057/.minikube/machines (perms=drwxr-xr-x)
	I0806 08:24:26.088542   72416 main.go:141] libmachine: (old-k8s-version-409903) Setting executable bit set on /home/jenkins/minikube-integration/19370-13057/.minikube (perms=drwxr-xr-x)
	I0806 08:24:26.088551   72416 main.go:141] libmachine: (old-k8s-version-409903) Setting executable bit set on /home/jenkins/minikube-integration/19370-13057 (perms=drwxrwxr-x)
	I0806 08:24:26.088558   72416 main.go:141] libmachine: (old-k8s-version-409903) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19370-13057/.minikube
	I0806 08:24:26.088566   72416 main.go:141] libmachine: (old-k8s-version-409903) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0806 08:24:26.088572   72416 main.go:141] libmachine: (old-k8s-version-409903) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19370-13057
	I0806 08:24:26.088582   72416 main.go:141] libmachine: (old-k8s-version-409903) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0806 08:24:26.088589   72416 main.go:141] libmachine: (old-k8s-version-409903) DBG | Checking permissions on dir: /home/jenkins
	I0806 08:24:26.088597   72416 main.go:141] libmachine: (old-k8s-version-409903) DBG | Checking permissions on dir: /home
	I0806 08:24:26.088611   72416 main.go:141] libmachine: (old-k8s-version-409903) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0806 08:24:26.088618   72416 main.go:141] libmachine: (old-k8s-version-409903) Creating domain...
	I0806 08:24:26.088628   72416 main.go:141] libmachine: (old-k8s-version-409903) DBG | Skipping /home - not owner
	I0806 08:24:26.089797   72416 main.go:141] libmachine: (old-k8s-version-409903) define libvirt domain using xml: 
	I0806 08:24:26.089826   72416 main.go:141] libmachine: (old-k8s-version-409903) <domain type='kvm'>
	I0806 08:24:26.089833   72416 main.go:141] libmachine: (old-k8s-version-409903)   <name>old-k8s-version-409903</name>
	I0806 08:24:26.091614   72416 main.go:141] libmachine: (old-k8s-version-409903)   <memory unit='MiB'>2200</memory>
	I0806 08:24:26.091636   72416 main.go:141] libmachine: (old-k8s-version-409903)   <vcpu>2</vcpu>
	I0806 08:24:26.091643   72416 main.go:141] libmachine: (old-k8s-version-409903)   <features>
	I0806 08:24:26.091656   72416 main.go:141] libmachine: (old-k8s-version-409903)     <acpi/>
	I0806 08:24:26.091663   72416 main.go:141] libmachine: (old-k8s-version-409903)     <apic/>
	I0806 08:24:26.091667   72416 main.go:141] libmachine: (old-k8s-version-409903)     <pae/>
	I0806 08:24:26.091673   72416 main.go:141] libmachine: (old-k8s-version-409903)     
	I0806 08:24:26.091680   72416 main.go:141] libmachine: (old-k8s-version-409903)   </features>
	I0806 08:24:26.091687   72416 main.go:141] libmachine: (old-k8s-version-409903)   <cpu mode='host-passthrough'>
	I0806 08:24:26.091694   72416 main.go:141] libmachine: (old-k8s-version-409903)   
	I0806 08:24:26.091700   72416 main.go:141] libmachine: (old-k8s-version-409903)   </cpu>
	I0806 08:24:26.091703   72416 main.go:141] libmachine: (old-k8s-version-409903)   <os>
	I0806 08:24:26.091709   72416 main.go:141] libmachine: (old-k8s-version-409903)     <type>hvm</type>
	I0806 08:24:26.091722   72416 main.go:141] libmachine: (old-k8s-version-409903)     <boot dev='cdrom'/>
	I0806 08:24:26.091733   72416 main.go:141] libmachine: (old-k8s-version-409903)     <boot dev='hd'/>
	I0806 08:24:26.091743   72416 main.go:141] libmachine: (old-k8s-version-409903)     <bootmenu enable='no'/>
	I0806 08:24:26.091753   72416 main.go:141] libmachine: (old-k8s-version-409903)   </os>
	I0806 08:24:26.091759   72416 main.go:141] libmachine: (old-k8s-version-409903)   <devices>
	I0806 08:24:26.091768   72416 main.go:141] libmachine: (old-k8s-version-409903)     <disk type='file' device='cdrom'>
	I0806 08:24:26.091782   72416 main.go:141] libmachine: (old-k8s-version-409903)       <source file='/home/jenkins/minikube-integration/19370-13057/.minikube/machines/old-k8s-version-409903/boot2docker.iso'/>
	I0806 08:24:26.091823   72416 main.go:141] libmachine: (old-k8s-version-409903)       <target dev='hdc' bus='scsi'/>
	I0806 08:24:26.091846   72416 main.go:141] libmachine: (old-k8s-version-409903)       <readonly/>
	I0806 08:24:26.091856   72416 main.go:141] libmachine: (old-k8s-version-409903)     </disk>
	I0806 08:24:26.091868   72416 main.go:141] libmachine: (old-k8s-version-409903)     <disk type='file' device='disk'>
	I0806 08:24:26.091882   72416 main.go:141] libmachine: (old-k8s-version-409903)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0806 08:24:26.091896   72416 main.go:141] libmachine: (old-k8s-version-409903)       <source file='/home/jenkins/minikube-integration/19370-13057/.minikube/machines/old-k8s-version-409903/old-k8s-version-409903.rawdisk'/>
	I0806 08:24:26.091908   72416 main.go:141] libmachine: (old-k8s-version-409903)       <target dev='hda' bus='virtio'/>
	I0806 08:24:26.091923   72416 main.go:141] libmachine: (old-k8s-version-409903)     </disk>
	I0806 08:24:26.091934   72416 main.go:141] libmachine: (old-k8s-version-409903)     <interface type='network'>
	I0806 08:24:26.091946   72416 main.go:141] libmachine: (old-k8s-version-409903)       <source network='mk-old-k8s-version-409903'/>
	I0806 08:24:26.091959   72416 main.go:141] libmachine: (old-k8s-version-409903)       <model type='virtio'/>
	I0806 08:24:26.091968   72416 main.go:141] libmachine: (old-k8s-version-409903)     </interface>
	I0806 08:24:26.091977   72416 main.go:141] libmachine: (old-k8s-version-409903)     <interface type='network'>
	I0806 08:24:26.091987   72416 main.go:141] libmachine: (old-k8s-version-409903)       <source network='default'/>
	I0806 08:24:26.091996   72416 main.go:141] libmachine: (old-k8s-version-409903)       <model type='virtio'/>
	I0806 08:24:26.092006   72416 main.go:141] libmachine: (old-k8s-version-409903)     </interface>
	I0806 08:24:26.092019   72416 main.go:141] libmachine: (old-k8s-version-409903)     <serial type='pty'>
	I0806 08:24:26.092026   72416 main.go:141] libmachine: (old-k8s-version-409903)       <target port='0'/>
	I0806 08:24:26.092037   72416 main.go:141] libmachine: (old-k8s-version-409903)     </serial>
	I0806 08:24:26.092049   72416 main.go:141] libmachine: (old-k8s-version-409903)     <console type='pty'>
	I0806 08:24:26.092059   72416 main.go:141] libmachine: (old-k8s-version-409903)       <target type='serial' port='0'/>
	I0806 08:24:26.092068   72416 main.go:141] libmachine: (old-k8s-version-409903)     </console>
	I0806 08:24:26.092078   72416 main.go:141] libmachine: (old-k8s-version-409903)     <rng model='virtio'>
	I0806 08:24:26.092088   72416 main.go:141] libmachine: (old-k8s-version-409903)       <backend model='random'>/dev/random</backend>
	I0806 08:24:26.092098   72416 main.go:141] libmachine: (old-k8s-version-409903)     </rng>
	I0806 08:24:26.092105   72416 main.go:141] libmachine: (old-k8s-version-409903)     
	I0806 08:24:26.092112   72416 main.go:141] libmachine: (old-k8s-version-409903)     
	I0806 08:24:26.092124   72416 main.go:141] libmachine: (old-k8s-version-409903)   </devices>
	I0806 08:24:26.092136   72416 main.go:141] libmachine: (old-k8s-version-409903) </domain>
	I0806 08:24:26.092145   72416 main.go:141] libmachine: (old-k8s-version-409903) 
	I0806 08:24:26.097205   72416 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:1f:02:17 in network default
	I0806 08:24:26.097939   72416 main.go:141] libmachine: (old-k8s-version-409903) Ensuring networks are active...
	I0806 08:24:26.097966   72416 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:24:26.098730   72416 main.go:141] libmachine: (old-k8s-version-409903) Ensuring network default is active
	I0806 08:24:26.099275   72416 main.go:141] libmachine: (old-k8s-version-409903) Ensuring network mk-old-k8s-version-409903 is active
	I0806 08:24:26.099914   72416 main.go:141] libmachine: (old-k8s-version-409903) Getting domain xml...
	I0806 08:24:26.100781   72416 main.go:141] libmachine: (old-k8s-version-409903) Creating domain...
	I0806 08:24:27.679279   72416 main.go:141] libmachine: (old-k8s-version-409903) Waiting to get IP...
	I0806 08:24:27.680235   72416 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:24:27.680710   72416 main.go:141] libmachine: (old-k8s-version-409903) DBG | unable to find current IP address of domain old-k8s-version-409903 in network mk-old-k8s-version-409903
	I0806 08:24:27.680763   72416 main.go:141] libmachine: (old-k8s-version-409903) DBG | I0806 08:24:27.680673   72505 retry.go:31] will retry after 227.903454ms: waiting for machine to come up
	I0806 08:24:27.910498   72416 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:24:27.911411   72416 main.go:141] libmachine: (old-k8s-version-409903) DBG | unable to find current IP address of domain old-k8s-version-409903 in network mk-old-k8s-version-409903
	I0806 08:24:27.911443   72416 main.go:141] libmachine: (old-k8s-version-409903) DBG | I0806 08:24:27.911356   72505 retry.go:31] will retry after 359.984033ms: waiting for machine to come up
	I0806 08:24:28.273179   72416 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:24:28.273855   72416 main.go:141] libmachine: (old-k8s-version-409903) DBG | unable to find current IP address of domain old-k8s-version-409903 in network mk-old-k8s-version-409903
	I0806 08:24:28.273891   72416 main.go:141] libmachine: (old-k8s-version-409903) DBG | I0806 08:24:28.273754   72505 retry.go:31] will retry after 483.714463ms: waiting for machine to come up
	I0806 08:24:28.759612   72416 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:24:28.760328   72416 main.go:141] libmachine: (old-k8s-version-409903) DBG | unable to find current IP address of domain old-k8s-version-409903 in network mk-old-k8s-version-409903
	I0806 08:24:28.760356   72416 main.go:141] libmachine: (old-k8s-version-409903) DBG | I0806 08:24:28.760275   72505 retry.go:31] will retry after 426.083126ms: waiting for machine to come up
	I0806 08:24:29.188060   72416 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:24:29.188716   72416 main.go:141] libmachine: (old-k8s-version-409903) DBG | unable to find current IP address of domain old-k8s-version-409903 in network mk-old-k8s-version-409903
	I0806 08:24:29.188744   72416 main.go:141] libmachine: (old-k8s-version-409903) DBG | I0806 08:24:29.188660   72505 retry.go:31] will retry after 520.97259ms: waiting for machine to come up
	I0806 08:24:29.711707   72416 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:24:29.712355   72416 main.go:141] libmachine: (old-k8s-version-409903) DBG | unable to find current IP address of domain old-k8s-version-409903 in network mk-old-k8s-version-409903
	I0806 08:24:29.712397   72416 main.go:141] libmachine: (old-k8s-version-409903) DBG | I0806 08:24:29.712240   72505 retry.go:31] will retry after 617.070752ms: waiting for machine to come up
	I0806 08:24:30.331005   72416 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:24:30.331515   72416 main.go:141] libmachine: (old-k8s-version-409903) DBG | unable to find current IP address of domain old-k8s-version-409903 in network mk-old-k8s-version-409903
	I0806 08:24:30.331548   72416 main.go:141] libmachine: (old-k8s-version-409903) DBG | I0806 08:24:30.331462   72505 retry.go:31] will retry after 818.752788ms: waiting for machine to come up
	I0806 08:24:31.151727   72416 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:24:31.152342   72416 main.go:141] libmachine: (old-k8s-version-409903) DBG | unable to find current IP address of domain old-k8s-version-409903 in network mk-old-k8s-version-409903
	I0806 08:24:31.152381   72416 main.go:141] libmachine: (old-k8s-version-409903) DBG | I0806 08:24:31.152280   72505 retry.go:31] will retry after 965.860859ms: waiting for machine to come up
	I0806 08:24:32.119603   72416 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:24:32.120224   72416 main.go:141] libmachine: (old-k8s-version-409903) DBG | unable to find current IP address of domain old-k8s-version-409903 in network mk-old-k8s-version-409903
	I0806 08:24:32.120246   72416 main.go:141] libmachine: (old-k8s-version-409903) DBG | I0806 08:24:32.120143   72505 retry.go:31] will retry after 1.46187149s: waiting for machine to come up
	I0806 08:24:33.583395   72416 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:24:33.583846   72416 main.go:141] libmachine: (old-k8s-version-409903) DBG | unable to find current IP address of domain old-k8s-version-409903 in network mk-old-k8s-version-409903
	I0806 08:24:33.583871   72416 main.go:141] libmachine: (old-k8s-version-409903) DBG | I0806 08:24:33.583793   72505 retry.go:31] will retry after 2.288972771s: waiting for machine to come up
	I0806 08:24:35.874232   72416 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:24:35.874798   72416 main.go:141] libmachine: (old-k8s-version-409903) DBG | unable to find current IP address of domain old-k8s-version-409903 in network mk-old-k8s-version-409903
	I0806 08:24:35.874829   72416 main.go:141] libmachine: (old-k8s-version-409903) DBG | I0806 08:24:35.874747   72505 retry.go:31] will retry after 2.411555947s: waiting for machine to come up
	I0806 08:24:38.288252   72416 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:24:38.288899   72416 main.go:141] libmachine: (old-k8s-version-409903) DBG | unable to find current IP address of domain old-k8s-version-409903 in network mk-old-k8s-version-409903
	I0806 08:24:38.288935   72416 main.go:141] libmachine: (old-k8s-version-409903) DBG | I0806 08:24:38.288845   72505 retry.go:31] will retry after 2.354251241s: waiting for machine to come up
	I0806 08:24:40.644554   72416 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:24:40.645170   72416 main.go:141] libmachine: (old-k8s-version-409903) DBG | unable to find current IP address of domain old-k8s-version-409903 in network mk-old-k8s-version-409903
	I0806 08:24:40.645196   72416 main.go:141] libmachine: (old-k8s-version-409903) DBG | I0806 08:24:40.645115   72505 retry.go:31] will retry after 3.190197375s: waiting for machine to come up
	I0806 08:24:43.838163   72416 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:24:43.838682   72416 main.go:141] libmachine: (old-k8s-version-409903) DBG | unable to find current IP address of domain old-k8s-version-409903 in network mk-old-k8s-version-409903
	I0806 08:24:43.838701   72416 main.go:141] libmachine: (old-k8s-version-409903) DBG | I0806 08:24:43.838646   72505 retry.go:31] will retry after 4.672559218s: waiting for machine to come up
	I0806 08:24:48.513368   72416 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:24:48.513873   72416 main.go:141] libmachine: (old-k8s-version-409903) Found IP for machine: 192.168.61.158
	I0806 08:24:48.513920   72416 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has current primary IP address 192.168.61.158 and MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:24:48.513933   72416 main.go:141] libmachine: (old-k8s-version-409903) Reserving static IP address...
	I0806 08:24:48.514279   72416 main.go:141] libmachine: (old-k8s-version-409903) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-409903", mac: "52:54:00:12:30:88", ip: "192.168.61.158"} in network mk-old-k8s-version-409903
	I0806 08:24:48.596041   72416 main.go:141] libmachine: (old-k8s-version-409903) DBG | Getting to WaitForSSH function...
	I0806 08:24:48.596078   72416 main.go:141] libmachine: (old-k8s-version-409903) Reserved static IP address: 192.168.61.158
	I0806 08:24:48.596091   72416 main.go:141] libmachine: (old-k8s-version-409903) Waiting for SSH to be available...
	I0806 08:24:48.599064   72416 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:24:48.599646   72416 main.go:141] libmachine: (old-k8s-version-409903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:30:88", ip: ""} in network mk-old-k8s-version-409903: {Iface:virbr1 ExpiryTime:2024-08-06 09:24:42 +0000 UTC Type:0 Mac:52:54:00:12:30:88 Iaid: IPaddr:192.168.61.158 Prefix:24 Hostname:minikube Clientid:01:52:54:00:12:30:88}
	I0806 08:24:48.599679   72416 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined IP address 192.168.61.158 and MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:24:48.599828   72416 main.go:141] libmachine: (old-k8s-version-409903) DBG | Using SSH client type: external
	I0806 08:24:48.599854   72416 main.go:141] libmachine: (old-k8s-version-409903) DBG | Using SSH private key: /home/jenkins/minikube-integration/19370-13057/.minikube/machines/old-k8s-version-409903/id_rsa (-rw-------)
	I0806 08:24:48.599877   72416 main.go:141] libmachine: (old-k8s-version-409903) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.158 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19370-13057/.minikube/machines/old-k8s-version-409903/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0806 08:24:48.599904   72416 main.go:141] libmachine: (old-k8s-version-409903) DBG | About to run SSH command:
	I0806 08:24:48.599924   72416 main.go:141] libmachine: (old-k8s-version-409903) DBG | exit 0
	I0806 08:24:48.724426   72416 main.go:141] libmachine: (old-k8s-version-409903) DBG | SSH cmd err, output: <nil>: 
	I0806 08:24:48.724685   72416 main.go:141] libmachine: (old-k8s-version-409903) KVM machine creation complete!
	I0806 08:24:48.725027   72416 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetConfigRaw
	I0806 08:24:48.725624   72416 main.go:141] libmachine: (old-k8s-version-409903) Calling .DriverName
	I0806 08:24:48.725826   72416 main.go:141] libmachine: (old-k8s-version-409903) Calling .DriverName
	I0806 08:24:48.726003   72416 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0806 08:24:48.726018   72416 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetState
	I0806 08:24:48.727270   72416 main.go:141] libmachine: Detecting operating system of created instance...
	I0806 08:24:48.727293   72416 main.go:141] libmachine: Waiting for SSH to be available...
	I0806 08:24:48.727298   72416 main.go:141] libmachine: Getting to WaitForSSH function...
	I0806 08:24:48.727303   72416 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHHostname
	I0806 08:24:48.729622   72416 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:24:48.729966   72416 main.go:141] libmachine: (old-k8s-version-409903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:30:88", ip: ""} in network mk-old-k8s-version-409903: {Iface:virbr1 ExpiryTime:2024-08-06 09:24:42 +0000 UTC Type:0 Mac:52:54:00:12:30:88 Iaid: IPaddr:192.168.61.158 Prefix:24 Hostname:old-k8s-version-409903 Clientid:01:52:54:00:12:30:88}
	I0806 08:24:48.729992   72416 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined IP address 192.168.61.158 and MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:24:48.730118   72416 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHPort
	I0806 08:24:48.730292   72416 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHKeyPath
	I0806 08:24:48.730466   72416 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHKeyPath
	I0806 08:24:48.730636   72416 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHUsername
	I0806 08:24:48.730786   72416 main.go:141] libmachine: Using SSH client type: native
	I0806 08:24:48.730963   72416 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.158 22 <nil> <nil>}
	I0806 08:24:48.730974   72416 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0806 08:24:48.836079   72416 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0806 08:24:48.836105   72416 main.go:141] libmachine: Detecting the provisioner...
	I0806 08:24:48.836115   72416 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHHostname
	I0806 08:24:48.839518   72416 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:24:48.839983   72416 main.go:141] libmachine: (old-k8s-version-409903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:30:88", ip: ""} in network mk-old-k8s-version-409903: {Iface:virbr1 ExpiryTime:2024-08-06 09:24:42 +0000 UTC Type:0 Mac:52:54:00:12:30:88 Iaid: IPaddr:192.168.61.158 Prefix:24 Hostname:old-k8s-version-409903 Clientid:01:52:54:00:12:30:88}
	I0806 08:24:48.840023   72416 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined IP address 192.168.61.158 and MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:24:48.840220   72416 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHPort
	I0806 08:24:48.840451   72416 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHKeyPath
	I0806 08:24:48.840628   72416 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHKeyPath
	I0806 08:24:48.840762   72416 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHUsername
	I0806 08:24:48.840928   72416 main.go:141] libmachine: Using SSH client type: native
	I0806 08:24:48.841110   72416 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.158 22 <nil> <nil>}
	I0806 08:24:48.841121   72416 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0806 08:24:48.946077   72416 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0806 08:24:48.946153   72416 main.go:141] libmachine: found compatible host: buildroot
	I0806 08:24:48.946168   72416 main.go:141] libmachine: Provisioning with buildroot...
	I0806 08:24:48.946183   72416 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetMachineName
	I0806 08:24:48.946467   72416 buildroot.go:166] provisioning hostname "old-k8s-version-409903"
	I0806 08:24:48.946497   72416 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetMachineName
	I0806 08:24:48.946693   72416 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHHostname
	I0806 08:24:48.949987   72416 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:24:48.950528   72416 main.go:141] libmachine: (old-k8s-version-409903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:30:88", ip: ""} in network mk-old-k8s-version-409903: {Iface:virbr1 ExpiryTime:2024-08-06 09:24:42 +0000 UTC Type:0 Mac:52:54:00:12:30:88 Iaid: IPaddr:192.168.61.158 Prefix:24 Hostname:old-k8s-version-409903 Clientid:01:52:54:00:12:30:88}
	I0806 08:24:48.950570   72416 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined IP address 192.168.61.158 and MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:24:48.950761   72416 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHPort
	I0806 08:24:48.950954   72416 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHKeyPath
	I0806 08:24:48.951128   72416 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHKeyPath
	I0806 08:24:48.951264   72416 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHUsername
	I0806 08:24:48.951480   72416 main.go:141] libmachine: Using SSH client type: native
	I0806 08:24:48.951691   72416 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.158 22 <nil> <nil>}
	I0806 08:24:48.951708   72416 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-409903 && echo "old-k8s-version-409903" | sudo tee /etc/hostname
	I0806 08:24:49.078526   72416 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-409903
	
	I0806 08:24:49.078556   72416 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHHostname
	I0806 08:24:49.081551   72416 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:24:49.082019   72416 main.go:141] libmachine: (old-k8s-version-409903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:30:88", ip: ""} in network mk-old-k8s-version-409903: {Iface:virbr1 ExpiryTime:2024-08-06 09:24:42 +0000 UTC Type:0 Mac:52:54:00:12:30:88 Iaid: IPaddr:192.168.61.158 Prefix:24 Hostname:old-k8s-version-409903 Clientid:01:52:54:00:12:30:88}
	I0806 08:24:49.082064   72416 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined IP address 192.168.61.158 and MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:24:49.082347   72416 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHPort
	I0806 08:24:49.082527   72416 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHKeyPath
	I0806 08:24:49.082731   72416 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHKeyPath
	I0806 08:24:49.082898   72416 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHUsername
	I0806 08:24:49.083074   72416 main.go:141] libmachine: Using SSH client type: native
	I0806 08:24:49.083297   72416 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.158 22 <nil> <nil>}
	I0806 08:24:49.083328   72416 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-409903' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-409903/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-409903' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0806 08:24:49.200150   72416 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0806 08:24:49.200175   72416 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19370-13057/.minikube CaCertPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19370-13057/.minikube}
	I0806 08:24:49.200215   72416 buildroot.go:174] setting up certificates
	I0806 08:24:49.200225   72416 provision.go:84] configureAuth start
	I0806 08:24:49.200232   72416 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetMachineName
	I0806 08:24:49.200475   72416 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetIP
	I0806 08:24:49.203098   72416 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:24:49.203386   72416 main.go:141] libmachine: (old-k8s-version-409903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:30:88", ip: ""} in network mk-old-k8s-version-409903: {Iface:virbr1 ExpiryTime:2024-08-06 09:24:42 +0000 UTC Type:0 Mac:52:54:00:12:30:88 Iaid: IPaddr:192.168.61.158 Prefix:24 Hostname:old-k8s-version-409903 Clientid:01:52:54:00:12:30:88}
	I0806 08:24:49.203411   72416 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined IP address 192.168.61.158 and MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:24:49.203665   72416 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHHostname
	I0806 08:24:49.206066   72416 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:24:49.206418   72416 main.go:141] libmachine: (old-k8s-version-409903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:30:88", ip: ""} in network mk-old-k8s-version-409903: {Iface:virbr1 ExpiryTime:2024-08-06 09:24:42 +0000 UTC Type:0 Mac:52:54:00:12:30:88 Iaid: IPaddr:192.168.61.158 Prefix:24 Hostname:old-k8s-version-409903 Clientid:01:52:54:00:12:30:88}
	I0806 08:24:49.206438   72416 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined IP address 192.168.61.158 and MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:24:49.206721   72416 provision.go:143] copyHostCerts
	I0806 08:24:49.206806   72416 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-13057/.minikube/ca.pem, removing ...
	I0806 08:24:49.206820   72416 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-13057/.minikube/ca.pem
	I0806 08:24:49.206892   72416 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19370-13057/.minikube/ca.pem (1078 bytes)
	I0806 08:24:49.207048   72416 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-13057/.minikube/cert.pem, removing ...
	I0806 08:24:49.207063   72416 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-13057/.minikube/cert.pem
	I0806 08:24:49.207100   72416 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19370-13057/.minikube/cert.pem (1123 bytes)
	I0806 08:24:49.207223   72416 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-13057/.minikube/key.pem, removing ...
	I0806 08:24:49.207235   72416 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-13057/.minikube/key.pem
	I0806 08:24:49.207259   72416 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19370-13057/.minikube/key.pem (1675 bytes)
	I0806 08:24:49.207319   72416 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19370-13057/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-409903 san=[127.0.0.1 192.168.61.158 localhost minikube old-k8s-version-409903]
	I0806 08:24:49.319805   72416 provision.go:177] copyRemoteCerts
	I0806 08:24:49.319859   72416 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0806 08:24:49.319885   72416 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHHostname
	I0806 08:24:49.322920   72416 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:24:49.323289   72416 main.go:141] libmachine: (old-k8s-version-409903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:30:88", ip: ""} in network mk-old-k8s-version-409903: {Iface:virbr1 ExpiryTime:2024-08-06 09:24:42 +0000 UTC Type:0 Mac:52:54:00:12:30:88 Iaid: IPaddr:192.168.61.158 Prefix:24 Hostname:old-k8s-version-409903 Clientid:01:52:54:00:12:30:88}
	I0806 08:24:49.323347   72416 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined IP address 192.168.61.158 and MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:24:49.323551   72416 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHPort
	I0806 08:24:49.323759   72416 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHKeyPath
	I0806 08:24:49.323901   72416 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHUsername
	I0806 08:24:49.324022   72416 sshutil.go:53] new ssh client: &{IP:192.168.61.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/old-k8s-version-409903/id_rsa Username:docker}
	I0806 08:24:49.414246   72416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0806 08:24:49.441408   72416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0806 08:24:49.467388   72416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0806 08:24:49.496288   72416 provision.go:87] duration metric: took 296.049415ms to configureAuth
	I0806 08:24:49.496319   72416 buildroot.go:189] setting minikube options for container-runtime
	I0806 08:24:49.496548   72416 config.go:182] Loaded profile config "old-k8s-version-409903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0806 08:24:49.496632   72416 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHHostname
	I0806 08:24:49.499159   72416 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:24:49.499590   72416 main.go:141] libmachine: (old-k8s-version-409903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:30:88", ip: ""} in network mk-old-k8s-version-409903: {Iface:virbr1 ExpiryTime:2024-08-06 09:24:42 +0000 UTC Type:0 Mac:52:54:00:12:30:88 Iaid: IPaddr:192.168.61.158 Prefix:24 Hostname:old-k8s-version-409903 Clientid:01:52:54:00:12:30:88}
	I0806 08:24:49.499620   72416 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined IP address 192.168.61.158 and MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:24:49.499841   72416 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHPort
	I0806 08:24:49.500040   72416 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHKeyPath
	I0806 08:24:49.500196   72416 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHKeyPath
	I0806 08:24:49.500318   72416 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHUsername
	I0806 08:24:49.500529   72416 main.go:141] libmachine: Using SSH client type: native
	I0806 08:24:49.500751   72416 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.158 22 <nil> <nil>}
	I0806 08:24:49.500776   72416 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0806 08:24:49.782416   72416 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0806 08:24:49.782443   72416 main.go:141] libmachine: Checking connection to Docker...
	I0806 08:24:49.782455   72416 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetURL
	I0806 08:24:49.784588   72416 main.go:141] libmachine: (old-k8s-version-409903) DBG | Using libvirt version 6000000
	I0806 08:24:49.787482   72416 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:24:49.787869   72416 main.go:141] libmachine: (old-k8s-version-409903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:30:88", ip: ""} in network mk-old-k8s-version-409903: {Iface:virbr1 ExpiryTime:2024-08-06 09:24:42 +0000 UTC Type:0 Mac:52:54:00:12:30:88 Iaid: IPaddr:192.168.61.158 Prefix:24 Hostname:old-k8s-version-409903 Clientid:01:52:54:00:12:30:88}
	I0806 08:24:49.787903   72416 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined IP address 192.168.61.158 and MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:24:49.788060   72416 main.go:141] libmachine: Docker is up and running!
	I0806 08:24:49.788078   72416 main.go:141] libmachine: Reticulating splines...
	I0806 08:24:49.788091   72416 client.go:171] duration metric: took 24.222621405s to LocalClient.Create
	I0806 08:24:49.788121   72416 start.go:167] duration metric: took 24.22269899s to libmachine.API.Create "old-k8s-version-409903"
	I0806 08:24:49.788135   72416 start.go:293] postStartSetup for "old-k8s-version-409903" (driver="kvm2")
	I0806 08:24:49.788150   72416 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0806 08:24:49.788174   72416 main.go:141] libmachine: (old-k8s-version-409903) Calling .DriverName
	I0806 08:24:49.788468   72416 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0806 08:24:49.788495   72416 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHHostname
	I0806 08:24:49.790934   72416 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:24:49.791276   72416 main.go:141] libmachine: (old-k8s-version-409903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:30:88", ip: ""} in network mk-old-k8s-version-409903: {Iface:virbr1 ExpiryTime:2024-08-06 09:24:42 +0000 UTC Type:0 Mac:52:54:00:12:30:88 Iaid: IPaddr:192.168.61.158 Prefix:24 Hostname:old-k8s-version-409903 Clientid:01:52:54:00:12:30:88}
	I0806 08:24:49.791297   72416 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined IP address 192.168.61.158 and MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:24:49.791454   72416 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHPort
	I0806 08:24:49.791642   72416 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHKeyPath
	I0806 08:24:49.791821   72416 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHUsername
	I0806 08:24:49.791995   72416 sshutil.go:53] new ssh client: &{IP:192.168.61.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/old-k8s-version-409903/id_rsa Username:docker}
	I0806 08:24:49.880531   72416 ssh_runner.go:195] Run: cat /etc/os-release
	I0806 08:24:49.885465   72416 info.go:137] Remote host: Buildroot 2023.02.9
	I0806 08:24:49.885495   72416 filesync.go:126] Scanning /home/jenkins/minikube-integration/19370-13057/.minikube/addons for local assets ...
	I0806 08:24:49.885562   72416 filesync.go:126] Scanning /home/jenkins/minikube-integration/19370-13057/.minikube/files for local assets ...
	I0806 08:24:49.885674   72416 filesync.go:149] local asset: /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem -> 202462.pem in /etc/ssl/certs
	I0806 08:24:49.885767   72416 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0806 08:24:49.897110   72416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem --> /etc/ssl/certs/202462.pem (1708 bytes)
	I0806 08:24:49.927536   72416 start.go:296] duration metric: took 139.384427ms for postStartSetup
	I0806 08:24:49.927589   72416 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetConfigRaw
	I0806 08:24:49.928185   72416 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetIP
	I0806 08:24:49.931272   72416 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:24:49.931660   72416 main.go:141] libmachine: (old-k8s-version-409903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:30:88", ip: ""} in network mk-old-k8s-version-409903: {Iface:virbr1 ExpiryTime:2024-08-06 09:24:42 +0000 UTC Type:0 Mac:52:54:00:12:30:88 Iaid: IPaddr:192.168.61.158 Prefix:24 Hostname:old-k8s-version-409903 Clientid:01:52:54:00:12:30:88}
	I0806 08:24:49.931689   72416 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined IP address 192.168.61.158 and MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:24:49.931948   72416 profile.go:143] Saving config to /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/old-k8s-version-409903/config.json ...
	I0806 08:24:49.932181   72416 start.go:128] duration metric: took 24.390554456s to createHost
	I0806 08:24:49.932206   72416 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHHostname
	I0806 08:24:49.934730   72416 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:24:49.935120   72416 main.go:141] libmachine: (old-k8s-version-409903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:30:88", ip: ""} in network mk-old-k8s-version-409903: {Iface:virbr1 ExpiryTime:2024-08-06 09:24:42 +0000 UTC Type:0 Mac:52:54:00:12:30:88 Iaid: IPaddr:192.168.61.158 Prefix:24 Hostname:old-k8s-version-409903 Clientid:01:52:54:00:12:30:88}
	I0806 08:24:49.935147   72416 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined IP address 192.168.61.158 and MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:24:49.935325   72416 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHPort
	I0806 08:24:49.935489   72416 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHKeyPath
	I0806 08:24:49.935662   72416 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHKeyPath
	I0806 08:24:49.935795   72416 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHUsername
	I0806 08:24:49.935978   72416 main.go:141] libmachine: Using SSH client type: native
	I0806 08:24:49.936132   72416 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.158 22 <nil> <nil>}
	I0806 08:24:49.936141   72416 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0806 08:24:50.042748   72416 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722932690.012326677
	
	I0806 08:24:50.042782   72416 fix.go:216] guest clock: 1722932690.012326677
	I0806 08:24:50.042794   72416 fix.go:229] Guest: 2024-08-06 08:24:50.012326677 +0000 UTC Remote: 2024-08-06 08:24:49.932195384 +0000 UTC m=+36.540059422 (delta=80.131293ms)
	I0806 08:24:50.042851   72416 fix.go:200] guest clock delta is within tolerance: 80.131293ms
	I0806 08:24:50.042859   72416 start.go:83] releasing machines lock for "old-k8s-version-409903", held for 24.501397862s
	I0806 08:24:50.042890   72416 main.go:141] libmachine: (old-k8s-version-409903) Calling .DriverName
	I0806 08:24:50.043198   72416 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetIP
	I0806 08:24:50.046353   72416 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:24:50.046765   72416 main.go:141] libmachine: (old-k8s-version-409903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:30:88", ip: ""} in network mk-old-k8s-version-409903: {Iface:virbr1 ExpiryTime:2024-08-06 09:24:42 +0000 UTC Type:0 Mac:52:54:00:12:30:88 Iaid: IPaddr:192.168.61.158 Prefix:24 Hostname:old-k8s-version-409903 Clientid:01:52:54:00:12:30:88}
	I0806 08:24:50.046789   72416 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined IP address 192.168.61.158 and MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:24:50.047042   72416 main.go:141] libmachine: (old-k8s-version-409903) Calling .DriverName
	I0806 08:24:50.047629   72416 main.go:141] libmachine: (old-k8s-version-409903) Calling .DriverName
	I0806 08:24:50.047797   72416 main.go:141] libmachine: (old-k8s-version-409903) Calling .DriverName
	I0806 08:24:50.047861   72416 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0806 08:24:50.047904   72416 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHHostname
	I0806 08:24:50.048009   72416 ssh_runner.go:195] Run: cat /version.json
	I0806 08:24:50.048030   72416 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHHostname
	I0806 08:24:50.055677   72416 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:24:50.056007   72416 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:24:50.056256   72416 main.go:141] libmachine: (old-k8s-version-409903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:30:88", ip: ""} in network mk-old-k8s-version-409903: {Iface:virbr1 ExpiryTime:2024-08-06 09:24:42 +0000 UTC Type:0 Mac:52:54:00:12:30:88 Iaid: IPaddr:192.168.61.158 Prefix:24 Hostname:old-k8s-version-409903 Clientid:01:52:54:00:12:30:88}
	I0806 08:24:50.056276   72416 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined IP address 192.168.61.158 and MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:24:50.056524   72416 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHPort
	I0806 08:24:50.056704   72416 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHKeyPath
	I0806 08:24:50.056757   72416 main.go:141] libmachine: (old-k8s-version-409903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:30:88", ip: ""} in network mk-old-k8s-version-409903: {Iface:virbr1 ExpiryTime:2024-08-06 09:24:42 +0000 UTC Type:0 Mac:52:54:00:12:30:88 Iaid: IPaddr:192.168.61.158 Prefix:24 Hostname:old-k8s-version-409903 Clientid:01:52:54:00:12:30:88}
	I0806 08:24:50.056779   72416 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined IP address 192.168.61.158 and MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:24:50.056872   72416 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHUsername
	I0806 08:24:50.057025   72416 sshutil.go:53] new ssh client: &{IP:192.168.61.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/old-k8s-version-409903/id_rsa Username:docker}
	I0806 08:24:50.057041   72416 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHPort
	I0806 08:24:50.057353   72416 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHKeyPath
	I0806 08:24:50.057531   72416 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHUsername
	I0806 08:24:50.057626   72416 sshutil.go:53] new ssh client: &{IP:192.168.61.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/old-k8s-version-409903/id_rsa Username:docker}
	I0806 08:24:50.138459   72416 ssh_runner.go:195] Run: systemctl --version
	I0806 08:24:50.162482   72416 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0806 08:24:50.324968   72416 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0806 08:24:50.332118   72416 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0806 08:24:50.332185   72416 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0806 08:24:50.353629   72416 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0806 08:24:50.353652   72416 start.go:495] detecting cgroup driver to use...
	I0806 08:24:50.353715   72416 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0806 08:24:50.375786   72416 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0806 08:24:50.391834   72416 docker.go:217] disabling cri-docker service (if available) ...
	I0806 08:24:50.391904   72416 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0806 08:24:50.408461   72416 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0806 08:24:50.423541   72416 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0806 08:24:50.570368   72416 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0806 08:24:50.759239   72416 docker.go:233] disabling docker service ...
	I0806 08:24:50.759298   72416 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0806 08:24:50.776147   72416 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0806 08:24:50.791173   72416 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0806 08:24:50.949550   72416 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0806 08:24:51.077663   72416 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0806 08:24:51.094859   72416 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0806 08:24:51.119567   72416 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0806 08:24:51.119632   72416 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:24:51.130621   72416 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0806 08:24:51.130694   72416 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:24:51.143001   72416 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:24:51.154716   72416 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
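The three sed edits above change only a handful of keys in CRI-O's drop-in config. As a rough sketch (not what minikube itself runs), the same result could be written as one drop-in file; the file name and the section headers are assumptions about the stock 02-crio.conf layout, while the key values are taken from the commands in the log:

# Sketch only: set the same keys the sed commands above rewrite.
# Assumes pause_image belongs under [crio.image] and the cgroup keys under [crio.runtime];
# the drop-in file name below is hypothetical.
sudo tee /etc/crio/crio.conf.d/99-minikube-sketch.conf >/dev/null <<'EOF'
[crio.image]
pause_image = "registry.k8s.io/pause:3.2"

[crio.runtime]
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
EOF
sudo systemctl restart crio   # the log restarts crio a few entries further down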
	I0806 08:24:51.167063   72416 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0806 08:24:51.179299   72416 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0806 08:24:51.194201   72416 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0806 08:24:51.194268   72416 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0806 08:24:51.208331   72416 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0806 08:24:51.218281   72416 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 08:24:51.354980   72416 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0806 08:24:51.513267   72416 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0806 08:24:51.513344   72416 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0806 08:24:51.518918   72416 start.go:563] Will wait 60s for crictl version
	I0806 08:24:51.518973   72416 ssh_runner.go:195] Run: which crictl
	I0806 08:24:51.522771   72416 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0806 08:24:51.580173   72416 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0806 08:24:51.580255   72416 ssh_runner.go:195] Run: crio --version
	I0806 08:24:51.615258   72416 ssh_runner.go:195] Run: crio --version
	I0806 08:24:51.652223   72416 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0806 08:24:51.653589   72416 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetIP
	I0806 08:24:51.656553   72416 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:24:51.656922   72416 main.go:141] libmachine: (old-k8s-version-409903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:30:88", ip: ""} in network mk-old-k8s-version-409903: {Iface:virbr1 ExpiryTime:2024-08-06 09:24:42 +0000 UTC Type:0 Mac:52:54:00:12:30:88 Iaid: IPaddr:192.168.61.158 Prefix:24 Hostname:old-k8s-version-409903 Clientid:01:52:54:00:12:30:88}
	I0806 08:24:51.656957   72416 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined IP address 192.168.61.158 and MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:24:51.657224   72416 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0806 08:24:51.662600   72416 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0806 08:24:51.678462   72416 kubeadm.go:883] updating cluster {Name:old-k8s-version-409903 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-409903 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.158 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0806 08:24:51.678598   72416 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0806 08:24:51.678655   72416 ssh_runner.go:195] Run: sudo crictl images --output json
	I0806 08:24:51.714754   72416 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0806 08:24:51.714813   72416 ssh_runner.go:195] Run: which lz4
	I0806 08:24:51.718928   72416 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0806 08:24:51.723294   72416 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0806 08:24:51.723327   72416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0806 08:24:53.466258   72416 crio.go:462] duration metric: took 1.747357192s to copy over tarball
	I0806 08:24:53.466366   72416 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0806 08:24:56.504000   72416 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.037603463s)
	I0806 08:24:56.504022   72416 crio.go:469] duration metric: took 3.037733108s to extract the tarball
	I0806 08:24:56.504030   72416 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0806 08:24:56.553124   72416 ssh_runner.go:195] Run: sudo crictl images --output json
	I0806 08:24:56.634282   72416 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0806 08:24:56.634312   72416 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0806 08:24:56.634353   72416 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0806 08:24:56.634618   72416 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0806 08:24:56.634741   72416 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0806 08:24:56.634835   72416 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0806 08:24:56.634935   72416 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0806 08:24:56.635033   72416 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0806 08:24:56.635128   72416 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0806 08:24:56.635231   72416 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0806 08:24:56.637456   72416 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0806 08:24:56.637485   72416 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0806 08:24:56.637487   72416 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0806 08:24:56.637456   72416 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0806 08:24:56.637519   72416 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0806 08:24:56.637540   72416 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0806 08:24:56.637556   72416 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0806 08:24:56.637790   72416 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0806 08:24:56.786548   72416 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0806 08:24:56.794604   72416 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0806 08:24:56.795319   72416 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0806 08:24:56.803295   72416 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0806 08:24:56.804656   72416 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0806 08:24:56.817474   72416 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0806 08:24:56.820472   72416 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0806 08:24:56.928776   72416 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0806 08:24:56.928827   72416 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0806 08:24:56.928880   72416 ssh_runner.go:195] Run: which crictl
	I0806 08:24:57.007938   72416 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0806 08:24:57.008007   72416 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0806 08:24:57.008058   72416 ssh_runner.go:195] Run: which crictl
	I0806 08:24:57.028773   72416 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0806 08:24:57.028816   72416 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0806 08:24:57.028865   72416 ssh_runner.go:195] Run: which crictl
	I0806 08:24:57.053254   72416 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0806 08:24:57.053305   72416 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0806 08:24:57.053355   72416 ssh_runner.go:195] Run: which crictl
	I0806 08:24:57.053501   72416 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0806 08:24:57.053530   72416 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0806 08:24:57.053569   72416 ssh_runner.go:195] Run: which crictl
	I0806 08:24:57.064978   72416 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0806 08:24:57.065022   72416 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0806 08:24:57.065045   72416 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0806 08:24:57.065069   72416 ssh_runner.go:195] Run: which crictl
	I0806 08:24:57.065087   72416 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0806 08:24:57.065136   72416 ssh_runner.go:195] Run: which crictl
	I0806 08:24:57.065163   72416 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0806 08:24:57.065225   72416 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0806 08:24:57.065257   72416 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0806 08:24:57.068303   72416 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0806 08:24:57.068400   72416 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0806 08:24:57.077133   72416 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0806 08:24:57.077277   72416 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0806 08:24:57.224770   72416 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0806 08:24:57.224849   72416 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0806 08:24:57.235128   72416 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0806 08:24:57.246980   72416 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0806 08:24:57.247003   72416 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0806 08:24:57.247064   72416 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0806 08:24:57.247154   72416 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0806 08:24:57.563436   72416 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0806 08:24:57.714404   72416 cache_images.go:92] duration metric: took 1.080075282s to LoadCachedImages
	W0806 08:24:57.714484   72416 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	I0806 08:24:57.714499   72416 kubeadm.go:934] updating node { 192.168.61.158 8443 v1.20.0 crio true true} ...
	I0806 08:24:57.714635   72416 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-409903 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.158
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-409903 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
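The kubelet fragment above reads a little oddly in the log: the empty ExecStart= is the usual systemd idiom for clearing the inherited command line before setting a new one, and the trailing config block is minikube's own annotation rather than part of the unit. Reconstructed as the 10-kubeadm.conf drop-in that is copied to the guest a few entries below (a sketch based only on the fragment above):

# Sketch: the same kubelet flags written out as a plain systemd drop-in file.
sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null <<'EOF'
[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-409903 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.158

[Install]
EOF
sudo systemctl daemon-reload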
	I0806 08:24:57.714728   72416 ssh_runner.go:195] Run: crio config
	I0806 08:24:57.776641   72416 cni.go:84] Creating CNI manager for ""
	I0806 08:24:57.776669   72416 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0806 08:24:57.776682   72416 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0806 08:24:57.776701   72416 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.158 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-409903 NodeName:old-k8s-version-409903 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.158"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.158 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0806 08:24:57.776876   72416 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.158
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-409903"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.158
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.158"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0806 08:24:57.776961   72416 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0806 08:24:57.789294   72416 binaries.go:44] Found k8s binaries, skipping transfer
	I0806 08:24:57.789367   72416 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0806 08:24:57.801668   72416 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0806 08:24:57.823084   72416 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0806 08:24:57.843221   72416 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
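Before the full init, a config like the kubeadm.yaml.new just written can be sanity-checked by running only the preflight phase against it. A sketch, assuming the v1.20 kubeadm binary found under /var/lib/minikube/binaries above (the file itself is copied to kubeadm.yaml before the real init later in the log):

# Sketch: run only kubeadm's preflight checks against the generated config.
sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" \
  kubeadm init phase preflight --config /var/tmp/minikube/kubeadm.yaml.new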
	I0806 08:24:57.862726   72416 ssh_runner.go:195] Run: grep 192.168.61.158	control-plane.minikube.internal$ /etc/hosts
	I0806 08:24:57.867185   72416 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.158	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0806 08:24:57.881822   72416 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 08:24:58.028740   72416 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0806 08:24:58.054168   72416 certs.go:68] Setting up /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/old-k8s-version-409903 for IP: 192.168.61.158
	I0806 08:24:58.054190   72416 certs.go:194] generating shared ca certs ...
	I0806 08:24:58.054208   72416 certs.go:226] acquiring lock for ca certs: {Name:mkf5c5aed91f4108054d9f2c742cad66c86afbed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 08:24:58.054373   72416 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19370-13057/.minikube/ca.key
	I0806 08:24:58.054441   72416 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.key
	I0806 08:24:58.054457   72416 certs.go:256] generating profile certs ...
	I0806 08:24:58.054523   72416 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/old-k8s-version-409903/client.key
	I0806 08:24:58.054544   72416 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/old-k8s-version-409903/client.crt with IP's: []
	I0806 08:24:58.185645   72416 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/old-k8s-version-409903/client.crt ...
	I0806 08:24:58.185682   72416 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/old-k8s-version-409903/client.crt: {Name:mk1a7416518268838f77dc54b42bf2aea434f411 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 08:24:58.192548   72416 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/old-k8s-version-409903/client.key ...
	I0806 08:24:58.192590   72416 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/old-k8s-version-409903/client.key: {Name:mk7cd792b515a150f0934dafac7a715a849ad02c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 08:24:58.192776   72416 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/old-k8s-version-409903/apiserver.key.1ffd6785
	I0806 08:24:58.192805   72416 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/old-k8s-version-409903/apiserver.crt.1ffd6785 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.158]
	I0806 08:24:58.833614   72416 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/old-k8s-version-409903/apiserver.crt.1ffd6785 ...
	I0806 08:24:58.833648   72416 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/old-k8s-version-409903/apiserver.crt.1ffd6785: {Name:mk10aefe48cb67152e38cca0c9c951e3990436fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 08:24:58.833855   72416 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/old-k8s-version-409903/apiserver.key.1ffd6785 ...
	I0806 08:24:58.833874   72416 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/old-k8s-version-409903/apiserver.key.1ffd6785: {Name:mk941bb328aa1ad9d8e2eee6a2de357139b24849 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 08:24:58.833981   72416 certs.go:381] copying /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/old-k8s-version-409903/apiserver.crt.1ffd6785 -> /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/old-k8s-version-409903/apiserver.crt
	I0806 08:24:58.834085   72416 certs.go:385] copying /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/old-k8s-version-409903/apiserver.key.1ffd6785 -> /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/old-k8s-version-409903/apiserver.key
	I0806 08:24:58.834179   72416 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/old-k8s-version-409903/proxy-client.key
	I0806 08:24:58.834201   72416 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/old-k8s-version-409903/proxy-client.crt with IP's: []
	I0806 08:24:59.040242   72416 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/old-k8s-version-409903/proxy-client.crt ...
	I0806 08:24:59.040272   72416 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/old-k8s-version-409903/proxy-client.crt: {Name:mk8f86627b31e41fd4bb5cd2a07d4575a6fb3abd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 08:24:59.040451   72416 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/old-k8s-version-409903/proxy-client.key ...
	I0806 08:24:59.040466   72416 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/old-k8s-version-409903/proxy-client.key: {Name:mk233104b5545293b0cbcbfcf0d4003b3b06ce0f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 08:24:59.040637   72416 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/20246.pem (1338 bytes)
	W0806 08:24:59.040672   72416 certs.go:480] ignoring /home/jenkins/minikube-integration/19370-13057/.minikube/certs/20246_empty.pem, impossibly tiny 0 bytes
	I0806 08:24:59.040682   72416 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca-key.pem (1675 bytes)
	I0806 08:24:59.040704   72416 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem (1078 bytes)
	I0806 08:24:59.040726   72416 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/cert.pem (1123 bytes)
	I0806 08:24:59.040754   72416 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/key.pem (1675 bytes)
	I0806 08:24:59.040813   72416 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem (1708 bytes)
	I0806 08:24:59.041430   72416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0806 08:24:59.092070   72416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0806 08:24:59.129792   72416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0806 08:24:59.174289   72416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0806 08:24:59.210239   72416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/old-k8s-version-409903/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0806 08:24:59.237377   72416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/old-k8s-version-409903/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0806 08:24:59.274746   72416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/old-k8s-version-409903/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0806 08:24:59.304420   72416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/old-k8s-version-409903/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0806 08:24:59.368351   72416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem --> /usr/share/ca-certificates/202462.pem (1708 bytes)
	I0806 08:24:59.399525   72416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0806 08:24:59.430300   72416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/certs/20246.pem --> /usr/share/ca-certificates/20246.pem (1338 bytes)
	I0806 08:24:59.459463   72416 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0806 08:24:59.481899   72416 ssh_runner.go:195] Run: openssl version
	I0806 08:24:59.489961   72416 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202462.pem && ln -fs /usr/share/ca-certificates/202462.pem /etc/ssl/certs/202462.pem"
	I0806 08:24:59.505491   72416 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202462.pem
	I0806 08:24:59.512093   72416 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  6 07:17 /usr/share/ca-certificates/202462.pem
	I0806 08:24:59.512170   72416 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202462.pem
	I0806 08:24:59.520543   72416 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202462.pem /etc/ssl/certs/3ec20f2e.0"
	I0806 08:24:59.536043   72416 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0806 08:24:59.552357   72416 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0806 08:24:59.558639   72416 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  6 07:07 /usr/share/ca-certificates/minikubeCA.pem
	I0806 08:24:59.558709   72416 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0806 08:24:59.566862   72416 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0806 08:24:59.579710   72416 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20246.pem && ln -fs /usr/share/ca-certificates/20246.pem /etc/ssl/certs/20246.pem"
	I0806 08:24:59.592727   72416 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20246.pem
	I0806 08:24:59.598338   72416 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  6 07:17 /usr/share/ca-certificates/20246.pem
	I0806 08:24:59.598404   72416 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20246.pem
	I0806 08:24:59.605631   72416 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20246.pem /etc/ssl/certs/51391683.0"
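The test/ln pairs above all follow the same pattern: OpenSSL's subject hash of the certificate becomes the symlink name under /etc/ssl/certs, which is how hash-based trust lookup finds it. A generic sketch of that pattern, using one of the certificates copied earlier:

# Sketch: link a CA certificate by its OpenSSL subject hash.
CERT=/usr/share/ca-certificates/minikubeCA.pem
HASH=$(openssl x509 -hash -noout -in "$CERT")   # b5213941 for this CA, matching the symlink in the log
sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"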
	I0806 08:24:59.617078   72416 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0806 08:24:59.622073   72416 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0806 08:24:59.622133   72416 kubeadm.go:392] StartCluster: {Name:old-k8s-version-409903 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-409903 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.158 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 08:24:59.622228   72416 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0806 08:24:59.622278   72416 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0806 08:24:59.666437   72416 cri.go:89] found id: ""
	I0806 08:24:59.666512   72416 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0806 08:24:59.677219   72416 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0806 08:24:59.687329   72416 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0806 08:24:59.697525   72416 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0806 08:24:59.697547   72416 kubeadm.go:157] found existing configuration files:
	
	I0806 08:24:59.697603   72416 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0806 08:24:59.707285   72416 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0806 08:24:59.707348   72416 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0806 08:24:59.717282   72416 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0806 08:24:59.726797   72416 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0806 08:24:59.726864   72416 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0806 08:24:59.739490   72416 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0806 08:24:59.751825   72416 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0806 08:24:59.751968   72416 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0806 08:24:59.764991   72416 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0806 08:24:59.777270   72416 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0806 08:24:59.777343   72416 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0806 08:24:59.794802   72416 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0806 08:25:00.023275   72416 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0806 08:25:00.023369   72416 kubeadm.go:310] [preflight] Running pre-flight checks
	I0806 08:25:00.228784   72416 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0806 08:25:00.228937   72416 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0806 08:25:00.229062   72416 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0806 08:25:00.488924   72416 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0806 08:25:00.491089   72416 out.go:204]   - Generating certificates and keys ...
	I0806 08:25:00.491245   72416 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0806 08:25:00.491377   72416 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0806 08:25:00.823912   72416 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0806 08:25:01.113986   72416 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0806 08:25:01.251927   72416 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0806 08:25:01.347526   72416 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0806 08:25:01.474726   72416 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0806 08:25:01.478492   72416 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-409903] and IPs [192.168.61.158 127.0.0.1 ::1]
	I0806 08:25:01.584955   72416 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0806 08:25:01.585182   72416 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-409903] and IPs [192.168.61.158 127.0.0.1 ::1]
	I0806 08:25:01.899141   72416 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0806 08:25:02.046312   72416 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0806 08:25:02.141712   72416 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0806 08:25:02.142105   72416 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0806 08:25:02.458392   72416 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0806 08:25:02.659260   72416 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0806 08:25:02.948558   72416 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0806 08:25:03.195172   72416 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0806 08:25:03.213147   72416 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0806 08:25:03.216646   72416 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0806 08:25:03.216849   72416 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0806 08:25:03.361413   72416 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0806 08:25:03.363721   72416 out.go:204]   - Booting up control plane ...
	I0806 08:25:03.363878   72416 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0806 08:25:03.370328   72416 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0806 08:25:03.371590   72416 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0806 08:25:03.372747   72416 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0806 08:25:03.378815   72416 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0806 08:25:43.371496   72416 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0806 08:25:43.372214   72416 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0806 08:25:43.372526   72416 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0806 08:25:48.372566   72416 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0806 08:25:48.372758   72416 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0806 08:25:58.372033   72416 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0806 08:25:58.372305   72416 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0806 08:26:18.371754   72416 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0806 08:26:18.371955   72416 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0806 08:26:58.373574   72416 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0806 08:26:58.373826   72416 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0806 08:26:58.373849   72416 kubeadm.go:310] 
	I0806 08:26:58.373890   72416 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0806 08:26:58.373924   72416 kubeadm.go:310] 		timed out waiting for the condition
	I0806 08:26:58.373930   72416 kubeadm.go:310] 
	I0806 08:26:58.373959   72416 kubeadm.go:310] 	This error is likely caused by:
	I0806 08:26:58.373987   72416 kubeadm.go:310] 		- The kubelet is not running
	I0806 08:26:58.374100   72416 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0806 08:26:58.374124   72416 kubeadm.go:310] 
	I0806 08:26:58.374249   72416 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0806 08:26:58.374314   72416 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0806 08:26:58.374357   72416 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0806 08:26:58.374369   72416 kubeadm.go:310] 
	I0806 08:26:58.374500   72416 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0806 08:26:58.374619   72416 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0806 08:26:58.374630   72416 kubeadm.go:310] 
	I0806 08:26:58.374761   72416 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0806 08:26:58.374856   72416 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0806 08:26:58.374978   72416 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0806 08:26:58.375077   72416 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0806 08:26:58.375087   72416 kubeadm.go:310] 
	I0806 08:26:58.375331   72416 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0806 08:26:58.375456   72416 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0806 08:26:58.375666   72416 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0806 08:26:58.375702   72416 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-409903] and IPs [192.168.61.158 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-409903] and IPs [192.168.61.158 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
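	The repeated kubelet-check failures above mean kubeadm never got an answer from the kubelet health endpoint on 127.0.0.1:10248, so the control-plane static pods were never reported as started. On a CRI-O node, one misconfiguration consistent with the "required cgroups disabled" hint is a cgroup-driver mismatch between CRI-O and the kubelet; a minimal check, assuming the default config locations (these paths are not confirmed by this log):
	
		# ask the kubelet directly
		curl -sSL http://localhost:10248/healthz
		# compare the cgroup driver on each side (default paths assumed)
		sudo grep -i cgroup_manager /etc/crio/crio.conf
		sudo grep -i cgroupDriver /var/lib/kubelet/config.yaml
		# unit status and recent kubelet logs, as kubeadm suggests
		sudo systemctl status kubelet
		sudo journalctl -xeu kubelet --no-pager | tail -n 50
	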
	
	I0806 08:26:58.375769   72416 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0806 08:26:59.898127   72416 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.522331427s)
	I0806 08:26:59.898217   72416 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0806 08:26:59.912809   72416 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0806 08:26:59.923234   72416 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0806 08:26:59.923251   72416 kubeadm.go:157] found existing configuration files:
	
	I0806 08:26:59.923292   72416 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0806 08:26:59.932804   72416 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0806 08:26:59.932890   72416 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0806 08:26:59.945422   72416 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0806 08:26:59.954979   72416 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0806 08:26:59.955035   72416 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0806 08:26:59.965000   72416 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0806 08:26:59.974421   72416 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0806 08:26:59.974485   72416 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0806 08:26:59.986664   72416 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0806 08:26:59.996100   72416 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0806 08:26:59.996168   72416 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0806 08:27:00.006039   72416 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0806 08:27:00.081780   72416 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0806 08:27:00.081878   72416 kubeadm.go:310] [preflight] Running pre-flight checks
	I0806 08:27:00.232662   72416 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0806 08:27:00.232830   72416 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0806 08:27:00.232960   72416 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0806 08:27:00.439599   72416 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0806 08:27:00.442511   72416 out.go:204]   - Generating certificates and keys ...
	I0806 08:27:00.442613   72416 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0806 08:27:00.442703   72416 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0806 08:27:00.442836   72416 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0806 08:27:00.442942   72416 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0806 08:27:00.443033   72416 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0806 08:27:00.443104   72416 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0806 08:27:00.443198   72416 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0806 08:27:00.443278   72416 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0806 08:27:00.443399   72416 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0806 08:27:00.443508   72416 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0806 08:27:00.443566   72416 kubeadm.go:310] [certs] Using the existing "sa" key
	I0806 08:27:00.443649   72416 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0806 08:27:00.524224   72416 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0806 08:27:00.576475   72416 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0806 08:27:00.867376   72416 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0806 08:27:00.994405   72416 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0806 08:27:01.008418   72416 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0806 08:27:01.009497   72416 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0806 08:27:01.009574   72416 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0806 08:27:01.144927   72416 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0806 08:27:01.146783   72416 out.go:204]   - Booting up control plane ...
	I0806 08:27:01.146886   72416 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0806 08:27:01.153257   72416 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0806 08:27:01.154307   72416 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0806 08:27:01.155185   72416 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0806 08:27:01.157742   72416 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0806 08:27:41.160448   72416 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0806 08:27:41.160818   72416 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0806 08:27:41.161192   72416 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0806 08:27:46.161831   72416 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0806 08:27:46.162130   72416 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0806 08:27:56.162861   72416 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0806 08:27:56.163061   72416 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0806 08:28:16.162345   72416 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0806 08:28:16.162588   72416 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0806 08:28:56.162714   72416 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0806 08:28:56.162950   72416 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0806 08:28:56.162974   72416 kubeadm.go:310] 
	I0806 08:28:56.163025   72416 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0806 08:28:56.163079   72416 kubeadm.go:310] 		timed out waiting for the condition
	I0806 08:28:56.163088   72416 kubeadm.go:310] 
	I0806 08:28:56.163138   72416 kubeadm.go:310] 	This error is likely caused by:
	I0806 08:28:56.163191   72416 kubeadm.go:310] 		- The kubelet is not running
	I0806 08:28:56.163320   72416 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0806 08:28:56.163328   72416 kubeadm.go:310] 
	I0806 08:28:56.163477   72416 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0806 08:28:56.163534   72416 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0806 08:28:56.163582   72416 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0806 08:28:56.163592   72416 kubeadm.go:310] 
	I0806 08:28:56.163719   72416 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0806 08:28:56.163839   72416 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0806 08:28:56.163864   72416 kubeadm.go:310] 
	I0806 08:28:56.164011   72416 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0806 08:28:56.164124   72416 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0806 08:28:56.164224   72416 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0806 08:28:56.164340   72416 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0806 08:28:56.164384   72416 kubeadm.go:310] 
	I0806 08:28:56.164715   72416 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0806 08:28:56.164795   72416 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0806 08:28:56.164858   72416 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0806 08:28:56.164919   72416 kubeadm.go:394] duration metric: took 3m56.542792306s to StartCluster
	I0806 08:28:56.164958   72416 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:28:56.165015   72416 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:28:56.207166   72416 cri.go:89] found id: ""
	I0806 08:28:56.207197   72416 logs.go:276] 0 containers: []
	W0806 08:28:56.207208   72416 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:28:56.207216   72416 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:28:56.207272   72416 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:28:56.246036   72416 cri.go:89] found id: ""
	I0806 08:28:56.246062   72416 logs.go:276] 0 containers: []
	W0806 08:28:56.246071   72416 logs.go:278] No container was found matching "etcd"
	I0806 08:28:56.246076   72416 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:28:56.246134   72416 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:28:56.286498   72416 cri.go:89] found id: ""
	I0806 08:28:56.286526   72416 logs.go:276] 0 containers: []
	W0806 08:28:56.286535   72416 logs.go:278] No container was found matching "coredns"
	I0806 08:28:56.286541   72416 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:28:56.286593   72416 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:28:56.321652   72416 cri.go:89] found id: ""
	I0806 08:28:56.321678   72416 logs.go:276] 0 containers: []
	W0806 08:28:56.321686   72416 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:28:56.321692   72416 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:28:56.321739   72416 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:28:56.357514   72416 cri.go:89] found id: ""
	I0806 08:28:56.357547   72416 logs.go:276] 0 containers: []
	W0806 08:28:56.357558   72416 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:28:56.357565   72416 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:28:56.357628   72416 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:28:56.392930   72416 cri.go:89] found id: ""
	I0806 08:28:56.392953   72416 logs.go:276] 0 containers: []
	W0806 08:28:56.392961   72416 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:28:56.392967   72416 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:28:56.393022   72416 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:28:56.428739   72416 cri.go:89] found id: ""
	I0806 08:28:56.428767   72416 logs.go:276] 0 containers: []
	W0806 08:28:56.428776   72416 logs.go:278] No container was found matching "kindnet"
	I0806 08:28:56.428785   72416 logs.go:123] Gathering logs for kubelet ...
	I0806 08:28:56.428814   72416 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:28:56.482110   72416 logs.go:123] Gathering logs for dmesg ...
	I0806 08:28:56.482150   72416 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:28:56.495885   72416 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:28:56.495915   72416 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:28:56.608424   72416 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:28:56.608448   72416 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:28:56.608463   72416 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:28:56.708998   72416 logs.go:123] Gathering logs for container status ...
	I0806 08:28:56.709024   72416 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
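	The log-gathering pass above can be reproduced by hand on the node with the same commands minikube runs over SSH, collected here for convenience (all taken verbatim from the log):
	
		sudo journalctl -u kubelet -n 400
		sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
		sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
		sudo journalctl -u crio -n 400
		sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
	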
	W0806 08:28:56.766746   72416 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0806 08:28:56.766785   72416 out.go:239] * 
	W0806 08:28:56.766838   72416 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0806 08:28:56.766869   72416 out.go:239] * 
	W0806 08:28:56.767932   72416 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
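	For the issue report the box asks for, the log bundle can be produced against this profile (the profile name old-k8s-version-409903 is taken from the log; the flags shown are standard minikube flags, given here as a sketch rather than the exact command the test harness runs):
	
		minikube logs --file=logs.txt -p old-k8s-version-409903
		# optionally re-run the failing start with verbose client logging
		minikube start -p old-k8s-version-409903 --alsologtostderr -v=7
	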
	I0806 08:28:56.771420   72416 out.go:177] 
	W0806 08:28:56.772748   72416 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0806 08:28:56.772818   72416 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0806 08:28:56.772844   72416 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0806 08:28:56.774226   72416 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-409903 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-409903 -n old-k8s-version-409903
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-409903 -n old-k8s-version-409903: exit status 6 (210.129032ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0806 08:28:57.035897   78963 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-409903" does not appear in /home/jenkins/minikube-integration/19370-13057/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-409903" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (283.66s)
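
Follow-up sketch for the K8S_KUBELET_NOT_RUNNING failure above, assembled only from the suggestions already printed in the log (the profile name, runtime socket, and flags are taken from this run; whether they actually resolve the failure has not been verified here):

	# inspect the kubelet and control-plane containers inside the node, as the kubeadm output suggests
	out/minikube-linux-amd64 -p old-k8s-version-409903 ssh "sudo journalctl -xeu kubelet"
	out/minikube-linux-amd64 -p old-k8s-version-409903 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"
	# retry the start with the cgroup-driver override suggested by the failure message
	out/minikube-linux-amd64 start -p old-k8s-version-409903 --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd
	# repair the stale kubectl context reported by the post-mortem status check
	out/minikube-linux-amd64 update-context -p old-k8s-version-409903
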

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (139.03s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-703340 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p no-preload-703340 --alsologtostderr -v=3: exit status 82 (2m0.56657035s)

                                                
                                                
-- stdout --
	* Stopping node "no-preload-703340"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0806 08:26:25.254786   77906 out.go:291] Setting OutFile to fd 1 ...
	I0806 08:26:25.254927   77906 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 08:26:25.254938   77906 out.go:304] Setting ErrFile to fd 2...
	I0806 08:26:25.254943   77906 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 08:26:25.255161   77906 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19370-13057/.minikube/bin
	I0806 08:26:25.255583   77906 out.go:298] Setting JSON to false
	I0806 08:26:25.255713   77906 mustload.go:65] Loading cluster: no-preload-703340
	I0806 08:26:25.256214   77906 config.go:182] Loaded profile config "no-preload-703340": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-rc.0
	I0806 08:26:25.256322   77906 profile.go:143] Saving config to /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/no-preload-703340/config.json ...
	I0806 08:26:25.256589   77906 mustload.go:65] Loading cluster: no-preload-703340
	I0806 08:26:25.256747   77906 config.go:182] Loaded profile config "no-preload-703340": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-rc.0
	I0806 08:26:25.256790   77906 stop.go:39] StopHost: no-preload-703340
	I0806 08:26:25.257230   77906 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:26:25.257278   77906 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:26:25.273709   77906 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38395
	I0806 08:26:25.274273   77906 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:26:25.274873   77906 main.go:141] libmachine: Using API Version  1
	I0806 08:26:25.274904   77906 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:26:25.275419   77906 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:26:25.278231   77906 out.go:177] * Stopping node "no-preload-703340"  ...
	I0806 08:26:25.279904   77906 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0806 08:26:25.279994   77906 main.go:141] libmachine: (no-preload-703340) Calling .DriverName
	I0806 08:26:25.280347   77906 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0806 08:26:25.280420   77906 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHHostname
	I0806 08:26:25.283653   77906 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:26:25.284011   77906 main.go:141] libmachine: (no-preload-703340) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fa:d3", ip: ""} in network mk-no-preload-703340: {Iface:virbr4 ExpiryTime:2024-08-06 09:25:11 +0000 UTC Type:0 Mac:52:54:00:a1:fa:d3 Iaid: IPaddr:192.168.72.126 Prefix:24 Hostname:no-preload-703340 Clientid:01:52:54:00:a1:fa:d3}
	I0806 08:26:25.284068   77906 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined IP address 192.168.72.126 and MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:26:25.284328   77906 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHPort
	I0806 08:26:25.284578   77906 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHKeyPath
	I0806 08:26:25.284772   77906 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHUsername
	I0806 08:26:25.284984   77906 sshutil.go:53] new ssh client: &{IP:192.168.72.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/no-preload-703340/id_rsa Username:docker}
	I0806 08:26:25.422363   77906 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0806 08:26:25.487040   77906 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0806 08:26:25.551860   77906 main.go:141] libmachine: Stopping "no-preload-703340"...
	I0806 08:26:25.551901   77906 main.go:141] libmachine: (no-preload-703340) Calling .GetState
	I0806 08:26:25.553696   77906 main.go:141] libmachine: (no-preload-703340) Calling .Stop
	I0806 08:26:25.557830   77906 main.go:141] libmachine: (no-preload-703340) Waiting for machine to stop 0/120
	I0806 08:26:26.559260   77906 main.go:141] libmachine: (no-preload-703340) Waiting for machine to stop 1/120
	I0806 08:26:27.561298   77906 main.go:141] libmachine: (no-preload-703340) Waiting for machine to stop 2/120
	I0806 08:26:28.562729   77906 main.go:141] libmachine: (no-preload-703340) Waiting for machine to stop 3/120
	I0806 08:26:29.564473   77906 main.go:141] libmachine: (no-preload-703340) Waiting for machine to stop 4/120
	I0806 08:26:30.566565   77906 main.go:141] libmachine: (no-preload-703340) Waiting for machine to stop 5/120
	I0806 08:26:31.568896   77906 main.go:141] libmachine: (no-preload-703340) Waiting for machine to stop 6/120
	I0806 08:26:32.570916   77906 main.go:141] libmachine: (no-preload-703340) Waiting for machine to stop 7/120
	I0806 08:26:33.572266   77906 main.go:141] libmachine: (no-preload-703340) Waiting for machine to stop 8/120
	I0806 08:26:34.573854   77906 main.go:141] libmachine: (no-preload-703340) Waiting for machine to stop 9/120
	I0806 08:26:35.576288   77906 main.go:141] libmachine: (no-preload-703340) Waiting for machine to stop 10/120
	I0806 08:26:36.577883   77906 main.go:141] libmachine: (no-preload-703340) Waiting for machine to stop 11/120
	I0806 08:26:37.579446   77906 main.go:141] libmachine: (no-preload-703340) Waiting for machine to stop 12/120
	I0806 08:26:38.581029   77906 main.go:141] libmachine: (no-preload-703340) Waiting for machine to stop 13/120
	I0806 08:26:39.582608   77906 main.go:141] libmachine: (no-preload-703340) Waiting for machine to stop 14/120
	I0806 08:26:40.584069   77906 main.go:141] libmachine: (no-preload-703340) Waiting for machine to stop 15/120
	I0806 08:26:41.585533   77906 main.go:141] libmachine: (no-preload-703340) Waiting for machine to stop 16/120
	I0806 08:26:42.587284   77906 main.go:141] libmachine: (no-preload-703340) Waiting for machine to stop 17/120
	I0806 08:26:43.588741   77906 main.go:141] libmachine: (no-preload-703340) Waiting for machine to stop 18/120
	I0806 08:26:44.590352   77906 main.go:141] libmachine: (no-preload-703340) Waiting for machine to stop 19/120
	I0806 08:26:45.592712   77906 main.go:141] libmachine: (no-preload-703340) Waiting for machine to stop 20/120
	I0806 08:26:46.594764   77906 main.go:141] libmachine: (no-preload-703340) Waiting for machine to stop 21/120
	I0806 08:26:47.596193   77906 main.go:141] libmachine: (no-preload-703340) Waiting for machine to stop 22/120
	I0806 08:26:48.597871   77906 main.go:141] libmachine: (no-preload-703340) Waiting for machine to stop 23/120
	I0806 08:26:49.599066   77906 main.go:141] libmachine: (no-preload-703340) Waiting for machine to stop 24/120
	I0806 08:26:50.600945   77906 main.go:141] libmachine: (no-preload-703340) Waiting for machine to stop 25/120
	I0806 08:26:51.602418   77906 main.go:141] libmachine: (no-preload-703340) Waiting for machine to stop 26/120
	I0806 08:26:52.604147   77906 main.go:141] libmachine: (no-preload-703340) Waiting for machine to stop 27/120
	I0806 08:26:53.605730   77906 main.go:141] libmachine: (no-preload-703340) Waiting for machine to stop 28/120
	I0806 08:26:54.607138   77906 main.go:141] libmachine: (no-preload-703340) Waiting for machine to stop 29/120
	I0806 08:26:55.609219   77906 main.go:141] libmachine: (no-preload-703340) Waiting for machine to stop 30/120
	I0806 08:26:56.610526   77906 main.go:141] libmachine: (no-preload-703340) Waiting for machine to stop 31/120
	I0806 08:26:57.611904   77906 main.go:141] libmachine: (no-preload-703340) Waiting for machine to stop 32/120
	I0806 08:26:58.613307   77906 main.go:141] libmachine: (no-preload-703340) Waiting for machine to stop 33/120
	I0806 08:26:59.614862   77906 main.go:141] libmachine: (no-preload-703340) Waiting for machine to stop 34/120
	I0806 08:27:00.616860   77906 main.go:141] libmachine: (no-preload-703340) Waiting for machine to stop 35/120
	I0806 08:27:01.618298   77906 main.go:141] libmachine: (no-preload-703340) Waiting for machine to stop 36/120
	I0806 08:27:02.620184   77906 main.go:141] libmachine: (no-preload-703340) Waiting for machine to stop 37/120
	I0806 08:27:03.621651   77906 main.go:141] libmachine: (no-preload-703340) Waiting for machine to stop 38/120
	I0806 08:27:04.623278   77906 main.go:141] libmachine: (no-preload-703340) Waiting for machine to stop 39/120
	I0806 08:27:05.625234   77906 main.go:141] libmachine: (no-preload-703340) Waiting for machine to stop 40/120
	I0806 08:27:06.626569   77906 main.go:141] libmachine: (no-preload-703340) Waiting for machine to stop 41/120
	I0806 08:27:07.627772   77906 main.go:141] libmachine: (no-preload-703340) Waiting for machine to stop 42/120
	I0806 08:27:08.629272   77906 main.go:141] libmachine: (no-preload-703340) Waiting for machine to stop 43/120
	I0806 08:27:09.631436   77906 main.go:141] libmachine: (no-preload-703340) Waiting for machine to stop 44/120
	I0806 08:27:10.633073   77906 main.go:141] libmachine: (no-preload-703340) Waiting for machine to stop 45/120
	I0806 08:27:11.634438   77906 main.go:141] libmachine: (no-preload-703340) Waiting for machine to stop 46/120
	I0806 08:27:12.636021   77906 main.go:141] libmachine: (no-preload-703340) Waiting for machine to stop 47/120
	I0806 08:27:13.637533   77906 main.go:141] libmachine: (no-preload-703340) Waiting for machine to stop 48/120
	I0806 08:27:14.638968   77906 main.go:141] libmachine: (no-preload-703340) Waiting for machine to stop 49/120
	I0806 08:27:15.641196   77906 main.go:141] libmachine: (no-preload-703340) Waiting for machine to stop 50/120
	I0806 08:27:16.642963   77906 main.go:141] libmachine: (no-preload-703340) Waiting for machine to stop 51/120
	I0806 08:27:17.644538   77906 main.go:141] libmachine: (no-preload-703340) Waiting for machine to stop 52/120
	I0806 08:27:18.646229   77906 main.go:141] libmachine: (no-preload-703340) Waiting for machine to stop 53/120
	I0806 08:27:19.648219   77906 main.go:141] libmachine: (no-preload-703340) Waiting for machine to stop 54/120
	I0806 08:27:20.650442   77906 main.go:141] libmachine: (no-preload-703340) Waiting for machine to stop 55/120
	I0806 08:27:21.651983   77906 main.go:141] libmachine: (no-preload-703340) Waiting for machine to stop 56/120
	I0806 08:27:22.653594   77906 main.go:141] libmachine: (no-preload-703340) Waiting for machine to stop 57/120
	I0806 08:27:23.655272   77906 main.go:141] libmachine: (no-preload-703340) Waiting for machine to stop 58/120
	I0806 08:27:24.657692   77906 main.go:141] libmachine: (no-preload-703340) Waiting for machine to stop 59/120
	I0806 08:27:25.659960   77906 main.go:141] libmachine: (no-preload-703340) Waiting for machine to stop 60/120
	I0806 08:27:26.661429   77906 main.go:141] libmachine: (no-preload-703340) Waiting for machine to stop 61/120
	I0806 08:27:27.662939   77906 main.go:141] libmachine: (no-preload-703340) Waiting for machine to stop 62/120
	I0806 08:27:28.664302   77906 main.go:141] libmachine: (no-preload-703340) Waiting for machine to stop 63/120
	I0806 08:27:29.665752   77906 main.go:141] libmachine: (no-preload-703340) Waiting for machine to stop 64/120
	I0806 08:27:30.668145   77906 main.go:141] libmachine: (no-preload-703340) Waiting for machine to stop 65/120
	I0806 08:27:31.669900   77906 main.go:141] libmachine: (no-preload-703340) Waiting for machine to stop 66/120
	I0806 08:27:32.671321   77906 main.go:141] libmachine: (no-preload-703340) Waiting for machine to stop 67/120
	I0806 08:27:33.673445   77906 main.go:141] libmachine: (no-preload-703340) Waiting for machine to stop 68/120
	I0806 08:27:34.674722   77906 main.go:141] libmachine: (no-preload-703340) Waiting for machine to stop 69/120
	I0806 08:27:35.676985   77906 main.go:141] libmachine: (no-preload-703340) Waiting for machine to stop 70/120
	I0806 08:27:36.678475   77906 main.go:141] libmachine: (no-preload-703340) Waiting for machine to stop 71/120
	I0806 08:27:37.680009   77906 main.go:141] libmachine: (no-preload-703340) Waiting for machine to stop 72/120
	I0806 08:27:38.681468   77906 main.go:141] libmachine: (no-preload-703340) Waiting for machine to stop 73/120
	I0806 08:27:39.682803   77906 main.go:141] libmachine: (no-preload-703340) Waiting for machine to stop 74/120
	I0806 08:27:40.685017   77906 main.go:141] libmachine: (no-preload-703340) Waiting for machine to stop 75/120
	I0806 08:27:41.687007   77906 main.go:141] libmachine: (no-preload-703340) Waiting for machine to stop 76/120
	I0806 08:27:42.688684   77906 main.go:141] libmachine: (no-preload-703340) Waiting for machine to stop 77/120
	I0806 08:27:43.690413   77906 main.go:141] libmachine: (no-preload-703340) Waiting for machine to stop 78/120
	I0806 08:27:44.691988   77906 main.go:141] libmachine: (no-preload-703340) Waiting for machine to stop 79/120
	I0806 08:27:45.694147   77906 main.go:141] libmachine: (no-preload-703340) Waiting for machine to stop 80/120
	I0806 08:27:46.695519   77906 main.go:141] libmachine: (no-preload-703340) Waiting for machine to stop 81/120
	I0806 08:27:47.696917   77906 main.go:141] libmachine: (no-preload-703340) Waiting for machine to stop 82/120
	I0806 08:27:48.698397   77906 main.go:141] libmachine: (no-preload-703340) Waiting for machine to stop 83/120
	I0806 08:27:49.699871   77906 main.go:141] libmachine: (no-preload-703340) Waiting for machine to stop 84/120
	I0806 08:27:50.701936   77906 main.go:141] libmachine: (no-preload-703340) Waiting for machine to stop 85/120
	I0806 08:27:51.703374   77906 main.go:141] libmachine: (no-preload-703340) Waiting for machine to stop 86/120
	I0806 08:27:52.704581   77906 main.go:141] libmachine: (no-preload-703340) Waiting for machine to stop 87/120
	I0806 08:27:53.705942   77906 main.go:141] libmachine: (no-preload-703340) Waiting for machine to stop 88/120
	I0806 08:27:54.707276   77906 main.go:141] libmachine: (no-preload-703340) Waiting for machine to stop 89/120
	I0806 08:27:55.709614   77906 main.go:141] libmachine: (no-preload-703340) Waiting for machine to stop 90/120
	I0806 08:27:56.711033   77906 main.go:141] libmachine: (no-preload-703340) Waiting for machine to stop 91/120
	I0806 08:27:57.712421   77906 main.go:141] libmachine: (no-preload-703340) Waiting for machine to stop 92/120
	I0806 08:27:58.713899   77906 main.go:141] libmachine: (no-preload-703340) Waiting for machine to stop 93/120
	I0806 08:27:59.715235   77906 main.go:141] libmachine: (no-preload-703340) Waiting for machine to stop 94/120
	I0806 08:28:00.717353   77906 main.go:141] libmachine: (no-preload-703340) Waiting for machine to stop 95/120
	I0806 08:28:01.718841   77906 main.go:141] libmachine: (no-preload-703340) Waiting for machine to stop 96/120
	I0806 08:28:02.720261   77906 main.go:141] libmachine: (no-preload-703340) Waiting for machine to stop 97/120
	I0806 08:28:03.721961   77906 main.go:141] libmachine: (no-preload-703340) Waiting for machine to stop 98/120
	I0806 08:28:04.723572   77906 main.go:141] libmachine: (no-preload-703340) Waiting for machine to stop 99/120
	I0806 08:28:05.725595   77906 main.go:141] libmachine: (no-preload-703340) Waiting for machine to stop 100/120
	I0806 08:28:06.727072   77906 main.go:141] libmachine: (no-preload-703340) Waiting for machine to stop 101/120
	I0806 08:28:07.728691   77906 main.go:141] libmachine: (no-preload-703340) Waiting for machine to stop 102/120
	I0806 08:28:08.730165   77906 main.go:141] libmachine: (no-preload-703340) Waiting for machine to stop 103/120
	I0806 08:28:09.731730   77906 main.go:141] libmachine: (no-preload-703340) Waiting for machine to stop 104/120
	I0806 08:28:10.733626   77906 main.go:141] libmachine: (no-preload-703340) Waiting for machine to stop 105/120
	I0806 08:28:11.735223   77906 main.go:141] libmachine: (no-preload-703340) Waiting for machine to stop 106/120
	I0806 08:28:12.736663   77906 main.go:141] libmachine: (no-preload-703340) Waiting for machine to stop 107/120
	I0806 08:28:13.738259   77906 main.go:141] libmachine: (no-preload-703340) Waiting for machine to stop 108/120
	I0806 08:28:14.739870   77906 main.go:141] libmachine: (no-preload-703340) Waiting for machine to stop 109/120
	I0806 08:28:15.742297   77906 main.go:141] libmachine: (no-preload-703340) Waiting for machine to stop 110/120
	I0806 08:28:16.744195   77906 main.go:141] libmachine: (no-preload-703340) Waiting for machine to stop 111/120
	I0806 08:28:17.745709   77906 main.go:141] libmachine: (no-preload-703340) Waiting for machine to stop 112/120
	I0806 08:28:18.747176   77906 main.go:141] libmachine: (no-preload-703340) Waiting for machine to stop 113/120
	I0806 08:28:19.748839   77906 main.go:141] libmachine: (no-preload-703340) Waiting for machine to stop 114/120
	I0806 08:28:20.751017   77906 main.go:141] libmachine: (no-preload-703340) Waiting for machine to stop 115/120
	I0806 08:28:21.752509   77906 main.go:141] libmachine: (no-preload-703340) Waiting for machine to stop 116/120
	I0806 08:28:22.754032   77906 main.go:141] libmachine: (no-preload-703340) Waiting for machine to stop 117/120
	I0806 08:28:23.755489   77906 main.go:141] libmachine: (no-preload-703340) Waiting for machine to stop 118/120
	I0806 08:28:24.757039   77906 main.go:141] libmachine: (no-preload-703340) Waiting for machine to stop 119/120
	I0806 08:28:25.758575   77906 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0806 08:28:25.758643   77906 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0806 08:28:25.760520   77906 out.go:177] 
	W0806 08:28:25.761929   77906 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0806 08:28:25.761952   77906 out.go:239] * 
	W0806 08:28:25.764786   77906 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0806 08:28:25.766152   77906 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p no-preload-703340 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-703340 -n no-preload-703340
E0806 08:28:28.161406   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/calico-405163/client.crt: no such file or directory
E0806 08:28:33.282001   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/calico-405163/client.crt: no such file or directory
E0806 08:28:41.345726   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/kindnet-405163/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-703340 -n no-preload-703340: exit status 3 (18.461134658s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0806 08:28:44.228689   78666 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.126:22: connect: no route to host
	E0806 08:28:44.228709   78666 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.126:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-703340" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (139.03s)
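
The stop failure above is minikube waiting 120 one-second intervals for the libvirt domain to leave the "Running" state before giving up with GUEST_STOP_TIMEOUT. A possible manual follow-up, sketched here on the assumption that the kvm2 driver's libvirt domain carries the same name as the profile (not verified against this run):

	sudo virsh list --all                     # confirm the domain and its reported state
	sudo virsh shutdown no-preload-703340     # ask the guest for a graceful ACPI shutdown
	sudo virsh destroy no-preload-703340      # hard power-off if the shutdown also hangs
	out/minikube-linux-amd64 logs --file=logs.txt -p no-preload-703340   # collect the logs the failure message asks for
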

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (139.09s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-324808 --alsologtostderr -v=3
E0806 08:27:19.424593   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/kindnet-405163/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p embed-certs-324808 --alsologtostderr -v=3: exit status 82 (2m0.493196743s)

                                                
                                                
-- stdout --
	* Stopping node "embed-certs-324808"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0806 08:26:42.844398   78126 out.go:291] Setting OutFile to fd 1 ...
	I0806 08:26:42.844534   78126 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 08:26:42.844544   78126 out.go:304] Setting ErrFile to fd 2...
	I0806 08:26:42.844548   78126 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 08:26:42.844750   78126 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19370-13057/.minikube/bin
	I0806 08:26:42.844985   78126 out.go:298] Setting JSON to false
	I0806 08:26:42.845060   78126 mustload.go:65] Loading cluster: embed-certs-324808
	I0806 08:26:42.845373   78126 config.go:182] Loaded profile config "embed-certs-324808": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0806 08:26:42.845435   78126 profile.go:143] Saving config to /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/embed-certs-324808/config.json ...
	I0806 08:26:42.845615   78126 mustload.go:65] Loading cluster: embed-certs-324808
	I0806 08:26:42.845713   78126 config.go:182] Loaded profile config "embed-certs-324808": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0806 08:26:42.845739   78126 stop.go:39] StopHost: embed-certs-324808
	I0806 08:26:42.846118   78126 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:26:42.846159   78126 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:26:42.860954   78126 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33581
	I0806 08:26:42.861384   78126 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:26:42.861894   78126 main.go:141] libmachine: Using API Version  1
	I0806 08:26:42.861918   78126 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:26:42.862248   78126 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:26:42.864597   78126 out.go:177] * Stopping node "embed-certs-324808"  ...
	I0806 08:26:42.865945   78126 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0806 08:26:42.865982   78126 main.go:141] libmachine: (embed-certs-324808) Calling .DriverName
	I0806 08:26:42.866272   78126 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0806 08:26:42.866294   78126 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHHostname
	I0806 08:26:42.869406   78126 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:26:42.869759   78126 main.go:141] libmachine: (embed-certs-324808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:df:8c", ip: ""} in network mk-embed-certs-324808: {Iface:virbr2 ExpiryTime:2024-08-06 09:25:49 +0000 UTC Type:0 Mac:52:54:00:b9:df:8c Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-324808 Clientid:01:52:54:00:b9:df:8c}
	I0806 08:26:42.869785   78126 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined IP address 192.168.39.190 and MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:26:42.869961   78126 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHPort
	I0806 08:26:42.870158   78126 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHKeyPath
	I0806 08:26:42.870353   78126 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHUsername
	I0806 08:26:42.870519   78126 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/embed-certs-324808/id_rsa Username:docker}
	I0806 08:26:42.970689   78126 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0806 08:26:43.034594   78126 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0806 08:26:43.080466   78126 main.go:141] libmachine: Stopping "embed-certs-324808"...
	I0806 08:26:43.080508   78126 main.go:141] libmachine: (embed-certs-324808) Calling .GetState
	I0806 08:26:43.082219   78126 main.go:141] libmachine: (embed-certs-324808) Calling .Stop
	I0806 08:26:43.085937   78126 main.go:141] libmachine: (embed-certs-324808) Waiting for machine to stop 0/120
	I0806 08:26:44.087381   78126 main.go:141] libmachine: (embed-certs-324808) Waiting for machine to stop 1/120
	I0806 08:26:45.088808   78126 main.go:141] libmachine: (embed-certs-324808) Waiting for machine to stop 2/120
	I0806 08:26:46.090815   78126 main.go:141] libmachine: (embed-certs-324808) Waiting for machine to stop 3/120
	I0806 08:26:47.092211   78126 main.go:141] libmachine: (embed-certs-324808) Waiting for machine to stop 4/120
	I0806 08:26:48.093686   78126 main.go:141] libmachine: (embed-certs-324808) Waiting for machine to stop 5/120
	I0806 08:26:49.095233   78126 main.go:141] libmachine: (embed-certs-324808) Waiting for machine to stop 6/120
	I0806 08:26:50.096673   78126 main.go:141] libmachine: (embed-certs-324808) Waiting for machine to stop 7/120
	I0806 08:26:51.098310   78126 main.go:141] libmachine: (embed-certs-324808) Waiting for machine to stop 8/120
	I0806 08:26:52.099523   78126 main.go:141] libmachine: (embed-certs-324808) Waiting for machine to stop 9/120
	I0806 08:26:53.102012   78126 main.go:141] libmachine: (embed-certs-324808) Waiting for machine to stop 10/120
	I0806 08:26:54.103648   78126 main.go:141] libmachine: (embed-certs-324808) Waiting for machine to stop 11/120
	I0806 08:26:55.105080   78126 main.go:141] libmachine: (embed-certs-324808) Waiting for machine to stop 12/120
	I0806 08:26:56.107389   78126 main.go:141] libmachine: (embed-certs-324808) Waiting for machine to stop 13/120
	I0806 08:26:57.109031   78126 main.go:141] libmachine: (embed-certs-324808) Waiting for machine to stop 14/120
	I0806 08:26:58.110684   78126 main.go:141] libmachine: (embed-certs-324808) Waiting for machine to stop 15/120
	I0806 08:26:59.112139   78126 main.go:141] libmachine: (embed-certs-324808) Waiting for machine to stop 16/120
	I0806 08:27:00.113690   78126 main.go:141] libmachine: (embed-certs-324808) Waiting for machine to stop 17/120
	I0806 08:27:01.115242   78126 main.go:141] libmachine: (embed-certs-324808) Waiting for machine to stop 18/120
	I0806 08:27:02.116730   78126 main.go:141] libmachine: (embed-certs-324808) Waiting for machine to stop 19/120
	I0806 08:27:03.118852   78126 main.go:141] libmachine: (embed-certs-324808) Waiting for machine to stop 20/120
	I0806 08:27:04.120220   78126 main.go:141] libmachine: (embed-certs-324808) Waiting for machine to stop 21/120
	I0806 08:27:05.121554   78126 main.go:141] libmachine: (embed-certs-324808) Waiting for machine to stop 22/120
	I0806 08:27:06.123074   78126 main.go:141] libmachine: (embed-certs-324808) Waiting for machine to stop 23/120
	I0806 08:27:07.124514   78126 main.go:141] libmachine: (embed-certs-324808) Waiting for machine to stop 24/120
	I0806 08:27:08.126675   78126 main.go:141] libmachine: (embed-certs-324808) Waiting for machine to stop 25/120
	I0806 08:27:09.128176   78126 main.go:141] libmachine: (embed-certs-324808) Waiting for machine to stop 26/120
	I0806 08:27:10.130152   78126 main.go:141] libmachine: (embed-certs-324808) Waiting for machine to stop 27/120
	I0806 08:27:11.131780   78126 main.go:141] libmachine: (embed-certs-324808) Waiting for machine to stop 28/120
	I0806 08:27:12.133294   78126 main.go:141] libmachine: (embed-certs-324808) Waiting for machine to stop 29/120
	I0806 08:27:13.135948   78126 main.go:141] libmachine: (embed-certs-324808) Waiting for machine to stop 30/120
	I0806 08:27:14.137288   78126 main.go:141] libmachine: (embed-certs-324808) Waiting for machine to stop 31/120
	I0806 08:27:15.139229   78126 main.go:141] libmachine: (embed-certs-324808) Waiting for machine to stop 32/120
	I0806 08:27:16.140825   78126 main.go:141] libmachine: (embed-certs-324808) Waiting for machine to stop 33/120
	I0806 08:27:17.142395   78126 main.go:141] libmachine: (embed-certs-324808) Waiting for machine to stop 34/120
	I0806 08:27:18.144647   78126 main.go:141] libmachine: (embed-certs-324808) Waiting for machine to stop 35/120
	I0806 08:27:19.147008   78126 main.go:141] libmachine: (embed-certs-324808) Waiting for machine to stop 36/120
	I0806 08:27:20.148335   78126 main.go:141] libmachine: (embed-certs-324808) Waiting for machine to stop 37/120
	I0806 08:27:21.149776   78126 main.go:141] libmachine: (embed-certs-324808) Waiting for machine to stop 38/120
	I0806 08:27:22.151224   78126 main.go:141] libmachine: (embed-certs-324808) Waiting for machine to stop 39/120
	I0806 08:27:23.153779   78126 main.go:141] libmachine: (embed-certs-324808) Waiting for machine to stop 40/120
	I0806 08:27:24.155247   78126 main.go:141] libmachine: (embed-certs-324808) Waiting for machine to stop 41/120
	I0806 08:27:25.156636   78126 main.go:141] libmachine: (embed-certs-324808) Waiting for machine to stop 42/120
	I0806 08:27:26.158358   78126 main.go:141] libmachine: (embed-certs-324808) Waiting for machine to stop 43/120
	I0806 08:27:27.159989   78126 main.go:141] libmachine: (embed-certs-324808) Waiting for machine to stop 44/120
	I0806 08:27:28.161907   78126 main.go:141] libmachine: (embed-certs-324808) Waiting for machine to stop 45/120
	I0806 08:27:29.163498   78126 main.go:141] libmachine: (embed-certs-324808) Waiting for machine to stop 46/120
	I0806 08:27:30.165110   78126 main.go:141] libmachine: (embed-certs-324808) Waiting for machine to stop 47/120
	I0806 08:27:31.166549   78126 main.go:141] libmachine: (embed-certs-324808) Waiting for machine to stop 48/120
	I0806 08:27:32.168412   78126 main.go:141] libmachine: (embed-certs-324808) Waiting for machine to stop 49/120
	I0806 08:27:33.170603   78126 main.go:141] libmachine: (embed-certs-324808) Waiting for machine to stop 50/120
	I0806 08:27:34.173061   78126 main.go:141] libmachine: (embed-certs-324808) Waiting for machine to stop 51/120
	I0806 08:27:35.174893   78126 main.go:141] libmachine: (embed-certs-324808) Waiting for machine to stop 52/120
	I0806 08:27:36.176166   78126 main.go:141] libmachine: (embed-certs-324808) Waiting for machine to stop 53/120
	I0806 08:27:37.177628   78126 main.go:141] libmachine: (embed-certs-324808) Waiting for machine to stop 54/120
	I0806 08:27:38.179677   78126 main.go:141] libmachine: (embed-certs-324808) Waiting for machine to stop 55/120
	I0806 08:27:39.181037   78126 main.go:141] libmachine: (embed-certs-324808) Waiting for machine to stop 56/120
	I0806 08:27:40.182860   78126 main.go:141] libmachine: (embed-certs-324808) Waiting for machine to stop 57/120
	I0806 08:27:41.184295   78126 main.go:141] libmachine: (embed-certs-324808) Waiting for machine to stop 58/120
	I0806 08:27:42.185988   78126 main.go:141] libmachine: (embed-certs-324808) Waiting for machine to stop 59/120
	I0806 08:27:43.188384   78126 main.go:141] libmachine: (embed-certs-324808) Waiting for machine to stop 60/120
	I0806 08:27:44.189664   78126 main.go:141] libmachine: (embed-certs-324808) Waiting for machine to stop 61/120
	I0806 08:27:45.191012   78126 main.go:141] libmachine: (embed-certs-324808) Waiting for machine to stop 62/120
	I0806 08:27:46.192436   78126 main.go:141] libmachine: (embed-certs-324808) Waiting for machine to stop 63/120
	I0806 08:27:47.193997   78126 main.go:141] libmachine: (embed-certs-324808) Waiting for machine to stop 64/120
	I0806 08:27:48.195805   78126 main.go:141] libmachine: (embed-certs-324808) Waiting for machine to stop 65/120
	I0806 08:27:49.197280   78126 main.go:141] libmachine: (embed-certs-324808) Waiting for machine to stop 66/120
	I0806 08:27:50.198805   78126 main.go:141] libmachine: (embed-certs-324808) Waiting for machine to stop 67/120
	I0806 08:27:51.200188   78126 main.go:141] libmachine: (embed-certs-324808) Waiting for machine to stop 68/120
	I0806 08:27:52.201533   78126 main.go:141] libmachine: (embed-certs-324808) Waiting for machine to stop 69/120
	I0806 08:27:53.204102   78126 main.go:141] libmachine: (embed-certs-324808) Waiting for machine to stop 70/120
	I0806 08:27:54.205649   78126 main.go:141] libmachine: (embed-certs-324808) Waiting for machine to stop 71/120
	I0806 08:27:55.207293   78126 main.go:141] libmachine: (embed-certs-324808) Waiting for machine to stop 72/120
	I0806 08:27:56.208810   78126 main.go:141] libmachine: (embed-certs-324808) Waiting for machine to stop 73/120
	I0806 08:27:57.210304   78126 main.go:141] libmachine: (embed-certs-324808) Waiting for machine to stop 74/120
	I0806 08:27:58.212457   78126 main.go:141] libmachine: (embed-certs-324808) Waiting for machine to stop 75/120
	I0806 08:27:59.214073   78126 main.go:141] libmachine: (embed-certs-324808) Waiting for machine to stop 76/120
	I0806 08:28:00.215430   78126 main.go:141] libmachine: (embed-certs-324808) Waiting for machine to stop 77/120
	I0806 08:28:01.217013   78126 main.go:141] libmachine: (embed-certs-324808) Waiting for machine to stop 78/120
	I0806 08:28:02.218447   78126 main.go:141] libmachine: (embed-certs-324808) Waiting for machine to stop 79/120
	I0806 08:28:03.221252   78126 main.go:141] libmachine: (embed-certs-324808) Waiting for machine to stop 80/120
	I0806 08:28:04.222623   78126 main.go:141] libmachine: (embed-certs-324808) Waiting for machine to stop 81/120
	I0806 08:28:05.224114   78126 main.go:141] libmachine: (embed-certs-324808) Waiting for machine to stop 82/120
	I0806 08:28:06.225660   78126 main.go:141] libmachine: (embed-certs-324808) Waiting for machine to stop 83/120
	I0806 08:28:07.227016   78126 main.go:141] libmachine: (embed-certs-324808) Waiting for machine to stop 84/120
	I0806 08:28:08.229067   78126 main.go:141] libmachine: (embed-certs-324808) Waiting for machine to stop 85/120
	I0806 08:28:09.230317   78126 main.go:141] libmachine: (embed-certs-324808) Waiting for machine to stop 86/120
	I0806 08:28:10.231960   78126 main.go:141] libmachine: (embed-certs-324808) Waiting for machine to stop 87/120
	I0806 08:28:11.233333   78126 main.go:141] libmachine: (embed-certs-324808) Waiting for machine to stop 88/120
	I0806 08:28:12.234951   78126 main.go:141] libmachine: (embed-certs-324808) Waiting for machine to stop 89/120
	I0806 08:28:13.237496   78126 main.go:141] libmachine: (embed-certs-324808) Waiting for machine to stop 90/120
	I0806 08:28:14.239377   78126 main.go:141] libmachine: (embed-certs-324808) Waiting for machine to stop 91/120
	I0806 08:28:15.241077   78126 main.go:141] libmachine: (embed-certs-324808) Waiting for machine to stop 92/120
	I0806 08:28:16.243180   78126 main.go:141] libmachine: (embed-certs-324808) Waiting for machine to stop 93/120
	I0806 08:28:17.244533   78126 main.go:141] libmachine: (embed-certs-324808) Waiting for machine to stop 94/120
	I0806 08:28:18.246838   78126 main.go:141] libmachine: (embed-certs-324808) Waiting for machine to stop 95/120
	I0806 08:28:19.248332   78126 main.go:141] libmachine: (embed-certs-324808) Waiting for machine to stop 96/120
	I0806 08:28:20.249735   78126 main.go:141] libmachine: (embed-certs-324808) Waiting for machine to stop 97/120
	I0806 08:28:21.251121   78126 main.go:141] libmachine: (embed-certs-324808) Waiting for machine to stop 98/120
	I0806 08:28:22.252688   78126 main.go:141] libmachine: (embed-certs-324808) Waiting for machine to stop 99/120
	I0806 08:28:23.254886   78126 main.go:141] libmachine: (embed-certs-324808) Waiting for machine to stop 100/120
	I0806 08:28:24.256425   78126 main.go:141] libmachine: (embed-certs-324808) Waiting for machine to stop 101/120
	I0806 08:28:25.258230   78126 main.go:141] libmachine: (embed-certs-324808) Waiting for machine to stop 102/120
	I0806 08:28:26.259654   78126 main.go:141] libmachine: (embed-certs-324808) Waiting for machine to stop 103/120
	I0806 08:28:27.261333   78126 main.go:141] libmachine: (embed-certs-324808) Waiting for machine to stop 104/120
	I0806 08:28:28.263302   78126 main.go:141] libmachine: (embed-certs-324808) Waiting for machine to stop 105/120
	I0806 08:28:29.264984   78126 main.go:141] libmachine: (embed-certs-324808) Waiting for machine to stop 106/120
	I0806 08:28:30.266544   78126 main.go:141] libmachine: (embed-certs-324808) Waiting for machine to stop 107/120
	I0806 08:28:31.268000   78126 main.go:141] libmachine: (embed-certs-324808) Waiting for machine to stop 108/120
	I0806 08:28:32.269474   78126 main.go:141] libmachine: (embed-certs-324808) Waiting for machine to stop 109/120
	I0806 08:28:33.271673   78126 main.go:141] libmachine: (embed-certs-324808) Waiting for machine to stop 110/120
	I0806 08:28:34.273185   78126 main.go:141] libmachine: (embed-certs-324808) Waiting for machine to stop 111/120
	I0806 08:28:35.274572   78126 main.go:141] libmachine: (embed-certs-324808) Waiting for machine to stop 112/120
	I0806 08:28:36.275983   78126 main.go:141] libmachine: (embed-certs-324808) Waiting for machine to stop 113/120
	I0806 08:28:37.277416   78126 main.go:141] libmachine: (embed-certs-324808) Waiting for machine to stop 114/120
	I0806 08:28:38.279575   78126 main.go:141] libmachine: (embed-certs-324808) Waiting for machine to stop 115/120
	I0806 08:28:39.281223   78126 main.go:141] libmachine: (embed-certs-324808) Waiting for machine to stop 116/120
	I0806 08:28:40.282728   78126 main.go:141] libmachine: (embed-certs-324808) Waiting for machine to stop 117/120
	I0806 08:28:41.284374   78126 main.go:141] libmachine: (embed-certs-324808) Waiting for machine to stop 118/120
	I0806 08:28:42.285823   78126 main.go:141] libmachine: (embed-certs-324808) Waiting for machine to stop 119/120
	I0806 08:28:43.287135   78126 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0806 08:28:43.287203   78126 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0806 08:28:43.289294   78126 out.go:177] 
	W0806 08:28:43.290726   78126 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0806 08:28:43.290748   78126 out.go:239] * 
	W0806 08:28:43.293311   78126 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0806 08:28:43.294721   78126 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p embed-certs-324808 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-324808 -n embed-certs-324808
E0806 08:28:43.523034   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/calico-405163/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-324808 -n embed-certs-324808: exit status 3 (18.600130098s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0806 08:29:01.896660   78762 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.190:22: connect: no route to host
	E0806 08:29:01.896681   78762 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.190:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-324808" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (139.09s)
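
The post-mortem status calls for both stop failures end in "no route to host" on port 22, so the guest is either still shutting down or no longer holds its lease. A small reachability check, using the domain name and IP recorded earlier in this log (assumed still current; the outcome is not established by this run):

	sudo virsh domstate embed-certs-324808    # libvirt's view of the guest state
	sudo virsh domifaddr embed-certs-324808   # current address leases for the domain
	ping -c 3 192.168.39.190                  # is the guest reachable at all?
	nc -vz 192.168.39.190 22                  # is sshd answering on port 22?
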

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (139.11s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-990751 --alsologtostderr -v=3
E0806 08:28:04.440081   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/auto-405163/client.crt: no such file or directory
E0806 08:28:23.040742   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/calico-405163/client.crt: no such file or directory
E0806 08:28:23.046039   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/calico-405163/client.crt: no such file or directory
E0806 08:28:23.056392   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/calico-405163/client.crt: no such file or directory
E0806 08:28:23.076674   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/calico-405163/client.crt: no such file or directory
E0806 08:28:23.117007   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/calico-405163/client.crt: no such file or directory
E0806 08:28:23.197342   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/calico-405163/client.crt: no such file or directory
E0806 08:28:23.358256   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/calico-405163/client.crt: no such file or directory
E0806 08:28:23.678755   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/calico-405163/client.crt: no such file or directory
E0806 08:28:24.319872   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/calico-405163/client.crt: no such file or directory
E0806 08:28:25.601070   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/calico-405163/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p default-k8s-diff-port-990751 --alsologtostderr -v=3: exit status 82 (2m0.517758667s)

                                                
                                                
-- stdout --
	* Stopping node "default-k8s-diff-port-990751"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0806 08:27:47.332235   78509 out.go:291] Setting OutFile to fd 1 ...
	I0806 08:27:47.332507   78509 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 08:27:47.332518   78509 out.go:304] Setting ErrFile to fd 2...
	I0806 08:27:47.332524   78509 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 08:27:47.332724   78509 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19370-13057/.minikube/bin
	I0806 08:27:47.332983   78509 out.go:298] Setting JSON to false
	I0806 08:27:47.333074   78509 mustload.go:65] Loading cluster: default-k8s-diff-port-990751
	I0806 08:27:47.333434   78509 config.go:182] Loaded profile config "default-k8s-diff-port-990751": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0806 08:27:47.333525   78509 profile.go:143] Saving config to /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/default-k8s-diff-port-990751/config.json ...
	I0806 08:27:47.333713   78509 mustload.go:65] Loading cluster: default-k8s-diff-port-990751
	I0806 08:27:47.333849   78509 config.go:182] Loaded profile config "default-k8s-diff-port-990751": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0806 08:27:47.333887   78509 stop.go:39] StopHost: default-k8s-diff-port-990751
	I0806 08:27:47.334257   78509 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:27:47.334316   78509 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:27:47.354650   78509 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44633
	I0806 08:27:47.355125   78509 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:27:47.355736   78509 main.go:141] libmachine: Using API Version  1
	I0806 08:27:47.355764   78509 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:27:47.356156   78509 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:27:47.358503   78509 out.go:177] * Stopping node "default-k8s-diff-port-990751"  ...
	I0806 08:27:47.359887   78509 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0806 08:27:47.359922   78509 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .DriverName
	I0806 08:27:47.360146   78509 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0806 08:27:47.360166   78509 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHHostname
	I0806 08:27:47.363356   78509 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:27:47.363712   78509 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:51:bd", ip: ""} in network mk-default-k8s-diff-port-990751: {Iface:virbr3 ExpiryTime:2024-08-06 09:26:14 +0000 UTC Type:0 Mac:52:54:00:fb:51:bd Iaid: IPaddr:192.168.50.235 Prefix:24 Hostname:default-k8s-diff-port-990751 Clientid:01:52:54:00:fb:51:bd}
	I0806 08:27:47.363743   78509 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined IP address 192.168.50.235 and MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:27:47.363885   78509 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHPort
	I0806 08:27:47.364065   78509 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHKeyPath
	I0806 08:27:47.364229   78509 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHUsername
	I0806 08:27:47.364391   78509 sshutil.go:53] new ssh client: &{IP:192.168.50.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/default-k8s-diff-port-990751/id_rsa Username:docker}
	I0806 08:27:47.457982   78509 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0806 08:27:47.527697   78509 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0806 08:27:47.599907   78509 main.go:141] libmachine: Stopping "default-k8s-diff-port-990751"...
	I0806 08:27:47.599949   78509 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetState
	I0806 08:27:47.601637   78509 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .Stop
	I0806 08:27:47.605414   78509 main.go:141] libmachine: (default-k8s-diff-port-990751) Waiting for machine to stop 0/120
	I0806 08:27:48.606792   78509 main.go:141] libmachine: (default-k8s-diff-port-990751) Waiting for machine to stop 1/120
	I0806 08:27:49.608117   78509 main.go:141] libmachine: (default-k8s-diff-port-990751) Waiting for machine to stop 2/120
	I0806 08:27:50.609273   78509 main.go:141] libmachine: (default-k8s-diff-port-990751) Waiting for machine to stop 3/120
	I0806 08:27:51.610679   78509 main.go:141] libmachine: (default-k8s-diff-port-990751) Waiting for machine to stop 4/120
	I0806 08:27:52.612845   78509 main.go:141] libmachine: (default-k8s-diff-port-990751) Waiting for machine to stop 5/120
	I0806 08:27:53.614270   78509 main.go:141] libmachine: (default-k8s-diff-port-990751) Waiting for machine to stop 6/120
	I0806 08:27:54.615865   78509 main.go:141] libmachine: (default-k8s-diff-port-990751) Waiting for machine to stop 7/120
	I0806 08:27:55.617547   78509 main.go:141] libmachine: (default-k8s-diff-port-990751) Waiting for machine to stop 8/120
	I0806 08:27:56.618876   78509 main.go:141] libmachine: (default-k8s-diff-port-990751) Waiting for machine to stop 9/120
	I0806 08:27:57.621351   78509 main.go:141] libmachine: (default-k8s-diff-port-990751) Waiting for machine to stop 10/120
	I0806 08:27:58.622957   78509 main.go:141] libmachine: (default-k8s-diff-port-990751) Waiting for machine to stop 11/120
	I0806 08:27:59.624405   78509 main.go:141] libmachine: (default-k8s-diff-port-990751) Waiting for machine to stop 12/120
	I0806 08:28:00.625838   78509 main.go:141] libmachine: (default-k8s-diff-port-990751) Waiting for machine to stop 13/120
	I0806 08:28:01.627425   78509 main.go:141] libmachine: (default-k8s-diff-port-990751) Waiting for machine to stop 14/120
	I0806 08:28:02.629672   78509 main.go:141] libmachine: (default-k8s-diff-port-990751) Waiting for machine to stop 15/120
	I0806 08:28:03.631137   78509 main.go:141] libmachine: (default-k8s-diff-port-990751) Waiting for machine to stop 16/120
	I0806 08:28:04.632735   78509 main.go:141] libmachine: (default-k8s-diff-port-990751) Waiting for machine to stop 17/120
	I0806 08:28:05.634061   78509 main.go:141] libmachine: (default-k8s-diff-port-990751) Waiting for machine to stop 18/120
	I0806 08:28:06.635531   78509 main.go:141] libmachine: (default-k8s-diff-port-990751) Waiting for machine to stop 19/120
	I0806 08:28:07.637126   78509 main.go:141] libmachine: (default-k8s-diff-port-990751) Waiting for machine to stop 20/120
	I0806 08:28:08.638596   78509 main.go:141] libmachine: (default-k8s-diff-port-990751) Waiting for machine to stop 21/120
	I0806 08:28:09.640297   78509 main.go:141] libmachine: (default-k8s-diff-port-990751) Waiting for machine to stop 22/120
	I0806 08:28:10.641806   78509 main.go:141] libmachine: (default-k8s-diff-port-990751) Waiting for machine to stop 23/120
	I0806 08:28:11.643402   78509 main.go:141] libmachine: (default-k8s-diff-port-990751) Waiting for machine to stop 24/120
	I0806 08:28:12.645831   78509 main.go:141] libmachine: (default-k8s-diff-port-990751) Waiting for machine to stop 25/120
	I0806 08:28:13.647235   78509 main.go:141] libmachine: (default-k8s-diff-port-990751) Waiting for machine to stop 26/120
	I0806 08:28:14.648728   78509 main.go:141] libmachine: (default-k8s-diff-port-990751) Waiting for machine to stop 27/120
	I0806 08:28:15.650356   78509 main.go:141] libmachine: (default-k8s-diff-port-990751) Waiting for machine to stop 28/120
	I0806 08:28:16.651613   78509 main.go:141] libmachine: (default-k8s-diff-port-990751) Waiting for machine to stop 29/120
	I0806 08:28:17.654151   78509 main.go:141] libmachine: (default-k8s-diff-port-990751) Waiting for machine to stop 30/120
	I0806 08:28:18.655453   78509 main.go:141] libmachine: (default-k8s-diff-port-990751) Waiting for machine to stop 31/120
	I0806 08:28:19.656842   78509 main.go:141] libmachine: (default-k8s-diff-port-990751) Waiting for machine to stop 32/120
	I0806 08:28:20.658434   78509 main.go:141] libmachine: (default-k8s-diff-port-990751) Waiting for machine to stop 33/120
	I0806 08:28:21.659764   78509 main.go:141] libmachine: (default-k8s-diff-port-990751) Waiting for machine to stop 34/120
	I0806 08:28:22.661892   78509 main.go:141] libmachine: (default-k8s-diff-port-990751) Waiting for machine to stop 35/120
	I0806 08:28:23.663495   78509 main.go:141] libmachine: (default-k8s-diff-port-990751) Waiting for machine to stop 36/120
	I0806 08:28:24.664830   78509 main.go:141] libmachine: (default-k8s-diff-port-990751) Waiting for machine to stop 37/120
	I0806 08:28:25.666111   78509 main.go:141] libmachine: (default-k8s-diff-port-990751) Waiting for machine to stop 38/120
	I0806 08:28:26.667468   78509 main.go:141] libmachine: (default-k8s-diff-port-990751) Waiting for machine to stop 39/120
	I0806 08:28:27.669671   78509 main.go:141] libmachine: (default-k8s-diff-port-990751) Waiting for machine to stop 40/120
	I0806 08:28:28.670987   78509 main.go:141] libmachine: (default-k8s-diff-port-990751) Waiting for machine to stop 41/120
	I0806 08:28:29.672529   78509 main.go:141] libmachine: (default-k8s-diff-port-990751) Waiting for machine to stop 42/120
	I0806 08:28:30.674008   78509 main.go:141] libmachine: (default-k8s-diff-port-990751) Waiting for machine to stop 43/120
	I0806 08:28:31.675527   78509 main.go:141] libmachine: (default-k8s-diff-port-990751) Waiting for machine to stop 44/120
	I0806 08:28:32.677554   78509 main.go:141] libmachine: (default-k8s-diff-port-990751) Waiting for machine to stop 45/120
	I0806 08:28:33.678934   78509 main.go:141] libmachine: (default-k8s-diff-port-990751) Waiting for machine to stop 46/120
	I0806 08:28:34.680841   78509 main.go:141] libmachine: (default-k8s-diff-port-990751) Waiting for machine to stop 47/120
	I0806 08:28:35.682378   78509 main.go:141] libmachine: (default-k8s-diff-port-990751) Waiting for machine to stop 48/120
	I0806 08:28:36.684294   78509 main.go:141] libmachine: (default-k8s-diff-port-990751) Waiting for machine to stop 49/120
	I0806 08:28:37.686126   78509 main.go:141] libmachine: (default-k8s-diff-port-990751) Waiting for machine to stop 50/120
	I0806 08:28:38.687653   78509 main.go:141] libmachine: (default-k8s-diff-port-990751) Waiting for machine to stop 51/120
	I0806 08:28:39.689142   78509 main.go:141] libmachine: (default-k8s-diff-port-990751) Waiting for machine to stop 52/120
	I0806 08:28:40.690753   78509 main.go:141] libmachine: (default-k8s-diff-port-990751) Waiting for machine to stop 53/120
	I0806 08:28:41.692160   78509 main.go:141] libmachine: (default-k8s-diff-port-990751) Waiting for machine to stop 54/120
	I0806 08:28:42.694151   78509 main.go:141] libmachine: (default-k8s-diff-port-990751) Waiting for machine to stop 55/120
	I0806 08:28:43.695721   78509 main.go:141] libmachine: (default-k8s-diff-port-990751) Waiting for machine to stop 56/120
	I0806 08:28:44.697189   78509 main.go:141] libmachine: (default-k8s-diff-port-990751) Waiting for machine to stop 57/120
	I0806 08:28:45.698777   78509 main.go:141] libmachine: (default-k8s-diff-port-990751) Waiting for machine to stop 58/120
	I0806 08:28:46.700235   78509 main.go:141] libmachine: (default-k8s-diff-port-990751) Waiting for machine to stop 59/120
	I0806 08:28:47.702789   78509 main.go:141] libmachine: (default-k8s-diff-port-990751) Waiting for machine to stop 60/120
	I0806 08:28:48.704303   78509 main.go:141] libmachine: (default-k8s-diff-port-990751) Waiting for machine to stop 61/120
	I0806 08:28:49.705801   78509 main.go:141] libmachine: (default-k8s-diff-port-990751) Waiting for machine to stop 62/120
	I0806 08:28:50.707409   78509 main.go:141] libmachine: (default-k8s-diff-port-990751) Waiting for machine to stop 63/120
	I0806 08:28:51.709080   78509 main.go:141] libmachine: (default-k8s-diff-port-990751) Waiting for machine to stop 64/120
	I0806 08:28:52.711202   78509 main.go:141] libmachine: (default-k8s-diff-port-990751) Waiting for machine to stop 65/120
	I0806 08:28:53.712616   78509 main.go:141] libmachine: (default-k8s-diff-port-990751) Waiting for machine to stop 66/120
	I0806 08:28:54.714026   78509 main.go:141] libmachine: (default-k8s-diff-port-990751) Waiting for machine to stop 67/120
	I0806 08:28:55.715446   78509 main.go:141] libmachine: (default-k8s-diff-port-990751) Waiting for machine to stop 68/120
	I0806 08:28:56.716749   78509 main.go:141] libmachine: (default-k8s-diff-port-990751) Waiting for machine to stop 69/120
	I0806 08:28:57.718534   78509 main.go:141] libmachine: (default-k8s-diff-port-990751) Waiting for machine to stop 70/120
	I0806 08:28:58.719939   78509 main.go:141] libmachine: (default-k8s-diff-port-990751) Waiting for machine to stop 71/120
	I0806 08:28:59.721464   78509 main.go:141] libmachine: (default-k8s-diff-port-990751) Waiting for machine to stop 72/120
	I0806 08:29:00.722896   78509 main.go:141] libmachine: (default-k8s-diff-port-990751) Waiting for machine to stop 73/120
	I0806 08:29:01.724393   78509 main.go:141] libmachine: (default-k8s-diff-port-990751) Waiting for machine to stop 74/120
	I0806 08:29:02.726384   78509 main.go:141] libmachine: (default-k8s-diff-port-990751) Waiting for machine to stop 75/120
	I0806 08:29:03.727878   78509 main.go:141] libmachine: (default-k8s-diff-port-990751) Waiting for machine to stop 76/120
	I0806 08:29:04.729283   78509 main.go:141] libmachine: (default-k8s-diff-port-990751) Waiting for machine to stop 77/120
	I0806 08:29:05.730642   78509 main.go:141] libmachine: (default-k8s-diff-port-990751) Waiting for machine to stop 78/120
	I0806 08:29:06.732057   78509 main.go:141] libmachine: (default-k8s-diff-port-990751) Waiting for machine to stop 79/120
	I0806 08:29:07.734406   78509 main.go:141] libmachine: (default-k8s-diff-port-990751) Waiting for machine to stop 80/120
	I0806 08:29:08.736036   78509 main.go:141] libmachine: (default-k8s-diff-port-990751) Waiting for machine to stop 81/120
	I0806 08:29:09.737389   78509 main.go:141] libmachine: (default-k8s-diff-port-990751) Waiting for machine to stop 82/120
	I0806 08:29:10.738808   78509 main.go:141] libmachine: (default-k8s-diff-port-990751) Waiting for machine to stop 83/120
	I0806 08:29:11.740547   78509 main.go:141] libmachine: (default-k8s-diff-port-990751) Waiting for machine to stop 84/120
	I0806 08:29:12.742623   78509 main.go:141] libmachine: (default-k8s-diff-port-990751) Waiting for machine to stop 85/120
	I0806 08:29:13.744158   78509 main.go:141] libmachine: (default-k8s-diff-port-990751) Waiting for machine to stop 86/120
	I0806 08:29:14.745800   78509 main.go:141] libmachine: (default-k8s-diff-port-990751) Waiting for machine to stop 87/120
	I0806 08:29:15.747283   78509 main.go:141] libmachine: (default-k8s-diff-port-990751) Waiting for machine to stop 88/120
	I0806 08:29:16.749032   78509 main.go:141] libmachine: (default-k8s-diff-port-990751) Waiting for machine to stop 89/120
	I0806 08:29:17.751232   78509 main.go:141] libmachine: (default-k8s-diff-port-990751) Waiting for machine to stop 90/120
	I0806 08:29:18.752689   78509 main.go:141] libmachine: (default-k8s-diff-port-990751) Waiting for machine to stop 91/120
	I0806 08:29:19.754589   78509 main.go:141] libmachine: (default-k8s-diff-port-990751) Waiting for machine to stop 92/120
	I0806 08:29:20.756079   78509 main.go:141] libmachine: (default-k8s-diff-port-990751) Waiting for machine to stop 93/120
	I0806 08:29:21.757505   78509 main.go:141] libmachine: (default-k8s-diff-port-990751) Waiting for machine to stop 94/120
	I0806 08:29:22.759662   78509 main.go:141] libmachine: (default-k8s-diff-port-990751) Waiting for machine to stop 95/120
	I0806 08:29:23.761089   78509 main.go:141] libmachine: (default-k8s-diff-port-990751) Waiting for machine to stop 96/120
	I0806 08:29:24.762433   78509 main.go:141] libmachine: (default-k8s-diff-port-990751) Waiting for machine to stop 97/120
	I0806 08:29:25.764064   78509 main.go:141] libmachine: (default-k8s-diff-port-990751) Waiting for machine to stop 98/120
	I0806 08:29:26.765487   78509 main.go:141] libmachine: (default-k8s-diff-port-990751) Waiting for machine to stop 99/120
	I0806 08:29:27.767851   78509 main.go:141] libmachine: (default-k8s-diff-port-990751) Waiting for machine to stop 100/120
	I0806 08:29:28.769492   78509 main.go:141] libmachine: (default-k8s-diff-port-990751) Waiting for machine to stop 101/120
	I0806 08:29:29.771369   78509 main.go:141] libmachine: (default-k8s-diff-port-990751) Waiting for machine to stop 102/120
	I0806 08:29:30.773132   78509 main.go:141] libmachine: (default-k8s-diff-port-990751) Waiting for machine to stop 103/120
	I0806 08:29:31.774930   78509 main.go:141] libmachine: (default-k8s-diff-port-990751) Waiting for machine to stop 104/120
	I0806 08:29:32.776889   78509 main.go:141] libmachine: (default-k8s-diff-port-990751) Waiting for machine to stop 105/120
	I0806 08:29:33.778490   78509 main.go:141] libmachine: (default-k8s-diff-port-990751) Waiting for machine to stop 106/120
	I0806 08:29:34.780047   78509 main.go:141] libmachine: (default-k8s-diff-port-990751) Waiting for machine to stop 107/120
	I0806 08:29:35.781795   78509 main.go:141] libmachine: (default-k8s-diff-port-990751) Waiting for machine to stop 108/120
	I0806 08:29:36.783348   78509 main.go:141] libmachine: (default-k8s-diff-port-990751) Waiting for machine to stop 109/120
	I0806 08:29:37.785851   78509 main.go:141] libmachine: (default-k8s-diff-port-990751) Waiting for machine to stop 110/120
	I0806 08:29:38.787192   78509 main.go:141] libmachine: (default-k8s-diff-port-990751) Waiting for machine to stop 111/120
	I0806 08:29:39.788814   78509 main.go:141] libmachine: (default-k8s-diff-port-990751) Waiting for machine to stop 112/120
	I0806 08:29:40.790307   78509 main.go:141] libmachine: (default-k8s-diff-port-990751) Waiting for machine to stop 113/120
	I0806 08:29:41.791620   78509 main.go:141] libmachine: (default-k8s-diff-port-990751) Waiting for machine to stop 114/120
	I0806 08:29:42.794006   78509 main.go:141] libmachine: (default-k8s-diff-port-990751) Waiting for machine to stop 115/120
	I0806 08:29:43.795719   78509 main.go:141] libmachine: (default-k8s-diff-port-990751) Waiting for machine to stop 116/120
	I0806 08:29:44.797542   78509 main.go:141] libmachine: (default-k8s-diff-port-990751) Waiting for machine to stop 117/120
	I0806 08:29:45.798930   78509 main.go:141] libmachine: (default-k8s-diff-port-990751) Waiting for machine to stop 118/120
	I0806 08:29:46.800699   78509 main.go:141] libmachine: (default-k8s-diff-port-990751) Waiting for machine to stop 119/120
	I0806 08:29:47.801265   78509 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0806 08:29:47.801336   78509 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0806 08:29:47.803334   78509 out.go:177] 
	W0806 08:29:47.804784   78509 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0806 08:29:47.804806   78509 out.go:239] * 
	* 
	W0806 08:29:47.807717   78509 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0806 08:29:47.809496   78509 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop. args "out/minikube-linux-amd64 stop -p default-k8s-diff-port-990751 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-990751 -n default-k8s-diff-port-990751
E0806 08:30:01.379656   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/flannel-405163/client.crt: no such file or directory
E0806 08:30:01.384969   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/flannel-405163/client.crt: no such file or directory
E0806 08:30:01.395258   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/flannel-405163/client.crt: no such file or directory
E0806 08:30:01.415579   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/flannel-405163/client.crt: no such file or directory
E0806 08:30:01.455916   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/flannel-405163/client.crt: no such file or directory
E0806 08:30:01.536293   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/flannel-405163/client.crt: no such file or directory
E0806 08:30:01.696761   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/flannel-405163/client.crt: no such file or directory
E0806 08:30:02.017457   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/flannel-405163/client.crt: no such file or directory
E0806 08:30:02.658450   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/flannel-405163/client.crt: no such file or directory
E0806 08:30:03.939640   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/flannel-405163/client.crt: no such file or directory
E0806 08:30:04.747714   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/bridge-405163/client.crt: no such file or directory
E0806 08:30:04.752960   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/bridge-405163/client.crt: no such file or directory
E0806 08:30:04.763258   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/bridge-405163/client.crt: no such file or directory
E0806 08:30:04.783547   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/bridge-405163/client.crt: no such file or directory
E0806 08:30:04.823906   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/bridge-405163/client.crt: no such file or directory
E0806 08:30:04.904310   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/bridge-405163/client.crt: no such file or directory
E0806 08:30:05.064751   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/bridge-405163/client.crt: no such file or directory
E0806 08:30:05.385681   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/bridge-405163/client.crt: no such file or directory
E0806 08:30:06.026621   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/bridge-405163/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-990751 -n default-k8s-diff-port-990751: exit status 3 (18.59359318s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0806 08:30:06.404761   79410 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.235:22: connect: no route to host
	E0806 08:30:06.404780   79410 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.235:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-990751" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Stop (139.11s)
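The stop failure here mirrors the embed-certs run above: libmachine polls the KVM domain for the full 120-iteration window, the guest never reports stopped, minikube exits with GUEST_STOP_TIMEOUT (exit status 82), and the follow-up status probe fails because SSH to 192.168.50.235 no longer has a route to the host. Below is a minimal sketch of how the domain state could be inspected and forced off by hand on the Jenkins host; it assumes the libvirt system connection passed to the kvm2 driver (qemu:///system) and the domain name shown in the log (the kvm2 driver names the libvirt domain after the profile), and `virsh destroy` is a hard power-off used purely for diagnosis, not the graceful stop the test expects.

	# Check whether the guest is still running from libvirt's point of view
	virsh -c qemu:///system list --all | grep default-k8s-diff-port-990751
	# Dump the domain's current state
	virsh -c qemu:///system dominfo default-k8s-diff-port-990751
	# Hard power-off the guest so the next run starts from a clean state
	virsh -c qemu:///system destroy default-k8s-diff-port-990751
	# Confirm minikube now sees the host as stopped
	out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-990751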

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.39s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-703340 -n no-preload-703340
E0806 08:28:45.266137   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/custom-flannel-405163/client.crt: no such file or directory
E0806 08:28:45.271424   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/custom-flannel-405163/client.crt: no such file or directory
E0806 08:28:45.281703   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/custom-flannel-405163/client.crt: no such file or directory
E0806 08:28:45.301982   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/custom-flannel-405163/client.crt: no such file or directory
E0806 08:28:45.342326   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/custom-flannel-405163/client.crt: no such file or directory
E0806 08:28:45.422694   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/custom-flannel-405163/client.crt: no such file or directory
E0806 08:28:45.583116   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/custom-flannel-405163/client.crt: no such file or directory
E0806 08:28:45.904295   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/custom-flannel-405163/client.crt: no such file or directory
E0806 08:28:46.545376   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/custom-flannel-405163/client.crt: no such file or directory
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-703340 -n no-preload-703340: exit status 3 (3.168006567s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0806 08:28:47.396800   78792 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.126:22: connect: no route to host
	E0806 08:28:47.396822   78792 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.126:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-703340 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0806 08:28:47.826249   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/custom-flannel-405163/client.crt: no such file or directory
E0806 08:28:50.386677   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/custom-flannel-405163/client.crt: no such file or directory
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p no-preload-703340 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.152145609s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.72.126:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p no-preload-703340 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-703340 -n no-preload-703340
E0806 08:28:55.507833   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/custom-flannel-405163/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-703340 -n no-preload-703340: exit status 3 (3.064951062s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0806 08:28:56.612706   78874 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.126:22: connect: no route to host
	E0806 08:28:56.612725   78874 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.126:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-703340" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.39s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (0.46s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-409903 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-409903 create -f testdata/busybox.yaml: exit status 1 (40.720503ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-409903" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-409903 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-409903 -n old-k8s-version-409903
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-409903 -n old-k8s-version-409903: exit status 6 (207.299839ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0806 08:28:57.286817   79003 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-409903" does not appear in /home/jenkins/minikube-integration/19370-13057/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-409903" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-409903 -n old-k8s-version-409903
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-409903 -n old-k8s-version-409903: exit status 6 (214.839343ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0806 08:28:57.501412   79033 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-409903" does not appear in /home/jenkins/minikube-integration/19370-13057/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-409903" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.46s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (116.74s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-409903 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-409903 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m56.486341013s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-409903 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-409903 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-409903 describe deploy/metrics-server -n kube-system: exit status 1 (41.678709ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-409903" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-409903 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-409903 -n old-k8s-version-409903
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-409903 -n old-k8s-version-409903: exit status 6 (214.636953ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0806 08:30:54.243648   79793 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-409903" does not appear in /home/jenkins/minikube-integration/19370-13057/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-409903" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (116.74s)
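Both the DeployApp and EnableAddonWhileActive failures for old-k8s-version come down to the same two symptoms: the kubeconfig no longer has an endpoint for the profile ("old-k8s-version-409903" does not appear in /home/jenkins/minikube-integration/19370-13057/kubeconfig), and the apiserver inside the VM is refusing connections on localhost:8443. A short sketch of the manual checks this suggests, assuming the same kubeconfig and profile name as in this run and following the hint printed by the status command itself (`minikube update-context`):

	# See whether kubectl still knows about the profile's context at all
	kubectl config get-contexts
	# Re-point the context at the VM's current IP, as the status warning suggests
	out/minikube-linux-amd64 update-context -p old-k8s-version-409903
	# Re-check host and apiserver state; the apiserver was refusing localhost:8443 inside the VM
	out/minikube-linux-amd64 status -p old-k8s-version-409903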

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-324808 -n embed-certs-324808
E0806 08:29:04.003357   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/calico-405163/client.crt: no such file or directory
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-324808 -n embed-certs-324808: exit status 3 (3.163763226s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0806 08:29:05.060709   79109 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.190:22: connect: no route to host
	E0806 08:29:05.060732   79109 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.190:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-324808 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0806 08:29:05.749020   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/custom-flannel-405163/client.crt: no such file or directory
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p embed-certs-324808 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.152956112s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.190:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p embed-certs-324808 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-324808 -n embed-certs-324808
E0806 08:29:13.203051   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/addons-505457/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-324808 -n embed-certs-324808: exit status 3 (3.06310672s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0806 08:29:14.276737   79173 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.190:22: connect: no route to host
	E0806 08:29:14.276762   79173 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.190:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-324808" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-990751 -n default-k8s-diff-port-990751
E0806 08:30:06.500467   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/flannel-405163/client.crt: no such file or directory
E0806 08:30:07.190273   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/custom-flannel-405163/client.crt: no such file or directory
E0806 08:30:07.307605   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/bridge-405163/client.crt: no such file or directory
E0806 08:30:07.412906   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/enable-default-cni-405163/client.crt: no such file or directory
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-990751 -n default-k8s-diff-port-990751: exit status 3 (3.167769573s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0806 08:30:09.572700   79488 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.235:22: connect: no route to host
	E0806 08:30:09.572727   79488 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.235:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-990751 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0806 08:30:09.867937   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/bridge-405163/client.crt: no such file or directory
E0806 08:30:11.621240   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/flannel-405163/client.crt: no such file or directory
E0806 08:30:14.988629   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/bridge-405163/client.crt: no such file or directory
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-990751 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.15280811s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.50.235:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-990751 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-990751 -n default-k8s-diff-port-990751
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-990751 -n default-k8s-diff-port-990751: exit status 3 (3.063016231s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0806 08:30:18.788758   79568 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.235:22: connect: no route to host
	E0806 08:30:18.788778   79568 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.235:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-990751" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)
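Note: every failure in this test group reduces to the same symptom: both the post-stop status check and the dashboard addon enable fail because SSH to the VM at 192.168.50.235:22 returns "no route to host". The standalone Go sketch below is hypothetical (it is not part of the minikube test suite, and the address and retry count are assumptions lifted from the log above); it only shows one way to probe whether the guest's SSH port has become reachable again before re-running the post-stop steps.

// probe.go: minimal TCP reachability check for a VM's SSH port (illustrative sketch only).
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// Address taken from the failing dial errors above; pass a different
	// host:port as the first argument to probe another VM.
	addr := "192.168.50.235:22"
	if len(os.Args) > 1 {
		addr = os.Args[1]
	}
	// Retry a handful of times: after "minikube stop" the guest may still be
	// shutting down or waiting on a fresh DHCP lease, so early dials can
	// legitimately fail with "no route to host".
	for attempt := 1; attempt <= 5; attempt++ {
		conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
		if err == nil {
			conn.Close()
			fmt.Printf("attempt %d: %s is reachable\n", attempt, addr)
			return
		}
		fmt.Fprintf(os.Stderr, "attempt %d: %v\n", attempt, err)
		time.Sleep(2 * time.Second)
	}
	fmt.Fprintf(os.Stderr, "%s never became reachable; the post-stop checks above would keep failing\n", addr)
	os.Exit(1)
}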

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (709.44s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-409903 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
E0806 08:31:06.884922   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/calico-405163/client.crt: no such file or directory
E0806 08:31:21.824182   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/functional-811691/client.crt: no such file or directory
E0806 08:31:23.303465   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/flannel-405163/client.crt: no such file or directory
E0806 08:31:25.186055   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/kindnet-405163/client.crt: no such file or directory
E0806 08:31:26.671185   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/bridge-405163/client.crt: no such file or directory
E0806 08:31:29.111086   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/custom-flannel-405163/client.crt: no such file or directory
E0806 08:32:10.294458   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/enable-default-cni-405163/client.crt: no such file or directory
E0806 08:32:44.871510   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/functional-811691/client.crt: no such file or directory
E0806 08:32:45.224343   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/flannel-405163/client.crt: no such file or directory
E0806 08:32:48.591507   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/bridge-405163/client.crt: no such file or directory
E0806 08:33:23.041139   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/calico-405163/client.crt: no such file or directory
E0806 08:33:45.266614   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/custom-flannel-405163/client.crt: no such file or directory
E0806 08:33:50.725718   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/calico-405163/client.crt: no such file or directory
E0806 08:34:12.952244   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/custom-flannel-405163/client.crt: no such file or directory
E0806 08:34:13.203673   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/addons-505457/client.crt: no such file or directory
E0806 08:34:26.450246   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/enable-default-cni-405163/client.crt: no such file or directory
E0806 08:34:54.135145   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/enable-default-cni-405163/client.crt: no such file or directory
E0806 08:35:01.379642   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/flannel-405163/client.crt: no such file or directory
E0806 08:35:04.748730   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/bridge-405163/client.crt: no such file or directory
E0806 08:35:20.598257   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/auto-405163/client.crt: no such file or directory
E0806 08:35:29.064766   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/flannel-405163/client.crt: no such file or directory
E0806 08:35:32.432174   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/bridge-405163/client.crt: no such file or directory
E0806 08:35:57.500772   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/kindnet-405163/client.crt: no such file or directory
E0806 08:36:21.824650   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/functional-811691/client.crt: no such file or directory
E0806 08:38:23.040842   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/calico-405163/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-409903 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (11m45.716361582s)

                                                
                                                
-- stdout --
	* [old-k8s-version-409903] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19370
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19370-13057/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19370-13057/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	* Using the kvm2 driver based on existing profile
	* Starting "old-k8s-version-409903" primary control-plane node in "old-k8s-version-409903" cluster
	* Restarting existing kvm2 VM for "old-k8s-version-409903" ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0806 08:30:58.757517   79927 out.go:291] Setting OutFile to fd 1 ...
	I0806 08:30:58.757772   79927 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 08:30:58.757780   79927 out.go:304] Setting ErrFile to fd 2...
	I0806 08:30:58.757784   79927 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 08:30:58.757992   79927 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19370-13057/.minikube/bin
	I0806 08:30:58.758527   79927 out.go:298] Setting JSON to false
	I0806 08:30:58.759405   79927 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":8005,"bootTime":1722925054,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0806 08:30:58.759463   79927 start.go:139] virtualization: kvm guest
	I0806 08:30:58.762200   79927 out.go:177] * [old-k8s-version-409903] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0806 08:30:58.763461   79927 out.go:177]   - MINIKUBE_LOCATION=19370
	I0806 08:30:58.763514   79927 notify.go:220] Checking for updates...
	I0806 08:30:58.766017   79927 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0806 08:30:58.767180   79927 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19370-13057/kubeconfig
	I0806 08:30:58.768270   79927 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19370-13057/.minikube
	I0806 08:30:58.769303   79927 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0806 08:30:58.770385   79927 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0806 08:30:58.771931   79927 config.go:182] Loaded profile config "old-k8s-version-409903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0806 08:30:58.772319   79927 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:30:58.772407   79927 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:30:58.787845   79927 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39563
	I0806 08:30:58.788270   79927 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:30:58.788863   79927 main.go:141] libmachine: Using API Version  1
	I0806 08:30:58.788879   79927 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:30:58.789169   79927 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:30:58.789314   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .DriverName
	I0806 08:30:58.791190   79927 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0806 08:30:58.792487   79927 driver.go:392] Setting default libvirt URI to qemu:///system
	I0806 08:30:58.792776   79927 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:30:58.792811   79927 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:30:58.807347   79927 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33093
	I0806 08:30:58.807788   79927 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:30:58.808260   79927 main.go:141] libmachine: Using API Version  1
	I0806 08:30:58.808276   79927 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:30:58.808633   79927 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:30:58.808849   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .DriverName
	I0806 08:30:58.845305   79927 out.go:177] * Using the kvm2 driver based on existing profile
	I0806 08:30:58.846560   79927 start.go:297] selected driver: kvm2
	I0806 08:30:58.846580   79927 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-409903 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-409903 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.158 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 08:30:58.846684   79927 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0806 08:30:58.847335   79927 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 08:30:58.847395   79927 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19370-13057/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0806 08:30:58.862013   79927 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0806 08:30:58.862398   79927 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0806 08:30:58.862431   79927 cni.go:84] Creating CNI manager for ""
	I0806 08:30:58.862441   79927 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0806 08:30:58.862493   79927 start.go:340] cluster config:
	{Name:old-k8s-version-409903 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-409903 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.158 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 08:30:58.862609   79927 iso.go:125] acquiring lock: {Name:mke58d71de28babe305c5eb0c31c274e9713bb3f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 08:30:58.864484   79927 out.go:177] * Starting "old-k8s-version-409903" primary control-plane node in "old-k8s-version-409903" cluster
	I0806 08:30:58.865763   79927 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0806 08:30:58.865796   79927 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19370-13057/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0806 08:30:58.865803   79927 cache.go:56] Caching tarball of preloaded images
	I0806 08:30:58.865912   79927 preload.go:172] Found /home/jenkins/minikube-integration/19370-13057/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0806 08:30:58.865928   79927 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0806 08:30:58.866013   79927 profile.go:143] Saving config to /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/old-k8s-version-409903/config.json ...
	I0806 08:30:58.866228   79927 start.go:360] acquireMachinesLock for old-k8s-version-409903: {Name:mkc602ac9c8cc3360dcc0fcb8104303837aaab61 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0806 08:34:14.353523   79927 start.go:364] duration metric: took 3m15.487259208s to acquireMachinesLock for "old-k8s-version-409903"
	I0806 08:34:14.353614   79927 start.go:96] Skipping create...Using existing machine configuration
	I0806 08:34:14.353629   79927 fix.go:54] fixHost starting: 
	I0806 08:34:14.354059   79927 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:34:14.354096   79927 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:34:14.373696   79927 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33985
	I0806 08:34:14.374148   79927 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:34:14.374646   79927 main.go:141] libmachine: Using API Version  1
	I0806 08:34:14.374672   79927 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:34:14.374995   79927 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:34:14.375277   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .DriverName
	I0806 08:34:14.375472   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetState
	I0806 08:34:14.377209   79927 fix.go:112] recreateIfNeeded on old-k8s-version-409903: state=Stopped err=<nil>
	I0806 08:34:14.377251   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .DriverName
	W0806 08:34:14.377387   79927 fix.go:138] unexpected machine state, will restart: <nil>
	I0806 08:34:14.379029   79927 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-409903" ...
	I0806 08:34:14.380306   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .Start
	I0806 08:34:14.380486   79927 main.go:141] libmachine: (old-k8s-version-409903) Ensuring networks are active...
	I0806 08:34:14.381155   79927 main.go:141] libmachine: (old-k8s-version-409903) Ensuring network default is active
	I0806 08:34:14.381514   79927 main.go:141] libmachine: (old-k8s-version-409903) Ensuring network mk-old-k8s-version-409903 is active
	I0806 08:34:14.381820   79927 main.go:141] libmachine: (old-k8s-version-409903) Getting domain xml...
	I0806 08:34:14.382679   79927 main.go:141] libmachine: (old-k8s-version-409903) Creating domain...
	I0806 08:34:15.725416   79927 main.go:141] libmachine: (old-k8s-version-409903) Waiting to get IP...
	I0806 08:34:15.726492   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:15.727078   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | unable to find current IP address of domain old-k8s-version-409903 in network mk-old-k8s-version-409903
	I0806 08:34:15.727276   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | I0806 08:34:15.727179   80849 retry.go:31] will retry after 303.235731ms: waiting for machine to come up
	I0806 08:34:16.031996   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:16.032592   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | unable to find current IP address of domain old-k8s-version-409903 in network mk-old-k8s-version-409903
	I0806 08:34:16.032621   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | I0806 08:34:16.032542   80849 retry.go:31] will retry after 355.466914ms: waiting for machine to come up
	I0806 08:34:16.390212   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:16.390736   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | unable to find current IP address of domain old-k8s-version-409903 in network mk-old-k8s-version-409903
	I0806 08:34:16.390768   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | I0806 08:34:16.390674   80849 retry.go:31] will retry after 432.942387ms: waiting for machine to come up
	I0806 08:34:16.825739   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:16.826366   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | unable to find current IP address of domain old-k8s-version-409903 in network mk-old-k8s-version-409903
	I0806 08:34:16.826390   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | I0806 08:34:16.826317   80849 retry.go:31] will retry after 481.17687ms: waiting for machine to come up
	I0806 08:34:17.309005   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:17.309768   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | unable to find current IP address of domain old-k8s-version-409903 in network mk-old-k8s-version-409903
	I0806 08:34:17.309799   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | I0806 08:34:17.309745   80849 retry.go:31] will retry after 631.184181ms: waiting for machine to come up
	I0806 08:34:17.942220   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:17.942742   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | unable to find current IP address of domain old-k8s-version-409903 in network mk-old-k8s-version-409903
	I0806 08:34:17.942766   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | I0806 08:34:17.942701   80849 retry.go:31] will retry after 602.253362ms: waiting for machine to come up
	I0806 08:34:18.546333   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:18.546918   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | unable to find current IP address of domain old-k8s-version-409903 in network mk-old-k8s-version-409903
	I0806 08:34:18.546944   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | I0806 08:34:18.546855   80849 retry.go:31] will retry after 1.061939923s: waiting for machine to come up
	I0806 08:34:19.610165   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:19.610567   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | unable to find current IP address of domain old-k8s-version-409903 in network mk-old-k8s-version-409903
	I0806 08:34:19.610598   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | I0806 08:34:19.610542   80849 retry.go:31] will retry after 1.339464672s: waiting for machine to come up
	I0806 08:34:20.951505   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:20.952049   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | unable to find current IP address of domain old-k8s-version-409903 in network mk-old-k8s-version-409903
	I0806 08:34:20.952080   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | I0806 08:34:20.951975   80849 retry.go:31] will retry after 1.622574331s: waiting for machine to come up
	I0806 08:34:22.576150   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:22.576559   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | unable to find current IP address of domain old-k8s-version-409903 in network mk-old-k8s-version-409903
	I0806 08:34:22.576582   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | I0806 08:34:22.576508   80849 retry.go:31] will retry after 1.408827574s: waiting for machine to come up
	I0806 08:34:23.987256   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:23.987658   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | unable to find current IP address of domain old-k8s-version-409903 in network mk-old-k8s-version-409903
	I0806 08:34:23.987683   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | I0806 08:34:23.987617   80849 retry.go:31] will retry after 2.000071561s: waiting for machine to come up
	I0806 08:34:25.989501   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:25.989880   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | unable to find current IP address of domain old-k8s-version-409903 in network mk-old-k8s-version-409903
	I0806 08:34:25.989906   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | I0806 08:34:25.989838   80849 retry.go:31] will retry after 2.508269501s: waiting for machine to come up
	I0806 08:34:28.501449   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:28.501842   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | unable to find current IP address of domain old-k8s-version-409903 in network mk-old-k8s-version-409903
	I0806 08:34:28.501874   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | I0806 08:34:28.501802   80849 retry.go:31] will retry after 2.7583106s: waiting for machine to come up
	I0806 08:34:31.262960   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:31.263282   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | unable to find current IP address of domain old-k8s-version-409903 in network mk-old-k8s-version-409903
	I0806 08:34:31.263304   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | I0806 08:34:31.263238   80849 retry.go:31] will retry after 4.76596973s: waiting for machine to come up
	I0806 08:34:36.030512   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:36.030950   79927 main.go:141] libmachine: (old-k8s-version-409903) Found IP for machine: 192.168.61.158
	I0806 08:34:36.030972   79927 main.go:141] libmachine: (old-k8s-version-409903) Reserving static IP address...
	I0806 08:34:36.030989   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has current primary IP address 192.168.61.158 and MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:36.031349   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | found host DHCP lease matching {name: "old-k8s-version-409903", mac: "52:54:00:12:30:88", ip: "192.168.61.158"} in network mk-old-k8s-version-409903: {Iface:virbr1 ExpiryTime:2024-08-06 09:34:26 +0000 UTC Type:0 Mac:52:54:00:12:30:88 Iaid: IPaddr:192.168.61.158 Prefix:24 Hostname:old-k8s-version-409903 Clientid:01:52:54:00:12:30:88}
	I0806 08:34:36.031376   79927 main.go:141] libmachine: (old-k8s-version-409903) Reserved static IP address: 192.168.61.158
	I0806 08:34:36.031398   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | skip adding static IP to network mk-old-k8s-version-409903 - found existing host DHCP lease matching {name: "old-k8s-version-409903", mac: "52:54:00:12:30:88", ip: "192.168.61.158"}
	I0806 08:34:36.031416   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | Getting to WaitForSSH function...
	I0806 08:34:36.031428   79927 main.go:141] libmachine: (old-k8s-version-409903) Waiting for SSH to be available...
	I0806 08:34:36.033684   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:36.034094   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:30:88", ip: ""} in network mk-old-k8s-version-409903: {Iface:virbr1 ExpiryTime:2024-08-06 09:34:26 +0000 UTC Type:0 Mac:52:54:00:12:30:88 Iaid: IPaddr:192.168.61.158 Prefix:24 Hostname:old-k8s-version-409903 Clientid:01:52:54:00:12:30:88}
	I0806 08:34:36.034124   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined IP address 192.168.61.158 and MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:36.034327   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | Using SSH client type: external
	I0806 08:34:36.034366   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | Using SSH private key: /home/jenkins/minikube-integration/19370-13057/.minikube/machines/old-k8s-version-409903/id_rsa (-rw-------)
	I0806 08:34:36.034395   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.158 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19370-13057/.minikube/machines/old-k8s-version-409903/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0806 08:34:36.034407   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | About to run SSH command:
	I0806 08:34:36.034415   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | exit 0
	I0806 08:34:36.164605   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | SSH cmd err, output: <nil>: 
	I0806 08:34:36.164952   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetConfigRaw
	I0806 08:34:36.165509   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetIP
	I0806 08:34:36.168135   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:36.168471   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:30:88", ip: ""} in network mk-old-k8s-version-409903: {Iface:virbr1 ExpiryTime:2024-08-06 09:34:26 +0000 UTC Type:0 Mac:52:54:00:12:30:88 Iaid: IPaddr:192.168.61.158 Prefix:24 Hostname:old-k8s-version-409903 Clientid:01:52:54:00:12:30:88}
	I0806 08:34:36.168499   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined IP address 192.168.61.158 and MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:36.168756   79927 profile.go:143] Saving config to /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/old-k8s-version-409903/config.json ...
	I0806 08:34:36.168956   79927 machine.go:94] provisionDockerMachine start ...
	I0806 08:34:36.168974   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .DriverName
	I0806 08:34:36.169186   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHHostname
	I0806 08:34:36.171367   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:36.171702   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:30:88", ip: ""} in network mk-old-k8s-version-409903: {Iface:virbr1 ExpiryTime:2024-08-06 09:34:26 +0000 UTC Type:0 Mac:52:54:00:12:30:88 Iaid: IPaddr:192.168.61.158 Prefix:24 Hostname:old-k8s-version-409903 Clientid:01:52:54:00:12:30:88}
	I0806 08:34:36.171723   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined IP address 192.168.61.158 and MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:36.171902   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHPort
	I0806 08:34:36.172080   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHKeyPath
	I0806 08:34:36.172240   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHKeyPath
	I0806 08:34:36.172400   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHUsername
	I0806 08:34:36.172569   79927 main.go:141] libmachine: Using SSH client type: native
	I0806 08:34:36.172798   79927 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.158 22 <nil> <nil>}
	I0806 08:34:36.172812   79927 main.go:141] libmachine: About to run SSH command:
	hostname
	I0806 08:34:36.288738   79927 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0806 08:34:36.288770   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetMachineName
	I0806 08:34:36.289073   79927 buildroot.go:166] provisioning hostname "old-k8s-version-409903"
	I0806 08:34:36.289088   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetMachineName
	I0806 08:34:36.289272   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHHostname
	I0806 08:34:36.291958   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:36.292326   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:30:88", ip: ""} in network mk-old-k8s-version-409903: {Iface:virbr1 ExpiryTime:2024-08-06 09:34:26 +0000 UTC Type:0 Mac:52:54:00:12:30:88 Iaid: IPaddr:192.168.61.158 Prefix:24 Hostname:old-k8s-version-409903 Clientid:01:52:54:00:12:30:88}
	I0806 08:34:36.292354   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined IP address 192.168.61.158 and MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:36.292462   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHPort
	I0806 08:34:36.292662   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHKeyPath
	I0806 08:34:36.292807   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHKeyPath
	I0806 08:34:36.292953   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHUsername
	I0806 08:34:36.293122   79927 main.go:141] libmachine: Using SSH client type: native
	I0806 08:34:36.293287   79927 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.158 22 <nil> <nil>}
	I0806 08:34:36.293300   79927 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-409903 && echo "old-k8s-version-409903" | sudo tee /etc/hostname
	I0806 08:34:36.427591   79927 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-409903
	
	I0806 08:34:36.427620   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHHostname
	I0806 08:34:36.430395   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:36.430724   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:30:88", ip: ""} in network mk-old-k8s-version-409903: {Iface:virbr1 ExpiryTime:2024-08-06 09:34:26 +0000 UTC Type:0 Mac:52:54:00:12:30:88 Iaid: IPaddr:192.168.61.158 Prefix:24 Hostname:old-k8s-version-409903 Clientid:01:52:54:00:12:30:88}
	I0806 08:34:36.430758   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined IP address 192.168.61.158 and MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:36.430900   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHPort
	I0806 08:34:36.431133   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHKeyPath
	I0806 08:34:36.431311   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHKeyPath
	I0806 08:34:36.431476   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHUsername
	I0806 08:34:36.431662   79927 main.go:141] libmachine: Using SSH client type: native
	I0806 08:34:36.431876   79927 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.158 22 <nil> <nil>}
	I0806 08:34:36.431895   79927 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-409903' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-409903/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-409903' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0806 08:34:36.554542   79927 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0806 08:34:36.554574   79927 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19370-13057/.minikube CaCertPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19370-13057/.minikube}
	I0806 08:34:36.554613   79927 buildroot.go:174] setting up certificates
	I0806 08:34:36.554629   79927 provision.go:84] configureAuth start
	I0806 08:34:36.554646   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetMachineName
	I0806 08:34:36.554932   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetIP
	I0806 08:34:36.557899   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:36.558351   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:30:88", ip: ""} in network mk-old-k8s-version-409903: {Iface:virbr1 ExpiryTime:2024-08-06 09:34:26 +0000 UTC Type:0 Mac:52:54:00:12:30:88 Iaid: IPaddr:192.168.61.158 Prefix:24 Hostname:old-k8s-version-409903 Clientid:01:52:54:00:12:30:88}
	I0806 08:34:36.558373   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined IP address 192.168.61.158 and MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:36.558530   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHHostname
	I0806 08:34:36.560878   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:36.561153   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:30:88", ip: ""} in network mk-old-k8s-version-409903: {Iface:virbr1 ExpiryTime:2024-08-06 09:34:26 +0000 UTC Type:0 Mac:52:54:00:12:30:88 Iaid: IPaddr:192.168.61.158 Prefix:24 Hostname:old-k8s-version-409903 Clientid:01:52:54:00:12:30:88}
	I0806 08:34:36.561182   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined IP address 192.168.61.158 and MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:36.561339   79927 provision.go:143] copyHostCerts
	I0806 08:34:36.561400   79927 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-13057/.minikube/ca.pem, removing ...
	I0806 08:34:36.561410   79927 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-13057/.minikube/ca.pem
	I0806 08:34:36.561462   79927 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19370-13057/.minikube/ca.pem (1078 bytes)
	I0806 08:34:36.561563   79927 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-13057/.minikube/cert.pem, removing ...
	I0806 08:34:36.561572   79927 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-13057/.minikube/cert.pem
	I0806 08:34:36.561594   79927 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19370-13057/.minikube/cert.pem (1123 bytes)
	I0806 08:34:36.561647   79927 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-13057/.minikube/key.pem, removing ...
	I0806 08:34:36.561653   79927 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-13057/.minikube/key.pem
	I0806 08:34:36.561670   79927 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19370-13057/.minikube/key.pem (1675 bytes)
	I0806 08:34:36.561752   79927 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19370-13057/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-409903 san=[127.0.0.1 192.168.61.158 localhost minikube old-k8s-version-409903]
	I0806 08:34:36.646180   79927 provision.go:177] copyRemoteCerts
	I0806 08:34:36.646233   79927 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0806 08:34:36.646257   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHHostname
	I0806 08:34:36.649020   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:36.649380   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:30:88", ip: ""} in network mk-old-k8s-version-409903: {Iface:virbr1 ExpiryTime:2024-08-06 09:34:26 +0000 UTC Type:0 Mac:52:54:00:12:30:88 Iaid: IPaddr:192.168.61.158 Prefix:24 Hostname:old-k8s-version-409903 Clientid:01:52:54:00:12:30:88}
	I0806 08:34:36.649405   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined IP address 192.168.61.158 and MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:36.649564   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHPort
	I0806 08:34:36.649791   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHKeyPath
	I0806 08:34:36.649962   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHUsername
	I0806 08:34:36.650092   79927 sshutil.go:53] new ssh client: &{IP:192.168.61.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/old-k8s-version-409903/id_rsa Username:docker}
	I0806 08:34:36.738988   79927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0806 08:34:36.765104   79927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0806 08:34:36.791946   79927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0806 08:34:36.817128   79927 provision.go:87] duration metric: took 262.482885ms to configureAuth
	I0806 08:34:36.817157   79927 buildroot.go:189] setting minikube options for container-runtime
	I0806 08:34:36.817352   79927 config.go:182] Loaded profile config "old-k8s-version-409903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0806 08:34:36.817435   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHHostname
	I0806 08:34:36.820142   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:36.820480   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:30:88", ip: ""} in network mk-old-k8s-version-409903: {Iface:virbr1 ExpiryTime:2024-08-06 09:34:26 +0000 UTC Type:0 Mac:52:54:00:12:30:88 Iaid: IPaddr:192.168.61.158 Prefix:24 Hostname:old-k8s-version-409903 Clientid:01:52:54:00:12:30:88}
	I0806 08:34:36.820520   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined IP address 192.168.61.158 and MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:36.820659   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHPort
	I0806 08:34:36.820892   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHKeyPath
	I0806 08:34:36.821095   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHKeyPath
	I0806 08:34:36.821289   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHUsername
	I0806 08:34:36.821459   79927 main.go:141] libmachine: Using SSH client type: native
	I0806 08:34:36.821703   79927 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.158 22 <nil> <nil>}
	I0806 08:34:36.821722   79927 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0806 08:34:37.099949   79927 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0806 08:34:37.099979   79927 machine.go:97] duration metric: took 931.010538ms to provisionDockerMachine
	I0806 08:34:37.099993   79927 start.go:293] postStartSetup for "old-k8s-version-409903" (driver="kvm2")
	I0806 08:34:37.100007   79927 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0806 08:34:37.100028   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .DriverName
	I0806 08:34:37.100379   79927 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0806 08:34:37.100414   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHHostname
	I0806 08:34:37.103297   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:37.103691   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:30:88", ip: ""} in network mk-old-k8s-version-409903: {Iface:virbr1 ExpiryTime:2024-08-06 09:34:26 +0000 UTC Type:0 Mac:52:54:00:12:30:88 Iaid: IPaddr:192.168.61.158 Prefix:24 Hostname:old-k8s-version-409903 Clientid:01:52:54:00:12:30:88}
	I0806 08:34:37.103717   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined IP address 192.168.61.158 and MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:37.103887   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHPort
	I0806 08:34:37.104106   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHKeyPath
	I0806 08:34:37.104263   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHUsername
	I0806 08:34:37.104402   79927 sshutil.go:53] new ssh client: &{IP:192.168.61.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/old-k8s-version-409903/id_rsa Username:docker}
	I0806 08:34:37.192035   79927 ssh_runner.go:195] Run: cat /etc/os-release
	I0806 08:34:37.196693   79927 info.go:137] Remote host: Buildroot 2023.02.9
	I0806 08:34:37.196712   79927 filesync.go:126] Scanning /home/jenkins/minikube-integration/19370-13057/.minikube/addons for local assets ...
	I0806 08:34:37.196776   79927 filesync.go:126] Scanning /home/jenkins/minikube-integration/19370-13057/.minikube/files for local assets ...
	I0806 08:34:37.196861   79927 filesync.go:149] local asset: /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem -> 202462.pem in /etc/ssl/certs
	I0806 08:34:37.196957   79927 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0806 08:34:37.206906   79927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem --> /etc/ssl/certs/202462.pem (1708 bytes)
	I0806 08:34:37.232270   79927 start.go:296] duration metric: took 132.262362ms for postStartSetup
	I0806 08:34:37.232311   79927 fix.go:56] duration metric: took 22.878683419s for fixHost
	I0806 08:34:37.232332   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHHostname
	I0806 08:34:37.235278   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:37.235721   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:30:88", ip: ""} in network mk-old-k8s-version-409903: {Iface:virbr1 ExpiryTime:2024-08-06 09:34:26 +0000 UTC Type:0 Mac:52:54:00:12:30:88 Iaid: IPaddr:192.168.61.158 Prefix:24 Hostname:old-k8s-version-409903 Clientid:01:52:54:00:12:30:88}
	I0806 08:34:37.235751   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined IP address 192.168.61.158 and MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:37.235946   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHPort
	I0806 08:34:37.236168   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHKeyPath
	I0806 08:34:37.236337   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHKeyPath
	I0806 08:34:37.236491   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHUsername
	I0806 08:34:37.236640   79927 main.go:141] libmachine: Using SSH client type: native
	I0806 08:34:37.236793   79927 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.158 22 <nil> <nil>}
	I0806 08:34:37.236803   79927 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0806 08:34:37.353370   79927 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722933277.327882532
	
	I0806 08:34:37.353397   79927 fix.go:216] guest clock: 1722933277.327882532
	I0806 08:34:37.353408   79927 fix.go:229] Guest: 2024-08-06 08:34:37.327882532 +0000 UTC Remote: 2024-08-06 08:34:37.232315481 +0000 UTC m=+218.506631326 (delta=95.567051ms)
	I0806 08:34:37.353439   79927 fix.go:200] guest clock delta is within tolerance: 95.567051ms
	I0806 08:34:37.353451   79927 start.go:83] releasing machines lock for "old-k8s-version-409903", held for 22.999876259s
	I0806 08:34:37.353490   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .DriverName
	I0806 08:34:37.353775   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetIP
	I0806 08:34:37.356788   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:37.357197   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:30:88", ip: ""} in network mk-old-k8s-version-409903: {Iface:virbr1 ExpiryTime:2024-08-06 09:34:26 +0000 UTC Type:0 Mac:52:54:00:12:30:88 Iaid: IPaddr:192.168.61.158 Prefix:24 Hostname:old-k8s-version-409903 Clientid:01:52:54:00:12:30:88}
	I0806 08:34:37.357224   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined IP address 192.168.61.158 and MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:37.357434   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .DriverName
	I0806 08:34:37.358035   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .DriverName
	I0806 08:34:37.358238   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .DriverName
	I0806 08:34:37.358320   79927 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0806 08:34:37.358360   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHHostname
	I0806 08:34:37.358478   79927 ssh_runner.go:195] Run: cat /version.json
	I0806 08:34:37.358505   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHHostname
	I0806 08:34:37.361125   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:37.361429   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:37.361464   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:30:88", ip: ""} in network mk-old-k8s-version-409903: {Iface:virbr1 ExpiryTime:2024-08-06 09:34:26 +0000 UTC Type:0 Mac:52:54:00:12:30:88 Iaid: IPaddr:192.168.61.158 Prefix:24 Hostname:old-k8s-version-409903 Clientid:01:52:54:00:12:30:88}
	I0806 08:34:37.361490   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined IP address 192.168.61.158 and MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:37.361681   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHPort
	I0806 08:34:37.361846   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHKeyPath
	I0806 08:34:37.361889   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:30:88", ip: ""} in network mk-old-k8s-version-409903: {Iface:virbr1 ExpiryTime:2024-08-06 09:34:26 +0000 UTC Type:0 Mac:52:54:00:12:30:88 Iaid: IPaddr:192.168.61.158 Prefix:24 Hostname:old-k8s-version-409903 Clientid:01:52:54:00:12:30:88}
	I0806 08:34:37.361913   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined IP address 192.168.61.158 and MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:37.362018   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHUsername
	I0806 08:34:37.362078   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHPort
	I0806 08:34:37.362188   79927 sshutil.go:53] new ssh client: &{IP:192.168.61.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/old-k8s-version-409903/id_rsa Username:docker}
	I0806 08:34:37.362319   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHKeyPath
	I0806 08:34:37.362452   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHUsername
	I0806 08:34:37.362596   79927 sshutil.go:53] new ssh client: &{IP:192.168.61.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/old-k8s-version-409903/id_rsa Username:docker}
	I0806 08:34:37.468465   79927 ssh_runner.go:195] Run: systemctl --version
	I0806 08:34:37.475436   79927 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0806 08:34:37.627725   79927 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0806 08:34:37.635346   79927 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0806 08:34:37.635432   79927 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0806 08:34:37.652546   79927 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0806 08:34:37.652571   79927 start.go:495] detecting cgroup driver to use...
	I0806 08:34:37.652642   79927 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0806 08:34:37.671386   79927 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0806 08:34:37.686408   79927 docker.go:217] disabling cri-docker service (if available) ...
	I0806 08:34:37.686473   79927 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0806 08:34:37.701731   79927 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0806 08:34:37.717528   79927 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0806 08:34:37.845160   79927 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0806 08:34:37.999278   79927 docker.go:233] disabling docker service ...
	I0806 08:34:37.999361   79927 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0806 08:34:38.017101   79927 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0806 08:34:38.033189   79927 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0806 08:34:38.213620   79927 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0806 08:34:38.374790   79927 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0806 08:34:38.393394   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0806 08:34:38.419807   79927 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0806 08:34:38.419878   79927 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:34:38.434870   79927 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0806 08:34:38.434946   79927 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:34:38.449029   79927 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:34:38.460440   79927 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:34:38.473654   79927 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0806 08:34:38.486032   79927 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0806 08:34:38.495861   79927 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0806 08:34:38.495927   79927 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0806 08:34:38.511021   79927 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0806 08:34:38.521985   79927 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 08:34:38.651919   79927 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0806 08:34:38.822367   79927 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0806 08:34:38.822444   79927 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0806 08:34:38.828148   79927 start.go:563] Will wait 60s for crictl version
	I0806 08:34:38.828211   79927 ssh_runner.go:195] Run: which crictl
	I0806 08:34:38.832084   79927 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0806 08:34:38.872571   79927 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0806 08:34:38.872658   79927 ssh_runner.go:195] Run: crio --version
	I0806 08:34:38.903507   79927 ssh_runner.go:195] Run: crio --version
	I0806 08:34:38.934955   79927 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0806 08:34:38.936206   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetIP
	I0806 08:34:38.939581   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:38.940049   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:30:88", ip: ""} in network mk-old-k8s-version-409903: {Iface:virbr1 ExpiryTime:2024-08-06 09:34:26 +0000 UTC Type:0 Mac:52:54:00:12:30:88 Iaid: IPaddr:192.168.61.158 Prefix:24 Hostname:old-k8s-version-409903 Clientid:01:52:54:00:12:30:88}
	I0806 08:34:38.940076   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined IP address 192.168.61.158 and MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:38.940336   79927 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0806 08:34:38.944801   79927 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0806 08:34:38.959454   79927 kubeadm.go:883] updating cluster {Name:old-k8s-version-409903 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-409903 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.158 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0806 08:34:38.959581   79927 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0806 08:34:38.959647   79927 ssh_runner.go:195] Run: sudo crictl images --output json
	I0806 08:34:39.014798   79927 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0806 08:34:39.014908   79927 ssh_runner.go:195] Run: which lz4
	I0806 08:34:39.019642   79927 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0806 08:34:39.025125   79927 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0806 08:34:39.025165   79927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0806 08:34:40.772902   79927 crio.go:462] duration metric: took 1.753273519s to copy over tarball
	I0806 08:34:40.772994   79927 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0806 08:34:43.977375   79927 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.204351244s)
	I0806 08:34:43.977404   79927 crio.go:469] duration metric: took 3.204457213s to extract the tarball
	I0806 08:34:43.977412   79927 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0806 08:34:44.023819   79927 ssh_runner.go:195] Run: sudo crictl images --output json
	I0806 08:34:44.070991   79927 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0806 08:34:44.071016   79927 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0806 08:34:44.071089   79927 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0806 08:34:44.071129   79927 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0806 08:34:44.071138   79927 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0806 08:34:44.071090   79927 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0806 08:34:44.071184   79927 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0806 08:34:44.071192   79927 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0806 08:34:44.071210   79927 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0806 08:34:44.071255   79927 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0806 08:34:44.072641   79927 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0806 08:34:44.072728   79927 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0806 08:34:44.072743   79927 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0806 08:34:44.072642   79927 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0806 08:34:44.072749   79927 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0806 08:34:44.072811   79927 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0806 08:34:44.073085   79927 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0806 08:34:44.073298   79927 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0806 08:34:44.239872   79927 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0806 08:34:44.287412   79927 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0806 08:34:44.287458   79927 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0806 08:34:44.287504   79927 ssh_runner.go:195] Run: which crictl
	I0806 08:34:44.292050   79927 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0806 08:34:44.309229   79927 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0806 08:34:44.336611   79927 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0806 08:34:44.371389   79927 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0806 08:34:44.371447   79927 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0806 08:34:44.371500   79927 ssh_runner.go:195] Run: which crictl
	I0806 08:34:44.375821   79927 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0806 08:34:44.412139   79927 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0806 08:34:44.415159   79927 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0806 08:34:44.415339   79927 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0806 08:34:44.418296   79927 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0806 08:34:44.423718   79927 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0806 08:34:44.437866   79927 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0806 08:34:44.494716   79927 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0806 08:34:44.494765   79927 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0806 08:34:44.494819   79927 ssh_runner.go:195] Run: which crictl
	I0806 08:34:44.526849   79927 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0806 08:34:44.526898   79927 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0806 08:34:44.526961   79927 ssh_runner.go:195] Run: which crictl
	I0806 08:34:44.555530   79927 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0806 08:34:44.555587   79927 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0806 08:34:44.555620   79927 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0806 08:34:44.555634   79927 ssh_runner.go:195] Run: which crictl
	I0806 08:34:44.555655   79927 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0806 08:34:44.555692   79927 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0806 08:34:44.555707   79927 ssh_runner.go:195] Run: which crictl
	I0806 08:34:44.555714   79927 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0806 08:34:44.555743   79927 ssh_runner.go:195] Run: which crictl
	I0806 08:34:44.555748   79927 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0806 08:34:44.555792   79927 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0806 08:34:44.561072   79927 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0806 08:34:44.624216   79927 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0806 08:34:44.639877   79927 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0806 08:34:44.639895   79927 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0806 08:34:44.639979   79927 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0806 08:34:44.653684   79927 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0806 08:34:44.711774   79927 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0806 08:34:44.711804   79927 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0806 08:34:44.964733   79927 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0806 08:34:45.112727   79927 cache_images.go:92] duration metric: took 1.041682516s to LoadCachedImages
	W0806 08:34:45.112813   79927 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I0806 08:34:45.112837   79927 kubeadm.go:934] updating node { 192.168.61.158 8443 v1.20.0 crio true true} ...
	I0806 08:34:45.112964   79927 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-409903 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.158
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-409903 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0806 08:34:45.113047   79927 ssh_runner.go:195] Run: crio config
	I0806 08:34:45.165383   79927 cni.go:84] Creating CNI manager for ""
	I0806 08:34:45.165413   79927 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0806 08:34:45.165429   79927 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0806 08:34:45.165455   79927 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.158 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-409903 NodeName:old-k8s-version-409903 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.158"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.158 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0806 08:34:45.165687   79927 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.158
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-409903"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.158
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.158"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0806 08:34:45.165760   79927 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0806 08:34:45.177296   79927 binaries.go:44] Found k8s binaries, skipping transfer
	I0806 08:34:45.177371   79927 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0806 08:34:45.187299   79927 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0806 08:34:45.205986   79927 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0806 08:34:45.225072   79927 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0806 08:34:45.244222   79927 ssh_runner.go:195] Run: grep 192.168.61.158	control-plane.minikube.internal$ /etc/hosts
	I0806 08:34:45.248728   79927 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.158	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0806 08:34:45.263094   79927 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 08:34:45.399106   79927 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0806 08:34:45.417589   79927 certs.go:68] Setting up /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/old-k8s-version-409903 for IP: 192.168.61.158
	I0806 08:34:45.417611   79927 certs.go:194] generating shared ca certs ...
	I0806 08:34:45.417630   79927 certs.go:226] acquiring lock for ca certs: {Name:mkf5c5aed91f4108054d9f2c742cad66c86afbed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 08:34:45.417779   79927 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19370-13057/.minikube/ca.key
	I0806 08:34:45.417820   79927 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.key
	I0806 08:34:45.417829   79927 certs.go:256] generating profile certs ...
	I0806 08:34:45.417913   79927 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/old-k8s-version-409903/client.key
	I0806 08:34:45.417970   79927 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/old-k8s-version-409903/apiserver.key.1ffd6785
	I0806 08:34:45.418060   79927 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/old-k8s-version-409903/proxy-client.key
	I0806 08:34:45.418242   79927 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/20246.pem (1338 bytes)
	W0806 08:34:45.418273   79927 certs.go:480] ignoring /home/jenkins/minikube-integration/19370-13057/.minikube/certs/20246_empty.pem, impossibly tiny 0 bytes
	I0806 08:34:45.418282   79927 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca-key.pem (1675 bytes)
	I0806 08:34:45.418302   79927 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem (1078 bytes)
	I0806 08:34:45.418322   79927 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/cert.pem (1123 bytes)
	I0806 08:34:45.418350   79927 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/key.pem (1675 bytes)
	I0806 08:34:45.418389   79927 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem (1708 bytes)
	I0806 08:34:45.419018   79927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0806 08:34:45.464550   79927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0806 08:34:45.500764   79927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0806 08:34:45.549225   79927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0806 08:34:45.597480   79927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/old-k8s-version-409903/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0806 08:34:45.639834   79927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/old-k8s-version-409903/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0806 08:34:45.680789   79927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/old-k8s-version-409903/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0806 08:34:45.737986   79927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/old-k8s-version-409903/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0806 08:34:45.766471   79927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem --> /usr/share/ca-certificates/202462.pem (1708 bytes)
	I0806 08:34:45.797047   79927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0806 08:34:45.825789   79927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/certs/20246.pem --> /usr/share/ca-certificates/20246.pem (1338 bytes)
	I0806 08:34:45.860217   79927 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0806 08:34:45.883841   79927 ssh_runner.go:195] Run: openssl version
	I0806 08:34:45.891103   79927 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202462.pem && ln -fs /usr/share/ca-certificates/202462.pem /etc/ssl/certs/202462.pem"
	I0806 08:34:45.905260   79927 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202462.pem
	I0806 08:34:45.910759   79927 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  6 07:17 /usr/share/ca-certificates/202462.pem
	I0806 08:34:45.910837   79927 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202462.pem
	I0806 08:34:45.918440   79927 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202462.pem /etc/ssl/certs/3ec20f2e.0"
	I0806 08:34:45.932479   79927 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0806 08:34:45.945678   79927 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0806 08:34:45.951612   79927 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  6 07:07 /usr/share/ca-certificates/minikubeCA.pem
	I0806 08:34:45.951691   79927 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0806 08:34:45.957749   79927 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0806 08:34:45.969987   79927 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20246.pem && ln -fs /usr/share/ca-certificates/20246.pem /etc/ssl/certs/20246.pem"
	I0806 08:34:45.982878   79927 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20246.pem
	I0806 08:34:45.987953   79927 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  6 07:17 /usr/share/ca-certificates/20246.pem
	I0806 08:34:45.988032   79927 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20246.pem
	I0806 08:34:45.994219   79927 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20246.pem /etc/ssl/certs/51391683.0"
	I0806 08:34:46.006666   79927 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0806 08:34:46.011624   79927 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0806 08:34:46.018416   79927 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0806 08:34:46.025220   79927 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0806 08:34:46.031854   79927 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0806 08:34:46.038570   79927 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0806 08:34:46.045142   79927 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0806 08:34:46.052198   79927 kubeadm.go:392] StartCluster: {Name:old-k8s-version-409903 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-409903 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.158 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 08:34:46.052313   79927 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0806 08:34:46.052372   79927 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0806 08:34:46.098382   79927 cri.go:89] found id: ""
	I0806 08:34:46.098454   79927 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0806 08:34:46.110861   79927 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0806 08:34:46.110888   79927 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0806 08:34:46.110956   79927 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0806 08:34:46.123420   79927 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0806 08:34:46.124617   79927 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-409903" does not appear in /home/jenkins/minikube-integration/19370-13057/kubeconfig
	I0806 08:34:46.125562   79927 kubeconfig.go:62] /home/jenkins/minikube-integration/19370-13057/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-409903" cluster setting kubeconfig missing "old-k8s-version-409903" context setting]
	I0806 08:34:46.127017   79927 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-13057/kubeconfig: {Name:mk5c066914acf8e36556b29792f75b98076ca34a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 08:34:46.173767   79927 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0806 08:34:46.186472   79927 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.158
	I0806 08:34:46.186517   79927 kubeadm.go:1160] stopping kube-system containers ...
	I0806 08:34:46.186532   79927 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0806 08:34:46.186592   79927 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0806 08:34:46.227817   79927 cri.go:89] found id: ""
	I0806 08:34:46.227914   79927 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0806 08:34:46.248248   79927 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0806 08:34:46.259555   79927 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0806 08:34:46.259578   79927 kubeadm.go:157] found existing configuration files:
	
	I0806 08:34:46.259647   79927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0806 08:34:46.271246   79927 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0806 08:34:46.271311   79927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0806 08:34:46.283559   79927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0806 08:34:46.295584   79927 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0806 08:34:46.295660   79927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0806 08:34:46.307223   79927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0806 08:34:46.318046   79927 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0806 08:34:46.318136   79927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0806 08:34:46.328827   79927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0806 08:34:46.339577   79927 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0806 08:34:46.339665   79927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0806 08:34:46.352107   79927 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0806 08:34:46.369776   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0806 08:34:46.505082   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0806 08:34:47.036627   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0806 08:34:47.285779   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0806 08:34:47.397777   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0806 08:34:47.517913   79927 api_server.go:52] waiting for apiserver process to appear ...
	I0806 08:34:47.518028   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:48.018399   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:48.518708   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:49.018572   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:49.518975   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:50.018355   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:50.518866   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:51.018156   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:51.518855   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:52.018270   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:52.518584   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:53.018606   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:53.518118   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:54.018818   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:54.518987   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:55.018584   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:55.518848   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:56.018094   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:56.518052   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:57.018765   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:57.518128   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:58.018977   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:58.518233   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:59.019117   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:59.518121   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:00.018102   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:00.518402   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:01.019125   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:01.518141   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:02.018063   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:02.518834   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:03.018074   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:03.518904   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:04.018047   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:04.518891   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:05.018447   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:05.518506   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:06.019133   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:06.518070   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:07.018362   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:07.518320   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:08.019034   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:08.518740   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:09.018349   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:09.518189   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:10.018824   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:10.518260   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:11.018914   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:11.518709   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:12.018349   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:12.518719   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:13.019022   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:13.518318   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:14.018930   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:14.518251   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:15.018981   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:15.518764   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:16.018114   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:16.518157   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:17.018124   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:17.518604   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:18.018742   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:18.518755   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:19.018302   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:19.518693   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:20.018804   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:20.518878   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:21.018974   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:21.518747   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:22.018530   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:22.519082   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:23.019088   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:23.518932   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:24.018764   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:24.519010   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:25.019090   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:25.518260   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:26.018784   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:26.518903   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:27.019011   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:27.518740   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:28.018299   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:28.518391   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:29.018449   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:29.518446   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:30.018179   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:30.518270   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:31.018913   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:31.518140   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:32.018621   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:32.518573   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:33.018512   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:33.518604   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:34.018132   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:34.518476   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:35.018797   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:35.518507   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:36.018976   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:36.518062   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:37.018463   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:37.518648   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:38.018861   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:38.518236   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:39.018424   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:39.518065   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:40.018381   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:40.518979   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:41.018310   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:41.518277   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:42.018881   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:42.518157   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:43.018166   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:43.518740   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:44.018075   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:44.518545   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:45.018342   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:45.519066   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:46.018796   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:46.519031   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:47.018876   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:47.518649   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:35:47.518746   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:35:47.559141   79927 cri.go:89] found id: ""
	I0806 08:35:47.559175   79927 logs.go:276] 0 containers: []
	W0806 08:35:47.559188   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:35:47.559195   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:35:47.559253   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:35:47.596840   79927 cri.go:89] found id: ""
	I0806 08:35:47.596900   79927 logs.go:276] 0 containers: []
	W0806 08:35:47.596913   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:35:47.596921   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:35:47.596995   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:35:47.634932   79927 cri.go:89] found id: ""
	I0806 08:35:47.634961   79927 logs.go:276] 0 containers: []
	W0806 08:35:47.634969   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:35:47.634975   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:35:47.635031   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:35:47.672170   79927 cri.go:89] found id: ""
	I0806 08:35:47.672202   79927 logs.go:276] 0 containers: []
	W0806 08:35:47.672213   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:35:47.672220   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:35:47.672284   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:35:47.709643   79927 cri.go:89] found id: ""
	I0806 08:35:47.709672   79927 logs.go:276] 0 containers: []
	W0806 08:35:47.709683   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:35:47.709690   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:35:47.709752   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:35:47.750036   79927 cri.go:89] found id: ""
	I0806 08:35:47.750067   79927 logs.go:276] 0 containers: []
	W0806 08:35:47.750076   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:35:47.750082   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:35:47.750149   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:35:47.791837   79927 cri.go:89] found id: ""
	I0806 08:35:47.791862   79927 logs.go:276] 0 containers: []
	W0806 08:35:47.791872   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:35:47.791879   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:35:47.791938   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:35:47.831972   79927 cri.go:89] found id: ""
	I0806 08:35:47.831997   79927 logs.go:276] 0 containers: []
	W0806 08:35:47.832005   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:35:47.832014   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:35:47.832028   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:35:47.962228   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:35:47.962252   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:35:47.962267   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:35:48.031276   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:35:48.031314   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:35:48.075869   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:35:48.075912   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:35:48.131817   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:35:48.131859   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:35:50.647351   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:50.661916   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:35:50.662002   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:35:50.707294   79927 cri.go:89] found id: ""
	I0806 08:35:50.707323   79927 logs.go:276] 0 containers: []
	W0806 08:35:50.707335   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:35:50.707342   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:35:50.707396   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:35:50.744111   79927 cri.go:89] found id: ""
	I0806 08:35:50.744133   79927 logs.go:276] 0 containers: []
	W0806 08:35:50.744149   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:35:50.744160   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:35:50.744205   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:35:50.781482   79927 cri.go:89] found id: ""
	I0806 08:35:50.781513   79927 logs.go:276] 0 containers: []
	W0806 08:35:50.781524   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:35:50.781531   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:35:50.781591   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:35:50.828788   79927 cri.go:89] found id: ""
	I0806 08:35:50.828817   79927 logs.go:276] 0 containers: []
	W0806 08:35:50.828827   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:35:50.828834   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:35:50.828896   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:35:50.865168   79927 cri.go:89] found id: ""
	I0806 08:35:50.865199   79927 logs.go:276] 0 containers: []
	W0806 08:35:50.865211   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:35:50.865219   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:35:50.865273   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:35:50.902814   79927 cri.go:89] found id: ""
	I0806 08:35:50.902842   79927 logs.go:276] 0 containers: []
	W0806 08:35:50.902851   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:35:50.902856   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:35:50.902907   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:35:50.939861   79927 cri.go:89] found id: ""
	I0806 08:35:50.939891   79927 logs.go:276] 0 containers: []
	W0806 08:35:50.939899   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:35:50.939905   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:35:50.939961   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:35:50.984743   79927 cri.go:89] found id: ""
	I0806 08:35:50.984773   79927 logs.go:276] 0 containers: []
	W0806 08:35:50.984782   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:35:50.984793   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:35:50.984809   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:35:50.998476   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:35:50.998515   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:35:51.081100   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:35:51.081126   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:35:51.081139   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:35:51.157634   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:35:51.157679   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:35:51.210685   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:35:51.210716   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:35:53.778620   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:53.793090   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:35:53.793154   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:35:53.837558   79927 cri.go:89] found id: ""
	I0806 08:35:53.837584   79927 logs.go:276] 0 containers: []
	W0806 08:35:53.837594   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:35:53.837599   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:35:53.837670   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:35:53.878855   79927 cri.go:89] found id: ""
	I0806 08:35:53.878884   79927 logs.go:276] 0 containers: []
	W0806 08:35:53.878893   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:35:53.878898   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:35:53.878945   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:35:53.918330   79927 cri.go:89] found id: ""
	I0806 08:35:53.918357   79927 logs.go:276] 0 containers: []
	W0806 08:35:53.918366   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:35:53.918371   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:35:53.918422   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:35:53.959156   79927 cri.go:89] found id: ""
	I0806 08:35:53.959186   79927 logs.go:276] 0 containers: []
	W0806 08:35:53.959196   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:35:53.959204   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:35:53.959309   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:35:53.996775   79927 cri.go:89] found id: ""
	I0806 08:35:53.996802   79927 logs.go:276] 0 containers: []
	W0806 08:35:53.996810   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:35:53.996816   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:35:53.996874   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:35:54.033047   79927 cri.go:89] found id: ""
	I0806 08:35:54.033072   79927 logs.go:276] 0 containers: []
	W0806 08:35:54.033082   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:35:54.033089   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:35:54.033152   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:35:54.069426   79927 cri.go:89] found id: ""
	I0806 08:35:54.069458   79927 logs.go:276] 0 containers: []
	W0806 08:35:54.069468   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:35:54.069475   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:35:54.069536   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:35:54.109114   79927 cri.go:89] found id: ""
	I0806 08:35:54.109143   79927 logs.go:276] 0 containers: []
	W0806 08:35:54.109153   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:35:54.109164   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:35:54.109179   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:35:54.161264   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:35:54.161298   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:35:54.175216   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:35:54.175247   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:35:54.254970   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:35:54.254999   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:35:54.255014   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:35:54.331561   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:35:54.331596   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:35:56.873110   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:56.889302   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:35:56.889378   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:35:56.931875   79927 cri.go:89] found id: ""
	I0806 08:35:56.931904   79927 logs.go:276] 0 containers: []
	W0806 08:35:56.931913   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:35:56.931919   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:35:56.931975   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:35:56.995549   79927 cri.go:89] found id: ""
	I0806 08:35:56.995587   79927 logs.go:276] 0 containers: []
	W0806 08:35:56.995601   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:35:56.995608   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:35:56.995675   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:35:57.042708   79927 cri.go:89] found id: ""
	I0806 08:35:57.042735   79927 logs.go:276] 0 containers: []
	W0806 08:35:57.042743   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:35:57.042749   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:35:57.042833   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:35:57.079160   79927 cri.go:89] found id: ""
	I0806 08:35:57.079187   79927 logs.go:276] 0 containers: []
	W0806 08:35:57.079197   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:35:57.079203   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:35:57.079260   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:35:57.116358   79927 cri.go:89] found id: ""
	I0806 08:35:57.116409   79927 logs.go:276] 0 containers: []
	W0806 08:35:57.116421   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:35:57.116429   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:35:57.116489   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:35:57.151051   79927 cri.go:89] found id: ""
	I0806 08:35:57.151079   79927 logs.go:276] 0 containers: []
	W0806 08:35:57.151087   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:35:57.151096   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:35:57.151148   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:35:57.189747   79927 cri.go:89] found id: ""
	I0806 08:35:57.189791   79927 logs.go:276] 0 containers: []
	W0806 08:35:57.189800   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:35:57.189806   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:35:57.189871   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:35:57.225251   79927 cri.go:89] found id: ""
	I0806 08:35:57.225277   79927 logs.go:276] 0 containers: []
	W0806 08:35:57.225287   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:35:57.225298   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:35:57.225314   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:35:57.238746   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:35:57.238773   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:35:57.312931   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:35:57.312959   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:35:57.312976   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:35:57.394104   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:35:57.394139   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:35:57.435993   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:35:57.436028   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:35:59.989556   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:36:00.004129   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:36:00.004205   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:36:00.042192   79927 cri.go:89] found id: ""
	I0806 08:36:00.042215   79927 logs.go:276] 0 containers: []
	W0806 08:36:00.042223   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:36:00.042228   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:36:00.042273   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:36:00.080921   79927 cri.go:89] found id: ""
	I0806 08:36:00.080956   79927 logs.go:276] 0 containers: []
	W0806 08:36:00.080966   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:36:00.080973   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:36:00.081033   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:36:00.123186   79927 cri.go:89] found id: ""
	I0806 08:36:00.123206   79927 logs.go:276] 0 containers: []
	W0806 08:36:00.123216   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:36:00.123223   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:36:00.123274   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:36:00.165252   79927 cri.go:89] found id: ""
	I0806 08:36:00.165281   79927 logs.go:276] 0 containers: []
	W0806 08:36:00.165289   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:36:00.165294   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:36:00.165340   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:36:00.201601   79927 cri.go:89] found id: ""
	I0806 08:36:00.201639   79927 logs.go:276] 0 containers: []
	W0806 08:36:00.201653   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:36:00.201662   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:36:00.201730   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:36:00.236933   79927 cri.go:89] found id: ""
	I0806 08:36:00.236964   79927 logs.go:276] 0 containers: []
	W0806 08:36:00.236975   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:36:00.236983   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:36:00.237043   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:36:00.273215   79927 cri.go:89] found id: ""
	I0806 08:36:00.273247   79927 logs.go:276] 0 containers: []
	W0806 08:36:00.273256   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:36:00.273261   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:36:00.273319   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:36:00.308075   79927 cri.go:89] found id: ""
	I0806 08:36:00.308105   79927 logs.go:276] 0 containers: []
	W0806 08:36:00.308116   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:36:00.308126   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:36:00.308141   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:36:00.360096   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:36:00.360146   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:36:00.374484   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:36:00.374521   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:36:00.454297   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:36:00.454318   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:36:00.454334   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:36:00.542665   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:36:00.542702   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:36:03.084297   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:36:03.098661   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:36:03.098737   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:36:03.134751   79927 cri.go:89] found id: ""
	I0806 08:36:03.134786   79927 logs.go:276] 0 containers: []
	W0806 08:36:03.134797   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:36:03.134805   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:36:03.134873   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:36:03.177412   79927 cri.go:89] found id: ""
	I0806 08:36:03.177442   79927 logs.go:276] 0 containers: []
	W0806 08:36:03.177453   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:36:03.177461   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:36:03.177516   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:36:03.213177   79927 cri.go:89] found id: ""
	I0806 08:36:03.213209   79927 logs.go:276] 0 containers: []
	W0806 08:36:03.213221   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:36:03.213229   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:36:03.213291   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:36:03.254569   79927 cri.go:89] found id: ""
	I0806 08:36:03.254594   79927 logs.go:276] 0 containers: []
	W0806 08:36:03.254603   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:36:03.254608   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:36:03.254656   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:36:03.292564   79927 cri.go:89] found id: ""
	I0806 08:36:03.292590   79927 logs.go:276] 0 containers: []
	W0806 08:36:03.292597   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:36:03.292610   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:36:03.292671   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:36:03.331572   79927 cri.go:89] found id: ""
	I0806 08:36:03.331596   79927 logs.go:276] 0 containers: []
	W0806 08:36:03.331605   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:36:03.331610   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:36:03.331658   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:36:03.366368   79927 cri.go:89] found id: ""
	I0806 08:36:03.366398   79927 logs.go:276] 0 containers: []
	W0806 08:36:03.366409   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:36:03.366417   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:36:03.366469   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:36:03.404345   79927 cri.go:89] found id: ""
	I0806 08:36:03.404387   79927 logs.go:276] 0 containers: []
	W0806 08:36:03.404399   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:36:03.404411   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:36:03.404425   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:36:03.454825   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:36:03.454864   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:36:03.469437   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:36:03.469466   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:36:03.553247   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:36:03.553274   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:36:03.553290   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:36:03.635461   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:36:03.635509   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:36:06.175216   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:36:06.190083   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:36:06.190158   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:36:06.226428   79927 cri.go:89] found id: ""
	I0806 08:36:06.226459   79927 logs.go:276] 0 containers: []
	W0806 08:36:06.226468   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:36:06.226473   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:36:06.226529   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:36:06.263781   79927 cri.go:89] found id: ""
	I0806 08:36:06.263805   79927 logs.go:276] 0 containers: []
	W0806 08:36:06.263813   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:36:06.263819   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:36:06.263895   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:36:06.304335   79927 cri.go:89] found id: ""
	I0806 08:36:06.304386   79927 logs.go:276] 0 containers: []
	W0806 08:36:06.304397   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:36:06.304404   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:36:06.304466   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:36:06.341138   79927 cri.go:89] found id: ""
	I0806 08:36:06.341166   79927 logs.go:276] 0 containers: []
	W0806 08:36:06.341177   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:36:06.341184   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:36:06.341248   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:36:06.381547   79927 cri.go:89] found id: ""
	I0806 08:36:06.381577   79927 logs.go:276] 0 containers: []
	W0806 08:36:06.381587   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:36:06.381594   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:36:06.381662   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:36:06.417606   79927 cri.go:89] found id: ""
	I0806 08:36:06.417630   79927 logs.go:276] 0 containers: []
	W0806 08:36:06.417637   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:36:06.417643   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:36:06.417700   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:36:06.456258   79927 cri.go:89] found id: ""
	I0806 08:36:06.456287   79927 logs.go:276] 0 containers: []
	W0806 08:36:06.456298   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:36:06.456307   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:36:06.456404   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:36:06.494268   79927 cri.go:89] found id: ""
	I0806 08:36:06.494309   79927 logs.go:276] 0 containers: []
	W0806 08:36:06.494321   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:36:06.494332   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:36:06.494346   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:36:06.547670   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:36:06.547710   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:36:06.562020   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:36:06.562049   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:36:06.632691   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:36:06.632717   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:36:06.632729   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:36:06.721737   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:36:06.721780   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:36:09.265154   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:36:09.278664   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:36:09.278743   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:36:09.313464   79927 cri.go:89] found id: ""
	I0806 08:36:09.313494   79927 logs.go:276] 0 containers: []
	W0806 08:36:09.313506   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:36:09.313513   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:36:09.313561   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:36:09.352107   79927 cri.go:89] found id: ""
	I0806 08:36:09.352133   79927 logs.go:276] 0 containers: []
	W0806 08:36:09.352146   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:36:09.352153   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:36:09.352215   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:36:09.387547   79927 cri.go:89] found id: ""
	I0806 08:36:09.387581   79927 logs.go:276] 0 containers: []
	W0806 08:36:09.387594   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:36:09.387601   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:36:09.387684   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:36:09.422157   79927 cri.go:89] found id: ""
	I0806 08:36:09.422193   79927 logs.go:276] 0 containers: []
	W0806 08:36:09.422202   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:36:09.422207   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:36:09.422275   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:36:09.463065   79927 cri.go:89] found id: ""
	I0806 08:36:09.463106   79927 logs.go:276] 0 containers: []
	W0806 08:36:09.463122   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:36:09.463145   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:36:09.463209   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:36:09.499450   79927 cri.go:89] found id: ""
	I0806 08:36:09.499474   79927 logs.go:276] 0 containers: []
	W0806 08:36:09.499484   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:36:09.499491   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:36:09.499550   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:36:09.536411   79927 cri.go:89] found id: ""
	I0806 08:36:09.536442   79927 logs.go:276] 0 containers: []
	W0806 08:36:09.536453   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:36:09.536460   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:36:09.536533   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:36:09.572044   79927 cri.go:89] found id: ""
	I0806 08:36:09.572070   79927 logs.go:276] 0 containers: []
	W0806 08:36:09.572079   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:36:09.572087   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:36:09.572138   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:36:09.612385   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:36:09.612418   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:36:09.666790   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:36:09.666852   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:36:09.680707   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:36:09.680753   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:36:09.753446   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:36:09.753473   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:36:09.753491   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:36:12.336824   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:36:12.352545   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:36:12.352626   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:36:12.393158   79927 cri.go:89] found id: ""
	I0806 08:36:12.393188   79927 logs.go:276] 0 containers: []
	W0806 08:36:12.393199   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:36:12.393206   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:36:12.393255   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:36:12.428501   79927 cri.go:89] found id: ""
	I0806 08:36:12.428525   79927 logs.go:276] 0 containers: []
	W0806 08:36:12.428534   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:36:12.428539   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:36:12.428596   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:36:12.463033   79927 cri.go:89] found id: ""
	I0806 08:36:12.463062   79927 logs.go:276] 0 containers: []
	W0806 08:36:12.463072   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:36:12.463079   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:36:12.463136   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:36:12.500350   79927 cri.go:89] found id: ""
	I0806 08:36:12.500402   79927 logs.go:276] 0 containers: []
	W0806 08:36:12.500413   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:36:12.500421   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:36:12.500477   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:36:12.537323   79927 cri.go:89] found id: ""
	I0806 08:36:12.537350   79927 logs.go:276] 0 containers: []
	W0806 08:36:12.537360   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:36:12.537366   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:36:12.537414   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:36:12.575525   79927 cri.go:89] found id: ""
	I0806 08:36:12.575551   79927 logs.go:276] 0 containers: []
	W0806 08:36:12.575561   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:36:12.575568   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:36:12.575629   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:36:12.615375   79927 cri.go:89] found id: ""
	I0806 08:36:12.615436   79927 logs.go:276] 0 containers: []
	W0806 08:36:12.615451   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:36:12.615459   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:36:12.615531   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:36:12.655269   79927 cri.go:89] found id: ""
	I0806 08:36:12.655300   79927 logs.go:276] 0 containers: []
	W0806 08:36:12.655311   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:36:12.655324   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:36:12.655342   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:36:12.709097   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:36:12.709134   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:36:12.724330   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:36:12.724380   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:36:12.801075   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:36:12.801099   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:36:12.801111   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:36:12.885670   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:36:12.885704   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:36:15.427222   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:36:15.442092   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:36:15.442206   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:36:15.485634   79927 cri.go:89] found id: ""
	I0806 08:36:15.485657   79927 logs.go:276] 0 containers: []
	W0806 08:36:15.485665   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:36:15.485671   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:36:15.485715   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:36:15.520791   79927 cri.go:89] found id: ""
	I0806 08:36:15.520820   79927 logs.go:276] 0 containers: []
	W0806 08:36:15.520830   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:36:15.520837   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:36:15.520897   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:36:15.554041   79927 cri.go:89] found id: ""
	I0806 08:36:15.554065   79927 logs.go:276] 0 containers: []
	W0806 08:36:15.554073   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:36:15.554079   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:36:15.554143   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:36:15.587550   79927 cri.go:89] found id: ""
	I0806 08:36:15.587579   79927 logs.go:276] 0 containers: []
	W0806 08:36:15.587591   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:36:15.587598   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:36:15.587659   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:36:15.628657   79927 cri.go:89] found id: ""
	I0806 08:36:15.628681   79927 logs.go:276] 0 containers: []
	W0806 08:36:15.628689   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:36:15.628695   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:36:15.628750   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:36:15.665212   79927 cri.go:89] found id: ""
	I0806 08:36:15.665239   79927 logs.go:276] 0 containers: []
	W0806 08:36:15.665247   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:36:15.665252   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:36:15.665298   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:36:15.705675   79927 cri.go:89] found id: ""
	I0806 08:36:15.705701   79927 logs.go:276] 0 containers: []
	W0806 08:36:15.705709   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:36:15.705714   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:36:15.705762   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:36:15.739835   79927 cri.go:89] found id: ""
	I0806 08:36:15.739862   79927 logs.go:276] 0 containers: []
	W0806 08:36:15.739871   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:36:15.739879   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:36:15.739892   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:36:15.795719   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:36:15.795749   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:36:15.809940   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:36:15.809967   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:36:15.888710   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:36:15.888737   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:36:15.888753   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:36:15.972728   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:36:15.972772   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:36:18.513565   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:36:18.527480   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:36:18.527538   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:36:18.562783   79927 cri.go:89] found id: ""
	I0806 08:36:18.562812   79927 logs.go:276] 0 containers: []
	W0806 08:36:18.562824   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:36:18.562832   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:36:18.562896   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:36:18.600613   79927 cri.go:89] found id: ""
	I0806 08:36:18.600645   79927 logs.go:276] 0 containers: []
	W0806 08:36:18.600659   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:36:18.600668   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:36:18.600735   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:36:18.638074   79927 cri.go:89] found id: ""
	I0806 08:36:18.638104   79927 logs.go:276] 0 containers: []
	W0806 08:36:18.638114   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:36:18.638119   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:36:18.638181   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:36:18.679795   79927 cri.go:89] found id: ""
	I0806 08:36:18.679825   79927 logs.go:276] 0 containers: []
	W0806 08:36:18.679836   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:36:18.679844   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:36:18.679907   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:36:18.715535   79927 cri.go:89] found id: ""
	I0806 08:36:18.715568   79927 logs.go:276] 0 containers: []
	W0806 08:36:18.715577   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:36:18.715585   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:36:18.715650   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:36:18.752705   79927 cri.go:89] found id: ""
	I0806 08:36:18.752736   79927 logs.go:276] 0 containers: []
	W0806 08:36:18.752748   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:36:18.752759   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:36:18.752821   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:36:18.791653   79927 cri.go:89] found id: ""
	I0806 08:36:18.791678   79927 logs.go:276] 0 containers: []
	W0806 08:36:18.791689   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:36:18.791696   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:36:18.791763   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:36:18.828328   79927 cri.go:89] found id: ""
	I0806 08:36:18.828357   79927 logs.go:276] 0 containers: []
	W0806 08:36:18.828378   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:36:18.828389   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:36:18.828408   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:36:18.888111   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:36:18.888148   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:36:18.902718   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:36:18.902750   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:36:18.986327   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:36:18.986352   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:36:18.986365   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:36:19.072858   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:36:19.072899   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:36:21.615497   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:36:21.631434   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:36:21.631492   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:36:21.671446   79927 cri.go:89] found id: ""
	I0806 08:36:21.671471   79927 logs.go:276] 0 containers: []
	W0806 08:36:21.671479   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:36:21.671484   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:36:21.671540   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:36:21.707598   79927 cri.go:89] found id: ""
	I0806 08:36:21.707624   79927 logs.go:276] 0 containers: []
	W0806 08:36:21.707634   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:36:21.707640   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:36:21.707686   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:36:21.753647   79927 cri.go:89] found id: ""
	I0806 08:36:21.753675   79927 logs.go:276] 0 containers: []
	W0806 08:36:21.753686   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:36:21.753692   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:36:21.753749   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:36:21.795284   79927 cri.go:89] found id: ""
	I0806 08:36:21.795311   79927 logs.go:276] 0 containers: []
	W0806 08:36:21.795321   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:36:21.795329   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:36:21.795388   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:36:21.832436   79927 cri.go:89] found id: ""
	I0806 08:36:21.832466   79927 logs.go:276] 0 containers: []
	W0806 08:36:21.832476   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:36:21.832483   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:36:21.832550   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:36:21.870295   79927 cri.go:89] found id: ""
	I0806 08:36:21.870327   79927 logs.go:276] 0 containers: []
	W0806 08:36:21.870338   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:36:21.870345   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:36:21.870412   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:36:21.908210   79927 cri.go:89] found id: ""
	I0806 08:36:21.908242   79927 logs.go:276] 0 containers: []
	W0806 08:36:21.908253   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:36:21.908261   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:36:21.908322   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:36:21.944055   79927 cri.go:89] found id: ""
	I0806 08:36:21.944084   79927 logs.go:276] 0 containers: []
	W0806 08:36:21.944095   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:36:21.944104   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:36:21.944125   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:36:22.000467   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:36:22.000506   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:36:22.014590   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:36:22.014629   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:36:22.083873   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:36:22.083907   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:36:22.083927   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:36:22.175993   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:36:22.176056   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:36:24.720476   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:36:24.734469   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:36:24.734551   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:36:24.770663   79927 cri.go:89] found id: ""
	I0806 08:36:24.770692   79927 logs.go:276] 0 containers: []
	W0806 08:36:24.770705   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:36:24.770712   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:36:24.770768   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:36:24.807861   79927 cri.go:89] found id: ""
	I0806 08:36:24.807889   79927 logs.go:276] 0 containers: []
	W0806 08:36:24.807897   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:36:24.807908   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:36:24.807957   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:36:24.849827   79927 cri.go:89] found id: ""
	I0806 08:36:24.849857   79927 logs.go:276] 0 containers: []
	W0806 08:36:24.849869   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:36:24.849877   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:36:24.849937   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:36:24.885260   79927 cri.go:89] found id: ""
	I0806 08:36:24.885291   79927 logs.go:276] 0 containers: []
	W0806 08:36:24.885303   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:36:24.885315   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:36:24.885381   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:36:24.919918   79927 cri.go:89] found id: ""
	I0806 08:36:24.919949   79927 logs.go:276] 0 containers: []
	W0806 08:36:24.919957   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:36:24.919964   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:36:24.920026   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:36:24.956010   79927 cri.go:89] found id: ""
	I0806 08:36:24.956036   79927 logs.go:276] 0 containers: []
	W0806 08:36:24.956045   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:36:24.956050   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:36:24.956109   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:36:24.993126   79927 cri.go:89] found id: ""
	I0806 08:36:24.993154   79927 logs.go:276] 0 containers: []
	W0806 08:36:24.993165   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:36:24.993173   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:36:24.993240   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:36:25.026732   79927 cri.go:89] found id: ""
	I0806 08:36:25.026756   79927 logs.go:276] 0 containers: []
	W0806 08:36:25.026763   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:36:25.026771   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:36:25.026781   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:36:25.064076   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:36:25.064113   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:36:25.118820   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:36:25.118856   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:36:25.133640   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:36:25.133671   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:36:25.201686   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:36:25.201714   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:36:25.201729   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:36:27.801947   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:36:27.815472   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:36:27.815539   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:36:27.851073   79927 cri.go:89] found id: ""
	I0806 08:36:27.851103   79927 logs.go:276] 0 containers: []
	W0806 08:36:27.851119   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:36:27.851127   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:36:27.851184   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:36:27.886121   79927 cri.go:89] found id: ""
	I0806 08:36:27.886151   79927 logs.go:276] 0 containers: []
	W0806 08:36:27.886162   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:36:27.886169   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:36:27.886231   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:36:27.920950   79927 cri.go:89] found id: ""
	I0806 08:36:27.920979   79927 logs.go:276] 0 containers: []
	W0806 08:36:27.920990   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:36:27.920997   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:36:27.921055   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:36:27.956529   79927 cri.go:89] found id: ""
	I0806 08:36:27.956562   79927 logs.go:276] 0 containers: []
	W0806 08:36:27.956574   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:36:27.956581   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:36:27.956631   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:36:27.992272   79927 cri.go:89] found id: ""
	I0806 08:36:27.992298   79927 logs.go:276] 0 containers: []
	W0806 08:36:27.992306   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:36:27.992322   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:36:27.992405   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:36:28.027242   79927 cri.go:89] found id: ""
	I0806 08:36:28.027269   79927 logs.go:276] 0 containers: []
	W0806 08:36:28.027277   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:36:28.027283   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:36:28.027335   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:36:28.063043   79927 cri.go:89] found id: ""
	I0806 08:36:28.063068   79927 logs.go:276] 0 containers: []
	W0806 08:36:28.063076   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:36:28.063083   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:36:28.063148   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:36:28.099743   79927 cri.go:89] found id: ""
	I0806 08:36:28.099769   79927 logs.go:276] 0 containers: []
	W0806 08:36:28.099780   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:36:28.099791   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:36:28.099806   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:36:28.151354   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:36:28.151392   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:36:28.166704   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:36:28.166731   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:36:28.242621   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:36:28.242642   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:36:28.242656   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:36:28.324352   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:36:28.324405   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:36:30.869063   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:36:30.882309   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:36:30.882377   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:36:30.916264   79927 cri.go:89] found id: ""
	I0806 08:36:30.916296   79927 logs.go:276] 0 containers: []
	W0806 08:36:30.916306   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:36:30.916313   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:36:30.916390   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:36:30.951995   79927 cri.go:89] found id: ""
	I0806 08:36:30.952021   79927 logs.go:276] 0 containers: []
	W0806 08:36:30.952028   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:36:30.952034   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:36:30.952111   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:36:31.005229   79927 cri.go:89] found id: ""
	I0806 08:36:31.005270   79927 logs.go:276] 0 containers: []
	W0806 08:36:31.005281   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:36:31.005288   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:36:31.005350   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:36:31.040439   79927 cri.go:89] found id: ""
	I0806 08:36:31.040465   79927 logs.go:276] 0 containers: []
	W0806 08:36:31.040476   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:36:31.040482   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:36:31.040534   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:36:31.078880   79927 cri.go:89] found id: ""
	I0806 08:36:31.078910   79927 logs.go:276] 0 containers: []
	W0806 08:36:31.078920   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:36:31.078934   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:36:31.078996   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:36:31.115520   79927 cri.go:89] found id: ""
	I0806 08:36:31.115551   79927 logs.go:276] 0 containers: []
	W0806 08:36:31.115562   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:36:31.115569   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:36:31.115615   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:36:31.150450   79927 cri.go:89] found id: ""
	I0806 08:36:31.150483   79927 logs.go:276] 0 containers: []
	W0806 08:36:31.150497   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:36:31.150506   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:36:31.150565   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:36:31.187449   79927 cri.go:89] found id: ""
	I0806 08:36:31.187476   79927 logs.go:276] 0 containers: []
	W0806 08:36:31.187485   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:36:31.187493   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:36:31.187507   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:36:31.246232   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:36:31.246272   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:36:31.260691   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:36:31.260720   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:36:31.342356   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:36:31.342383   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:36:31.342401   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:36:31.426820   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:36:31.426865   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:36:33.974768   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:36:33.988377   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:36:33.988442   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:36:34.023284   79927 cri.go:89] found id: ""
	I0806 08:36:34.023316   79927 logs.go:276] 0 containers: []
	W0806 08:36:34.023328   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:36:34.023336   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:36:34.023397   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:36:34.058633   79927 cri.go:89] found id: ""
	I0806 08:36:34.058676   79927 logs.go:276] 0 containers: []
	W0806 08:36:34.058695   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:36:34.058704   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:36:34.058767   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:36:34.098343   79927 cri.go:89] found id: ""
	I0806 08:36:34.098379   79927 logs.go:276] 0 containers: []
	W0806 08:36:34.098390   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:36:34.098398   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:36:34.098471   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:36:34.134638   79927 cri.go:89] found id: ""
	I0806 08:36:34.134667   79927 logs.go:276] 0 containers: []
	W0806 08:36:34.134677   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:36:34.134683   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:36:34.134729   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:36:34.170118   79927 cri.go:89] found id: ""
	I0806 08:36:34.170144   79927 logs.go:276] 0 containers: []
	W0806 08:36:34.170153   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:36:34.170159   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:36:34.170207   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:36:34.204789   79927 cri.go:89] found id: ""
	I0806 08:36:34.204815   79927 logs.go:276] 0 containers: []
	W0806 08:36:34.204823   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:36:34.204829   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:36:34.204877   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:36:34.245170   79927 cri.go:89] found id: ""
	I0806 08:36:34.245194   79927 logs.go:276] 0 containers: []
	W0806 08:36:34.245203   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:36:34.245208   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:36:34.245264   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:36:34.280026   79927 cri.go:89] found id: ""
	I0806 08:36:34.280063   79927 logs.go:276] 0 containers: []
	W0806 08:36:34.280076   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:36:34.280086   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:36:34.280101   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:36:34.335363   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:36:34.335408   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:36:34.350201   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:36:34.350226   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:36:34.425205   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:36:34.425231   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:36:34.425244   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:36:34.506385   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:36:34.506419   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:36:37.047263   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:36:37.060179   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:36:37.060254   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:36:37.093168   79927 cri.go:89] found id: ""
	I0806 08:36:37.093201   79927 logs.go:276] 0 containers: []
	W0806 08:36:37.093209   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:36:37.093216   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:36:37.093264   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:36:37.131315   79927 cri.go:89] found id: ""
	I0806 08:36:37.131343   79927 logs.go:276] 0 containers: []
	W0806 08:36:37.131353   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:36:37.131360   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:36:37.131426   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:36:37.169799   79927 cri.go:89] found id: ""
	I0806 08:36:37.169829   79927 logs.go:276] 0 containers: []
	W0806 08:36:37.169839   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:36:37.169845   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:36:37.169898   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:36:37.205852   79927 cri.go:89] found id: ""
	I0806 08:36:37.205878   79927 logs.go:276] 0 containers: []
	W0806 08:36:37.205885   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:36:37.205891   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:36:37.205938   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:36:37.242469   79927 cri.go:89] found id: ""
	I0806 08:36:37.242493   79927 logs.go:276] 0 containers: []
	W0806 08:36:37.242501   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:36:37.242507   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:36:37.242554   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:36:37.278515   79927 cri.go:89] found id: ""
	I0806 08:36:37.278545   79927 logs.go:276] 0 containers: []
	W0806 08:36:37.278556   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:36:37.278563   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:36:37.278627   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:36:37.316157   79927 cri.go:89] found id: ""
	I0806 08:36:37.316181   79927 logs.go:276] 0 containers: []
	W0806 08:36:37.316189   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:36:37.316195   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:36:37.316249   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:36:37.353752   79927 cri.go:89] found id: ""
	I0806 08:36:37.353778   79927 logs.go:276] 0 containers: []
	W0806 08:36:37.353785   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:36:37.353796   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:36:37.353807   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:36:37.439435   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:36:37.439471   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:36:37.488866   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:36:37.488903   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:36:37.540031   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:36:37.540072   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:36:37.556109   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:36:37.556140   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:36:37.629023   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:36:40.129579   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:36:40.143971   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:36:40.144050   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:36:40.180490   79927 cri.go:89] found id: ""
	I0806 08:36:40.180515   79927 logs.go:276] 0 containers: []
	W0806 08:36:40.180524   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:36:40.180530   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:36:40.180586   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:36:40.219812   79927 cri.go:89] found id: ""
	I0806 08:36:40.219860   79927 logs.go:276] 0 containers: []
	W0806 08:36:40.219869   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:36:40.219877   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:36:40.219939   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:36:40.256048   79927 cri.go:89] found id: ""
	I0806 08:36:40.256076   79927 logs.go:276] 0 containers: []
	W0806 08:36:40.256088   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:36:40.256106   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:36:40.256166   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:36:40.294693   79927 cri.go:89] found id: ""
	I0806 08:36:40.294720   79927 logs.go:276] 0 containers: []
	W0806 08:36:40.294729   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:36:40.294734   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:36:40.294792   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:36:40.331246   79927 cri.go:89] found id: ""
	I0806 08:36:40.331276   79927 logs.go:276] 0 containers: []
	W0806 08:36:40.331287   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:36:40.331295   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:36:40.331351   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:36:40.373039   79927 cri.go:89] found id: ""
	I0806 08:36:40.373068   79927 logs.go:276] 0 containers: []
	W0806 08:36:40.373095   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:36:40.373103   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:36:40.373163   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:36:40.408137   79927 cri.go:89] found id: ""
	I0806 08:36:40.408167   79927 logs.go:276] 0 containers: []
	W0806 08:36:40.408177   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:36:40.408184   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:36:40.408249   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:36:40.442996   79927 cri.go:89] found id: ""
	I0806 08:36:40.443035   79927 logs.go:276] 0 containers: []
	W0806 08:36:40.443043   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:36:40.443051   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:36:40.443063   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:36:40.499559   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:36:40.499599   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:36:40.514687   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:36:40.514716   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:36:40.586034   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:36:40.586059   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:36:40.586071   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:36:40.669756   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:36:40.669796   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:36:43.213555   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:36:43.229683   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:36:43.229751   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:36:43.267107   79927 cri.go:89] found id: ""
	I0806 08:36:43.267140   79927 logs.go:276] 0 containers: []
	W0806 08:36:43.267153   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:36:43.267161   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:36:43.267235   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:36:43.303405   79927 cri.go:89] found id: ""
	I0806 08:36:43.303435   79927 logs.go:276] 0 containers: []
	W0806 08:36:43.303443   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:36:43.303449   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:36:43.303509   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:36:43.338583   79927 cri.go:89] found id: ""
	I0806 08:36:43.338610   79927 logs.go:276] 0 containers: []
	W0806 08:36:43.338619   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:36:43.338628   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:36:43.338692   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:36:43.377413   79927 cri.go:89] found id: ""
	I0806 08:36:43.377441   79927 logs.go:276] 0 containers: []
	W0806 08:36:43.377449   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:36:43.377454   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:36:43.377501   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:36:43.413036   79927 cri.go:89] found id: ""
	I0806 08:36:43.413133   79927 logs.go:276] 0 containers: []
	W0806 08:36:43.413152   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:36:43.413160   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:36:43.413221   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:36:43.450216   79927 cri.go:89] found id: ""
	I0806 08:36:43.450245   79927 logs.go:276] 0 containers: []
	W0806 08:36:43.450253   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:36:43.450259   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:36:43.450305   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:36:43.485811   79927 cri.go:89] found id: ""
	I0806 08:36:43.485856   79927 logs.go:276] 0 containers: []
	W0806 08:36:43.485864   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:36:43.485870   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:36:43.485919   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:36:43.521586   79927 cri.go:89] found id: ""
	I0806 08:36:43.521617   79927 logs.go:276] 0 containers: []
	W0806 08:36:43.521628   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:36:43.521650   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:36:43.521673   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:36:43.561842   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:36:43.561889   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:36:43.617652   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:36:43.617683   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:36:43.631239   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:36:43.631267   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:36:43.707297   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:36:43.707317   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:36:43.707330   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:36:46.287309   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:36:46.301917   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:36:46.301991   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:36:46.335040   79927 cri.go:89] found id: ""
	I0806 08:36:46.335069   79927 logs.go:276] 0 containers: []
	W0806 08:36:46.335080   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:36:46.335088   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:36:46.335177   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:36:46.371902   79927 cri.go:89] found id: ""
	I0806 08:36:46.371930   79927 logs.go:276] 0 containers: []
	W0806 08:36:46.371939   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:36:46.371944   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:36:46.372000   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:36:46.411945   79927 cri.go:89] found id: ""
	I0806 08:36:46.412119   79927 logs.go:276] 0 containers: []
	W0806 08:36:46.412134   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:36:46.412145   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:36:46.412248   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:36:46.448578   79927 cri.go:89] found id: ""
	I0806 08:36:46.448612   79927 logs.go:276] 0 containers: []
	W0806 08:36:46.448624   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:36:46.448631   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:36:46.448691   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:36:46.492231   79927 cri.go:89] found id: ""
	I0806 08:36:46.492263   79927 logs.go:276] 0 containers: []
	W0806 08:36:46.492276   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:36:46.492283   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:36:46.492338   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:36:46.538502   79927 cri.go:89] found id: ""
	I0806 08:36:46.538528   79927 logs.go:276] 0 containers: []
	W0806 08:36:46.538547   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:36:46.538555   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:36:46.538610   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:36:46.577750   79927 cri.go:89] found id: ""
	I0806 08:36:46.577778   79927 logs.go:276] 0 containers: []
	W0806 08:36:46.577785   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:36:46.577790   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:36:46.577872   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:36:46.611717   79927 cri.go:89] found id: ""
	I0806 08:36:46.611742   79927 logs.go:276] 0 containers: []
	W0806 08:36:46.611750   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:36:46.611758   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:36:46.611770   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:36:46.629144   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:36:46.629178   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:36:46.714703   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:36:46.714723   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:36:46.714736   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:36:46.798978   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:36:46.799025   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:36:46.837373   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:36:46.837407   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:36:49.393231   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:36:49.407372   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:36:49.407436   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:36:49.444210   79927 cri.go:89] found id: ""
	I0806 08:36:49.444240   79927 logs.go:276] 0 containers: []
	W0806 08:36:49.444249   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:36:49.444254   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:36:49.444313   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:36:49.484165   79927 cri.go:89] found id: ""
	I0806 08:36:49.484190   79927 logs.go:276] 0 containers: []
	W0806 08:36:49.484200   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:36:49.484206   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:36:49.484278   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:36:49.519880   79927 cri.go:89] found id: ""
	I0806 08:36:49.519921   79927 logs.go:276] 0 containers: []
	W0806 08:36:49.519932   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:36:49.519939   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:36:49.519999   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:36:49.554954   79927 cri.go:89] found id: ""
	I0806 08:36:49.554979   79927 logs.go:276] 0 containers: []
	W0806 08:36:49.554988   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:36:49.554995   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:36:49.555045   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:36:49.593332   79927 cri.go:89] found id: ""
	I0806 08:36:49.593359   79927 logs.go:276] 0 containers: []
	W0806 08:36:49.593367   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:36:49.593373   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:36:49.593432   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:36:49.630736   79927 cri.go:89] found id: ""
	I0806 08:36:49.630766   79927 logs.go:276] 0 containers: []
	W0806 08:36:49.630774   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:36:49.630780   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:36:49.630839   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:36:49.667460   79927 cri.go:89] found id: ""
	I0806 08:36:49.667487   79927 logs.go:276] 0 containers: []
	W0806 08:36:49.667499   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:36:49.667505   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:36:49.667566   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:36:49.700161   79927 cri.go:89] found id: ""
	I0806 08:36:49.700190   79927 logs.go:276] 0 containers: []
	W0806 08:36:49.700199   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:36:49.700207   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:36:49.700218   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:36:49.786656   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:36:49.786704   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:36:49.825711   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:36:49.825745   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:36:49.879696   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:36:49.879731   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:36:49.893336   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:36:49.893369   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:36:49.969462   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:36:52.469939   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:36:52.483906   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:36:52.483974   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:36:52.523575   79927 cri.go:89] found id: ""
	I0806 08:36:52.523603   79927 logs.go:276] 0 containers: []
	W0806 08:36:52.523615   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:36:52.523622   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:36:52.523684   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:36:52.561590   79927 cri.go:89] found id: ""
	I0806 08:36:52.561620   79927 logs.go:276] 0 containers: []
	W0806 08:36:52.561630   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:36:52.561639   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:36:52.561698   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:36:52.598204   79927 cri.go:89] found id: ""
	I0806 08:36:52.598227   79927 logs.go:276] 0 containers: []
	W0806 08:36:52.598236   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:36:52.598241   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:36:52.598302   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:36:52.636993   79927 cri.go:89] found id: ""
	I0806 08:36:52.637019   79927 logs.go:276] 0 containers: []
	W0806 08:36:52.637027   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:36:52.637034   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:36:52.637085   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:36:52.675090   79927 cri.go:89] found id: ""
	I0806 08:36:52.675118   79927 logs.go:276] 0 containers: []
	W0806 08:36:52.675130   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:36:52.675138   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:36:52.675199   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:36:52.711045   79927 cri.go:89] found id: ""
	I0806 08:36:52.711072   79927 logs.go:276] 0 containers: []
	W0806 08:36:52.711081   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:36:52.711086   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:36:52.711151   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:36:52.749001   79927 cri.go:89] found id: ""
	I0806 08:36:52.749029   79927 logs.go:276] 0 containers: []
	W0806 08:36:52.749039   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:36:52.749046   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:36:52.749113   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:36:52.783692   79927 cri.go:89] found id: ""
	I0806 08:36:52.783720   79927 logs.go:276] 0 containers: []
	W0806 08:36:52.783737   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:36:52.783748   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:36:52.783762   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:36:52.863240   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:36:52.863281   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:36:52.902471   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:36:52.902496   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:36:52.953299   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:36:52.953341   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:36:52.968786   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:36:52.968818   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:36:53.045904   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:36:55.546407   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:36:55.560166   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:36:55.560243   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:36:55.592961   79927 cri.go:89] found id: ""
	I0806 08:36:55.592992   79927 logs.go:276] 0 containers: []
	W0806 08:36:55.593004   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:36:55.593012   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:36:55.593075   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:36:55.629546   79927 cri.go:89] found id: ""
	I0806 08:36:55.629568   79927 logs.go:276] 0 containers: []
	W0806 08:36:55.629578   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:36:55.629585   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:36:55.629639   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:36:55.668677   79927 cri.go:89] found id: ""
	I0806 08:36:55.668707   79927 logs.go:276] 0 containers: []
	W0806 08:36:55.668718   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:36:55.668726   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:36:55.668784   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:36:55.704141   79927 cri.go:89] found id: ""
	I0806 08:36:55.704173   79927 logs.go:276] 0 containers: []
	W0806 08:36:55.704183   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:36:55.704188   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:36:55.704257   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:36:55.740125   79927 cri.go:89] found id: ""
	I0806 08:36:55.740152   79927 logs.go:276] 0 containers: []
	W0806 08:36:55.740160   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:36:55.740166   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:36:55.740210   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:36:55.773709   79927 cri.go:89] found id: ""
	I0806 08:36:55.773739   79927 logs.go:276] 0 containers: []
	W0806 08:36:55.773750   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:36:55.773759   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:36:55.773827   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:36:55.813913   79927 cri.go:89] found id: ""
	I0806 08:36:55.813940   79927 logs.go:276] 0 containers: []
	W0806 08:36:55.813953   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:36:55.813959   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:36:55.814004   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:36:55.850807   79927 cri.go:89] found id: ""
	I0806 08:36:55.850841   79927 logs.go:276] 0 containers: []
	W0806 08:36:55.850853   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:36:55.850864   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:36:55.850878   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:36:55.908202   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:36:55.908247   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:36:55.922278   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:36:55.922311   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:36:55.996448   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:36:55.996476   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:36:55.996489   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:36:56.076612   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:36:56.076647   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:36:58.625610   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:36:58.640939   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:36:58.641008   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:36:58.675638   79927 cri.go:89] found id: ""
	I0806 08:36:58.675666   79927 logs.go:276] 0 containers: []
	W0806 08:36:58.675674   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:36:58.675680   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:36:58.675728   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:36:58.712020   79927 cri.go:89] found id: ""
	I0806 08:36:58.712055   79927 logs.go:276] 0 containers: []
	W0806 08:36:58.712065   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:36:58.712072   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:36:58.712126   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:36:58.749683   79927 cri.go:89] found id: ""
	I0806 08:36:58.749712   79927 logs.go:276] 0 containers: []
	W0806 08:36:58.749724   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:36:58.749731   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:36:58.749792   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:36:58.790005   79927 cri.go:89] found id: ""
	I0806 08:36:58.790026   79927 logs.go:276] 0 containers: []
	W0806 08:36:58.790033   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:36:58.790039   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:36:58.790089   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:36:58.828781   79927 cri.go:89] found id: ""
	I0806 08:36:58.828806   79927 logs.go:276] 0 containers: []
	W0806 08:36:58.828815   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:36:58.828821   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:36:58.828873   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:36:58.867039   79927 cri.go:89] found id: ""
	I0806 08:36:58.867067   79927 logs.go:276] 0 containers: []
	W0806 08:36:58.867075   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:36:58.867081   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:36:58.867125   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:36:58.901362   79927 cri.go:89] found id: ""
	I0806 08:36:58.901396   79927 logs.go:276] 0 containers: []
	W0806 08:36:58.901415   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:36:58.901423   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:36:58.901472   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:36:58.940351   79927 cri.go:89] found id: ""
	I0806 08:36:58.940388   79927 logs.go:276] 0 containers: []
	W0806 08:36:58.940399   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:36:58.940410   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:36:58.940425   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:36:58.995737   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:36:58.995776   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:36:59.009191   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:36:59.009222   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:36:59.085109   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:36:59.085133   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:36:59.085150   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:36:59.164778   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:36:59.164819   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:37:01.712282   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:37:01.725701   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:37:01.725759   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:37:01.762005   79927 cri.go:89] found id: ""
	I0806 08:37:01.762029   79927 logs.go:276] 0 containers: []
	W0806 08:37:01.762038   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:37:01.762044   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:37:01.762092   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:37:01.799277   79927 cri.go:89] found id: ""
	I0806 08:37:01.799304   79927 logs.go:276] 0 containers: []
	W0806 08:37:01.799316   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:37:01.799322   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:37:01.799391   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:37:01.845100   79927 cri.go:89] found id: ""
	I0806 08:37:01.845133   79927 logs.go:276] 0 containers: []
	W0806 08:37:01.845147   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:37:01.845154   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:37:01.845214   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:37:01.882019   79927 cri.go:89] found id: ""
	I0806 08:37:01.882050   79927 logs.go:276] 0 containers: []
	W0806 08:37:01.882061   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:37:01.882069   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:37:01.882160   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:37:01.919224   79927 cri.go:89] found id: ""
	I0806 08:37:01.919251   79927 logs.go:276] 0 containers: []
	W0806 08:37:01.919262   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:37:01.919269   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:37:01.919327   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:37:01.957300   79927 cri.go:89] found id: ""
	I0806 08:37:01.957327   79927 logs.go:276] 0 containers: []
	W0806 08:37:01.957335   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:37:01.957342   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:37:01.957389   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:37:02.000717   79927 cri.go:89] found id: ""
	I0806 08:37:02.000740   79927 logs.go:276] 0 containers: []
	W0806 08:37:02.000749   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:37:02.000754   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:37:02.000801   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:37:02.034710   79927 cri.go:89] found id: ""
	I0806 08:37:02.034739   79927 logs.go:276] 0 containers: []
	W0806 08:37:02.034753   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:37:02.034763   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:37:02.034776   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:37:02.090553   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:37:02.090593   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:37:02.104247   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:37:02.104276   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:37:02.181990   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:37:02.182015   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:37:02.182030   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:37:02.268127   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:37:02.268167   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:37:04.814144   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:37:04.829120   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:37:04.829194   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:37:04.873503   79927 cri.go:89] found id: ""
	I0806 08:37:04.873537   79927 logs.go:276] 0 containers: []
	W0806 08:37:04.873548   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:37:04.873557   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:37:04.873614   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:37:04.913878   79927 cri.go:89] found id: ""
	I0806 08:37:04.913910   79927 logs.go:276] 0 containers: []
	W0806 08:37:04.913918   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:37:04.913924   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:37:04.913984   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:37:04.950273   79927 cri.go:89] found id: ""
	I0806 08:37:04.950303   79927 logs.go:276] 0 containers: []
	W0806 08:37:04.950316   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:37:04.950323   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:37:04.950388   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:37:04.985108   79927 cri.go:89] found id: ""
	I0806 08:37:04.985135   79927 logs.go:276] 0 containers: []
	W0806 08:37:04.985145   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:37:04.985151   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:37:04.985215   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:37:05.021033   79927 cri.go:89] found id: ""
	I0806 08:37:05.021056   79927 logs.go:276] 0 containers: []
	W0806 08:37:05.021064   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:37:05.021069   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:37:05.021120   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:37:05.058416   79927 cri.go:89] found id: ""
	I0806 08:37:05.058440   79927 logs.go:276] 0 containers: []
	W0806 08:37:05.058448   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:37:05.058454   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:37:05.058510   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:37:05.093709   79927 cri.go:89] found id: ""
	I0806 08:37:05.093737   79927 logs.go:276] 0 containers: []
	W0806 08:37:05.093745   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:37:05.093751   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:37:05.093815   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:37:05.129391   79927 cri.go:89] found id: ""
	I0806 08:37:05.129416   79927 logs.go:276] 0 containers: []
	W0806 08:37:05.129424   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:37:05.129432   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:37:05.129444   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:37:05.172149   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:37:05.172176   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:37:05.224745   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:37:05.224781   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:37:05.238967   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:37:05.238997   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:37:05.314834   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:37:05.314868   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:37:05.314883   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:37:07.903732   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:37:07.918551   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:37:07.918619   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:37:07.966068   79927 cri.go:89] found id: ""
	I0806 08:37:07.966093   79927 logs.go:276] 0 containers: []
	W0806 08:37:07.966104   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:37:07.966112   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:37:07.966180   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:37:08.001557   79927 cri.go:89] found id: ""
	I0806 08:37:08.001586   79927 logs.go:276] 0 containers: []
	W0806 08:37:08.001595   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:37:08.001602   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:37:08.001663   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:37:08.034214   79927 cri.go:89] found id: ""
	I0806 08:37:08.034239   79927 logs.go:276] 0 containers: []
	W0806 08:37:08.034249   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:37:08.034256   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:37:08.034316   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:37:08.070645   79927 cri.go:89] found id: ""
	I0806 08:37:08.070673   79927 logs.go:276] 0 containers: []
	W0806 08:37:08.070683   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:37:08.070689   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:37:08.070761   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:37:08.110537   79927 cri.go:89] found id: ""
	I0806 08:37:08.110565   79927 logs.go:276] 0 containers: []
	W0806 08:37:08.110574   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:37:08.110581   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:37:08.110635   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:37:08.146179   79927 cri.go:89] found id: ""
	I0806 08:37:08.146203   79927 logs.go:276] 0 containers: []
	W0806 08:37:08.146211   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:37:08.146224   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:37:08.146272   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:37:08.182924   79927 cri.go:89] found id: ""
	I0806 08:37:08.182953   79927 logs.go:276] 0 containers: []
	W0806 08:37:08.182962   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:37:08.182968   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:37:08.183024   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:37:08.219547   79927 cri.go:89] found id: ""
	I0806 08:37:08.219570   79927 logs.go:276] 0 containers: []
	W0806 08:37:08.219576   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:37:08.219585   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:37:08.219599   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:37:08.298919   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:37:08.298956   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:37:08.338475   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:37:08.338512   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:37:08.390859   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:37:08.390898   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:37:08.404854   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:37:08.404891   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:37:08.473553   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:37:10.973824   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:37:10.987720   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:37:10.987777   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:37:11.022637   79927 cri.go:89] found id: ""
	I0806 08:37:11.022667   79927 logs.go:276] 0 containers: []
	W0806 08:37:11.022677   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:37:11.022683   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:37:11.022739   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:37:11.062625   79927 cri.go:89] found id: ""
	I0806 08:37:11.062655   79927 logs.go:276] 0 containers: []
	W0806 08:37:11.062673   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:37:11.062681   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:37:11.062736   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:37:11.099906   79927 cri.go:89] found id: ""
	I0806 08:37:11.099937   79927 logs.go:276] 0 containers: []
	W0806 08:37:11.099945   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:37:11.099951   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:37:11.100019   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:37:11.139461   79927 cri.go:89] found id: ""
	I0806 08:37:11.139485   79927 logs.go:276] 0 containers: []
	W0806 08:37:11.139493   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:37:11.139498   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:37:11.139541   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:37:11.177916   79927 cri.go:89] found id: ""
	I0806 08:37:11.177940   79927 logs.go:276] 0 containers: []
	W0806 08:37:11.177948   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:37:11.177954   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:37:11.178007   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:37:11.216335   79927 cri.go:89] found id: ""
	I0806 08:37:11.216377   79927 logs.go:276] 0 containers: []
	W0806 08:37:11.216389   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:37:11.216398   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:37:11.216453   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:37:11.278457   79927 cri.go:89] found id: ""
	I0806 08:37:11.278479   79927 logs.go:276] 0 containers: []
	W0806 08:37:11.278487   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:37:11.278493   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:37:11.278541   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:37:11.316325   79927 cri.go:89] found id: ""
	I0806 08:37:11.316348   79927 logs.go:276] 0 containers: []
	W0806 08:37:11.316356   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:37:11.316381   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:37:11.316397   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:37:11.357435   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:37:11.357469   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:37:11.413151   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:37:11.413184   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:37:11.427149   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:37:11.427182   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:37:11.500691   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:37:11.500723   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:37:11.500738   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:37:14.077817   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:37:14.090888   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:37:14.090960   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:37:14.126454   79927 cri.go:89] found id: ""
	I0806 08:37:14.126476   79927 logs.go:276] 0 containers: []
	W0806 08:37:14.126484   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:37:14.126489   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:37:14.126535   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:37:14.163910   79927 cri.go:89] found id: ""
	I0806 08:37:14.163939   79927 logs.go:276] 0 containers: []
	W0806 08:37:14.163950   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:37:14.163957   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:37:14.164025   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:37:14.202918   79927 cri.go:89] found id: ""
	I0806 08:37:14.202942   79927 logs.go:276] 0 containers: []
	W0806 08:37:14.202950   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:37:14.202956   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:37:14.202999   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:37:14.240997   79927 cri.go:89] found id: ""
	I0806 08:37:14.241021   79927 logs.go:276] 0 containers: []
	W0806 08:37:14.241030   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:37:14.241036   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:37:14.241083   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:37:14.277424   79927 cri.go:89] found id: ""
	I0806 08:37:14.277448   79927 logs.go:276] 0 containers: []
	W0806 08:37:14.277465   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:37:14.277473   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:37:14.277545   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:37:14.317402   79927 cri.go:89] found id: ""
	I0806 08:37:14.317426   79927 logs.go:276] 0 containers: []
	W0806 08:37:14.317435   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:37:14.317440   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:37:14.317487   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:37:14.364310   79927 cri.go:89] found id: ""
	I0806 08:37:14.364341   79927 logs.go:276] 0 containers: []
	W0806 08:37:14.364351   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:37:14.364358   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:37:14.364445   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:37:14.398836   79927 cri.go:89] found id: ""
	I0806 08:37:14.398866   79927 logs.go:276] 0 containers: []
	W0806 08:37:14.398874   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:37:14.398891   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:37:14.398904   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:37:14.450550   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:37:14.450586   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:37:14.467190   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:37:14.467220   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:37:14.538786   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:37:14.538809   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:37:14.538824   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:37:14.615049   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:37:14.615080   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:37:17.155906   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:37:17.170401   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:37:17.170479   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:37:17.208276   79927 cri.go:89] found id: ""
	I0806 08:37:17.208311   79927 logs.go:276] 0 containers: []
	W0806 08:37:17.208322   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:37:17.208330   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:37:17.208403   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:37:17.258671   79927 cri.go:89] found id: ""
	I0806 08:37:17.258696   79927 logs.go:276] 0 containers: []
	W0806 08:37:17.258705   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:37:17.258711   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:37:17.258771   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:37:17.296999   79927 cri.go:89] found id: ""
	I0806 08:37:17.297027   79927 logs.go:276] 0 containers: []
	W0806 08:37:17.297035   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:37:17.297041   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:37:17.297098   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:37:17.335818   79927 cri.go:89] found id: ""
	I0806 08:37:17.335841   79927 logs.go:276] 0 containers: []
	W0806 08:37:17.335849   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:37:17.335855   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:37:17.335903   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:37:17.371650   79927 cri.go:89] found id: ""
	I0806 08:37:17.371674   79927 logs.go:276] 0 containers: []
	W0806 08:37:17.371682   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:37:17.371688   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:37:17.371737   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:37:17.408413   79927 cri.go:89] found id: ""
	I0806 08:37:17.408440   79927 logs.go:276] 0 containers: []
	W0806 08:37:17.408448   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:37:17.408454   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:37:17.408502   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:37:17.443877   79927 cri.go:89] found id: ""
	I0806 08:37:17.443909   79927 logs.go:276] 0 containers: []
	W0806 08:37:17.443920   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:37:17.443928   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:37:17.443988   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:37:17.484292   79927 cri.go:89] found id: ""
	I0806 08:37:17.484321   79927 logs.go:276] 0 containers: []
	W0806 08:37:17.484331   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:37:17.484341   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:37:17.484356   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:37:17.541365   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:37:17.541418   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:37:17.555961   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:37:17.555993   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:37:17.625904   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:37:17.625923   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:37:17.625941   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:37:17.708997   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:37:17.709045   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:37:20.250351   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:37:20.264560   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:37:20.264625   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:37:20.301216   79927 cri.go:89] found id: ""
	I0806 08:37:20.301244   79927 logs.go:276] 0 containers: []
	W0806 08:37:20.301252   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:37:20.301257   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:37:20.301317   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:37:20.345820   79927 cri.go:89] found id: ""
	I0806 08:37:20.345849   79927 logs.go:276] 0 containers: []
	W0806 08:37:20.345860   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:37:20.345867   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:37:20.345923   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:37:20.382333   79927 cri.go:89] found id: ""
	I0806 08:37:20.382357   79927 logs.go:276] 0 containers: []
	W0806 08:37:20.382365   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:37:20.382371   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:37:20.382426   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:37:20.420193   79927 cri.go:89] found id: ""
	I0806 08:37:20.420228   79927 logs.go:276] 0 containers: []
	W0806 08:37:20.420238   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:37:20.420249   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:37:20.420316   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:37:20.456200   79927 cri.go:89] found id: ""
	I0806 08:37:20.456227   79927 logs.go:276] 0 containers: []
	W0806 08:37:20.456235   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:37:20.456241   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:37:20.456298   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:37:20.493321   79927 cri.go:89] found id: ""
	I0806 08:37:20.493349   79927 logs.go:276] 0 containers: []
	W0806 08:37:20.493356   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:37:20.493362   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:37:20.493414   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:37:20.531007   79927 cri.go:89] found id: ""
	I0806 08:37:20.531034   79927 logs.go:276] 0 containers: []
	W0806 08:37:20.531044   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:37:20.531051   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:37:20.531118   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:37:20.572014   79927 cri.go:89] found id: ""
	I0806 08:37:20.572044   79927 logs.go:276] 0 containers: []
	W0806 08:37:20.572062   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:37:20.572071   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:37:20.572083   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:37:20.623368   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:37:20.623399   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:37:20.638941   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:37:20.638984   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:37:20.717287   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:37:20.717313   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:37:20.717324   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:37:20.800157   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:37:20.800192   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:37:23.342061   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:37:23.356954   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:37:23.357026   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:37:23.391415   79927 cri.go:89] found id: ""
	I0806 08:37:23.391445   79927 logs.go:276] 0 containers: []
	W0806 08:37:23.391456   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:37:23.391463   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:37:23.391526   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:37:23.426460   79927 cri.go:89] found id: ""
	I0806 08:37:23.426482   79927 logs.go:276] 0 containers: []
	W0806 08:37:23.426491   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:37:23.426496   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:37:23.426541   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:37:23.461078   79927 cri.go:89] found id: ""
	I0806 08:37:23.461111   79927 logs.go:276] 0 containers: []
	W0806 08:37:23.461122   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:37:23.461129   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:37:23.461183   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:37:23.496566   79927 cri.go:89] found id: ""
	I0806 08:37:23.496594   79927 logs.go:276] 0 containers: []
	W0806 08:37:23.496602   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:37:23.496607   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:37:23.496665   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:37:23.529097   79927 cri.go:89] found id: ""
	I0806 08:37:23.529123   79927 logs.go:276] 0 containers: []
	W0806 08:37:23.529134   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:37:23.529153   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:37:23.529243   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:37:23.563690   79927 cri.go:89] found id: ""
	I0806 08:37:23.563720   79927 logs.go:276] 0 containers: []
	W0806 08:37:23.563730   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:37:23.563737   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:37:23.563796   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:37:23.598787   79927 cri.go:89] found id: ""
	I0806 08:37:23.598861   79927 logs.go:276] 0 containers: []
	W0806 08:37:23.598875   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:37:23.598884   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:37:23.598942   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:37:23.655675   79927 cri.go:89] found id: ""
	I0806 08:37:23.655705   79927 logs.go:276] 0 containers: []
	W0806 08:37:23.655715   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:37:23.655723   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:37:23.655734   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:37:23.715853   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:37:23.715887   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:37:23.735805   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:37:23.735844   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:37:23.815238   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:37:23.815261   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:37:23.815274   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:37:23.899436   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:37:23.899479   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:37:26.441837   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:37:26.456410   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:37:26.456472   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:37:26.497707   79927 cri.go:89] found id: ""
	I0806 08:37:26.497733   79927 logs.go:276] 0 containers: []
	W0806 08:37:26.497741   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:37:26.497746   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:37:26.497795   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:37:26.536101   79927 cri.go:89] found id: ""
	I0806 08:37:26.536126   79927 logs.go:276] 0 containers: []
	W0806 08:37:26.536133   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:37:26.536145   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:37:26.536190   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:37:26.573067   79927 cri.go:89] found id: ""
	I0806 08:37:26.573091   79927 logs.go:276] 0 containers: []
	W0806 08:37:26.573102   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:37:26.573110   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:37:26.573162   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:37:26.609820   79927 cri.go:89] found id: ""
	I0806 08:37:26.609846   79927 logs.go:276] 0 containers: []
	W0806 08:37:26.609854   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:37:26.609860   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:37:26.609914   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:37:26.645721   79927 cri.go:89] found id: ""
	I0806 08:37:26.645748   79927 logs.go:276] 0 containers: []
	W0806 08:37:26.645758   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:37:26.645764   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:37:26.645825   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:37:26.679941   79927 cri.go:89] found id: ""
	I0806 08:37:26.679988   79927 logs.go:276] 0 containers: []
	W0806 08:37:26.679999   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:37:26.680006   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:37:26.680068   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:37:26.717247   79927 cri.go:89] found id: ""
	I0806 08:37:26.717277   79927 logs.go:276] 0 containers: []
	W0806 08:37:26.717288   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:37:26.717295   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:37:26.717382   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:37:26.760157   79927 cri.go:89] found id: ""
	I0806 08:37:26.760184   79927 logs.go:276] 0 containers: []
	W0806 08:37:26.760194   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:37:26.760205   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:37:26.760221   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:37:26.816034   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:37:26.816071   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:37:26.830141   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:37:26.830170   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:37:26.900078   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:37:26.900103   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:37:26.900121   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:37:26.984460   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:37:26.984501   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:37:29.527184   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:37:29.541974   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:37:29.542052   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:37:29.576062   79927 cri.go:89] found id: ""
	I0806 08:37:29.576095   79927 logs.go:276] 0 containers: []
	W0806 08:37:29.576107   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:37:29.576114   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:37:29.576176   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:37:29.610841   79927 cri.go:89] found id: ""
	I0806 08:37:29.610865   79927 logs.go:276] 0 containers: []
	W0806 08:37:29.610881   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:37:29.610892   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:37:29.610948   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:37:29.650326   79927 cri.go:89] found id: ""
	I0806 08:37:29.650350   79927 logs.go:276] 0 containers: []
	W0806 08:37:29.650360   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:37:29.650367   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:37:29.650434   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:37:29.689382   79927 cri.go:89] found id: ""
	I0806 08:37:29.689418   79927 logs.go:276] 0 containers: []
	W0806 08:37:29.689429   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:37:29.689436   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:37:29.689495   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:37:29.725893   79927 cri.go:89] found id: ""
	I0806 08:37:29.725918   79927 logs.go:276] 0 containers: []
	W0806 08:37:29.725926   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:37:29.725932   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:37:29.725980   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:37:29.767354   79927 cri.go:89] found id: ""
	I0806 08:37:29.767379   79927 logs.go:276] 0 containers: []
	W0806 08:37:29.767386   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:37:29.767392   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:37:29.767441   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:37:29.811139   79927 cri.go:89] found id: ""
	I0806 08:37:29.811161   79927 logs.go:276] 0 containers: []
	W0806 08:37:29.811169   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:37:29.811174   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:37:29.811219   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:37:29.848252   79927 cri.go:89] found id: ""
	I0806 08:37:29.848285   79927 logs.go:276] 0 containers: []
	W0806 08:37:29.848294   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:37:29.848303   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:37:29.848315   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:37:29.899850   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:37:29.899884   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:37:29.913744   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:37:29.913774   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:37:29.988016   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:37:29.988039   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:37:29.988054   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:37:30.069701   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:37:30.069735   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:37:32.606958   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:37:32.622621   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:37:32.622684   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:37:32.658816   79927 cri.go:89] found id: ""
	I0806 08:37:32.658844   79927 logs.go:276] 0 containers: []
	W0806 08:37:32.658855   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:37:32.658862   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:37:32.658928   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:37:32.697734   79927 cri.go:89] found id: ""
	I0806 08:37:32.697769   79927 logs.go:276] 0 containers: []
	W0806 08:37:32.697780   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:37:32.697786   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:37:32.697868   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:37:32.735540   79927 cri.go:89] found id: ""
	I0806 08:37:32.735572   79927 logs.go:276] 0 containers: []
	W0806 08:37:32.735584   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:37:32.735591   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:37:32.735658   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:37:32.771319   79927 cri.go:89] found id: ""
	I0806 08:37:32.771343   79927 logs.go:276] 0 containers: []
	W0806 08:37:32.771357   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:37:32.771365   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:37:32.771421   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:37:32.808137   79927 cri.go:89] found id: ""
	I0806 08:37:32.808162   79927 logs.go:276] 0 containers: []
	W0806 08:37:32.808172   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:37:32.808179   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:37:32.808234   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:37:32.843307   79927 cri.go:89] found id: ""
	I0806 08:37:32.843333   79927 logs.go:276] 0 containers: []
	W0806 08:37:32.843343   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:37:32.843350   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:37:32.843404   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:37:32.883855   79927 cri.go:89] found id: ""
	I0806 08:37:32.883922   79927 logs.go:276] 0 containers: []
	W0806 08:37:32.883935   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:37:32.883944   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:37:32.884014   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:37:32.920965   79927 cri.go:89] found id: ""
	I0806 08:37:32.921008   79927 logs.go:276] 0 containers: []
	W0806 08:37:32.921019   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:37:32.921027   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:37:32.921039   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:37:32.974933   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:37:32.974966   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:37:32.989326   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:37:32.989352   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:37:33.058274   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:37:33.058302   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:37:33.058314   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:37:33.136632   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:37:33.136670   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:37:35.680315   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:37:35.694438   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:37:35.694497   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:37:35.729541   79927 cri.go:89] found id: ""
	I0806 08:37:35.729569   79927 logs.go:276] 0 containers: []
	W0806 08:37:35.729580   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:37:35.729587   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:37:35.729650   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:37:35.765783   79927 cri.go:89] found id: ""
	I0806 08:37:35.765813   79927 logs.go:276] 0 containers: []
	W0806 08:37:35.765822   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:37:35.765827   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:37:35.765885   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:37:35.802975   79927 cri.go:89] found id: ""
	I0806 08:37:35.803002   79927 logs.go:276] 0 containers: []
	W0806 08:37:35.803010   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:37:35.803016   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:37:35.803062   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:37:35.838588   79927 cri.go:89] found id: ""
	I0806 08:37:35.838614   79927 logs.go:276] 0 containers: []
	W0806 08:37:35.838623   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:37:35.838629   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:37:35.838675   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:37:35.874413   79927 cri.go:89] found id: ""
	I0806 08:37:35.874442   79927 logs.go:276] 0 containers: []
	W0806 08:37:35.874452   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:37:35.874460   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:37:35.874528   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:37:35.913816   79927 cri.go:89] found id: ""
	I0806 08:37:35.913852   79927 logs.go:276] 0 containers: []
	W0806 08:37:35.913871   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:37:35.913885   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:37:35.913963   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:37:35.951719   79927 cri.go:89] found id: ""
	I0806 08:37:35.951750   79927 logs.go:276] 0 containers: []
	W0806 08:37:35.951761   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:37:35.951770   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:37:35.951828   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:37:35.987720   79927 cri.go:89] found id: ""
	I0806 08:37:35.987749   79927 logs.go:276] 0 containers: []
	W0806 08:37:35.987760   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:37:35.987769   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:37:35.987780   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:37:36.043300   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:37:36.043339   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:37:36.057138   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:37:36.057169   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:37:36.126382   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:37:36.126399   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:37:36.126415   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:37:36.205243   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:37:36.205284   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:37:38.747978   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:37:38.762123   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:37:38.762190   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:37:38.805074   79927 cri.go:89] found id: ""
	I0806 08:37:38.805101   79927 logs.go:276] 0 containers: []
	W0806 08:37:38.805111   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:37:38.805117   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:37:38.805185   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:37:38.839305   79927 cri.go:89] found id: ""
	I0806 08:37:38.839330   79927 logs.go:276] 0 containers: []
	W0806 08:37:38.839338   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:37:38.839344   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:37:38.839406   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:37:38.874174   79927 cri.go:89] found id: ""
	I0806 08:37:38.874203   79927 logs.go:276] 0 containers: []
	W0806 08:37:38.874214   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:37:38.874221   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:37:38.874277   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:37:38.909607   79927 cri.go:89] found id: ""
	I0806 08:37:38.909630   79927 logs.go:276] 0 containers: []
	W0806 08:37:38.909639   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:37:38.909645   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:37:38.909702   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:37:38.945584   79927 cri.go:89] found id: ""
	I0806 08:37:38.945615   79927 logs.go:276] 0 containers: []
	W0806 08:37:38.945626   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:37:38.945634   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:37:38.945696   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:37:38.988993   79927 cri.go:89] found id: ""
	I0806 08:37:38.989022   79927 logs.go:276] 0 containers: []
	W0806 08:37:38.989033   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:37:38.989041   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:37:38.989100   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:37:39.028902   79927 cri.go:89] found id: ""
	I0806 08:37:39.028934   79927 logs.go:276] 0 containers: []
	W0806 08:37:39.028942   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:37:39.028948   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:37:39.029005   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:37:39.070163   79927 cri.go:89] found id: ""
	I0806 08:37:39.070192   79927 logs.go:276] 0 containers: []
	W0806 08:37:39.070202   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:37:39.070213   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:37:39.070229   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:37:39.124891   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:37:39.124927   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:37:39.139898   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:37:39.139926   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:37:39.216157   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:37:39.216181   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:37:39.216196   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:37:39.301089   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:37:39.301136   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:37:41.840165   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:37:41.855659   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:37:41.855792   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:37:41.892620   79927 cri.go:89] found id: ""
	I0806 08:37:41.892648   79927 logs.go:276] 0 containers: []
	W0806 08:37:41.892660   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:37:41.892668   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:37:41.892728   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:37:41.929064   79927 cri.go:89] found id: ""
	I0806 08:37:41.929097   79927 logs.go:276] 0 containers: []
	W0806 08:37:41.929107   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:37:41.929114   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:37:41.929165   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:37:41.968334   79927 cri.go:89] found id: ""
	I0806 08:37:41.968371   79927 logs.go:276] 0 containers: []
	W0806 08:37:41.968382   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:37:41.968389   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:37:41.968450   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:37:42.008566   79927 cri.go:89] found id: ""
	I0806 08:37:42.008587   79927 logs.go:276] 0 containers: []
	W0806 08:37:42.008595   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:37:42.008601   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:37:42.008649   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:37:42.044281   79927 cri.go:89] found id: ""
	I0806 08:37:42.044308   79927 logs.go:276] 0 containers: []
	W0806 08:37:42.044315   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:37:42.044321   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:37:42.044392   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:37:42.081987   79927 cri.go:89] found id: ""
	I0806 08:37:42.082012   79927 logs.go:276] 0 containers: []
	W0806 08:37:42.082020   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:37:42.082026   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:37:42.082072   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:37:42.120818   79927 cri.go:89] found id: ""
	I0806 08:37:42.120847   79927 logs.go:276] 0 containers: []
	W0806 08:37:42.120857   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:37:42.120865   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:37:42.120926   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:37:42.160685   79927 cri.go:89] found id: ""
	I0806 08:37:42.160719   79927 logs.go:276] 0 containers: []
	W0806 08:37:42.160731   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:37:42.160742   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:37:42.160759   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:37:42.236322   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:37:42.236344   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:37:42.236357   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:37:42.328488   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:37:42.328526   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:37:42.369520   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:37:42.369545   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:37:42.425449   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:37:42.425490   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:37:44.941557   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:37:44.954984   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:37:44.955043   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:37:44.992186   79927 cri.go:89] found id: ""
	I0806 08:37:44.992216   79927 logs.go:276] 0 containers: []
	W0806 08:37:44.992228   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:37:44.992234   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:37:44.992285   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:37:45.027114   79927 cri.go:89] found id: ""
	I0806 08:37:45.027143   79927 logs.go:276] 0 containers: []
	W0806 08:37:45.027154   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:37:45.027161   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:37:45.027222   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:37:45.062629   79927 cri.go:89] found id: ""
	I0806 08:37:45.062660   79927 logs.go:276] 0 containers: []
	W0806 08:37:45.062672   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:37:45.062679   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:37:45.062741   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:37:45.097499   79927 cri.go:89] found id: ""
	I0806 08:37:45.097530   79927 logs.go:276] 0 containers: []
	W0806 08:37:45.097541   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:37:45.097548   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:37:45.097605   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:37:45.137130   79927 cri.go:89] found id: ""
	I0806 08:37:45.137162   79927 logs.go:276] 0 containers: []
	W0806 08:37:45.137184   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:37:45.137191   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:37:45.137264   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:37:45.170617   79927 cri.go:89] found id: ""
	I0806 08:37:45.170642   79927 logs.go:276] 0 containers: []
	W0806 08:37:45.170659   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:37:45.170667   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:37:45.170726   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:37:45.205729   79927 cri.go:89] found id: ""
	I0806 08:37:45.205758   79927 logs.go:276] 0 containers: []
	W0806 08:37:45.205768   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:37:45.205774   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:37:45.205827   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:37:45.242345   79927 cri.go:89] found id: ""
	I0806 08:37:45.242373   79927 logs.go:276] 0 containers: []
	W0806 08:37:45.242384   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:37:45.242397   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:37:45.242414   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:37:45.283778   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:37:45.283810   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:37:45.341421   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:37:45.341457   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:37:45.356581   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:37:45.356606   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:37:45.428512   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:37:45.428541   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:37:45.428559   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:37:48.011744   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:37:48.025452   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:37:48.025533   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:37:48.064315   79927 cri.go:89] found id: ""
	I0806 08:37:48.064339   79927 logs.go:276] 0 containers: []
	W0806 08:37:48.064348   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:37:48.064353   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:37:48.064430   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:37:48.103855   79927 cri.go:89] found id: ""
	I0806 08:37:48.103891   79927 logs.go:276] 0 containers: []
	W0806 08:37:48.103902   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:37:48.103909   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:37:48.103968   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:37:48.141402   79927 cri.go:89] found id: ""
	I0806 08:37:48.141427   79927 logs.go:276] 0 containers: []
	W0806 08:37:48.141435   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:37:48.141440   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:37:48.141491   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:37:48.177449   79927 cri.go:89] found id: ""
	I0806 08:37:48.177479   79927 logs.go:276] 0 containers: []
	W0806 08:37:48.177490   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:37:48.177497   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:37:48.177566   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:37:48.214613   79927 cri.go:89] found id: ""
	I0806 08:37:48.214637   79927 logs.go:276] 0 containers: []
	W0806 08:37:48.214644   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:37:48.214650   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:37:48.214695   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:37:48.254049   79927 cri.go:89] found id: ""
	I0806 08:37:48.254075   79927 logs.go:276] 0 containers: []
	W0806 08:37:48.254083   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:37:48.254089   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:37:48.254147   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:37:48.296206   79927 cri.go:89] found id: ""
	I0806 08:37:48.296230   79927 logs.go:276] 0 containers: []
	W0806 08:37:48.296240   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:37:48.296246   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:37:48.296306   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:37:48.337614   79927 cri.go:89] found id: ""
	I0806 08:37:48.337641   79927 logs.go:276] 0 containers: []
	W0806 08:37:48.337651   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:37:48.337662   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:37:48.337679   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:37:48.391932   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:37:48.391971   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:37:48.405422   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:37:48.405450   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:37:48.477374   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:37:48.477402   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:37:48.477419   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:37:48.559252   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:37:48.559288   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:37:51.104881   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:37:51.120113   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:37:51.120181   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:37:51.155703   79927 cri.go:89] found id: ""
	I0806 08:37:51.155733   79927 logs.go:276] 0 containers: []
	W0806 08:37:51.155741   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:37:51.155747   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:37:51.155792   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:37:51.191211   79927 cri.go:89] found id: ""
	I0806 08:37:51.191240   79927 logs.go:276] 0 containers: []
	W0806 08:37:51.191250   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:37:51.191257   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:37:51.191324   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:37:51.226740   79927 cri.go:89] found id: ""
	I0806 08:37:51.226771   79927 logs.go:276] 0 containers: []
	W0806 08:37:51.226783   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:37:51.226790   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:37:51.226852   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:37:51.269101   79927 cri.go:89] found id: ""
	I0806 08:37:51.269132   79927 logs.go:276] 0 containers: []
	W0806 08:37:51.269141   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:37:51.269146   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:37:51.269196   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:37:51.307176   79927 cri.go:89] found id: ""
	I0806 08:37:51.307206   79927 logs.go:276] 0 containers: []
	W0806 08:37:51.307216   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:37:51.307224   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:37:51.307290   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:37:51.342824   79927 cri.go:89] found id: ""
	I0806 08:37:51.342854   79927 logs.go:276] 0 containers: []
	W0806 08:37:51.342864   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:37:51.342871   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:37:51.342935   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:37:51.383309   79927 cri.go:89] found id: ""
	I0806 08:37:51.383337   79927 logs.go:276] 0 containers: []
	W0806 08:37:51.383348   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:37:51.383355   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:37:51.383420   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:37:51.424851   79927 cri.go:89] found id: ""
	I0806 08:37:51.424896   79927 logs.go:276] 0 containers: []
	W0806 08:37:51.424908   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:37:51.424918   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:37:51.424950   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:37:51.507286   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:37:51.507307   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:37:51.507318   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:37:51.590512   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:37:51.590556   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:37:51.640453   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:37:51.640482   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:37:51.694699   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:37:51.694740   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:37:54.211106   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:37:54.225858   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:37:54.225936   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:37:54.267219   79927 cri.go:89] found id: ""
	I0806 08:37:54.267247   79927 logs.go:276] 0 containers: []
	W0806 08:37:54.267258   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:37:54.267266   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:37:54.267323   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:37:54.306734   79927 cri.go:89] found id: ""
	I0806 08:37:54.306763   79927 logs.go:276] 0 containers: []
	W0806 08:37:54.306774   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:37:54.306781   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:37:54.306839   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:37:54.342257   79927 cri.go:89] found id: ""
	I0806 08:37:54.342294   79927 logs.go:276] 0 containers: []
	W0806 08:37:54.342306   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:37:54.342314   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:37:54.342383   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:37:54.380661   79927 cri.go:89] found id: ""
	I0806 08:37:54.380685   79927 logs.go:276] 0 containers: []
	W0806 08:37:54.380693   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:37:54.380699   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:37:54.380743   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:37:54.416503   79927 cri.go:89] found id: ""
	I0806 08:37:54.416525   79927 logs.go:276] 0 containers: []
	W0806 08:37:54.416533   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:37:54.416539   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:37:54.416589   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:37:54.453470   79927 cri.go:89] found id: ""
	I0806 08:37:54.453508   79927 logs.go:276] 0 containers: []
	W0806 08:37:54.453520   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:37:54.453528   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:37:54.453591   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:37:54.497630   79927 cri.go:89] found id: ""
	I0806 08:37:54.497671   79927 logs.go:276] 0 containers: []
	W0806 08:37:54.497682   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:37:54.497691   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:37:54.497750   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:37:54.536516   79927 cri.go:89] found id: ""
	I0806 08:37:54.536546   79927 logs.go:276] 0 containers: []
	W0806 08:37:54.536557   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:37:54.536568   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:37:54.536587   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:37:54.592997   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:37:54.593032   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:37:54.607568   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:37:54.607598   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:37:54.679369   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:37:54.679402   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:37:54.679423   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:37:54.760001   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:37:54.760041   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:37:57.305405   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:37:57.319427   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:37:57.319493   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:37:57.357465   79927 cri.go:89] found id: ""
	I0806 08:37:57.357493   79927 logs.go:276] 0 containers: []
	W0806 08:37:57.357504   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:37:57.357513   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:37:57.357571   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:37:57.390131   79927 cri.go:89] found id: ""
	I0806 08:37:57.390155   79927 logs.go:276] 0 containers: []
	W0806 08:37:57.390165   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:37:57.390172   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:37:57.390238   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:37:57.428571   79927 cri.go:89] found id: ""
	I0806 08:37:57.428600   79927 logs.go:276] 0 containers: []
	W0806 08:37:57.428608   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:37:57.428613   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:37:57.428675   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:37:57.462688   79927 cri.go:89] found id: ""
	I0806 08:37:57.462717   79927 logs.go:276] 0 containers: []
	W0806 08:37:57.462727   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:37:57.462733   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:37:57.462791   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:37:57.525008   79927 cri.go:89] found id: ""
	I0806 08:37:57.525033   79927 logs.go:276] 0 containers: []
	W0806 08:37:57.525041   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:37:57.525046   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:37:57.525108   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:37:57.565992   79927 cri.go:89] found id: ""
	I0806 08:37:57.566024   79927 logs.go:276] 0 containers: []
	W0806 08:37:57.566035   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:37:57.566045   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:37:57.566123   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:37:57.610170   79927 cri.go:89] found id: ""
	I0806 08:37:57.610198   79927 logs.go:276] 0 containers: []
	W0806 08:37:57.610208   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:37:57.610215   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:37:57.610262   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:37:57.647054   79927 cri.go:89] found id: ""
	I0806 08:37:57.647085   79927 logs.go:276] 0 containers: []
	W0806 08:37:57.647096   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:37:57.647106   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:37:57.647121   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:37:57.698177   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:37:57.698214   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:37:57.712244   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:37:57.712271   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:37:57.781683   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:37:57.781707   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:37:57.781723   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:37:57.861097   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:37:57.861133   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:38:00.401186   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:38:00.414546   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:38:00.414602   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:38:00.454163   79927 cri.go:89] found id: ""
	I0806 08:38:00.454190   79927 logs.go:276] 0 containers: []
	W0806 08:38:00.454201   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:38:00.454208   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:38:00.454269   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:38:00.497427   79927 cri.go:89] found id: ""
	I0806 08:38:00.497455   79927 logs.go:276] 0 containers: []
	W0806 08:38:00.497466   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:38:00.497473   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:38:00.497538   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:38:00.537590   79927 cri.go:89] found id: ""
	I0806 08:38:00.537619   79927 logs.go:276] 0 containers: []
	W0806 08:38:00.537627   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:38:00.537633   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:38:00.537695   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:38:00.572337   79927 cri.go:89] found id: ""
	I0806 08:38:00.572383   79927 logs.go:276] 0 containers: []
	W0806 08:38:00.572396   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:38:00.572403   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:38:00.572460   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:38:00.607729   79927 cri.go:89] found id: ""
	I0806 08:38:00.607772   79927 logs.go:276] 0 containers: []
	W0806 08:38:00.607783   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:38:00.607791   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:38:00.607852   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:38:00.644777   79927 cri.go:89] found id: ""
	I0806 08:38:00.644843   79927 logs.go:276] 0 containers: []
	W0806 08:38:00.644856   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:38:00.644864   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:38:00.644935   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:38:00.684698   79927 cri.go:89] found id: ""
	I0806 08:38:00.684727   79927 logs.go:276] 0 containers: []
	W0806 08:38:00.684739   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:38:00.684747   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:38:00.684863   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:38:00.718472   79927 cri.go:89] found id: ""
	I0806 08:38:00.718506   79927 logs.go:276] 0 containers: []
	W0806 08:38:00.718518   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:38:00.718529   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:38:00.718546   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:38:00.770265   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:38:00.770309   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:38:00.787438   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:38:00.787469   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:38:00.854538   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:38:00.854558   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:38:00.854570   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:38:00.936810   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:38:00.936847   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:38:03.477109   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:38:03.494379   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:38:03.494439   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:38:03.536166   79927 cri.go:89] found id: ""
	I0806 08:38:03.536193   79927 logs.go:276] 0 containers: []
	W0806 08:38:03.536202   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:38:03.536208   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:38:03.536255   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:38:03.577497   79927 cri.go:89] found id: ""
	I0806 08:38:03.577522   79927 logs.go:276] 0 containers: []
	W0806 08:38:03.577531   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:38:03.577536   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:38:03.577585   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:38:03.614534   79927 cri.go:89] found id: ""
	I0806 08:38:03.614560   79927 logs.go:276] 0 containers: []
	W0806 08:38:03.614569   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:38:03.614576   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:38:03.614634   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:38:03.655824   79927 cri.go:89] found id: ""
	I0806 08:38:03.655860   79927 logs.go:276] 0 containers: []
	W0806 08:38:03.655871   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:38:03.655878   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:38:03.655952   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:38:03.699843   79927 cri.go:89] found id: ""
	I0806 08:38:03.699875   79927 logs.go:276] 0 containers: []
	W0806 08:38:03.699888   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:38:03.699896   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:38:03.699957   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:38:03.740131   79927 cri.go:89] found id: ""
	I0806 08:38:03.740159   79927 logs.go:276] 0 containers: []
	W0806 08:38:03.740168   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:38:03.740174   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:38:03.740223   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:38:03.774103   79927 cri.go:89] found id: ""
	I0806 08:38:03.774149   79927 logs.go:276] 0 containers: []
	W0806 08:38:03.774163   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:38:03.774170   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:38:03.774230   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:38:03.814051   79927 cri.go:89] found id: ""
	I0806 08:38:03.814075   79927 logs.go:276] 0 containers: []
	W0806 08:38:03.814083   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:38:03.814091   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:38:03.814102   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:38:03.854869   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:38:03.854898   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:38:03.908284   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:38:03.908318   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:38:03.923265   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:38:03.923303   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:38:04.001785   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:38:04.001806   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:38:04.001819   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:38:06.583809   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:38:06.596764   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:38:06.596827   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:38:06.636611   79927 cri.go:89] found id: ""
	I0806 08:38:06.636639   79927 logs.go:276] 0 containers: []
	W0806 08:38:06.636650   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:38:06.636657   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:38:06.636707   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:38:06.675392   79927 cri.go:89] found id: ""
	I0806 08:38:06.675419   79927 logs.go:276] 0 containers: []
	W0806 08:38:06.675430   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:38:06.675438   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:38:06.675492   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:38:06.711270   79927 cri.go:89] found id: ""
	I0806 08:38:06.711302   79927 logs.go:276] 0 containers: []
	W0806 08:38:06.711313   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:38:06.711320   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:38:06.711388   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:38:06.747252   79927 cri.go:89] found id: ""
	I0806 08:38:06.747280   79927 logs.go:276] 0 containers: []
	W0806 08:38:06.747291   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:38:06.747297   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:38:06.747360   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:38:06.781999   79927 cri.go:89] found id: ""
	I0806 08:38:06.782028   79927 logs.go:276] 0 containers: []
	W0806 08:38:06.782038   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:38:06.782043   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:38:06.782095   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:38:06.818810   79927 cri.go:89] found id: ""
	I0806 08:38:06.818847   79927 logs.go:276] 0 containers: []
	W0806 08:38:06.818860   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:38:06.818869   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:38:06.818935   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:38:06.857937   79927 cri.go:89] found id: ""
	I0806 08:38:06.857972   79927 logs.go:276] 0 containers: []
	W0806 08:38:06.857983   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:38:06.857989   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:38:06.858049   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:38:06.897920   79927 cri.go:89] found id: ""
	I0806 08:38:06.897949   79927 logs.go:276] 0 containers: []
	W0806 08:38:06.897957   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:38:06.897965   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:38:06.897979   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:38:06.912199   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:38:06.912228   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:38:06.983129   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:38:06.983157   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:38:06.983173   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:38:07.067783   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:38:07.067853   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:38:07.107756   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:38:07.107787   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:38:09.657958   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:38:09.670726   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:38:09.670807   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:38:09.709184   79927 cri.go:89] found id: ""
	I0806 08:38:09.709220   79927 logs.go:276] 0 containers: []
	W0806 08:38:09.709232   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:38:09.709239   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:38:09.709299   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:38:09.747032   79927 cri.go:89] found id: ""
	I0806 08:38:09.747061   79927 logs.go:276] 0 containers: []
	W0806 08:38:09.747073   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:38:09.747080   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:38:09.747141   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:38:09.785134   79927 cri.go:89] found id: ""
	I0806 08:38:09.785170   79927 logs.go:276] 0 containers: []
	W0806 08:38:09.785180   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:38:09.785188   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:38:09.785248   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:38:09.825136   79927 cri.go:89] found id: ""
	I0806 08:38:09.825168   79927 logs.go:276] 0 containers: []
	W0806 08:38:09.825179   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:38:09.825186   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:38:09.825249   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:38:09.865292   79927 cri.go:89] found id: ""
	I0806 08:38:09.865327   79927 logs.go:276] 0 containers: []
	W0806 08:38:09.865336   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:38:09.865345   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:38:09.865415   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:38:09.911145   79927 cri.go:89] found id: ""
	I0806 08:38:09.911172   79927 logs.go:276] 0 containers: []
	W0806 08:38:09.911183   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:38:09.911190   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:38:09.911247   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:38:09.961259   79927 cri.go:89] found id: ""
	I0806 08:38:09.961289   79927 logs.go:276] 0 containers: []
	W0806 08:38:09.961299   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:38:09.961307   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:38:09.961364   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:38:10.009572   79927 cri.go:89] found id: ""
	I0806 08:38:10.009594   79927 logs.go:276] 0 containers: []
	W0806 08:38:10.009602   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:38:10.009610   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:38:10.009623   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:38:10.063733   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:38:10.063770   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:38:10.080004   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:38:10.080044   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:38:10.158792   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:38:10.158823   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:38:10.158840   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:38:10.242126   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:38:10.242162   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:38:12.781497   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:38:12.796715   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:38:12.796804   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:38:12.831223   79927 cri.go:89] found id: ""
	I0806 08:38:12.831254   79927 logs.go:276] 0 containers: []
	W0806 08:38:12.831265   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:38:12.831273   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:38:12.831332   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:38:12.872202   79927 cri.go:89] found id: ""
	I0806 08:38:12.872224   79927 logs.go:276] 0 containers: []
	W0806 08:38:12.872231   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:38:12.872236   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:38:12.872284   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:38:12.913457   79927 cri.go:89] found id: ""
	I0806 08:38:12.913486   79927 logs.go:276] 0 containers: []
	W0806 08:38:12.913497   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:38:12.913505   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:38:12.913563   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:38:12.950382   79927 cri.go:89] found id: ""
	I0806 08:38:12.950413   79927 logs.go:276] 0 containers: []
	W0806 08:38:12.950424   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:38:12.950432   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:38:12.950497   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:38:12.988956   79927 cri.go:89] found id: ""
	I0806 08:38:12.988984   79927 logs.go:276] 0 containers: []
	W0806 08:38:12.988992   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:38:12.988998   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:38:12.989054   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:38:13.025308   79927 cri.go:89] found id: ""
	I0806 08:38:13.025334   79927 logs.go:276] 0 containers: []
	W0806 08:38:13.025342   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:38:13.025348   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:38:13.025401   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:38:13.060156   79927 cri.go:89] found id: ""
	I0806 08:38:13.060184   79927 logs.go:276] 0 containers: []
	W0806 08:38:13.060196   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:38:13.060204   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:38:13.060268   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:38:13.096302   79927 cri.go:89] found id: ""
	I0806 08:38:13.096331   79927 logs.go:276] 0 containers: []
	W0806 08:38:13.096344   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:38:13.096360   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:38:13.096387   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:38:13.149567   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:38:13.149603   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:38:13.164680   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:38:13.164715   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:38:13.236695   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:38:13.236718   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:38:13.236733   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:38:13.323043   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:38:13.323081   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:38:15.867128   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:38:15.885504   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:38:15.885579   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:38:15.929060   79927 cri.go:89] found id: ""
	I0806 08:38:15.929087   79927 logs.go:276] 0 containers: []
	W0806 08:38:15.929098   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:38:15.929106   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:38:15.929157   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:38:15.996919   79927 cri.go:89] found id: ""
	I0806 08:38:15.996946   79927 logs.go:276] 0 containers: []
	W0806 08:38:15.996954   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:38:15.996959   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:38:15.997013   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:38:16.033299   79927 cri.go:89] found id: ""
	I0806 08:38:16.033323   79927 logs.go:276] 0 containers: []
	W0806 08:38:16.033331   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:38:16.033338   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:38:16.033399   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:38:16.070754   79927 cri.go:89] found id: ""
	I0806 08:38:16.070781   79927 logs.go:276] 0 containers: []
	W0806 08:38:16.070789   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:38:16.070795   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:38:16.070853   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:38:16.107906   79927 cri.go:89] found id: ""
	I0806 08:38:16.107931   79927 logs.go:276] 0 containers: []
	W0806 08:38:16.107943   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:38:16.107951   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:38:16.108011   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:38:16.145814   79927 cri.go:89] found id: ""
	I0806 08:38:16.145844   79927 logs.go:276] 0 containers: []
	W0806 08:38:16.145853   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:38:16.145861   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:38:16.145921   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:38:16.183752   79927 cri.go:89] found id: ""
	I0806 08:38:16.183782   79927 logs.go:276] 0 containers: []
	W0806 08:38:16.183793   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:38:16.183828   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:38:16.183890   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:38:16.220973   79927 cri.go:89] found id: ""
	I0806 08:38:16.221000   79927 logs.go:276] 0 containers: []
	W0806 08:38:16.221010   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:38:16.221020   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:38:16.221034   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:38:16.274514   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:38:16.274555   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:38:16.291387   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:38:16.291417   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:38:16.365737   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:38:16.365769   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:38:16.365785   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:38:16.446578   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:38:16.446617   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:38:18.989406   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:38:19.003113   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:38:19.003174   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:38:19.038794   79927 cri.go:89] found id: ""
	I0806 08:38:19.038821   79927 logs.go:276] 0 containers: []
	W0806 08:38:19.038832   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:38:19.038839   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:38:19.038899   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:38:19.072521   79927 cri.go:89] found id: ""
	I0806 08:38:19.072546   79927 logs.go:276] 0 containers: []
	W0806 08:38:19.072554   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:38:19.072561   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:38:19.072606   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:38:19.105949   79927 cri.go:89] found id: ""
	I0806 08:38:19.105982   79927 logs.go:276] 0 containers: []
	W0806 08:38:19.105994   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:38:19.106002   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:38:19.106053   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:38:19.141277   79927 cri.go:89] found id: ""
	I0806 08:38:19.141303   79927 logs.go:276] 0 containers: []
	W0806 08:38:19.141311   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:38:19.141317   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:38:19.141373   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:38:19.179144   79927 cri.go:89] found id: ""
	I0806 08:38:19.179173   79927 logs.go:276] 0 containers: []
	W0806 08:38:19.179184   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:38:19.179191   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:38:19.179252   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:38:19.218970   79927 cri.go:89] found id: ""
	I0806 08:38:19.218998   79927 logs.go:276] 0 containers: []
	W0806 08:38:19.219010   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:38:19.219022   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:38:19.219084   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:38:19.257623   79927 cri.go:89] found id: ""
	I0806 08:38:19.257652   79927 logs.go:276] 0 containers: []
	W0806 08:38:19.257663   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:38:19.257671   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:38:19.257740   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:38:19.294222   79927 cri.go:89] found id: ""
	I0806 08:38:19.294250   79927 logs.go:276] 0 containers: []
	W0806 08:38:19.294258   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:38:19.294267   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:38:19.294279   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:38:19.374327   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:38:19.374352   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:38:19.374368   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:38:19.458570   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:38:19.458604   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:38:19.504329   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:38:19.504382   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:38:19.554199   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:38:19.554237   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:38:22.069574   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:38:22.082919   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:38:22.082981   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:38:22.115952   79927 cri.go:89] found id: ""
	I0806 08:38:22.115974   79927 logs.go:276] 0 containers: []
	W0806 08:38:22.115981   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:38:22.115987   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:38:22.116031   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:38:22.158322   79927 cri.go:89] found id: ""
	I0806 08:38:22.158349   79927 logs.go:276] 0 containers: []
	W0806 08:38:22.158359   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:38:22.158366   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:38:22.158425   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:38:22.203375   79927 cri.go:89] found id: ""
	I0806 08:38:22.203403   79927 logs.go:276] 0 containers: []
	W0806 08:38:22.203412   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:38:22.203418   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:38:22.203473   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:38:22.240808   79927 cri.go:89] found id: ""
	I0806 08:38:22.240837   79927 logs.go:276] 0 containers: []
	W0806 08:38:22.240849   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:38:22.240857   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:38:22.240913   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:38:22.277885   79927 cri.go:89] found id: ""
	I0806 08:38:22.277914   79927 logs.go:276] 0 containers: []
	W0806 08:38:22.277925   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:38:22.277932   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:38:22.277993   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:38:22.314990   79927 cri.go:89] found id: ""
	I0806 08:38:22.315020   79927 logs.go:276] 0 containers: []
	W0806 08:38:22.315031   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:38:22.315039   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:38:22.315098   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:38:22.352159   79927 cri.go:89] found id: ""
	I0806 08:38:22.352189   79927 logs.go:276] 0 containers: []
	W0806 08:38:22.352198   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:38:22.352204   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:38:22.352267   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:38:22.387349   79927 cri.go:89] found id: ""
	I0806 08:38:22.387382   79927 logs.go:276] 0 containers: []
	W0806 08:38:22.387393   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:38:22.387404   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:38:22.387416   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:38:22.425834   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:38:22.425871   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:38:22.479680   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:38:22.479712   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:38:22.495104   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:38:22.495140   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:38:22.567013   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:38:22.567030   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:38:22.567043   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:38:25.146383   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:38:25.160013   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:38:25.160078   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:38:25.200981   79927 cri.go:89] found id: ""
	I0806 08:38:25.201006   79927 logs.go:276] 0 containers: []
	W0806 08:38:25.201018   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:38:25.201026   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:38:25.201091   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:38:25.239852   79927 cri.go:89] found id: ""
	I0806 08:38:25.239895   79927 logs.go:276] 0 containers: []
	W0806 08:38:25.239907   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:38:25.239914   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:38:25.239975   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:38:25.275521   79927 cri.go:89] found id: ""
	I0806 08:38:25.275549   79927 logs.go:276] 0 containers: []
	W0806 08:38:25.275561   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:38:25.275568   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:38:25.275630   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:38:25.311921   79927 cri.go:89] found id: ""
	I0806 08:38:25.311952   79927 logs.go:276] 0 containers: []
	W0806 08:38:25.311962   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:38:25.311969   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:38:25.312036   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:38:25.353047   79927 cri.go:89] found id: ""
	I0806 08:38:25.353081   79927 logs.go:276] 0 containers: []
	W0806 08:38:25.353093   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:38:25.353102   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:38:25.353177   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:38:25.389442   79927 cri.go:89] found id: ""
	I0806 08:38:25.389470   79927 logs.go:276] 0 containers: []
	W0806 08:38:25.389479   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:38:25.389485   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:38:25.389529   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:38:25.426435   79927 cri.go:89] found id: ""
	I0806 08:38:25.426464   79927 logs.go:276] 0 containers: []
	W0806 08:38:25.426474   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:38:25.426481   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:38:25.426541   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:38:25.460095   79927 cri.go:89] found id: ""
	I0806 08:38:25.460132   79927 logs.go:276] 0 containers: []
	W0806 08:38:25.460157   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:38:25.460169   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:38:25.460192   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:38:25.511363   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:38:25.511398   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:38:25.525953   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:38:25.525980   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:38:25.598690   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:38:25.598711   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:38:25.598723   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:38:25.674996   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:38:25.675034   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:38:28.216533   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:38:28.230080   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:38:28.230171   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:38:28.269239   79927 cri.go:89] found id: ""
	I0806 08:38:28.269274   79927 logs.go:276] 0 containers: []
	W0806 08:38:28.269288   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:38:28.269296   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:38:28.269361   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:38:28.311773   79927 cri.go:89] found id: ""
	I0806 08:38:28.311809   79927 logs.go:276] 0 containers: []
	W0806 08:38:28.311820   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:38:28.311828   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:38:28.311896   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:38:28.351492   79927 cri.go:89] found id: ""
	I0806 08:38:28.351522   79927 logs.go:276] 0 containers: []
	W0806 08:38:28.351530   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:38:28.351535   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:38:28.351581   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:38:28.389039   79927 cri.go:89] found id: ""
	I0806 08:38:28.389067   79927 logs.go:276] 0 containers: []
	W0806 08:38:28.389079   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:38:28.389086   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:38:28.389175   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:38:28.432249   79927 cri.go:89] found id: ""
	I0806 08:38:28.432277   79927 logs.go:276] 0 containers: []
	W0806 08:38:28.432287   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:38:28.432296   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:38:28.432354   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:38:28.478277   79927 cri.go:89] found id: ""
	I0806 08:38:28.478303   79927 logs.go:276] 0 containers: []
	W0806 08:38:28.478313   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:38:28.478321   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:38:28.478380   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:38:28.516167   79927 cri.go:89] found id: ""
	I0806 08:38:28.516199   79927 logs.go:276] 0 containers: []
	W0806 08:38:28.516211   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:38:28.516219   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:38:28.516286   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:38:28.555493   79927 cri.go:89] found id: ""
	I0806 08:38:28.555523   79927 logs.go:276] 0 containers: []
	W0806 08:38:28.555534   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:38:28.555545   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:38:28.555560   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:38:28.627101   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:38:28.627124   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:38:28.627147   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:38:28.716655   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:38:28.716709   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:38:28.755469   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:38:28.755510   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:38:28.812203   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:38:28.812237   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:38:31.327103   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:38:31.340745   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:38:31.340826   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:38:31.382470   79927 cri.go:89] found id: ""
	I0806 08:38:31.382503   79927 logs.go:276] 0 containers: []
	W0806 08:38:31.382515   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:38:31.382523   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:38:31.382593   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:38:31.423319   79927 cri.go:89] found id: ""
	I0806 08:38:31.423343   79927 logs.go:276] 0 containers: []
	W0806 08:38:31.423353   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:38:31.423359   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:38:31.423423   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:38:31.463073   79927 cri.go:89] found id: ""
	I0806 08:38:31.463099   79927 logs.go:276] 0 containers: []
	W0806 08:38:31.463108   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:38:31.463115   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:38:31.463165   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:38:31.499169   79927 cri.go:89] found id: ""
	I0806 08:38:31.499209   79927 logs.go:276] 0 containers: []
	W0806 08:38:31.499220   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:38:31.499228   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:38:31.499284   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:38:31.535030   79927 cri.go:89] found id: ""
	I0806 08:38:31.535060   79927 logs.go:276] 0 containers: []
	W0806 08:38:31.535071   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:38:31.535079   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:38:31.535145   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:38:31.575872   79927 cri.go:89] found id: ""
	I0806 08:38:31.575894   79927 logs.go:276] 0 containers: []
	W0806 08:38:31.575902   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:38:31.575907   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:38:31.575952   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:38:31.614766   79927 cri.go:89] found id: ""
	I0806 08:38:31.614787   79927 logs.go:276] 0 containers: []
	W0806 08:38:31.614797   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:38:31.614804   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:38:31.614860   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:38:31.655929   79927 cri.go:89] found id: ""
	I0806 08:38:31.655954   79927 logs.go:276] 0 containers: []
	W0806 08:38:31.655964   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:38:31.655992   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:38:31.656009   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:38:31.728563   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:38:31.728611   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:38:31.745445   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:38:31.745479   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:38:31.827965   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:38:31.827998   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:38:31.828014   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:38:31.935051   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:38:31.935093   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:38:34.478890   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:38:34.493573   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:38:34.493650   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:38:34.529054   79927 cri.go:89] found id: ""
	I0806 08:38:34.529083   79927 logs.go:276] 0 containers: []
	W0806 08:38:34.529094   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:38:34.529101   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:38:34.529171   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:38:34.570214   79927 cri.go:89] found id: ""
	I0806 08:38:34.570245   79927 logs.go:276] 0 containers: []
	W0806 08:38:34.570256   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:38:34.570264   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:38:34.570325   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:38:34.607198   79927 cri.go:89] found id: ""
	I0806 08:38:34.607232   79927 logs.go:276] 0 containers: []
	W0806 08:38:34.607243   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:38:34.607250   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:38:34.607296   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:38:34.646238   79927 cri.go:89] found id: ""
	I0806 08:38:34.646266   79927 logs.go:276] 0 containers: []
	W0806 08:38:34.646278   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:38:34.646287   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:38:34.646354   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:38:34.682137   79927 cri.go:89] found id: ""
	I0806 08:38:34.682161   79927 logs.go:276] 0 containers: []
	W0806 08:38:34.682169   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:38:34.682175   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:38:34.682227   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:38:34.721211   79927 cri.go:89] found id: ""
	I0806 08:38:34.721238   79927 logs.go:276] 0 containers: []
	W0806 08:38:34.721246   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:38:34.721252   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:38:34.721306   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:38:34.759693   79927 cri.go:89] found id: ""
	I0806 08:38:34.759728   79927 logs.go:276] 0 containers: []
	W0806 08:38:34.759737   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:38:34.759743   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:38:34.759793   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:38:34.799224   79927 cri.go:89] found id: ""
	I0806 08:38:34.799246   79927 logs.go:276] 0 containers: []
	W0806 08:38:34.799254   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:38:34.799262   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:38:34.799273   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:38:34.852682   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:38:34.852727   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:38:34.866456   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:38:34.866494   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:38:34.933029   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:38:34.933055   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:38:34.933076   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:38:35.017215   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:38:35.017253   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:38:37.567662   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:38:37.582220   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:38:37.582284   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:38:37.622112   79927 cri.go:89] found id: ""
	I0806 08:38:37.622145   79927 logs.go:276] 0 containers: []
	W0806 08:38:37.622153   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:38:37.622164   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:38:37.622228   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:38:37.657767   79927 cri.go:89] found id: ""
	I0806 08:38:37.657791   79927 logs.go:276] 0 containers: []
	W0806 08:38:37.657798   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:38:37.657804   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:38:37.657849   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:38:37.691442   79927 cri.go:89] found id: ""
	I0806 08:38:37.691471   79927 logs.go:276] 0 containers: []
	W0806 08:38:37.691480   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:38:37.691485   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:38:37.691539   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:38:37.725810   79927 cri.go:89] found id: ""
	I0806 08:38:37.725837   79927 logs.go:276] 0 containers: []
	W0806 08:38:37.725859   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:38:37.725866   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:38:37.725922   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:38:37.764708   79927 cri.go:89] found id: ""
	I0806 08:38:37.764737   79927 logs.go:276] 0 containers: []
	W0806 08:38:37.764748   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:38:37.764755   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:38:37.764818   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:38:37.802915   79927 cri.go:89] found id: ""
	I0806 08:38:37.802939   79927 logs.go:276] 0 containers: []
	W0806 08:38:37.802947   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:38:37.802953   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:38:37.803008   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:38:37.838492   79927 cri.go:89] found id: ""
	I0806 08:38:37.838525   79927 logs.go:276] 0 containers: []
	W0806 08:38:37.838536   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:38:37.838544   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:38:37.838607   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:38:37.873937   79927 cri.go:89] found id: ""
	I0806 08:38:37.873969   79927 logs.go:276] 0 containers: []
	W0806 08:38:37.873979   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:38:37.873990   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:38:37.874005   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:38:37.912169   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:38:37.912193   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:38:37.968269   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:38:37.968305   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:38:37.983414   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:38:37.983440   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:38:38.055031   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:38:38.055049   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:38:38.055062   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:38:40.635449   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:38:40.649962   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:38:40.650045   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:38:40.688975   79927 cri.go:89] found id: ""
	I0806 08:38:40.689006   79927 logs.go:276] 0 containers: []
	W0806 08:38:40.689017   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:38:40.689029   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:38:40.689093   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:38:40.726740   79927 cri.go:89] found id: ""
	I0806 08:38:40.726770   79927 logs.go:276] 0 containers: []
	W0806 08:38:40.726780   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:38:40.726788   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:38:40.726886   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:38:40.767402   79927 cri.go:89] found id: ""
	I0806 08:38:40.767432   79927 logs.go:276] 0 containers: []
	W0806 08:38:40.767445   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:38:40.767453   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:38:40.767510   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:38:40.809994   79927 cri.go:89] found id: ""
	I0806 08:38:40.810023   79927 logs.go:276] 0 containers: []
	W0806 08:38:40.810033   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:38:40.810042   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:38:40.810102   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:38:40.845624   79927 cri.go:89] found id: ""
	I0806 08:38:40.845653   79927 logs.go:276] 0 containers: []
	W0806 08:38:40.845662   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:38:40.845668   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:38:40.845715   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:38:40.883830   79927 cri.go:89] found id: ""
	I0806 08:38:40.883856   79927 logs.go:276] 0 containers: []
	W0806 08:38:40.883871   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:38:40.883879   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:38:40.883937   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:38:40.919930   79927 cri.go:89] found id: ""
	I0806 08:38:40.919958   79927 logs.go:276] 0 containers: []
	W0806 08:38:40.919966   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:38:40.919971   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:38:40.920018   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:38:40.954563   79927 cri.go:89] found id: ""
	I0806 08:38:40.954594   79927 logs.go:276] 0 containers: []
	W0806 08:38:40.954605   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:38:40.954616   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:38:40.954629   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:38:41.008780   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:38:41.008813   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:38:41.022292   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:38:41.022321   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:38:41.090241   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:38:41.090273   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:38:41.090294   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:38:41.171739   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:38:41.171779   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:38:43.712410   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:38:43.726676   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:38:43.726747   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:38:43.765343   79927 cri.go:89] found id: ""
	I0806 08:38:43.765379   79927 logs.go:276] 0 containers: []
	W0806 08:38:43.765390   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:38:43.765398   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:38:43.765457   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:38:43.803813   79927 cri.go:89] found id: ""
	I0806 08:38:43.803846   79927 logs.go:276] 0 containers: []
	W0806 08:38:43.803859   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:38:43.803868   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:38:43.803933   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:38:43.841307   79927 cri.go:89] found id: ""
	I0806 08:38:43.841339   79927 logs.go:276] 0 containers: []
	W0806 08:38:43.841349   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:38:43.841356   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:38:43.841418   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:38:43.876488   79927 cri.go:89] found id: ""
	I0806 08:38:43.876521   79927 logs.go:276] 0 containers: []
	W0806 08:38:43.876534   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:38:43.876544   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:38:43.876606   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:38:43.910972   79927 cri.go:89] found id: ""
	I0806 08:38:43.911000   79927 logs.go:276] 0 containers: []
	W0806 08:38:43.911010   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:38:43.911017   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:38:43.911077   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:38:43.945541   79927 cri.go:89] found id: ""
	I0806 08:38:43.945573   79927 logs.go:276] 0 containers: []
	W0806 08:38:43.945584   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:38:43.945592   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:38:43.945655   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:38:43.981366   79927 cri.go:89] found id: ""
	I0806 08:38:43.981399   79927 logs.go:276] 0 containers: []
	W0806 08:38:43.981408   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:38:43.981413   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:38:43.981463   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:38:44.015349   79927 cri.go:89] found id: ""
	I0806 08:38:44.015375   79927 logs.go:276] 0 containers: []
	W0806 08:38:44.015384   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:38:44.015394   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:38:44.015408   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:38:44.068089   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:38:44.068129   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:38:44.082496   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:38:44.082524   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:38:44.157765   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:38:44.157788   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:38:44.157805   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:38:44.242842   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:38:44.242885   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:38:46.789280   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:38:46.803490   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:38:46.803562   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:38:46.844164   79927 cri.go:89] found id: ""
	I0806 08:38:46.844192   79927 logs.go:276] 0 containers: []
	W0806 08:38:46.844200   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:38:46.844206   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:38:46.844247   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:38:46.880968   79927 cri.go:89] found id: ""
	I0806 08:38:46.881003   79927 logs.go:276] 0 containers: []
	W0806 08:38:46.881015   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:38:46.881022   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:38:46.881082   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:38:46.919291   79927 cri.go:89] found id: ""
	I0806 08:38:46.919311   79927 logs.go:276] 0 containers: []
	W0806 08:38:46.919319   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:38:46.919324   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:38:46.919369   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:38:46.960591   79927 cri.go:89] found id: ""
	I0806 08:38:46.960616   79927 logs.go:276] 0 containers: []
	W0806 08:38:46.960626   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:38:46.960632   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:38:46.960681   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:38:46.995913   79927 cri.go:89] found id: ""
	I0806 08:38:46.995943   79927 logs.go:276] 0 containers: []
	W0806 08:38:46.995954   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:38:46.995961   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:38:46.996045   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:38:47.035185   79927 cri.go:89] found id: ""
	I0806 08:38:47.035213   79927 logs.go:276] 0 containers: []
	W0806 08:38:47.035225   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:38:47.035236   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:38:47.035292   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:38:47.079605   79927 cri.go:89] found id: ""
	I0806 08:38:47.079634   79927 logs.go:276] 0 containers: []
	W0806 08:38:47.079646   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:38:47.079653   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:38:47.079701   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:38:47.118819   79927 cri.go:89] found id: ""
	I0806 08:38:47.118849   79927 logs.go:276] 0 containers: []
	W0806 08:38:47.118861   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:38:47.118873   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:38:47.118888   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:38:47.181357   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:38:47.181394   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:38:47.197676   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:38:47.197707   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:38:47.270304   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:38:47.270331   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:38:47.270348   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:38:47.354145   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:38:47.354176   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:38:49.901481   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:38:49.914862   79927 kubeadm.go:597] duration metric: took 4m3.80396481s to restartPrimaryControlPlane
	W0806 08:38:49.914926   79927 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0806 08:38:49.914952   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0806 08:38:50.394829   79927 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0806 08:38:50.409425   79927 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0806 08:38:50.419349   79927 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0806 08:38:50.429688   79927 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0806 08:38:50.429711   79927 kubeadm.go:157] found existing configuration files:
	
	I0806 08:38:50.429756   79927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0806 08:38:50.440055   79927 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0806 08:38:50.440126   79927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0806 08:38:50.450734   79927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0806 08:38:50.461860   79927 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0806 08:38:50.461943   79927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0806 08:38:50.473324   79927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0806 08:38:50.483381   79927 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0806 08:38:50.483440   79927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0806 08:38:50.493247   79927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0806 08:38:50.502800   79927 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0806 08:38:50.502861   79927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0806 08:38:50.512897   79927 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0806 08:38:50.585658   79927 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0806 08:38:50.585795   79927 kubeadm.go:310] [preflight] Running pre-flight checks
	I0806 08:38:50.740936   79927 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0806 08:38:50.741079   79927 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0806 08:38:50.741200   79927 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0806 08:38:50.938086   79927 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0806 08:38:50.940205   79927 out.go:204]   - Generating certificates and keys ...
	I0806 08:38:50.940315   79927 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0806 08:38:50.940428   79927 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0806 08:38:50.940543   79927 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0806 08:38:50.940621   79927 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0806 08:38:50.940723   79927 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0806 08:38:50.940800   79927 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0806 08:38:50.940881   79927 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0806 08:38:50.941150   79927 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0806 08:38:50.941954   79927 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0806 08:38:50.942765   79927 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0806 08:38:50.942910   79927 kubeadm.go:310] [certs] Using the existing "sa" key
	I0806 08:38:50.942993   79927 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0806 08:38:51.039558   79927 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0806 08:38:51.265243   79927 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0806 08:38:51.348501   79927 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0806 08:38:51.509558   79927 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0806 08:38:51.530725   79927 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0806 08:38:51.534108   79927 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0806 08:38:51.534297   79927 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0806 08:38:51.680149   79927 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0806 08:38:51.682195   79927 out.go:204]   - Booting up control plane ...
	I0806 08:38:51.682331   79927 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0806 08:38:51.693736   79927 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0806 08:38:51.695315   79927 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0806 08:38:51.697107   79927 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0806 08:38:51.700906   79927 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0806 08:39:31.701723   79927 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0806 08:39:31.702058   79927 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0806 08:39:31.702242   79927 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0806 08:39:36.702773   79927 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0806 08:39:36.703067   79927 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0806 08:39:46.703330   79927 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0806 08:39:46.703594   79927 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0806 08:40:06.704889   79927 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0806 08:40:06.705136   79927 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0806 08:40:46.706828   79927 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0806 08:40:46.707130   79927 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0806 08:40:46.707156   79927 kubeadm.go:310] 
	I0806 08:40:46.707218   79927 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0806 08:40:46.707278   79927 kubeadm.go:310] 		timed out waiting for the condition
	I0806 08:40:46.707304   79927 kubeadm.go:310] 
	I0806 08:40:46.707381   79927 kubeadm.go:310] 	This error is likely caused by:
	I0806 08:40:46.707439   79927 kubeadm.go:310] 		- The kubelet is not running
	I0806 08:40:46.707569   79927 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0806 08:40:46.707580   79927 kubeadm.go:310] 
	I0806 08:40:46.707705   79927 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0806 08:40:46.707756   79927 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0806 08:40:46.707805   79927 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0806 08:40:46.707814   79927 kubeadm.go:310] 
	I0806 08:40:46.707974   79927 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0806 08:40:46.708118   79927 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0806 08:40:46.708130   79927 kubeadm.go:310] 
	I0806 08:40:46.708297   79927 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0806 08:40:46.708448   79927 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0806 08:40:46.708577   79927 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0806 08:40:46.708673   79927 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0806 08:40:46.708684   79927 kubeadm.go:310] 
	I0806 08:40:46.709441   79927 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0806 08:40:46.709593   79927 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0806 08:40:46.709791   79927 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0806 08:40:46.709862   79927 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0806 08:40:46.709928   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0806 08:40:47.169443   79927 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0806 08:40:47.187784   79927 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0806 08:40:47.199239   79927 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0806 08:40:47.199267   79927 kubeadm.go:157] found existing configuration files:
	
	I0806 08:40:47.199325   79927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0806 08:40:47.209294   79927 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0806 08:40:47.209361   79927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0806 08:40:47.219483   79927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0806 08:40:47.229674   79927 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0806 08:40:47.229732   79927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0806 08:40:47.240729   79927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0806 08:40:47.250964   79927 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0806 08:40:47.251018   79927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0806 08:40:47.261322   79927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0806 08:40:47.271411   79927 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0806 08:40:47.271502   79927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0806 08:40:47.281986   79927 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0806 08:40:47.514409   79927 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0806 08:42:43.760446   79927 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0806 08:42:43.760586   79927 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0806 08:42:43.762233   79927 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0806 08:42:43.762301   79927 kubeadm.go:310] [preflight] Running pre-flight checks
	I0806 08:42:43.762391   79927 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0806 08:42:43.762504   79927 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0806 08:42:43.762602   79927 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0806 08:42:43.762653   79927 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0806 08:42:43.764435   79927 out.go:204]   - Generating certificates and keys ...
	I0806 08:42:43.764523   79927 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0806 08:42:43.764587   79927 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0806 08:42:43.764684   79927 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0806 08:42:43.764761   79927 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0806 08:42:43.764833   79927 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0806 08:42:43.764905   79927 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0806 08:42:43.765002   79927 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0806 08:42:43.765100   79927 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0806 08:42:43.765202   79927 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0806 08:42:43.765338   79927 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0806 08:42:43.765400   79927 kubeadm.go:310] [certs] Using the existing "sa" key
	I0806 08:42:43.765516   79927 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0806 08:42:43.765601   79927 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0806 08:42:43.765680   79927 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0806 08:42:43.765774   79927 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0806 08:42:43.765854   79927 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0806 08:42:43.766002   79927 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0806 08:42:43.766133   79927 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0806 08:42:43.766194   79927 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0806 08:42:43.766273   79927 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0806 08:42:43.767861   79927 out.go:204]   - Booting up control plane ...
	I0806 08:42:43.767957   79927 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0806 08:42:43.768051   79927 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0806 08:42:43.768136   79927 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0806 08:42:43.768232   79927 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0806 08:42:43.768487   79927 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0806 08:42:43.768559   79927 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0806 08:42:43.768635   79927 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0806 08:42:43.768862   79927 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0806 08:42:43.768945   79927 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0806 08:42:43.769124   79927 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0806 08:42:43.769195   79927 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0806 08:42:43.769379   79927 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0806 08:42:43.769463   79927 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0806 08:42:43.769721   79927 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0806 08:42:43.769823   79927 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0806 08:42:43.770089   79927 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0806 08:42:43.770102   79927 kubeadm.go:310] 
	I0806 08:42:43.770161   79927 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0806 08:42:43.770223   79927 kubeadm.go:310] 		timed out waiting for the condition
	I0806 08:42:43.770234   79927 kubeadm.go:310] 
	I0806 08:42:43.770274   79927 kubeadm.go:310] 	This error is likely caused by:
	I0806 08:42:43.770303   79927 kubeadm.go:310] 		- The kubelet is not running
	I0806 08:42:43.770389   79927 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0806 08:42:43.770399   79927 kubeadm.go:310] 
	I0806 08:42:43.770487   79927 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0806 08:42:43.770519   79927 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0806 08:42:43.770547   79927 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0806 08:42:43.770554   79927 kubeadm.go:310] 
	I0806 08:42:43.770657   79927 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0806 08:42:43.770756   79927 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0806 08:42:43.770769   79927 kubeadm.go:310] 
	I0806 08:42:43.770932   79927 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0806 08:42:43.771079   79927 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0806 08:42:43.771178   79927 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0806 08:42:43.771266   79927 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0806 08:42:43.771315   79927 kubeadm.go:310] 
	I0806 08:42:43.771336   79927 kubeadm.go:394] duration metric: took 7m57.719144545s to StartCluster
	I0806 08:42:43.771381   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:42:43.771451   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:42:43.817981   79927 cri.go:89] found id: ""
	I0806 08:42:43.818010   79927 logs.go:276] 0 containers: []
	W0806 08:42:43.818021   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:42:43.818028   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:42:43.818087   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:42:43.855481   79927 cri.go:89] found id: ""
	I0806 08:42:43.855506   79927 logs.go:276] 0 containers: []
	W0806 08:42:43.855517   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:42:43.855524   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:42:43.855584   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:42:43.893110   79927 cri.go:89] found id: ""
	I0806 08:42:43.893141   79927 logs.go:276] 0 containers: []
	W0806 08:42:43.893152   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:42:43.893172   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:42:43.893255   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:42:43.929785   79927 cri.go:89] found id: ""
	I0806 08:42:43.929844   79927 logs.go:276] 0 containers: []
	W0806 08:42:43.929853   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:42:43.929858   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:42:43.929921   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:42:43.970501   79927 cri.go:89] found id: ""
	I0806 08:42:43.970533   79927 logs.go:276] 0 containers: []
	W0806 08:42:43.970541   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:42:43.970546   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:42:43.970590   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:42:44.007344   79927 cri.go:89] found id: ""
	I0806 08:42:44.007373   79927 logs.go:276] 0 containers: []
	W0806 08:42:44.007382   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:42:44.007388   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:42:44.007434   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:42:44.043724   79927 cri.go:89] found id: ""
	I0806 08:42:44.043747   79927 logs.go:276] 0 containers: []
	W0806 08:42:44.043755   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:42:44.043761   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:42:44.043810   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:42:44.084642   79927 cri.go:89] found id: ""
	I0806 08:42:44.084682   79927 logs.go:276] 0 containers: []
	W0806 08:42:44.084694   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:42:44.084709   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:42:44.084724   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:42:44.157081   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:42:44.157125   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:42:44.175703   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:42:44.175746   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:42:44.275665   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:42:44.275699   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:42:44.275717   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:42:44.385031   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:42:44.385076   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0806 08:42:44.425573   79927 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0806 08:42:44.425616   79927 out.go:239] * 
	* 
	W0806 08:42:44.425683   79927 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0806 08:42:44.425713   79927 out.go:239] * 
	* 
	W0806 08:42:44.426633   79927 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0806 08:42:44.429961   79927 out.go:177] 
	W0806 08:42:44.431352   79927 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0806 08:42:44.431398   79927 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0806 08:42:44.431414   79927 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0806 08:42:44.432963   79927 out.go:177] 

                                                
                                                
** /stderr **
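The kubeadm output above recommends inspecting the kubelet and the CRI-O containers on the node. A minimal sketch of those checks run against this profile, assuming the VM is still reachable (these commands are illustrative and were not part of the recorded test run; the crictl invocation is the one quoted in the kubeadm message):

	out/minikube-linux-amd64 -p old-k8s-version-409903 ssh "sudo journalctl -xeu kubelet | tail -n 100"
	out/minikube-linux-amd64 -p old-k8s-version-409903 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"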
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-409903 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
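The Suggestion line in the log proposes retrying with an explicit kubelet cgroup driver. A hypothetical retry of the same start command with that flag added (the --extra-config value comes from the Suggestion in the output above and was not attempted in this run):

	out/minikube-linux-amd64 start -p old-k8s-version-409903 --memory=2200 --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd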
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-409903 -n old-k8s-version-409903
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-409903 -n old-k8s-version-409903: exit status 2 (230.01094ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-409903 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-409903 logs -n 25: (1.681883709s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p flannel-405163 sudo cat                             | flannel-405163               | jenkins | v1.33.1 | 06 Aug 24 08:25 UTC | 06 Aug 24 08:25 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p flannel-405163 sudo                                 | flannel-405163               | jenkins | v1.33.1 | 06 Aug 24 08:25 UTC | 06 Aug 24 08:25 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p flannel-405163 sudo                                 | flannel-405163               | jenkins | v1.33.1 | 06 Aug 24 08:25 UTC | 06 Aug 24 08:25 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p flannel-405163 sudo                                 | flannel-405163               | jenkins | v1.33.1 | 06 Aug 24 08:25 UTC | 06 Aug 24 08:25 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p flannel-405163 sudo find                            | flannel-405163               | jenkins | v1.33.1 | 06 Aug 24 08:25 UTC | 06 Aug 24 08:25 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p flannel-405163 sudo crio                            | flannel-405163               | jenkins | v1.33.1 | 06 Aug 24 08:25 UTC | 06 Aug 24 08:25 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p flannel-405163                                      | flannel-405163               | jenkins | v1.33.1 | 06 Aug 24 08:25 UTC | 06 Aug 24 08:25 UTC |
	| delete  | -p                                                     | disable-driver-mounts-607762 | jenkins | v1.33.1 | 06 Aug 24 08:25 UTC | 06 Aug 24 08:25 UTC |
	|         | disable-driver-mounts-607762                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-990751 | jenkins | v1.33.1 | 06 Aug 24 08:25 UTC | 06 Aug 24 08:27 UTC |
	|         | default-k8s-diff-port-990751                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-703340             | no-preload-703340            | jenkins | v1.33.1 | 06 Aug 24 08:26 UTC | 06 Aug 24 08:26 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-703340                                   | no-preload-703340            | jenkins | v1.33.1 | 06 Aug 24 08:26 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-324808            | embed-certs-324808           | jenkins | v1.33.1 | 06 Aug 24 08:26 UTC | 06 Aug 24 08:26 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-324808                                  | embed-certs-324808           | jenkins | v1.33.1 | 06 Aug 24 08:26 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-990751  | default-k8s-diff-port-990751 | jenkins | v1.33.1 | 06 Aug 24 08:27 UTC | 06 Aug 24 08:27 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-990751 | jenkins | v1.33.1 | 06 Aug 24 08:27 UTC |                     |
	|         | default-k8s-diff-port-990751                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-703340                  | no-preload-703340            | jenkins | v1.33.1 | 06 Aug 24 08:28 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-703340                                   | no-preload-703340            | jenkins | v1.33.1 | 06 Aug 24 08:28 UTC | 06 Aug 24 08:40 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0                      |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-409903        | old-k8s-version-409903       | jenkins | v1.33.1 | 06 Aug 24 08:28 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-324808                 | embed-certs-324808           | jenkins | v1.33.1 | 06 Aug 24 08:29 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-324808                                  | embed-certs-324808           | jenkins | v1.33.1 | 06 Aug 24 08:29 UTC | 06 Aug 24 08:38 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-990751       | default-k8s-diff-port-990751 | jenkins | v1.33.1 | 06 Aug 24 08:30 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-990751 | jenkins | v1.33.1 | 06 Aug 24 08:30 UTC | 06 Aug 24 08:38 UTC |
	|         | default-k8s-diff-port-990751                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-409903                              | old-k8s-version-409903       | jenkins | v1.33.1 | 06 Aug 24 08:30 UTC | 06 Aug 24 08:30 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-409903             | old-k8s-version-409903       | jenkins | v1.33.1 | 06 Aug 24 08:30 UTC | 06 Aug 24 08:30 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-409903                              | old-k8s-version-409903       | jenkins | v1.33.1 | 06 Aug 24 08:30 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/06 08:30:58
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0806 08:30:58.757517   79927 out.go:291] Setting OutFile to fd 1 ...
	I0806 08:30:58.757772   79927 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 08:30:58.757780   79927 out.go:304] Setting ErrFile to fd 2...
	I0806 08:30:58.757784   79927 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 08:30:58.757992   79927 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19370-13057/.minikube/bin
	I0806 08:30:58.758527   79927 out.go:298] Setting JSON to false
	I0806 08:30:58.759405   79927 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":8005,"bootTime":1722925054,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0806 08:30:58.759463   79927 start.go:139] virtualization: kvm guest
	I0806 08:30:58.762200   79927 out.go:177] * [old-k8s-version-409903] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0806 08:30:58.763461   79927 out.go:177]   - MINIKUBE_LOCATION=19370
	I0806 08:30:58.763514   79927 notify.go:220] Checking for updates...
	I0806 08:30:58.766017   79927 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0806 08:30:58.767180   79927 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19370-13057/kubeconfig
	I0806 08:30:58.768270   79927 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19370-13057/.minikube
	I0806 08:30:58.769303   79927 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0806 08:30:58.770385   79927 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0806 08:30:58.771931   79927 config.go:182] Loaded profile config "old-k8s-version-409903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0806 08:30:58.772319   79927 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:30:58.772407   79927 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:30:58.787845   79927 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39563
	I0806 08:30:58.788270   79927 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:30:58.788863   79927 main.go:141] libmachine: Using API Version  1
	I0806 08:30:58.788879   79927 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:30:58.789169   79927 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:30:58.789314   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .DriverName
	I0806 08:30:58.791190   79927 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0806 08:30:58.792487   79927 driver.go:392] Setting default libvirt URI to qemu:///system
	I0806 08:30:58.792776   79927 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:30:58.792811   79927 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:30:58.807347   79927 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33093
	I0806 08:30:58.807788   79927 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:30:58.808260   79927 main.go:141] libmachine: Using API Version  1
	I0806 08:30:58.808276   79927 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:30:58.808633   79927 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:30:58.808849   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .DriverName
	I0806 08:30:58.845305   79927 out.go:177] * Using the kvm2 driver based on existing profile
	I0806 08:30:58.846560   79927 start.go:297] selected driver: kvm2
	I0806 08:30:58.846580   79927 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-409903 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-409903 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.158 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 08:30:58.846684   79927 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0806 08:30:58.847335   79927 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 08:30:58.847395   79927 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19370-13057/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0806 08:30:58.862013   79927 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0806 08:30:58.862398   79927 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0806 08:30:58.862431   79927 cni.go:84] Creating CNI manager for ""
	I0806 08:30:58.862441   79927 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0806 08:30:58.862493   79927 start.go:340] cluster config:
	{Name:old-k8s-version-409903 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-409903 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.158 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 08:30:58.862609   79927 iso.go:125] acquiring lock: {Name:mke58d71de28babe305c5eb0c31c274e9713bb3f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 08:30:58.864484   79927 out.go:177] * Starting "old-k8s-version-409903" primary control-plane node in "old-k8s-version-409903" cluster
	I0806 08:30:58.865763   79927 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0806 08:30:58.865796   79927 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19370-13057/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0806 08:30:58.865803   79927 cache.go:56] Caching tarball of preloaded images
	I0806 08:30:58.865912   79927 preload.go:172] Found /home/jenkins/minikube-integration/19370-13057/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0806 08:30:58.865928   79927 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0806 08:30:58.866013   79927 profile.go:143] Saving config to /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/old-k8s-version-409903/config.json ...
	I0806 08:30:58.866228   79927 start.go:360] acquireMachinesLock for old-k8s-version-409903: {Name:mkc602ac9c8cc3360dcc0fcb8104303837aaab61 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0806 08:31:01.700664   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:31:04.772608   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:31:10.852634   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:31:13.924605   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:31:20.004661   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:31:23.076602   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:31:29.156621   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:31:32.228616   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:31:38.308684   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:31:41.380622   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:31:47.460632   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:31:50.532638   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:31:56.612610   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:31:59.684696   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:32:05.764642   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:32:08.836560   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:32:14.916653   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:32:17.988720   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:32:24.068597   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:32:27.140686   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:32:33.220630   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:32:36.292609   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:32:42.372654   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:32:45.444619   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:32:51.524637   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:32:54.596677   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:33:00.676664   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:33:03.748647   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:33:09.828643   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:33:12.900674   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:33:18.980642   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:33:22.052616   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:33:28.132639   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:33:31.204678   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
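The run of identical dial errors above is libmachine repeatedly probing the no-preload VM's SSH port (192.168.72.126:22) while the guest is unreachable; once the overall deadline passes, StartHost reports "provision: host is not running" and schedules a retry, as the lines further down show. A minimal sketch of that kind of probe loop in Go; the helper name, timeouts, and sleep interval are assumptions for illustration, not minikube's actual code:

    // waitForSSH keeps dialing addr until it answers or the deadline passes,
    // logging each failure much like the "Error dialing TCP" lines above.
    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func waitForSSH(addr string, deadline time.Duration) error {
        stop := time.Now().Add(deadline)
        for time.Now().Before(stop) {
            conn, err := net.DialTimeout("tcp", addr, 10*time.Second)
            if err == nil {
                conn.Close()
                return nil // port is reachable, provisioning can continue
            }
            fmt.Printf("Error dialing TCP: %v\n", err) // e.g. "connect: no route to host"
            time.Sleep(3 * time.Second)                // back off before the next probe
        }
        return fmt.Errorf("timed out waiting for %s", addr)
    }

    func main() {
        if err := waitForSSH("192.168.72.126:22", 5*time.Minute); err != nil {
            fmt.Println(err)
        }
    }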
	I0806 08:33:34.209491   79219 start.go:364] duration metric: took 4m19.788239383s to acquireMachinesLock for "embed-certs-324808"
	I0806 08:33:34.209554   79219 start.go:96] Skipping create...Using existing machine configuration
	I0806 08:33:34.209563   79219 fix.go:54] fixHost starting: 
	I0806 08:33:34.209914   79219 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:33:34.209945   79219 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:33:34.225180   79219 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44037
	I0806 08:33:34.225607   79219 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:33:34.226014   79219 main.go:141] libmachine: Using API Version  1
	I0806 08:33:34.226033   79219 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:33:34.226323   79219 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:33:34.226522   79219 main.go:141] libmachine: (embed-certs-324808) Calling .DriverName
	I0806 08:33:34.226684   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetState
	I0806 08:33:34.228172   79219 fix.go:112] recreateIfNeeded on embed-certs-324808: state=Stopped err=<nil>
	I0806 08:33:34.228200   79219 main.go:141] libmachine: (embed-certs-324808) Calling .DriverName
	W0806 08:33:34.228355   79219 fix.go:138] unexpected machine state, will restart: <nil>
	I0806 08:33:34.231261   79219 out.go:177] * Restarting existing kvm2 VM for "embed-certs-324808" ...
	I0806 08:33:34.232543   79219 main.go:141] libmachine: (embed-certs-324808) Calling .Start
	I0806 08:33:34.232728   79219 main.go:141] libmachine: (embed-certs-324808) Ensuring networks are active...
	I0806 08:33:34.233475   79219 main.go:141] libmachine: (embed-certs-324808) Ensuring network default is active
	I0806 08:33:34.233819   79219 main.go:141] libmachine: (embed-certs-324808) Ensuring network mk-embed-certs-324808 is active
	I0806 08:33:34.234291   79219 main.go:141] libmachine: (embed-certs-324808) Getting domain xml...
	I0806 08:33:34.235129   79219 main.go:141] libmachine: (embed-certs-324808) Creating domain...
	I0806 08:33:34.206591   78922 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0806 08:33:34.206636   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetMachineName
	I0806 08:33:34.206950   78922 buildroot.go:166] provisioning hostname "no-preload-703340"
	I0806 08:33:34.206975   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetMachineName
	I0806 08:33:34.207187   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHHostname
	I0806 08:33:34.209379   78922 machine.go:97] duration metric: took 4m37.416826471s to provisionDockerMachine
	I0806 08:33:34.209416   78922 fix.go:56] duration metric: took 4m37.443014145s for fixHost
	I0806 08:33:34.209424   78922 start.go:83] releasing machines lock for "no-preload-703340", held for 4m37.443037097s
	W0806 08:33:34.209442   78922 start.go:714] error starting host: provision: host is not running
	W0806 08:33:34.209538   78922 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0806 08:33:34.209545   78922 start.go:729] Will try again in 5 seconds ...
	I0806 08:33:35.434516   79219 main.go:141] libmachine: (embed-certs-324808) Waiting to get IP...
	I0806 08:33:35.435546   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:35.435973   79219 main.go:141] libmachine: (embed-certs-324808) DBG | unable to find current IP address of domain embed-certs-324808 in network mk-embed-certs-324808
	I0806 08:33:35.436045   79219 main.go:141] libmachine: (embed-certs-324808) DBG | I0806 08:33:35.435966   80472 retry.go:31] will retry after 190.247547ms: waiting for machine to come up
	I0806 08:33:35.627362   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:35.627768   79219 main.go:141] libmachine: (embed-certs-324808) DBG | unable to find current IP address of domain embed-certs-324808 in network mk-embed-certs-324808
	I0806 08:33:35.627803   79219 main.go:141] libmachine: (embed-certs-324808) DBG | I0806 08:33:35.627742   80472 retry.go:31] will retry after 306.927129ms: waiting for machine to come up
	I0806 08:33:35.936242   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:35.936714   79219 main.go:141] libmachine: (embed-certs-324808) DBG | unable to find current IP address of domain embed-certs-324808 in network mk-embed-certs-324808
	I0806 08:33:35.936738   79219 main.go:141] libmachine: (embed-certs-324808) DBG | I0806 08:33:35.936661   80472 retry.go:31] will retry after 363.399025ms: waiting for machine to come up
	I0806 08:33:36.301454   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:36.301934   79219 main.go:141] libmachine: (embed-certs-324808) DBG | unable to find current IP address of domain embed-certs-324808 in network mk-embed-certs-324808
	I0806 08:33:36.301967   79219 main.go:141] libmachine: (embed-certs-324808) DBG | I0806 08:33:36.301879   80472 retry.go:31] will retry after 495.07719ms: waiting for machine to come up
	I0806 08:33:36.798489   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:36.798930   79219 main.go:141] libmachine: (embed-certs-324808) DBG | unable to find current IP address of domain embed-certs-324808 in network mk-embed-certs-324808
	I0806 08:33:36.798958   79219 main.go:141] libmachine: (embed-certs-324808) DBG | I0806 08:33:36.798870   80472 retry.go:31] will retry after 755.286107ms: waiting for machine to come up
	I0806 08:33:37.555799   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:37.556155   79219 main.go:141] libmachine: (embed-certs-324808) DBG | unable to find current IP address of domain embed-certs-324808 in network mk-embed-certs-324808
	I0806 08:33:37.556187   79219 main.go:141] libmachine: (embed-certs-324808) DBG | I0806 08:33:37.556094   80472 retry.go:31] will retry after 600.530399ms: waiting for machine to come up
	I0806 08:33:38.157817   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:38.158303   79219 main.go:141] libmachine: (embed-certs-324808) DBG | unable to find current IP address of domain embed-certs-324808 in network mk-embed-certs-324808
	I0806 08:33:38.158334   79219 main.go:141] libmachine: (embed-certs-324808) DBG | I0806 08:33:38.158241   80472 retry.go:31] will retry after 943.262354ms: waiting for machine to come up
	I0806 08:33:39.103226   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:39.103615   79219 main.go:141] libmachine: (embed-certs-324808) DBG | unable to find current IP address of domain embed-certs-324808 in network mk-embed-certs-324808
	I0806 08:33:39.103645   79219 main.go:141] libmachine: (embed-certs-324808) DBG | I0806 08:33:39.103563   80472 retry.go:31] will retry after 1.100862399s: waiting for machine to come up
	I0806 08:33:39.211204   78922 start.go:360] acquireMachinesLock for no-preload-703340: {Name:mkc602ac9c8cc3360dcc0fcb8104303837aaab61 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0806 08:33:40.205796   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:40.206130   79219 main.go:141] libmachine: (embed-certs-324808) DBG | unable to find current IP address of domain embed-certs-324808 in network mk-embed-certs-324808
	I0806 08:33:40.206160   79219 main.go:141] libmachine: (embed-certs-324808) DBG | I0806 08:33:40.206077   80472 retry.go:31] will retry after 1.405550913s: waiting for machine to come up
	I0806 08:33:41.613629   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:41.614060   79219 main.go:141] libmachine: (embed-certs-324808) DBG | unable to find current IP address of domain embed-certs-324808 in network mk-embed-certs-324808
	I0806 08:33:41.614092   79219 main.go:141] libmachine: (embed-certs-324808) DBG | I0806 08:33:41.614001   80472 retry.go:31] will retry after 2.190867783s: waiting for machine to come up
	I0806 08:33:43.807417   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:43.808017   79219 main.go:141] libmachine: (embed-certs-324808) DBG | unable to find current IP address of domain embed-certs-324808 in network mk-embed-certs-324808
	I0806 08:33:43.808049   79219 main.go:141] libmachine: (embed-certs-324808) DBG | I0806 08:33:43.807958   80472 retry.go:31] will retry after 2.233558066s: waiting for machine to come up
	I0806 08:33:46.043427   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:46.043803   79219 main.go:141] libmachine: (embed-certs-324808) DBG | unable to find current IP address of domain embed-certs-324808 in network mk-embed-certs-324808
	I0806 08:33:46.043830   79219 main.go:141] libmachine: (embed-certs-324808) DBG | I0806 08:33:46.043754   80472 retry.go:31] will retry after 3.171710196s: waiting for machine to come up
	I0806 08:33:49.218967   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:49.219319   79219 main.go:141] libmachine: (embed-certs-324808) DBG | unable to find current IP address of domain embed-certs-324808 in network mk-embed-certs-324808
	I0806 08:33:49.219349   79219 main.go:141] libmachine: (embed-certs-324808) DBG | I0806 08:33:49.219266   80472 retry.go:31] will retry after 4.141473633s: waiting for machine to come up
	I0806 08:33:53.361887   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:53.362300   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has current primary IP address 192.168.39.190 and MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:53.362315   79219 main.go:141] libmachine: (embed-certs-324808) Found IP for machine: 192.168.39.190
	I0806 08:33:53.362324   79219 main.go:141] libmachine: (embed-certs-324808) Reserving static IP address...
	I0806 08:33:53.362762   79219 main.go:141] libmachine: (embed-certs-324808) Reserved static IP address: 192.168.39.190
	I0806 08:33:53.362812   79219 main.go:141] libmachine: (embed-certs-324808) DBG | found host DHCP lease matching {name: "embed-certs-324808", mac: "52:54:00:b9:df:8c", ip: "192.168.39.190"} in network mk-embed-certs-324808: {Iface:virbr2 ExpiryTime:2024-08-06 09:33:45 +0000 UTC Type:0 Mac:52:54:00:b9:df:8c Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-324808 Clientid:01:52:54:00:b9:df:8c}
	I0806 08:33:53.362826   79219 main.go:141] libmachine: (embed-certs-324808) Waiting for SSH to be available...
	I0806 08:33:53.362858   79219 main.go:141] libmachine: (embed-certs-324808) DBG | skip adding static IP to network mk-embed-certs-324808 - found existing host DHCP lease matching {name: "embed-certs-324808", mac: "52:54:00:b9:df:8c", ip: "192.168.39.190"}
	I0806 08:33:53.362871   79219 main.go:141] libmachine: (embed-certs-324808) DBG | Getting to WaitForSSH function...
	I0806 08:33:53.364761   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:53.365170   79219 main.go:141] libmachine: (embed-certs-324808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:df:8c", ip: ""} in network mk-embed-certs-324808: {Iface:virbr2 ExpiryTime:2024-08-06 09:33:45 +0000 UTC Type:0 Mac:52:54:00:b9:df:8c Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-324808 Clientid:01:52:54:00:b9:df:8c}
	I0806 08:33:53.365204   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined IP address 192.168.39.190 and MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:53.365301   79219 main.go:141] libmachine: (embed-certs-324808) DBG | Using SSH client type: external
	I0806 08:33:53.365333   79219 main.go:141] libmachine: (embed-certs-324808) DBG | Using SSH private key: /home/jenkins/minikube-integration/19370-13057/.minikube/machines/embed-certs-324808/id_rsa (-rw-------)
	I0806 08:33:53.365366   79219 main.go:141] libmachine: (embed-certs-324808) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.190 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19370-13057/.minikube/machines/embed-certs-324808/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0806 08:33:53.365378   79219 main.go:141] libmachine: (embed-certs-324808) DBG | About to run SSH command:
	I0806 08:33:53.365386   79219 main.go:141] libmachine: (embed-certs-324808) DBG | exit 0
	I0806 08:33:53.492385   79219 main.go:141] libmachine: (embed-certs-324808) DBG | SSH cmd err, output: <nil>: 
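The WaitForSSH exchange above shells out to the system ssh binary with the flag set recorded in the log. A hedged sketch of driving that external client from Go with os/exec; runExternalSSH is a hypothetical helper, while the flags, user, key path, and address are copied from the log lines above:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // runExternalSSH runs one command on the guest through /usr/bin/ssh,
    // mirroring the "Using SSH client type: external" path in the log.
    func runExternalSSH(ip, keyPath, command string) (string, error) {
        args := []string{
            "-F", "/dev/null",
            "-o", "ConnectionAttempts=3",
            "-o", "ConnectTimeout=10",
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-o", "IdentitiesOnly=yes",
            "-i", keyPath,
            "-p", "22",
            "docker@" + ip,
            command,
        }
        out, err := exec.Command("/usr/bin/ssh", args...).CombinedOutput()
        return string(out), err
    }

    func main() {
        out, err := runExternalSSH("192.168.39.190",
            "/home/jenkins/minikube-integration/19370-13057/.minikube/machines/embed-certs-324808/id_rsa",
            "exit 0") // same liveness probe the log runs
        fmt.Println(out, err)
    }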
	I0806 08:33:53.492739   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetConfigRaw
	I0806 08:33:53.493342   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetIP
	I0806 08:33:53.495371   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:53.495663   79219 main.go:141] libmachine: (embed-certs-324808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:df:8c", ip: ""} in network mk-embed-certs-324808: {Iface:virbr2 ExpiryTime:2024-08-06 09:33:45 +0000 UTC Type:0 Mac:52:54:00:b9:df:8c Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-324808 Clientid:01:52:54:00:b9:df:8c}
	I0806 08:33:53.495693   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined IP address 192.168.39.190 and MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:53.495982   79219 profile.go:143] Saving config to /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/embed-certs-324808/config.json ...
	I0806 08:33:53.496215   79219 machine.go:94] provisionDockerMachine start ...
	I0806 08:33:53.496238   79219 main.go:141] libmachine: (embed-certs-324808) Calling .DriverName
	I0806 08:33:53.496460   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHHostname
	I0806 08:33:53.498625   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:53.498924   79219 main.go:141] libmachine: (embed-certs-324808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:df:8c", ip: ""} in network mk-embed-certs-324808: {Iface:virbr2 ExpiryTime:2024-08-06 09:33:45 +0000 UTC Type:0 Mac:52:54:00:b9:df:8c Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-324808 Clientid:01:52:54:00:b9:df:8c}
	I0806 08:33:53.498945   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined IP address 192.168.39.190 and MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:53.499096   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHPort
	I0806 08:33:53.499265   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHKeyPath
	I0806 08:33:53.499394   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHKeyPath
	I0806 08:33:53.499593   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHUsername
	I0806 08:33:53.499815   79219 main.go:141] libmachine: Using SSH client type: native
	I0806 08:33:53.500050   79219 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.190 22 <nil> <nil>}
	I0806 08:33:53.500063   79219 main.go:141] libmachine: About to run SSH command:
	hostname
	I0806 08:33:53.612698   79219 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0806 08:33:53.612725   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetMachineName
	I0806 08:33:53.612979   79219 buildroot.go:166] provisioning hostname "embed-certs-324808"
	I0806 08:33:53.613002   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetMachineName
	I0806 08:33:53.613164   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHHostname
	I0806 08:33:53.615731   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:53.616123   79219 main.go:141] libmachine: (embed-certs-324808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:df:8c", ip: ""} in network mk-embed-certs-324808: {Iface:virbr2 ExpiryTime:2024-08-06 09:33:45 +0000 UTC Type:0 Mac:52:54:00:b9:df:8c Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-324808 Clientid:01:52:54:00:b9:df:8c}
	I0806 08:33:53.616157   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined IP address 192.168.39.190 and MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:53.616316   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHPort
	I0806 08:33:53.616530   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHKeyPath
	I0806 08:33:53.616676   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHKeyPath
	I0806 08:33:53.616810   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHUsername
	I0806 08:33:53.616961   79219 main.go:141] libmachine: Using SSH client type: native
	I0806 08:33:53.617124   79219 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.190 22 <nil> <nil>}
	I0806 08:33:53.617138   79219 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-324808 && echo "embed-certs-324808" | sudo tee /etc/hostname
	I0806 08:33:53.742677   79219 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-324808
	
	I0806 08:33:53.742722   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHHostname
	I0806 08:33:53.745670   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:53.745984   79219 main.go:141] libmachine: (embed-certs-324808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:df:8c", ip: ""} in network mk-embed-certs-324808: {Iface:virbr2 ExpiryTime:2024-08-06 09:33:45 +0000 UTC Type:0 Mac:52:54:00:b9:df:8c Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-324808 Clientid:01:52:54:00:b9:df:8c}
	I0806 08:33:53.746015   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined IP address 192.168.39.190 and MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:53.746195   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHPort
	I0806 08:33:53.746380   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHKeyPath
	I0806 08:33:53.746545   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHKeyPath
	I0806 08:33:53.746676   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHUsername
	I0806 08:33:53.746867   79219 main.go:141] libmachine: Using SSH client type: native
	I0806 08:33:53.747033   79219 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.190 22 <nil> <nil>}
	I0806 08:33:53.747048   79219 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-324808' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-324808/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-324808' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0806 08:33:53.869596   79219 main.go:141] libmachine: SSH cmd err, output: <nil>: 
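The hostname provisioning above goes through the "native" SSH client type, i.e. an in-process Go SSH connection rather than the external binary. A rough sketch of that path using golang.org/x/crypto/ssh; the helper name, timeout, and host-key handling are illustrative assumptions, and only the address, user, key path, and command come from the log:

    package main

    import (
        "fmt"
        "os"
        "time"

        "golang.org/x/crypto/ssh"
    )

    // runNativeSSH opens an SSH session with the machine's private key and
    // runs a single remote command, returning its combined output.
    func runNativeSSH(addr, user, keyFile, cmd string) (string, error) {
        key, err := os.ReadFile(keyFile)
        if err != nil {
            return "", err
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            return "", err
        }
        cfg := &ssh.ClientConfig{
            User:            user,
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for throwaway test VMs
            Timeout:         10 * time.Second,
        }
        client, err := ssh.Dial("tcp", addr, cfg)
        if err != nil {
            return "", err
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            return "", err
        }
        defer sess.Close()
        out, err := sess.CombinedOutput(cmd)
        return string(out), err
    }

    func main() {
        out, err := runNativeSSH("192.168.39.190:22", "docker",
            "/home/jenkins/minikube-integration/19370-13057/.minikube/machines/embed-certs-324808/id_rsa",
            `sudo hostname embed-certs-324808 && echo "embed-certs-324808" | sudo tee /etc/hostname`)
        fmt.Println(out, err)
    }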
	I0806 08:33:53.869629   79219 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19370-13057/.minikube CaCertPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19370-13057/.minikube}
	I0806 08:33:53.869652   79219 buildroot.go:174] setting up certificates
	I0806 08:33:53.869663   79219 provision.go:84] configureAuth start
	I0806 08:33:53.869680   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetMachineName
	I0806 08:33:53.869953   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetIP
	I0806 08:33:53.872701   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:53.873071   79219 main.go:141] libmachine: (embed-certs-324808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:df:8c", ip: ""} in network mk-embed-certs-324808: {Iface:virbr2 ExpiryTime:2024-08-06 09:33:45 +0000 UTC Type:0 Mac:52:54:00:b9:df:8c Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-324808 Clientid:01:52:54:00:b9:df:8c}
	I0806 08:33:53.873105   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined IP address 192.168.39.190 and MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:53.873246   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHHostname
	I0806 08:33:53.875521   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:53.875928   79219 main.go:141] libmachine: (embed-certs-324808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:df:8c", ip: ""} in network mk-embed-certs-324808: {Iface:virbr2 ExpiryTime:2024-08-06 09:33:45 +0000 UTC Type:0 Mac:52:54:00:b9:df:8c Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-324808 Clientid:01:52:54:00:b9:df:8c}
	I0806 08:33:53.875959   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined IP address 192.168.39.190 and MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:53.876009   79219 provision.go:143] copyHostCerts
	I0806 08:33:53.876077   79219 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-13057/.minikube/cert.pem, removing ...
	I0806 08:33:53.876089   79219 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-13057/.minikube/cert.pem
	I0806 08:33:53.876170   79219 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19370-13057/.minikube/cert.pem (1123 bytes)
	I0806 08:33:53.876287   79219 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-13057/.minikube/key.pem, removing ...
	I0806 08:33:53.876297   79219 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-13057/.minikube/key.pem
	I0806 08:33:53.876337   79219 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19370-13057/.minikube/key.pem (1675 bytes)
	I0806 08:33:53.876450   79219 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-13057/.minikube/ca.pem, removing ...
	I0806 08:33:53.876460   79219 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-13057/.minikube/ca.pem
	I0806 08:33:53.876499   79219 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19370-13057/.minikube/ca.pem (1078 bytes)
	I0806 08:33:53.876578   79219 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19370-13057/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca-key.pem org=jenkins.embed-certs-324808 san=[127.0.0.1 192.168.39.190 embed-certs-324808 localhost minikube]
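The step above issues a server certificate for the SANs listed in the log (127.0.0.1, 192.168.39.190, embed-certs-324808, localhost, minikube), signed by the minikube CA. A self-contained, hedged sketch with Go's crypto/x509; the throwaway in-memory CA, ECDSA keys, validity periods, and helper name are assumptions for illustration, whereas the real flow signs with the ca.pem / ca-key.pem under .minikube/certs:

    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "fmt"
        "math/big"
        "net"
        "time"
    )

    // issueServerCert signs a server certificate carrying the SANs from the log.
    func issueServerCert(caCert *x509.Certificate, caKey *ecdsa.PrivateKey) ([]byte, *ecdsa.PrivateKey, error) {
        key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        if err != nil {
            return nil, nil, err
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-324808"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"embed-certs-324808", "localhost", "minikube"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.190")},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
        if err != nil {
            return nil, nil, err
        }
        return pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), key, nil
    }

    func main() {
        // Throwaway CA so the sketch runs standalone.
        caKey, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        if err != nil {
            panic(err)
        }
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(10 * 365 * 24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        if err != nil {
            panic(err)
        }
        caCert, err := x509.ParseCertificate(caDER)
        if err != nil {
            panic(err)
        }
        certPEM, _, err := issueServerCert(caCert, caKey)
        if err != nil {
            panic(err)
        }
        fmt.Printf("%s", certPEM)
    }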
	I0806 08:33:53.961275   79219 provision.go:177] copyRemoteCerts
	I0806 08:33:53.961344   79219 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0806 08:33:53.961375   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHHostname
	I0806 08:33:53.964004   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:53.964356   79219 main.go:141] libmachine: (embed-certs-324808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:df:8c", ip: ""} in network mk-embed-certs-324808: {Iface:virbr2 ExpiryTime:2024-08-06 09:33:45 +0000 UTC Type:0 Mac:52:54:00:b9:df:8c Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-324808 Clientid:01:52:54:00:b9:df:8c}
	I0806 08:33:53.964404   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined IP address 192.168.39.190 and MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:53.964596   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHPort
	I0806 08:33:53.964829   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHKeyPath
	I0806 08:33:53.964983   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHUsername
	I0806 08:33:53.965115   79219 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/embed-certs-324808/id_rsa Username:docker}
	I0806 08:33:54.050335   79219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0806 08:33:54.074977   79219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0806 08:33:54.100201   79219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0806 08:33:54.123356   79219 provision.go:87] duration metric: took 253.6775ms to configureAuth
	I0806 08:33:54.123386   79219 buildroot.go:189] setting minikube options for container-runtime
	I0806 08:33:54.123556   79219 config.go:182] Loaded profile config "embed-certs-324808": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0806 08:33:54.123634   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHHostname
	I0806 08:33:54.126279   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:54.126721   79219 main.go:141] libmachine: (embed-certs-324808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:df:8c", ip: ""} in network mk-embed-certs-324808: {Iface:virbr2 ExpiryTime:2024-08-06 09:33:45 +0000 UTC Type:0 Mac:52:54:00:b9:df:8c Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-324808 Clientid:01:52:54:00:b9:df:8c}
	I0806 08:33:54.126746   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined IP address 192.168.39.190 and MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:54.126941   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHPort
	I0806 08:33:54.127143   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHKeyPath
	I0806 08:33:54.127309   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHKeyPath
	I0806 08:33:54.127444   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHUsername
	I0806 08:33:54.127611   79219 main.go:141] libmachine: Using SSH client type: native
	I0806 08:33:54.127785   79219 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.190 22 <nil> <nil>}
	I0806 08:33:54.127800   79219 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0806 08:33:54.653395   79614 start.go:364] duration metric: took 3m35.719137455s to acquireMachinesLock for "default-k8s-diff-port-990751"
	I0806 08:33:54.653444   79614 start.go:96] Skipping create...Using existing machine configuration
	I0806 08:33:54.653452   79614 fix.go:54] fixHost starting: 
	I0806 08:33:54.653812   79614 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:33:54.653844   79614 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:33:54.673686   79614 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35867
	I0806 08:33:54.674123   79614 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:33:54.674581   79614 main.go:141] libmachine: Using API Version  1
	I0806 08:33:54.674604   79614 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:33:54.674950   79614 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:33:54.675108   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .DriverName
	I0806 08:33:54.675246   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetState
	I0806 08:33:54.676908   79614 fix.go:112] recreateIfNeeded on default-k8s-diff-port-990751: state=Stopped err=<nil>
	I0806 08:33:54.676939   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .DriverName
	W0806 08:33:54.677097   79614 fix.go:138] unexpected machine state, will restart: <nil>
	I0806 08:33:54.678912   79614 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-990751" ...
	I0806 08:33:54.403758   79219 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0806 08:33:54.403785   79219 machine.go:97] duration metric: took 907.554493ms to provisionDockerMachine
	I0806 08:33:54.403799   79219 start.go:293] postStartSetup for "embed-certs-324808" (driver="kvm2")
	I0806 08:33:54.403814   79219 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0806 08:33:54.403854   79219 main.go:141] libmachine: (embed-certs-324808) Calling .DriverName
	I0806 08:33:54.404139   79219 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0806 08:33:54.404168   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHHostname
	I0806 08:33:54.406768   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:54.407114   79219 main.go:141] libmachine: (embed-certs-324808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:df:8c", ip: ""} in network mk-embed-certs-324808: {Iface:virbr2 ExpiryTime:2024-08-06 09:33:45 +0000 UTC Type:0 Mac:52:54:00:b9:df:8c Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-324808 Clientid:01:52:54:00:b9:df:8c}
	I0806 08:33:54.407142   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined IP address 192.168.39.190 and MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:54.407328   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHPort
	I0806 08:33:54.407510   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHKeyPath
	I0806 08:33:54.407647   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHUsername
	I0806 08:33:54.407765   79219 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/embed-certs-324808/id_rsa Username:docker}
	I0806 08:33:54.495482   79219 ssh_runner.go:195] Run: cat /etc/os-release
	I0806 08:33:54.499691   79219 info.go:137] Remote host: Buildroot 2023.02.9
	I0806 08:33:54.499725   79219 filesync.go:126] Scanning /home/jenkins/minikube-integration/19370-13057/.minikube/addons for local assets ...
	I0806 08:33:54.499915   79219 filesync.go:126] Scanning /home/jenkins/minikube-integration/19370-13057/.minikube/files for local assets ...
	I0806 08:33:54.500104   79219 filesync.go:149] local asset: /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem -> 202462.pem in /etc/ssl/certs
	I0806 08:33:54.500243   79219 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0806 08:33:54.509690   79219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem --> /etc/ssl/certs/202462.pem (1708 bytes)
	I0806 08:33:54.534233   79219 start.go:296] duration metric: took 130.420444ms for postStartSetup
	I0806 08:33:54.534282   79219 fix.go:56] duration metric: took 20.324719602s for fixHost
	I0806 08:33:54.534307   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHHostname
	I0806 08:33:54.536871   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:54.537202   79219 main.go:141] libmachine: (embed-certs-324808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:df:8c", ip: ""} in network mk-embed-certs-324808: {Iface:virbr2 ExpiryTime:2024-08-06 09:33:45 +0000 UTC Type:0 Mac:52:54:00:b9:df:8c Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-324808 Clientid:01:52:54:00:b9:df:8c}
	I0806 08:33:54.537234   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined IP address 192.168.39.190 and MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:54.537393   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHPort
	I0806 08:33:54.537562   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHKeyPath
	I0806 08:33:54.537746   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHKeyPath
	I0806 08:33:54.537911   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHUsername
	I0806 08:33:54.538105   79219 main.go:141] libmachine: Using SSH client type: native
	I0806 08:33:54.538285   79219 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.190 22 <nil> <nil>}
	I0806 08:33:54.538295   79219 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0806 08:33:54.653264   79219 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722933234.630593565
	
	I0806 08:33:54.653288   79219 fix.go:216] guest clock: 1722933234.630593565
	I0806 08:33:54.653298   79219 fix.go:229] Guest: 2024-08-06 08:33:54.630593565 +0000 UTC Remote: 2024-08-06 08:33:54.534287484 +0000 UTC m=+280.251105524 (delta=96.306081ms)
	I0806 08:33:54.653327   79219 fix.go:200] guest clock delta is within tolerance: 96.306081ms
	I0806 08:33:54.653337   79219 start.go:83] releasing machines lock for "embed-certs-324808", held for 20.443802529s
	I0806 08:33:54.653361   79219 main.go:141] libmachine: (embed-certs-324808) Calling .DriverName
	I0806 08:33:54.653640   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetIP
	I0806 08:33:54.656332   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:54.656741   79219 main.go:141] libmachine: (embed-certs-324808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:df:8c", ip: ""} in network mk-embed-certs-324808: {Iface:virbr2 ExpiryTime:2024-08-06 09:33:45 +0000 UTC Type:0 Mac:52:54:00:b9:df:8c Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-324808 Clientid:01:52:54:00:b9:df:8c}
	I0806 08:33:54.656766   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined IP address 192.168.39.190 and MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:54.656933   79219 main.go:141] libmachine: (embed-certs-324808) Calling .DriverName
	I0806 08:33:54.657443   79219 main.go:141] libmachine: (embed-certs-324808) Calling .DriverName
	I0806 08:33:54.657623   79219 main.go:141] libmachine: (embed-certs-324808) Calling .DriverName
	I0806 08:33:54.657705   79219 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0806 08:33:54.657748   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHHostname
	I0806 08:33:54.657857   79219 ssh_runner.go:195] Run: cat /version.json
	I0806 08:33:54.657877   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHHostname
	I0806 08:33:54.660302   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:54.660482   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:54.660694   79219 main.go:141] libmachine: (embed-certs-324808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:df:8c", ip: ""} in network mk-embed-certs-324808: {Iface:virbr2 ExpiryTime:2024-08-06 09:33:45 +0000 UTC Type:0 Mac:52:54:00:b9:df:8c Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-324808 Clientid:01:52:54:00:b9:df:8c}
	I0806 08:33:54.660729   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined IP address 192.168.39.190 and MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:54.660823   79219 main.go:141] libmachine: (embed-certs-324808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:df:8c", ip: ""} in network mk-embed-certs-324808: {Iface:virbr2 ExpiryTime:2024-08-06 09:33:45 +0000 UTC Type:0 Mac:52:54:00:b9:df:8c Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-324808 Clientid:01:52:54:00:b9:df:8c}
	I0806 08:33:54.660831   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHPort
	I0806 08:33:54.660844   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined IP address 192.168.39.190 and MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:54.661011   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHPort
	I0806 08:33:54.661094   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHKeyPath
	I0806 08:33:54.661165   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHKeyPath
	I0806 08:33:54.661223   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHUsername
	I0806 08:33:54.661280   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHUsername
	I0806 08:33:54.661338   79219 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/embed-certs-324808/id_rsa Username:docker}
	I0806 08:33:54.661386   79219 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/embed-certs-324808/id_rsa Username:docker}
	I0806 08:33:54.770935   79219 ssh_runner.go:195] Run: systemctl --version
	I0806 08:33:54.777166   79219 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0806 08:33:54.920252   79219 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0806 08:33:54.927740   79219 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0806 08:33:54.927829   79219 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0806 08:33:54.949691   79219 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0806 08:33:54.949720   79219 start.go:495] detecting cgroup driver to use...
	I0806 08:33:54.949794   79219 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0806 08:33:54.967575   79219 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0806 08:33:54.982019   79219 docker.go:217] disabling cri-docker service (if available) ...
	I0806 08:33:54.982094   79219 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0806 08:33:55.001629   79219 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0806 08:33:55.018332   79219 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0806 08:33:55.139207   79219 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0806 08:33:55.302225   79219 docker.go:233] disabling docker service ...
	I0806 08:33:55.302300   79219 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0806 08:33:55.321470   79219 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0806 08:33:55.334925   79219 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0806 08:33:55.456388   79219 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0806 08:33:55.579725   79219 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0806 08:33:55.594362   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0806 08:33:55.613084   79219 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0806 08:33:55.613165   79219 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:33:55.623414   79219 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0806 08:33:55.623487   79219 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:33:55.633837   79219 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:33:55.645893   79219 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:33:55.658623   79219 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0806 08:33:55.670016   79219 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:33:55.681227   79219 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:33:55.700047   79219 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
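Taken together, the sed commands above should leave the CRI-O drop-in at /etc/crio/crio.conf.d/02-crio.conf with roughly the following settings; this fragment is reconstructed from the commands themselves, not captured from the VM:

    pause_image = "registry.k8s.io/pause:3.9"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]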
	I0806 08:33:55.711409   79219 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0806 08:33:55.721076   79219 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0806 08:33:55.721138   79219 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0806 08:33:55.734470   79219 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0806 08:33:55.750919   79219 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 08:33:55.880306   79219 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0806 08:33:56.031323   79219 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0806 08:33:56.031401   79219 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0806 08:33:56.036456   79219 start.go:563] Will wait 60s for crictl version
	I0806 08:33:56.036522   79219 ssh_runner.go:195] Run: which crictl
	I0806 08:33:56.040542   79219 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0806 08:33:56.081166   79219 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0806 08:33:56.081258   79219 ssh_runner.go:195] Run: crio --version
	I0806 08:33:56.109963   79219 ssh_runner.go:195] Run: crio --version
	I0806 08:33:56.141118   79219 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0806 08:33:54.680267   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .Start
	I0806 08:33:54.680448   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Ensuring networks are active...
	I0806 08:33:54.681411   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Ensuring network default is active
	I0806 08:33:54.681778   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Ensuring network mk-default-k8s-diff-port-990751 is active
	I0806 08:33:54.682205   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Getting domain xml...
	I0806 08:33:54.683028   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Creating domain...
	I0806 08:33:55.964313   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Waiting to get IP...
	I0806 08:33:55.965483   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:33:55.965965   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | unable to find current IP address of domain default-k8s-diff-port-990751 in network mk-default-k8s-diff-port-990751
	I0806 08:33:55.966053   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | I0806 08:33:55.965953   80608 retry.go:31] will retry after 294.177981ms: waiting for machine to come up
	I0806 08:33:56.261404   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:33:56.262032   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | unable to find current IP address of domain default-k8s-diff-port-990751 in network mk-default-k8s-diff-port-990751
	I0806 08:33:56.262059   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | I0806 08:33:56.261969   80608 retry.go:31] will retry after 375.14393ms: waiting for machine to come up
	I0806 08:33:56.638714   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:33:56.639150   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | unable to find current IP address of domain default-k8s-diff-port-990751 in network mk-default-k8s-diff-port-990751
	I0806 08:33:56.639178   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | I0806 08:33:56.639118   80608 retry.go:31] will retry after 439.582122ms: waiting for machine to come up
	I0806 08:33:57.080785   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:33:57.081335   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | unable to find current IP address of domain default-k8s-diff-port-990751 in network mk-default-k8s-diff-port-990751
	I0806 08:33:57.081373   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | I0806 08:33:57.081299   80608 retry.go:31] will retry after 523.875054ms: waiting for machine to come up
	I0806 08:33:57.606731   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:33:57.607220   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | unable to find current IP address of domain default-k8s-diff-port-990751 in network mk-default-k8s-diff-port-990751
	I0806 08:33:57.607250   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | I0806 08:33:57.607178   80608 retry.go:31] will retry after 727.363483ms: waiting for machine to come up
	I0806 08:33:58.336352   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:33:58.336852   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | unable to find current IP address of domain default-k8s-diff-port-990751 in network mk-default-k8s-diff-port-990751
	I0806 08:33:58.336885   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | I0806 08:33:58.336782   80608 retry.go:31] will retry after 662.304027ms: waiting for machine to come up
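
The repeated "unable to find current IP address ... will retry after ..." lines are libmachine polling the libvirt DHCP leases with a growing delay until the default-k8s-diff-port-990751 domain reports an address. A simplified Go sketch of that retry loop; lookupIP and the timings are illustrative stand-ins, not minikube's actual retry.go code:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP is a stand-in for asking libvirt for the domain's DHCP lease.
func lookupIP(domain string) (string, error) {
	return "", errors.New("unable to find current IP address of domain " + domain)
}

func waitForIP(domain string, deadline time.Duration) (string, error) {
	start := time.Now()
	wait := 250 * time.Millisecond
	for time.Since(start) < deadline {
		if ip, err := lookupIP(domain); err == nil {
			return ip, nil
		}
		// Jittered, growing delay, like the "will retry after 294ms / 375ms / ..." lines above.
		sleep := wait + time.Duration(rand.Int63n(int64(wait/2)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		wait = wait * 3 / 2
	}
	return "", fmt.Errorf("timed out waiting for %s to get an IP", domain)
}

func main() {
	if _, err := waitForIP("default-k8s-diff-port-990751", 5*time.Second); err != nil {
		fmt.Println(err)
	}
}
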
	I0806 08:33:56.142523   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetIP
	I0806 08:33:56.145641   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:56.145997   79219 main.go:141] libmachine: (embed-certs-324808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:df:8c", ip: ""} in network mk-embed-certs-324808: {Iface:virbr2 ExpiryTime:2024-08-06 09:33:45 +0000 UTC Type:0 Mac:52:54:00:b9:df:8c Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-324808 Clientid:01:52:54:00:b9:df:8c}
	I0806 08:33:56.146023   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined IP address 192.168.39.190 and MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:56.146239   79219 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0806 08:33:56.150763   79219 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0806 08:33:56.165617   79219 kubeadm.go:883] updating cluster {Name:embed-certs-324808 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.3 ClusterName:embed-certs-324808 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.190 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0806 08:33:56.165775   79219 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0806 08:33:56.165837   79219 ssh_runner.go:195] Run: sudo crictl images --output json
	I0806 08:33:56.205302   79219 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0806 08:33:56.205393   79219 ssh_runner.go:195] Run: which lz4
	I0806 08:33:56.209791   79219 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0806 08:33:56.214338   79219 ssh_runner.go:362] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0806 08:33:56.214369   79219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0806 08:33:57.735973   79219 crio.go:462] duration metric: took 1.526221921s to copy over tarball
	I0806 08:33:57.736049   79219 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0806 08:33:59.000478   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:33:59.000905   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | unable to find current IP address of domain default-k8s-diff-port-990751 in network mk-default-k8s-diff-port-990751
	I0806 08:33:59.000971   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | I0806 08:33:59.000829   80608 retry.go:31] will retry after 1.110230284s: waiting for machine to come up
	I0806 08:34:00.112658   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:00.113097   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | unable to find current IP address of domain default-k8s-diff-port-990751 in network mk-default-k8s-diff-port-990751
	I0806 08:34:00.113124   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | I0806 08:34:00.113053   80608 retry.go:31] will retry after 1.081213279s: waiting for machine to come up
	I0806 08:34:01.195646   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:01.196062   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | unable to find current IP address of domain default-k8s-diff-port-990751 in network mk-default-k8s-diff-port-990751
	I0806 08:34:01.196097   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | I0806 08:34:01.196014   80608 retry.go:31] will retry after 1.334553019s: waiting for machine to come up
	I0806 08:34:02.532009   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:02.532492   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | unable to find current IP address of domain default-k8s-diff-port-990751 in network mk-default-k8s-diff-port-990751
	I0806 08:34:02.532520   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | I0806 08:34:02.532449   80608 retry.go:31] will retry after 1.42743583s: waiting for machine to come up
	I0806 08:33:59.991340   79219 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.255258522s)
	I0806 08:33:59.991368   79219 crio.go:469] duration metric: took 2.255367027s to extract the tarball
	I0806 08:33:59.991378   79219 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0806 08:34:00.030261   79219 ssh_runner.go:195] Run: sudo crictl images --output json
	I0806 08:34:00.075217   79219 crio.go:514] all images are preloaded for cri-o runtime.
	I0806 08:34:00.075243   79219 cache_images.go:84] Images are preloaded, skipping loading
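
In the preload sequence above, minikube asks crictl for the image list, decides the v1.30.3 preload is missing, copies preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 to /preloaded.tar.lz4, unpacks it into /var with lz4+tar, and re-checks until crictl reports the images as present. A hedged Go sketch of the check-then-extract step as run on the node (the commands mirror the log; error handling is simplified):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const tarball = "/preloaded.tar.lz4"

	// Mirrors: stat -c "%s %y" /preloaded.tar.lz4 — only extract when the tarball exists.
	if _, err := os.Stat(tarball); err != nil {
		fmt.Fprintln(os.Stderr, "preload tarball not present, would need to be copied first:", err)
		return
	}

	// Mirrors: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	cmd := exec.Command("tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Fprintln(os.Stderr, "extract preload:", err)
		os.Exit(1)
	}

	// Afterwards crictl should report the preloaded images, as in the
	// "all images are preloaded for cri-o runtime" line above.
	out, _ := exec.Command("crictl", "images", "--output", "json").Output()
	fmt.Printf("crictl reports %d bytes of image metadata\n", len(out))
}
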
	I0806 08:34:00.075253   79219 kubeadm.go:934] updating node { 192.168.39.190 8443 v1.30.3 crio true true} ...
	I0806 08:34:00.075396   79219 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-324808 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.190
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:embed-certs-324808 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0806 08:34:00.075477   79219 ssh_runner.go:195] Run: crio config
	I0806 08:34:00.124134   79219 cni.go:84] Creating CNI manager for ""
	I0806 08:34:00.124160   79219 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0806 08:34:00.124174   79219 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0806 08:34:00.124195   79219 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.190 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-324808 NodeName:embed-certs-324808 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.190"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.190 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0806 08:34:00.124348   79219 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.190
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-324808"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.190
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.190"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0806 08:34:00.124446   79219 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0806 08:34:00.134782   79219 binaries.go:44] Found k8s binaries, skipping transfer
	I0806 08:34:00.134874   79219 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0806 08:34:00.144892   79219 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0806 08:34:00.164614   79219 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0806 08:34:00.183911   79219 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
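
The kubeadm config rendered above is written to /var/tmp/minikube/kubeadm.yaml.new (2162 bytes) alongside the kubelet unit drop-in. A small sketch, assuming gopkg.in/yaml.v3 is available, that walks the multi-document file and prints each document's kind, which is a cheap way to sanity-check the four sections (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) before it is copied over kubeadm.yaml:

package main

import (
	"errors"
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err != nil {
			if errors.Is(err, io.EOF) {
				break // no more YAML documents in the stream
			}
			fmt.Fprintln(os.Stderr, "parse:", err)
			os.Exit(1)
		}
		fmt.Printf("%s (%s)\n", doc.Kind, doc.APIVersion)
	}
}
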
	I0806 08:34:00.203821   79219 ssh_runner.go:195] Run: grep 192.168.39.190	control-plane.minikube.internal$ /etc/hosts
	I0806 08:34:00.207980   79219 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.190	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0806 08:34:00.220566   79219 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 08:34:00.339428   79219 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0806 08:34:00.357441   79219 certs.go:68] Setting up /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/embed-certs-324808 for IP: 192.168.39.190
	I0806 08:34:00.357470   79219 certs.go:194] generating shared ca certs ...
	I0806 08:34:00.357486   79219 certs.go:226] acquiring lock for ca certs: {Name:mkf5c5aed91f4108054d9f2c742cad66c86afbed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 08:34:00.357648   79219 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19370-13057/.minikube/ca.key
	I0806 08:34:00.357694   79219 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.key
	I0806 08:34:00.357702   79219 certs.go:256] generating profile certs ...
	I0806 08:34:00.357788   79219 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/embed-certs-324808/client.key
	I0806 08:34:00.357870   79219 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/embed-certs-324808/apiserver.key.078f53b0
	I0806 08:34:00.357907   79219 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/embed-certs-324808/proxy-client.key
	I0806 08:34:00.358033   79219 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/20246.pem (1338 bytes)
	W0806 08:34:00.358070   79219 certs.go:480] ignoring /home/jenkins/minikube-integration/19370-13057/.minikube/certs/20246_empty.pem, impossibly tiny 0 bytes
	I0806 08:34:00.358085   79219 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca-key.pem (1675 bytes)
	I0806 08:34:00.358122   79219 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem (1078 bytes)
	I0806 08:34:00.358151   79219 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/cert.pem (1123 bytes)
	I0806 08:34:00.358174   79219 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/key.pem (1675 bytes)
	I0806 08:34:00.358231   79219 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem (1708 bytes)
	I0806 08:34:00.359196   79219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0806 08:34:00.397544   79219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0806 08:34:00.436824   79219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0806 08:34:00.478633   79219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0806 08:34:00.525641   79219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/embed-certs-324808/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0806 08:34:00.571003   79219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/embed-certs-324808/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0806 08:34:00.596423   79219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/embed-certs-324808/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0806 08:34:00.621129   79219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/embed-certs-324808/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0806 08:34:00.646297   79219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem --> /usr/share/ca-certificates/202462.pem (1708 bytes)
	I0806 08:34:00.672396   79219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0806 08:34:00.698468   79219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/certs/20246.pem --> /usr/share/ca-certificates/20246.pem (1338 bytes)
	I0806 08:34:00.724115   79219 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0806 08:34:00.742200   79219 ssh_runner.go:195] Run: openssl version
	I0806 08:34:00.748418   79219 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202462.pem && ln -fs /usr/share/ca-certificates/202462.pem /etc/ssl/certs/202462.pem"
	I0806 08:34:00.760234   79219 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202462.pem
	I0806 08:34:00.764947   79219 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  6 07:17 /usr/share/ca-certificates/202462.pem
	I0806 08:34:00.765013   79219 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202462.pem
	I0806 08:34:00.771065   79219 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202462.pem /etc/ssl/certs/3ec20f2e.0"
	I0806 08:34:00.782995   79219 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0806 08:34:00.795092   79219 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0806 08:34:00.800011   79219 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  6 07:07 /usr/share/ca-certificates/minikubeCA.pem
	I0806 08:34:00.800079   79219 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0806 08:34:00.806038   79219 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0806 08:34:00.817660   79219 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20246.pem && ln -fs /usr/share/ca-certificates/20246.pem /etc/ssl/certs/20246.pem"
	I0806 08:34:00.829045   79219 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20246.pem
	I0806 08:34:00.833640   79219 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  6 07:17 /usr/share/ca-certificates/20246.pem
	I0806 08:34:00.833704   79219 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20246.pem
	I0806 08:34:00.839618   79219 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20246.pem /etc/ssl/certs/51391683.0"
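
The three blocks above install each certificate into the node trust store the OpenSSL way: copy the PEM under /usr/share/ca-certificates, compute its subject hash with openssl x509 -hash, and symlink /etc/ssl/certs/<hash>.0 back to it. A sketch of that last step, shelling out to openssl for the legacy subject-hash value exactly as the log does:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkIntoTrustStore mirrors: openssl x509 -hash -noout -in <pem>
// followed by: ln -fs <pem> /etc/ssl/certs/<hash>.0
func linkIntoTrustStore(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hash %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // behave like ln -f: replace an existing link
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkIntoTrustStore("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
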
	I0806 08:34:00.850938   79219 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0806 08:34:00.855577   79219 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0806 08:34:00.861960   79219 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0806 08:34:00.868626   79219 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0806 08:34:00.875291   79219 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0806 08:34:00.881553   79219 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0806 08:34:00.887650   79219 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
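
The openssl ... -checkend 86400 runs above verify that none of the existing control-plane certificates expire within the next 24 hours; only then does minikube attempt the in-place cluster restart instead of regenerating everything. The same check in Go, using only crypto/x509:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in the PEM file
// becomes invalid within d — the Go equivalent of `openssl x509 -checkend`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	for _, p := range []string{
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
	} {
		soon, err := expiresWithin(p, 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			continue
		}
		fmt.Printf("%s expires within 24h: %v\n", p, soon)
	}
}
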
	I0806 08:34:00.894134   79219 kubeadm.go:392] StartCluster: {Name:embed-certs-324808 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.3 ClusterName:embed-certs-324808 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.190 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 08:34:00.894223   79219 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0806 08:34:00.894272   79219 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0806 08:34:00.938980   79219 cri.go:89] found id: ""
	I0806 08:34:00.939046   79219 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0806 08:34:00.951339   79219 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0806 08:34:00.951369   79219 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0806 08:34:00.951430   79219 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0806 08:34:00.962789   79219 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0806 08:34:00.964165   79219 kubeconfig.go:125] found "embed-certs-324808" server: "https://192.168.39.190:8443"
	I0806 08:34:00.967221   79219 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0806 08:34:00.977500   79219 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.190
	I0806 08:34:00.977535   79219 kubeadm.go:1160] stopping kube-system containers ...
	I0806 08:34:00.977547   79219 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0806 08:34:00.977603   79219 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0806 08:34:01.017218   79219 cri.go:89] found id: ""
	I0806 08:34:01.017315   79219 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0806 08:34:01.035305   79219 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0806 08:34:01.046210   79219 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0806 08:34:01.046233   79219 kubeadm.go:157] found existing configuration files:
	
	I0806 08:34:01.046290   79219 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0806 08:34:01.056904   79219 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0806 08:34:01.056974   79219 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0806 08:34:01.066640   79219 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0806 08:34:01.076042   79219 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0806 08:34:01.076114   79219 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0806 08:34:01.085735   79219 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0806 08:34:01.094602   79219 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0806 08:34:01.094661   79219 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0806 08:34:01.104160   79219 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0806 08:34:01.113176   79219 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0806 08:34:01.113249   79219 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0806 08:34:01.122679   79219 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0806 08:34:01.133032   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0806 08:34:01.259256   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0806 08:34:02.252011   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0806 08:34:02.481629   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0806 08:34:02.570215   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
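
Because existing configuration files were found, the restart path rebuilds the control plane piecewise with kubeadm init phases rather than a full init: certs, kubeconfig, kubelet-start, control-plane, then etcd, all against the /var/tmp/minikube/kubeadm.yaml rendered earlier. A compact sketch of driving that phase sequence from Go (binary and config paths are taken from the log):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	kubeadm := "/var/lib/minikube/binaries/v1.30.3/kubeadm"
	config := "/var/tmp/minikube/kubeadm.yaml"

	// Same order as the log: certs, kubeconfig, kubelet-start, control-plane, etcd.
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, args := range phases {
		cmd := exec.Command(kubeadm, append(args, "--config", config)...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintf(os.Stderr, "kubeadm %v failed: %v\n", args, err)
			os.Exit(1)
		}
	}
}
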
	I0806 08:34:02.710597   79219 api_server.go:52] waiting for apiserver process to appear ...
	I0806 08:34:02.710700   79219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:03.210803   79219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:03.711111   79219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:03.738971   79219 api_server.go:72] duration metric: took 1.028375323s to wait for apiserver process to appear ...
	I0806 08:34:03.739003   79219 api_server.go:88] waiting for apiserver healthz status ...
	I0806 08:34:03.739025   79219 api_server.go:253] Checking apiserver healthz at https://192.168.39.190:8443/healthz ...
	I0806 08:34:03.962232   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:03.962678   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | unable to find current IP address of domain default-k8s-diff-port-990751 in network mk-default-k8s-diff-port-990751
	I0806 08:34:03.962709   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | I0806 08:34:03.962638   80608 retry.go:31] will retry after 2.113116057s: waiting for machine to come up
	I0806 08:34:06.078010   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:06.078537   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | unable to find current IP address of domain default-k8s-diff-port-990751 in network mk-default-k8s-diff-port-990751
	I0806 08:34:06.078570   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | I0806 08:34:06.078503   80608 retry.go:31] will retry after 2.580076274s: waiting for machine to come up
	I0806 08:34:08.662313   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:08.662651   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | unable to find current IP address of domain default-k8s-diff-port-990751 in network mk-default-k8s-diff-port-990751
	I0806 08:34:08.662673   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | I0806 08:34:08.662609   80608 retry.go:31] will retry after 4.365982677s: waiting for machine to come up
	I0806 08:34:06.791233   79219 api_server.go:279] https://192.168.39.190:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0806 08:34:06.791272   79219 api_server.go:103] status: https://192.168.39.190:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0806 08:34:06.791311   79219 api_server.go:253] Checking apiserver healthz at https://192.168.39.190:8443/healthz ...
	I0806 08:34:06.804038   79219 api_server.go:279] https://192.168.39.190:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0806 08:34:06.804074   79219 api_server.go:103] status: https://192.168.39.190:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0806 08:34:07.239192   79219 api_server.go:253] Checking apiserver healthz at https://192.168.39.190:8443/healthz ...
	I0806 08:34:07.244973   79219 api_server.go:279] https://192.168.39.190:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0806 08:34:07.245005   79219 api_server.go:103] status: https://192.168.39.190:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0806 08:34:07.739556   79219 api_server.go:253] Checking apiserver healthz at https://192.168.39.190:8443/healthz ...
	I0806 08:34:07.754354   79219 api_server.go:279] https://192.168.39.190:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0806 08:34:07.754382   79219 api_server.go:103] status: https://192.168.39.190:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0806 08:34:08.240114   79219 api_server.go:253] Checking apiserver healthz at https://192.168.39.190:8443/healthz ...
	I0806 08:34:08.244799   79219 api_server.go:279] https://192.168.39.190:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0806 08:34:08.244829   79219 api_server.go:103] status: https://192.168.39.190:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0806 08:34:08.739407   79219 api_server.go:253] Checking apiserver healthz at https://192.168.39.190:8443/healthz ...
	I0806 08:34:08.743770   79219 api_server.go:279] https://192.168.39.190:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0806 08:34:08.743804   79219 api_server.go:103] status: https://192.168.39.190:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0806 08:34:09.239855   79219 api_server.go:253] Checking apiserver healthz at https://192.168.39.190:8443/healthz ...
	I0806 08:34:09.244078   79219 api_server.go:279] https://192.168.39.190:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0806 08:34:09.244135   79219 api_server.go:103] status: https://192.168.39.190:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0806 08:34:09.739859   79219 api_server.go:253] Checking apiserver healthz at https://192.168.39.190:8443/healthz ...
	I0806 08:34:09.744236   79219 api_server.go:279] https://192.168.39.190:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0806 08:34:09.744272   79219 api_server.go:103] status: https://192.168.39.190:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0806 08:34:10.240036   79219 api_server.go:253] Checking apiserver healthz at https://192.168.39.190:8443/healthz ...
	I0806 08:34:10.244217   79219 api_server.go:279] https://192.168.39.190:8443/healthz returned 200:
	ok
	I0806 08:34:10.250333   79219 api_server.go:141] control plane version: v1.30.3
	I0806 08:34:10.250361   79219 api_server.go:131] duration metric: took 6.511350709s to wait for apiserver health ...
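The api_server.go lines above show the restart path polling the apiserver's /healthz endpoint, treating each 500 response (whose body carries the per-hook [+]/[-] breakdown) as "not ready yet" and moving on once a 200 "ok" comes back, about 6.5s in this run. A minimal sketch of that polling pattern in Go; the function name, timeout, and the InsecureSkipVerify shortcut are illustrative assumptions, not minikube's actual api_server.go code:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or the timeout expires.
// Illustrative only: the real check also handles auth material and stop signals.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// The apiserver serves a self-signed cert in this setting, so the sketch
		// skips verification; production code should pin the cluster CA instead.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // body is simply "ok"
			}
			// 500 responses carry the [+]/[-] per-hook breakdown seen in the log.
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.190:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```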
	I0806 08:34:10.250370   79219 cni.go:84] Creating CNI manager for ""
	I0806 08:34:10.250376   79219 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0806 08:34:10.252472   79219 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0806 08:34:10.253779   79219 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0806 08:34:10.264772   79219 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
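cni.go then selects the bridge CNI for the kvm2 + crio combination and copies a 496-byte conflist to /etc/cni/net.d/1-k8s.conflist. The report does not include that file's contents; the sketch below writes an illustrative bridge + host-local configuration of the same general shape, where every field value is an assumption rather than the bytes minikube generates:

```go
package main

import (
	"os"
	"path/filepath"
)

// An illustrative bridge CNI config; minikube's generated 1-k8s.conflist may
// differ in network name, bridge device, and pod CIDR.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    }
  ]
}`

func main() {
	dir := "/etc/cni/net.d"
	// Mirrors the logged `sudo mkdir -p /etc/cni/net.d` followed by the scp.
	if err := os.MkdirAll(dir, 0o755); err != nil {
		panic(err)
	}
	if err := os.WriteFile(filepath.Join(dir, "1-k8s.conflist"), []byte(bridgeConflist), 0o644); err != nil {
		panic(err)
	}
}
```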
	I0806 08:34:10.282904   79219 system_pods.go:43] waiting for kube-system pods to appear ...
	I0806 08:34:10.292343   79219 system_pods.go:59] 8 kube-system pods found
	I0806 08:34:10.292397   79219 system_pods.go:61] "coredns-7db6d8ff4d-4xxwm" [eeb47dea-32e9-458f-9ad9-2af6d4e68cc0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0806 08:34:10.292408   79219 system_pods.go:61] "etcd-embed-certs-324808" [1e0a82cd-6704-4da3-8992-15c68a7aaf70] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0806 08:34:10.292419   79219 system_pods.go:61] "kube-apiserver-embed-certs-324808" [a33c54d9-1fc5-4243-8fe9-d535c40908c3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0806 08:34:10.292426   79219 system_pods.go:61] "kube-controller-manager-embed-certs-324808" [b1c519b8-9c96-4c87-90f3-cc277cfe7763] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0806 08:34:10.292432   79219 system_pods.go:61] "kube-proxy-8lbzv" [0c45e6c0-9b7f-4c48-bff5-38d4a5949bec] Running
	I0806 08:34:10.292438   79219 system_pods.go:61] "kube-scheduler-embed-certs-324808" [78c2827f-341a-4441-bf7e-eff947fc3ea8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0806 08:34:10.292445   79219 system_pods.go:61] "metrics-server-569cc877fc-8xb9k" [7a5fb326-11a7-48fa-bcde-a22026a1dccb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0806 08:34:10.292452   79219 system_pods.go:61] "storage-provisioner" [15c7d89a-e994-4cca-931c-f646157a15e8] Running
	I0806 08:34:10.292465   79219 system_pods.go:74] duration metric: took 9.536183ms to wait for pod list to return data ...
	I0806 08:34:10.292474   79219 node_conditions.go:102] verifying NodePressure condition ...
	I0806 08:34:10.295584   79219 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0806 08:34:10.295616   79219 node_conditions.go:123] node cpu capacity is 2
	I0806 08:34:10.295632   79219 node_conditions.go:105] duration metric: took 3.152403ms to run NodePressure ...
	I0806 08:34:10.295654   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0806 08:34:10.566724   79219 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0806 08:34:10.571777   79219 kubeadm.go:739] kubelet initialised
	I0806 08:34:10.571801   79219 kubeadm.go:740] duration metric: took 5.042307ms waiting for restarted kubelet to initialise ...
	I0806 08:34:10.571812   79219 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0806 08:34:10.576738   79219 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-4xxwm" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:10.581943   79219 pod_ready.go:97] node "embed-certs-324808" hosting pod "coredns-7db6d8ff4d-4xxwm" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-324808" has status "Ready":"False"
	I0806 08:34:10.581976   79219 pod_ready.go:81] duration metric: took 5.207743ms for pod "coredns-7db6d8ff4d-4xxwm" in "kube-system" namespace to be "Ready" ...
	E0806 08:34:10.581989   79219 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-324808" hosting pod "coredns-7db6d8ff4d-4xxwm" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-324808" has status "Ready":"False"
	I0806 08:34:10.581997   79219 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-324808" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:10.586983   79219 pod_ready.go:97] node "embed-certs-324808" hosting pod "etcd-embed-certs-324808" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-324808" has status "Ready":"False"
	I0806 08:34:10.587007   79219 pod_ready.go:81] duration metric: took 5.000306ms for pod "etcd-embed-certs-324808" in "kube-system" namespace to be "Ready" ...
	E0806 08:34:10.587016   79219 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-324808" hosting pod "etcd-embed-certs-324808" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-324808" has status "Ready":"False"
	I0806 08:34:10.587022   79219 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-324808" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:10.592042   79219 pod_ready.go:97] node "embed-certs-324808" hosting pod "kube-apiserver-embed-certs-324808" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-324808" has status "Ready":"False"
	I0806 08:34:10.592066   79219 pod_ready.go:81] duration metric: took 5.035031ms for pod "kube-apiserver-embed-certs-324808" in "kube-system" namespace to be "Ready" ...
	E0806 08:34:10.592074   79219 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-324808" hosting pod "kube-apiserver-embed-certs-324808" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-324808" has status "Ready":"False"
	I0806 08:34:10.592080   79219 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-324808" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:10.687156   79219 pod_ready.go:97] node "embed-certs-324808" hosting pod "kube-controller-manager-embed-certs-324808" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-324808" has status "Ready":"False"
	I0806 08:34:10.687193   79219 pod_ready.go:81] duration metric: took 95.100916ms for pod "kube-controller-manager-embed-certs-324808" in "kube-system" namespace to be "Ready" ...
	E0806 08:34:10.687207   79219 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-324808" hosting pod "kube-controller-manager-embed-certs-324808" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-324808" has status "Ready":"False"
	I0806 08:34:10.687217   79219 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-8lbzv" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:11.086180   79219 pod_ready.go:97] node "embed-certs-324808" hosting pod "kube-proxy-8lbzv" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-324808" has status "Ready":"False"
	I0806 08:34:11.086213   79219 pod_ready.go:81] duration metric: took 398.984678ms for pod "kube-proxy-8lbzv" in "kube-system" namespace to be "Ready" ...
	E0806 08:34:11.086224   79219 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-324808" hosting pod "kube-proxy-8lbzv" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-324808" has status "Ready":"False"
	I0806 08:34:11.086230   79219 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-324808" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:11.486971   79219 pod_ready.go:97] node "embed-certs-324808" hosting pod "kube-scheduler-embed-certs-324808" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-324808" has status "Ready":"False"
	I0806 08:34:11.486995   79219 pod_ready.go:81] duration metric: took 400.759449ms for pod "kube-scheduler-embed-certs-324808" in "kube-system" namespace to be "Ready" ...
	E0806 08:34:11.487005   79219 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-324808" hosting pod "kube-scheduler-embed-certs-324808" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-324808" has status "Ready":"False"
	I0806 08:34:11.487011   79219 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:11.887030   79219 pod_ready.go:97] node "embed-certs-324808" hosting pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-324808" has status "Ready":"False"
	I0806 08:34:11.887052   79219 pod_ready.go:81] duration metric: took 400.033763ms for pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace to be "Ready" ...
	E0806 08:34:11.887061   79219 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-324808" hosting pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-324808" has status "Ready":"False"
	I0806 08:34:11.887068   79219 pod_ready.go:38] duration metric: took 1.315245982s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
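The pod_ready.go entries implement the "extra wait": each system-critical pod is skipped while its node still reports Ready=False, and the wait later re-checks until the pods become Ready or the 4m0s budget runs out. A condensed client-go sketch of the underlying per-pod check, reusing the kubeconfig path and pod name that appear in this log; the polling interval and loop structure are assumptions, not minikube's pod_ready.go:

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's PodReady condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19370-13057/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-7db6d8ff4d-4xxwm", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to become Ready")
}
```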
	I0806 08:34:11.887084   79219 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0806 08:34:11.899633   79219 ops.go:34] apiserver oom_adj: -16
	I0806 08:34:11.899649   79219 kubeadm.go:597] duration metric: took 10.948272769s to restartPrimaryControlPlane
	I0806 08:34:11.899657   79219 kubeadm.go:394] duration metric: took 11.005530169s to StartCluster
	I0806 08:34:11.899676   79219 settings.go:142] acquiring lock: {Name:mkb0bfbcae3b4205ef43487a9de84cfd8afd2e5e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 08:34:11.899745   79219 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19370-13057/kubeconfig
	I0806 08:34:11.901341   79219 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-13057/kubeconfig: {Name:mk5c066914acf8e36556b29792f75b98076ca34a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 08:34:11.901603   79219 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.190 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0806 08:34:11.901676   79219 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0806 08:34:11.901753   79219 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-324808"
	I0806 08:34:11.901785   79219 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-324808"
	W0806 08:34:11.901796   79219 addons.go:243] addon storage-provisioner should already be in state true
	I0806 08:34:11.901788   79219 addons.go:69] Setting default-storageclass=true in profile "embed-certs-324808"
	I0806 08:34:11.901834   79219 host.go:66] Checking if "embed-certs-324808" exists ...
	I0806 08:34:11.901851   79219 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-324808"
	I0806 08:34:11.901854   79219 config.go:182] Loaded profile config "embed-certs-324808": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0806 08:34:11.901842   79219 addons.go:69] Setting metrics-server=true in profile "embed-certs-324808"
	I0806 08:34:11.901921   79219 addons.go:234] Setting addon metrics-server=true in "embed-certs-324808"
	W0806 08:34:11.901940   79219 addons.go:243] addon metrics-server should already be in state true
	I0806 08:34:11.901998   79219 host.go:66] Checking if "embed-certs-324808" exists ...
	I0806 08:34:11.902216   79219 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:34:11.902252   79219 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:34:11.902280   79219 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:34:11.902308   79219 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:34:11.902383   79219 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:34:11.902411   79219 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:34:11.904343   79219 out.go:177] * Verifying Kubernetes components...
	I0806 08:34:11.905934   79219 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 08:34:11.917271   79219 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32971
	I0806 08:34:11.917375   79219 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38975
	I0806 08:34:11.917833   79219 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:34:11.917874   79219 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:34:11.917981   79219 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33569
	I0806 08:34:11.918394   79219 main.go:141] libmachine: Using API Version  1
	I0806 08:34:11.918416   79219 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:34:11.918453   79219 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:34:11.918530   79219 main.go:141] libmachine: Using API Version  1
	I0806 08:34:11.918552   79219 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:34:11.918761   79219 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:34:11.918819   79219 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:34:11.918913   79219 main.go:141] libmachine: Using API Version  1
	I0806 08:34:11.918939   79219 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:34:11.918965   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetState
	I0806 08:34:11.919280   79219 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:34:11.919322   79219 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:34:11.919367   79219 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:34:11.919873   79219 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:34:11.919901   79219 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:34:11.921815   79219 addons.go:234] Setting addon default-storageclass=true in "embed-certs-324808"
	W0806 08:34:11.921834   79219 addons.go:243] addon default-storageclass should already be in state true
	I0806 08:34:11.921856   79219 host.go:66] Checking if "embed-certs-324808" exists ...
	I0806 08:34:11.922132   79219 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:34:11.922168   79219 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:34:11.936191   79219 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40029
	I0806 08:34:11.936678   79219 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:34:11.937209   79219 main.go:141] libmachine: Using API Version  1
	I0806 08:34:11.937234   79219 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:34:11.940428   79219 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35087
	I0806 08:34:11.940508   79219 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38421
	I0806 08:34:11.940845   79219 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:34:11.941345   79219 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:34:11.941346   79219 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:34:11.941707   79219 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:34:11.941752   79219 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:34:11.941831   79219 main.go:141] libmachine: Using API Version  1
	I0806 08:34:11.941872   79219 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:34:11.941904   79219 main.go:141] libmachine: Using API Version  1
	I0806 08:34:11.941917   79219 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:34:11.942273   79219 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:34:11.942359   79219 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:34:11.942514   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetState
	I0806 08:34:11.942532   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetState
	I0806 08:34:11.944317   79219 main.go:141] libmachine: (embed-certs-324808) Calling .DriverName
	I0806 08:34:11.944503   79219 main.go:141] libmachine: (embed-certs-324808) Calling .DriverName
	I0806 08:34:11.946511   79219 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0806 08:34:11.946525   79219 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0806 08:34:11.947929   79219 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0806 08:34:11.947948   79219 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0806 08:34:11.947976   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHHostname
	I0806 08:34:11.948044   79219 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0806 08:34:11.948066   79219 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0806 08:34:11.948085   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHHostname
	I0806 08:34:11.951758   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:34:11.951988   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:34:11.952181   79219 main.go:141] libmachine: (embed-certs-324808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:df:8c", ip: ""} in network mk-embed-certs-324808: {Iface:virbr2 ExpiryTime:2024-08-06 09:33:45 +0000 UTC Type:0 Mac:52:54:00:b9:df:8c Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-324808 Clientid:01:52:54:00:b9:df:8c}
	I0806 08:34:11.952205   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined IP address 192.168.39.190 and MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:34:11.952530   79219 main.go:141] libmachine: (embed-certs-324808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:df:8c", ip: ""} in network mk-embed-certs-324808: {Iface:virbr2 ExpiryTime:2024-08-06 09:33:45 +0000 UTC Type:0 Mac:52:54:00:b9:df:8c Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-324808 Clientid:01:52:54:00:b9:df:8c}
	I0806 08:34:11.952565   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHPort
	I0806 08:34:11.952566   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined IP address 192.168.39.190 and MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:34:11.952693   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHPort
	I0806 08:34:11.952819   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHKeyPath
	I0806 08:34:11.952858   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHKeyPath
	I0806 08:34:11.952970   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHUsername
	I0806 08:34:11.952974   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHUsername
	I0806 08:34:11.953143   79219 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/embed-certs-324808/id_rsa Username:docker}
	I0806 08:34:11.953207   79219 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/embed-certs-324808/id_rsa Username:docker}
	I0806 08:34:11.959949   79219 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35275
	I0806 08:34:11.960353   79219 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:34:11.960900   79219 main.go:141] libmachine: Using API Version  1
	I0806 08:34:11.960929   79219 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:34:11.961241   79219 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:34:11.961451   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetState
	I0806 08:34:11.963031   79219 main.go:141] libmachine: (embed-certs-324808) Calling .DriverName
	I0806 08:34:11.963333   79219 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0806 08:34:11.963349   79219 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0806 08:34:11.963366   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHHostname
	I0806 08:34:11.966079   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:34:11.966448   79219 main.go:141] libmachine: (embed-certs-324808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:df:8c", ip: ""} in network mk-embed-certs-324808: {Iface:virbr2 ExpiryTime:2024-08-06 09:33:45 +0000 UTC Type:0 Mac:52:54:00:b9:df:8c Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-324808 Clientid:01:52:54:00:b9:df:8c}
	I0806 08:34:11.966481   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined IP address 192.168.39.190 and MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:34:11.966559   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHPort
	I0806 08:34:11.966737   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHKeyPath
	I0806 08:34:11.966881   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHUsername
	I0806 08:34:11.967013   79219 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/embed-certs-324808/id_rsa Username:docker}
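The sshutil.go:53 lines above construct per-machine SSH clients (IP 192.168.39.190, user docker, the profile's id_rsa) that the subsequent ssh_runner.go commands reuse. A rough, self-contained equivalent using golang.org/x/crypto/ssh; the command shown and the InsecureIgnoreHostKey callback are illustrative choices for a throwaway test VM, not minikube's sshutil implementation:

```go
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Hypothetical stand-in for minikube's ssh client setup: authenticate with
	// the machine's private key and run one command, returning combined output.
	key, err := os.ReadFile("/home/jenkins/minikube-integration/19370-13057/.minikube/machines/embed-certs-324808/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for disposable test VMs
	}
	client, err := ssh.Dial("tcp", "192.168.39.190:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()

	out, err := session.CombinedOutput("sudo systemctl start kubelet")
	fmt.Printf("%s err=%v\n", out, err)
}
```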
	I0806 08:34:12.081433   79219 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0806 08:34:12.100701   79219 node_ready.go:35] waiting up to 6m0s for node "embed-certs-324808" to be "Ready" ...
	I0806 08:34:12.196131   79219 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0806 08:34:12.196151   79219 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0806 08:34:12.201603   79219 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0806 08:34:12.217590   79219 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0806 08:34:12.217626   79219 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0806 08:34:12.239354   79219 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0806 08:34:12.239377   79219 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0806 08:34:12.254418   79219 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0806 08:34:12.266126   79219 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0806 08:34:13.441725   79219 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.187269206s)
	I0806 08:34:13.441780   79219 main.go:141] libmachine: Making call to close driver server
	I0806 08:34:13.441791   79219 main.go:141] libmachine: (embed-certs-324808) Calling .Close
	I0806 08:34:13.441856   79219 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.240222324s)
	I0806 08:34:13.441895   79219 main.go:141] libmachine: Making call to close driver server
	I0806 08:34:13.441910   79219 main.go:141] libmachine: (embed-certs-324808) Calling .Close
	I0806 08:34:13.442096   79219 main.go:141] libmachine: (embed-certs-324808) DBG | Closing plugin on server side
	I0806 08:34:13.442116   79219 main.go:141] libmachine: Successfully made call to close driver server
	I0806 08:34:13.442128   79219 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 08:34:13.442136   79219 main.go:141] libmachine: Making call to close driver server
	I0806 08:34:13.442147   79219 main.go:141] libmachine: (embed-certs-324808) Calling .Close
	I0806 08:34:13.442211   79219 main.go:141] libmachine: Successfully made call to close driver server
	I0806 08:34:13.442225   79219 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 08:34:13.442235   79219 main.go:141] libmachine: Making call to close driver server
	I0806 08:34:13.442245   79219 main.go:141] libmachine: (embed-certs-324808) Calling .Close
	I0806 08:34:13.442414   79219 main.go:141] libmachine: Successfully made call to close driver server
	I0806 08:34:13.442424   79219 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 08:34:13.443716   79219 main.go:141] libmachine: (embed-certs-324808) DBG | Closing plugin on server side
	I0806 08:34:13.443727   79219 main.go:141] libmachine: Successfully made call to close driver server
	I0806 08:34:13.443748   79219 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 08:34:13.448919   79219 main.go:141] libmachine: Making call to close driver server
	I0806 08:34:13.448945   79219 main.go:141] libmachine: (embed-certs-324808) Calling .Close
	I0806 08:34:13.449225   79219 main.go:141] libmachine: Successfully made call to close driver server
	I0806 08:34:13.449242   79219 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 08:34:13.521481   79219 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.255292404s)
	I0806 08:34:13.521542   79219 main.go:141] libmachine: Making call to close driver server
	I0806 08:34:13.521561   79219 main.go:141] libmachine: (embed-certs-324808) Calling .Close
	I0806 08:34:13.521875   79219 main.go:141] libmachine: Successfully made call to close driver server
	I0806 08:34:13.521898   79219 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 08:34:13.521899   79219 main.go:141] libmachine: (embed-certs-324808) DBG | Closing plugin on server side
	I0806 08:34:13.521916   79219 main.go:141] libmachine: Making call to close driver server
	I0806 08:34:13.521927   79219 main.go:141] libmachine: (embed-certs-324808) Calling .Close
	I0806 08:34:13.522147   79219 main.go:141] libmachine: Successfully made call to close driver server
	I0806 08:34:13.522165   79219 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 08:34:13.522175   79219 addons.go:475] Verifying addon metrics-server=true in "embed-certs-324808"
	I0806 08:34:13.522185   79219 main.go:141] libmachine: (embed-certs-324808) DBG | Closing plugin on server side
	I0806 08:34:13.524209   79219 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0806 08:34:13.032813   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:13.033366   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Found IP for machine: 192.168.50.235
	I0806 08:34:13.033394   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Reserving static IP address...
	I0806 08:34:13.033414   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has current primary IP address 192.168.50.235 and MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:13.033956   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-990751", mac: "52:54:00:fb:51:bd", ip: "192.168.50.235"} in network mk-default-k8s-diff-port-990751: {Iface:virbr3 ExpiryTime:2024-08-06 09:34:06 +0000 UTC Type:0 Mac:52:54:00:fb:51:bd Iaid: IPaddr:192.168.50.235 Prefix:24 Hostname:default-k8s-diff-port-990751 Clientid:01:52:54:00:fb:51:bd}
	I0806 08:34:13.033992   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Reserved static IP address: 192.168.50.235
	I0806 08:34:13.034012   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | skip adding static IP to network mk-default-k8s-diff-port-990751 - found existing host DHCP lease matching {name: "default-k8s-diff-port-990751", mac: "52:54:00:fb:51:bd", ip: "192.168.50.235"}
	I0806 08:34:13.034031   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | Getting to WaitForSSH function...
	I0806 08:34:13.034046   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Waiting for SSH to be available...
	I0806 08:34:13.036483   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:13.036843   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:51:bd", ip: ""} in network mk-default-k8s-diff-port-990751: {Iface:virbr3 ExpiryTime:2024-08-06 09:34:06 +0000 UTC Type:0 Mac:52:54:00:fb:51:bd Iaid: IPaddr:192.168.50.235 Prefix:24 Hostname:default-k8s-diff-port-990751 Clientid:01:52:54:00:fb:51:bd}
	I0806 08:34:13.036874   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined IP address 192.168.50.235 and MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:13.037013   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | Using SSH client type: external
	I0806 08:34:13.037042   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | Using SSH private key: /home/jenkins/minikube-integration/19370-13057/.minikube/machines/default-k8s-diff-port-990751/id_rsa (-rw-------)
	I0806 08:34:13.037066   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.235 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19370-13057/.minikube/machines/default-k8s-diff-port-990751/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0806 08:34:13.037078   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | About to run SSH command:
	I0806 08:34:13.037119   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | exit 0
	I0806 08:34:13.164932   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | SSH cmd err, output: <nil>: 
	I0806 08:34:13.165301   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetConfigRaw
	I0806 08:34:13.165892   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetIP
	I0806 08:34:13.169003   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:13.169408   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:51:bd", ip: ""} in network mk-default-k8s-diff-port-990751: {Iface:virbr3 ExpiryTime:2024-08-06 09:34:06 +0000 UTC Type:0 Mac:52:54:00:fb:51:bd Iaid: IPaddr:192.168.50.235 Prefix:24 Hostname:default-k8s-diff-port-990751 Clientid:01:52:54:00:fb:51:bd}
	I0806 08:34:13.169449   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined IP address 192.168.50.235 and MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:13.169783   79614 profile.go:143] Saving config to /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/default-k8s-diff-port-990751/config.json ...
	I0806 08:34:13.170431   79614 machine.go:94] provisionDockerMachine start ...
	I0806 08:34:13.170452   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .DriverName
	I0806 08:34:13.170703   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHHostname
	I0806 08:34:13.173536   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:13.173994   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:51:bd", ip: ""} in network mk-default-k8s-diff-port-990751: {Iface:virbr3 ExpiryTime:2024-08-06 09:34:06 +0000 UTC Type:0 Mac:52:54:00:fb:51:bd Iaid: IPaddr:192.168.50.235 Prefix:24 Hostname:default-k8s-diff-port-990751 Clientid:01:52:54:00:fb:51:bd}
	I0806 08:34:13.174022   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined IP address 192.168.50.235 and MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:13.174241   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHPort
	I0806 08:34:13.174456   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHKeyPath
	I0806 08:34:13.174632   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHKeyPath
	I0806 08:34:13.174789   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHUsername
	I0806 08:34:13.175056   79614 main.go:141] libmachine: Using SSH client type: native
	I0806 08:34:13.175329   79614 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.235 22 <nil> <nil>}
	I0806 08:34:13.175344   79614 main.go:141] libmachine: About to run SSH command:
	hostname
	I0806 08:34:13.281288   79614 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0806 08:34:13.281322   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetMachineName
	I0806 08:34:13.281573   79614 buildroot.go:166] provisioning hostname "default-k8s-diff-port-990751"
	I0806 08:34:13.281606   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetMachineName
	I0806 08:34:13.281808   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHHostname
	I0806 08:34:13.284463   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:13.284825   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:51:bd", ip: ""} in network mk-default-k8s-diff-port-990751: {Iface:virbr3 ExpiryTime:2024-08-06 09:34:06 +0000 UTC Type:0 Mac:52:54:00:fb:51:bd Iaid: IPaddr:192.168.50.235 Prefix:24 Hostname:default-k8s-diff-port-990751 Clientid:01:52:54:00:fb:51:bd}
	I0806 08:34:13.284860   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined IP address 192.168.50.235 and MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:13.284988   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHPort
	I0806 08:34:13.285178   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHKeyPath
	I0806 08:34:13.285350   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHKeyPath
	I0806 08:34:13.285468   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHUsername
	I0806 08:34:13.285614   79614 main.go:141] libmachine: Using SSH client type: native
	I0806 08:34:13.285807   79614 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.235 22 <nil> <nil>}
	I0806 08:34:13.285821   79614 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-990751 && echo "default-k8s-diff-port-990751" | sudo tee /etc/hostname
	I0806 08:34:13.413970   79614 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-990751
	
	I0806 08:34:13.414005   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHHostname
	I0806 08:34:13.417279   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:13.417708   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:51:bd", ip: ""} in network mk-default-k8s-diff-port-990751: {Iface:virbr3 ExpiryTime:2024-08-06 09:34:06 +0000 UTC Type:0 Mac:52:54:00:fb:51:bd Iaid: IPaddr:192.168.50.235 Prefix:24 Hostname:default-k8s-diff-port-990751 Clientid:01:52:54:00:fb:51:bd}
	I0806 08:34:13.417757   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined IP address 192.168.50.235 and MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:13.417947   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHPort
	I0806 08:34:13.418153   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHKeyPath
	I0806 08:34:13.418337   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHKeyPath
	I0806 08:34:13.418495   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHUsername
	I0806 08:34:13.418683   79614 main.go:141] libmachine: Using SSH client type: native
	I0806 08:34:13.418935   79614 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.235 22 <nil> <nil>}
	I0806 08:34:13.418970   79614 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-990751' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-990751/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-990751' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0806 08:34:13.534550   79614 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0806 08:34:13.534578   79614 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19370-13057/.minikube CaCertPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19370-13057/.minikube}
	I0806 08:34:13.534630   79614 buildroot.go:174] setting up certificates
	I0806 08:34:13.534647   79614 provision.go:84] configureAuth start
	I0806 08:34:13.534664   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetMachineName
	I0806 08:34:13.534962   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetIP
	I0806 08:34:13.537582   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:13.538144   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:51:bd", ip: ""} in network mk-default-k8s-diff-port-990751: {Iface:virbr3 ExpiryTime:2024-08-06 09:34:06 +0000 UTC Type:0 Mac:52:54:00:fb:51:bd Iaid: IPaddr:192.168.50.235 Prefix:24 Hostname:default-k8s-diff-port-990751 Clientid:01:52:54:00:fb:51:bd}
	I0806 08:34:13.538173   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined IP address 192.168.50.235 and MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:13.538329   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHHostname
	I0806 08:34:13.541132   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:13.541422   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:51:bd", ip: ""} in network mk-default-k8s-diff-port-990751: {Iface:virbr3 ExpiryTime:2024-08-06 09:34:06 +0000 UTC Type:0 Mac:52:54:00:fb:51:bd Iaid: IPaddr:192.168.50.235 Prefix:24 Hostname:default-k8s-diff-port-990751 Clientid:01:52:54:00:fb:51:bd}
	I0806 08:34:13.541447   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined IP address 192.168.50.235 and MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:13.541638   79614 provision.go:143] copyHostCerts
	I0806 08:34:13.541693   79614 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-13057/.minikube/ca.pem, removing ...
	I0806 08:34:13.541703   79614 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-13057/.minikube/ca.pem
	I0806 08:34:13.541761   79614 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19370-13057/.minikube/ca.pem (1078 bytes)
	I0806 08:34:13.541862   79614 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-13057/.minikube/cert.pem, removing ...
	I0806 08:34:13.541870   79614 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-13057/.minikube/cert.pem
	I0806 08:34:13.541891   79614 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19370-13057/.minikube/cert.pem (1123 bytes)
	I0806 08:34:13.541943   79614 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-13057/.minikube/key.pem, removing ...
	I0806 08:34:13.541950   79614 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-13057/.minikube/key.pem
	I0806 08:34:13.541966   79614 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19370-13057/.minikube/key.pem (1675 bytes)
	I0806 08:34:13.542013   79614 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19370-13057/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-990751 san=[127.0.0.1 192.168.50.235 default-k8s-diff-port-990751 localhost minikube]
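provision.go:117 reports issuing a server certificate signed by the profile CA with SANs [127.0.0.1 192.168.50.235 default-k8s-diff-port-990751 localhost minikube]. A compact crypto/x509 sketch of that kind of issuance; the file names, key size, validity period, and the PKCS#1 assumption about the CA key format are illustrative, not minikube's provisioning code:

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func check(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	// Load an existing CA certificate and key (PEM). Parsing the key as PKCS#1
	// assumes an RSA CA key, which is an assumption made for this sketch.
	caPEM, err := os.ReadFile("ca.pem")
	check(err)
	caKeyPEM, err := os.ReadFile("ca-key.pem")
	check(err)
	caBlock, _ := pem.Decode(caPEM)
	caKeyBlock, _ := pem.Decode(caKeyPEM)
	caCert, err := x509.ParseCertificate(caBlock.Bytes)
	check(err)
	caKey, err := x509.ParsePKCS1PrivateKey(caKeyBlock.Bytes)
	check(err)

	// Fresh server key pair, then a template carrying the SANs from the log.
	serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
	check(err)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-990751"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.235")},
		DNSNames:     []string{"default-k8s-diff-port-990751", "localhost", "minikube"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
	check(err)
	check(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}))
}
```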
	I0806 08:34:13.632290   79614 provision.go:177] copyRemoteCerts
	I0806 08:34:13.632345   79614 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0806 08:34:13.632389   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHHostname
	I0806 08:34:13.635332   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:13.635619   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:51:bd", ip: ""} in network mk-default-k8s-diff-port-990751: {Iface:virbr3 ExpiryTime:2024-08-06 09:34:06 +0000 UTC Type:0 Mac:52:54:00:fb:51:bd Iaid: IPaddr:192.168.50.235 Prefix:24 Hostname:default-k8s-diff-port-990751 Clientid:01:52:54:00:fb:51:bd}
	I0806 08:34:13.635644   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined IP address 192.168.50.235 and MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:13.635868   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHPort
	I0806 08:34:13.636069   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHKeyPath
	I0806 08:34:13.636208   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHUsername
	I0806 08:34:13.636381   79614 sshutil.go:53] new ssh client: &{IP:192.168.50.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/default-k8s-diff-port-990751/id_rsa Username:docker}
	I0806 08:34:13.719415   79614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0806 08:34:13.745981   79614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0806 08:34:13.775906   79614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0806 08:34:13.806948   79614 provision.go:87] duration metric: took 272.283009ms to configureAuth
	I0806 08:34:13.806986   79614 buildroot.go:189] setting minikube options for container-runtime
	I0806 08:34:13.807234   79614 config.go:182] Loaded profile config "default-k8s-diff-port-990751": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0806 08:34:13.807328   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHHostname
	I0806 08:34:13.810500   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:13.810895   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:51:bd", ip: ""} in network mk-default-k8s-diff-port-990751: {Iface:virbr3 ExpiryTime:2024-08-06 09:34:06 +0000 UTC Type:0 Mac:52:54:00:fb:51:bd Iaid: IPaddr:192.168.50.235 Prefix:24 Hostname:default-k8s-diff-port-990751 Clientid:01:52:54:00:fb:51:bd}
	I0806 08:34:13.810928   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined IP address 192.168.50.235 and MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:13.811074   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHPort
	I0806 08:34:13.811302   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHKeyPath
	I0806 08:34:13.811497   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHKeyPath
	I0806 08:34:13.811627   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHUsername
	I0806 08:34:13.811896   79614 main.go:141] libmachine: Using SSH client type: native
	I0806 08:34:13.812124   79614 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.235 22 <nil> <nil>}
	I0806 08:34:13.812148   79614 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0806 08:34:13.525425   79219 addons.go:510] duration metric: took 1.623751418s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0806 08:34:14.104863   79219 node_ready.go:53] node "embed-certs-324808" has status "Ready":"False"
	I0806 08:34:14.353523   79927 start.go:364] duration metric: took 3m15.487259208s to acquireMachinesLock for "old-k8s-version-409903"
	I0806 08:34:14.353614   79927 start.go:96] Skipping create...Using existing machine configuration
	I0806 08:34:14.353629   79927 fix.go:54] fixHost starting: 
	I0806 08:34:14.354059   79927 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:34:14.354096   79927 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:34:14.373696   79927 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33985
	I0806 08:34:14.374148   79927 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:34:14.374646   79927 main.go:141] libmachine: Using API Version  1
	I0806 08:34:14.374672   79927 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:34:14.374995   79927 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:34:14.375277   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .DriverName
	I0806 08:34:14.375472   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetState
	I0806 08:34:14.377209   79927 fix.go:112] recreateIfNeeded on old-k8s-version-409903: state=Stopped err=<nil>
	I0806 08:34:14.377251   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .DriverName
	W0806 08:34:14.377387   79927 fix.go:138] unexpected machine state, will restart: <nil>
	I0806 08:34:14.379029   79927 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-409903" ...
	I0806 08:34:14.113926   79614 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0806 08:34:14.113953   79614 machine.go:97] duration metric: took 943.50747ms to provisionDockerMachine
	I0806 08:34:14.113967   79614 start.go:293] postStartSetup for "default-k8s-diff-port-990751" (driver="kvm2")
	I0806 08:34:14.113982   79614 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0806 08:34:14.114004   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .DriverName
	I0806 08:34:14.114313   79614 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0806 08:34:14.114339   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHHostname
	I0806 08:34:14.116618   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:14.117028   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:51:bd", ip: ""} in network mk-default-k8s-diff-port-990751: {Iface:virbr3 ExpiryTime:2024-08-06 09:34:06 +0000 UTC Type:0 Mac:52:54:00:fb:51:bd Iaid: IPaddr:192.168.50.235 Prefix:24 Hostname:default-k8s-diff-port-990751 Clientid:01:52:54:00:fb:51:bd}
	I0806 08:34:14.117067   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined IP address 192.168.50.235 and MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:14.117162   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHPort
	I0806 08:34:14.117371   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHKeyPath
	I0806 08:34:14.117545   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHUsername
	I0806 08:34:14.117738   79614 sshutil.go:53] new ssh client: &{IP:192.168.50.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/default-k8s-diff-port-990751/id_rsa Username:docker}
	I0806 08:34:14.199514   79614 ssh_runner.go:195] Run: cat /etc/os-release
	I0806 08:34:14.204063   79614 info.go:137] Remote host: Buildroot 2023.02.9
	I0806 08:34:14.204084   79614 filesync.go:126] Scanning /home/jenkins/minikube-integration/19370-13057/.minikube/addons for local assets ...
	I0806 08:34:14.204139   79614 filesync.go:126] Scanning /home/jenkins/minikube-integration/19370-13057/.minikube/files for local assets ...
	I0806 08:34:14.204219   79614 filesync.go:149] local asset: /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem -> 202462.pem in /etc/ssl/certs
	I0806 08:34:14.204308   79614 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0806 08:34:14.213847   79614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem --> /etc/ssl/certs/202462.pem (1708 bytes)
	I0806 08:34:14.241524   79614 start.go:296] duration metric: took 127.543182ms for postStartSetup
	I0806 08:34:14.241570   79614 fix.go:56] duration metric: took 19.588116399s for fixHost
	I0806 08:34:14.241590   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHHostname
	I0806 08:34:14.244333   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:14.244702   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:51:bd", ip: ""} in network mk-default-k8s-diff-port-990751: {Iface:virbr3 ExpiryTime:2024-08-06 09:34:06 +0000 UTC Type:0 Mac:52:54:00:fb:51:bd Iaid: IPaddr:192.168.50.235 Prefix:24 Hostname:default-k8s-diff-port-990751 Clientid:01:52:54:00:fb:51:bd}
	I0806 08:34:14.244733   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined IP address 192.168.50.235 and MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:14.244936   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHPort
	I0806 08:34:14.245178   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHKeyPath
	I0806 08:34:14.245338   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHKeyPath
	I0806 08:34:14.245497   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHUsername
	I0806 08:34:14.245637   79614 main.go:141] libmachine: Using SSH client type: native
	I0806 08:34:14.245783   79614 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.235 22 <nil> <nil>}
	I0806 08:34:14.245793   79614 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0806 08:34:14.353301   79614 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722933254.326161993
	
	I0806 08:34:14.353325   79614 fix.go:216] guest clock: 1722933254.326161993
	I0806 08:34:14.353339   79614 fix.go:229] Guest: 2024-08-06 08:34:14.326161993 +0000 UTC Remote: 2024-08-06 08:34:14.2415744 +0000 UTC m=+235.446489438 (delta=84.587593ms)
	I0806 08:34:14.353389   79614 fix.go:200] guest clock delta is within tolerance: 84.587593ms
	I0806 08:34:14.353398   79614 start.go:83] releasing machines lock for "default-k8s-diff-port-990751", held for 19.699973637s
	I0806 08:34:14.353432   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .DriverName
	I0806 08:34:14.353751   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetIP
	I0806 08:34:14.356590   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:14.357031   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:51:bd", ip: ""} in network mk-default-k8s-diff-port-990751: {Iface:virbr3 ExpiryTime:2024-08-06 09:34:06 +0000 UTC Type:0 Mac:52:54:00:fb:51:bd Iaid: IPaddr:192.168.50.235 Prefix:24 Hostname:default-k8s-diff-port-990751 Clientid:01:52:54:00:fb:51:bd}
	I0806 08:34:14.357065   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined IP address 192.168.50.235 and MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:14.357349   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .DriverName
	I0806 08:34:14.357896   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .DriverName
	I0806 08:34:14.358125   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .DriverName
	I0806 08:34:14.358222   79614 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0806 08:34:14.358265   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHHostname
	I0806 08:34:14.358387   79614 ssh_runner.go:195] Run: cat /version.json
	I0806 08:34:14.358415   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHHostname
	I0806 08:34:14.361131   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:14.361444   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:14.361590   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:51:bd", ip: ""} in network mk-default-k8s-diff-port-990751: {Iface:virbr3 ExpiryTime:2024-08-06 09:34:06 +0000 UTC Type:0 Mac:52:54:00:fb:51:bd Iaid: IPaddr:192.168.50.235 Prefix:24 Hostname:default-k8s-diff-port-990751 Clientid:01:52:54:00:fb:51:bd}
	I0806 08:34:14.361629   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined IP address 192.168.50.235 and MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:14.361801   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHPort
	I0806 08:34:14.361905   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:51:bd", ip: ""} in network mk-default-k8s-diff-port-990751: {Iface:virbr3 ExpiryTime:2024-08-06 09:34:06 +0000 UTC Type:0 Mac:52:54:00:fb:51:bd Iaid: IPaddr:192.168.50.235 Prefix:24 Hostname:default-k8s-diff-port-990751 Clientid:01:52:54:00:fb:51:bd}
	I0806 08:34:14.361940   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined IP address 192.168.50.235 and MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:14.362025   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHKeyPath
	I0806 08:34:14.362228   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHUsername
	I0806 08:34:14.362242   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHPort
	I0806 08:34:14.362420   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHKeyPath
	I0806 08:34:14.362424   79614 sshutil.go:53] new ssh client: &{IP:192.168.50.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/default-k8s-diff-port-990751/id_rsa Username:docker}
	I0806 08:34:14.362600   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHUsername
	I0806 08:34:14.362717   79614 sshutil.go:53] new ssh client: &{IP:192.168.50.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/default-k8s-diff-port-990751/id_rsa Username:docker}
	I0806 08:34:14.469232   79614 ssh_runner.go:195] Run: systemctl --version
	I0806 08:34:14.476929   79614 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0806 08:34:14.629487   79614 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0806 08:34:14.637276   79614 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0806 08:34:14.637349   79614 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0806 08:34:14.654612   79614 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0806 08:34:14.654634   79614 start.go:495] detecting cgroup driver to use...
	I0806 08:34:14.654699   79614 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0806 08:34:14.672255   79614 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0806 08:34:14.689135   79614 docker.go:217] disabling cri-docker service (if available) ...
	I0806 08:34:14.689203   79614 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0806 08:34:14.705745   79614 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0806 08:34:14.721267   79614 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0806 08:34:14.841773   79614 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0806 08:34:14.982757   79614 docker.go:233] disabling docker service ...
	I0806 08:34:14.982837   79614 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0806 08:34:14.997637   79614 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0806 08:34:15.011404   79614 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0806 08:34:15.167108   79614 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0806 08:34:15.286749   79614 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0806 08:34:15.301818   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0806 08:34:15.320848   79614 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0806 08:34:15.320920   79614 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:34:15.331331   79614 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0806 08:34:15.331414   79614 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:34:15.342763   79614 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:34:15.353534   79614 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:34:15.365593   79614 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0806 08:34:15.379804   79614 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:34:15.390200   79614 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:34:15.410545   79614 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:34:15.423050   79614 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0806 08:34:15.432810   79614 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0806 08:34:15.432878   79614 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0806 08:34:15.447298   79614 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0806 08:34:15.457992   79614 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 08:34:15.606288   79614 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0806 08:34:15.792322   79614 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0806 08:34:15.792411   79614 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0806 08:34:15.798556   79614 start.go:563] Will wait 60s for crictl version
	I0806 08:34:15.798615   79614 ssh_runner.go:195] Run: which crictl
	I0806 08:34:15.802732   79614 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0806 08:34:15.856880   79614 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0806 08:34:15.856963   79614 ssh_runner.go:195] Run: crio --version
	I0806 08:34:15.890496   79614 ssh_runner.go:195] Run: crio --version
	I0806 08:34:15.924486   79614 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0806 08:34:14.380306   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .Start
	I0806 08:34:14.380486   79927 main.go:141] libmachine: (old-k8s-version-409903) Ensuring networks are active...
	I0806 08:34:14.381155   79927 main.go:141] libmachine: (old-k8s-version-409903) Ensuring network default is active
	I0806 08:34:14.381514   79927 main.go:141] libmachine: (old-k8s-version-409903) Ensuring network mk-old-k8s-version-409903 is active
	I0806 08:34:14.381820   79927 main.go:141] libmachine: (old-k8s-version-409903) Getting domain xml...
	I0806 08:34:14.382679   79927 main.go:141] libmachine: (old-k8s-version-409903) Creating domain...
	I0806 08:34:15.725416   79927 main.go:141] libmachine: (old-k8s-version-409903) Waiting to get IP...
	I0806 08:34:15.726492   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:15.727078   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | unable to find current IP address of domain old-k8s-version-409903 in network mk-old-k8s-version-409903
	I0806 08:34:15.727276   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | I0806 08:34:15.727179   80849 retry.go:31] will retry after 303.235731ms: waiting for machine to come up
	I0806 08:34:16.031996   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:16.032592   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | unable to find current IP address of domain old-k8s-version-409903 in network mk-old-k8s-version-409903
	I0806 08:34:16.032621   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | I0806 08:34:16.032542   80849 retry.go:31] will retry after 355.466914ms: waiting for machine to come up
	I0806 08:34:16.390212   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:16.390736   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | unable to find current IP address of domain old-k8s-version-409903 in network mk-old-k8s-version-409903
	I0806 08:34:16.390768   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | I0806 08:34:16.390674   80849 retry.go:31] will retry after 432.942387ms: waiting for machine to come up
	I0806 08:34:16.825739   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:16.826366   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | unable to find current IP address of domain old-k8s-version-409903 in network mk-old-k8s-version-409903
	I0806 08:34:16.826390   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | I0806 08:34:16.826317   80849 retry.go:31] will retry after 481.17687ms: waiting for machine to come up
	I0806 08:34:17.309005   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:17.309768   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | unable to find current IP address of domain old-k8s-version-409903 in network mk-old-k8s-version-409903
	I0806 08:34:17.309799   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | I0806 08:34:17.309745   80849 retry.go:31] will retry after 631.184181ms: waiting for machine to come up
	I0806 08:34:17.942220   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:17.942742   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | unable to find current IP address of domain old-k8s-version-409903 in network mk-old-k8s-version-409903
	I0806 08:34:17.942766   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | I0806 08:34:17.942701   80849 retry.go:31] will retry after 602.253362ms: waiting for machine to come up
	I0806 08:34:18.546333   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:18.546918   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | unable to find current IP address of domain old-k8s-version-409903 in network mk-old-k8s-version-409903
	I0806 08:34:18.546944   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | I0806 08:34:18.546855   80849 retry.go:31] will retry after 1.061939923s: waiting for machine to come up
	I0806 08:34:15.925709   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetIP
	I0806 08:34:15.929420   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:15.929901   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:51:bd", ip: ""} in network mk-default-k8s-diff-port-990751: {Iface:virbr3 ExpiryTime:2024-08-06 09:34:06 +0000 UTC Type:0 Mac:52:54:00:fb:51:bd Iaid: IPaddr:192.168.50.235 Prefix:24 Hostname:default-k8s-diff-port-990751 Clientid:01:52:54:00:fb:51:bd}
	I0806 08:34:15.929931   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined IP address 192.168.50.235 and MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:15.930159   79614 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0806 08:34:15.934916   79614 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0806 08:34:15.949815   79614 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-990751 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-990751 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.235 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0806 08:34:15.949971   79614 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0806 08:34:15.950054   79614 ssh_runner.go:195] Run: sudo crictl images --output json
	I0806 08:34:16.004551   79614 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0806 08:34:16.004631   79614 ssh_runner.go:195] Run: which lz4
	I0806 08:34:16.011388   79614 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0806 08:34:16.017475   79614 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0806 08:34:16.017511   79614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0806 08:34:17.564750   79614 crio.go:462] duration metric: took 1.553401521s to copy over tarball
	I0806 08:34:17.564840   79614 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0806 08:34:16.106922   79219 node_ready.go:53] node "embed-certs-324808" has status "Ready":"False"
	I0806 08:34:17.104735   79219 node_ready.go:49] node "embed-certs-324808" has status "Ready":"True"
	I0806 08:34:17.104778   79219 node_ready.go:38] duration metric: took 5.004030652s for node "embed-certs-324808" to be "Ready" ...
	I0806 08:34:17.104790   79219 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0806 08:34:17.113668   79219 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-4xxwm" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:17.120777   79219 pod_ready.go:92] pod "coredns-7db6d8ff4d-4xxwm" in "kube-system" namespace has status "Ready":"True"
	I0806 08:34:17.120801   79219 pod_ready.go:81] duration metric: took 7.097875ms for pod "coredns-7db6d8ff4d-4xxwm" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:17.120811   79219 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-324808" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:17.126581   79219 pod_ready.go:92] pod "etcd-embed-certs-324808" in "kube-system" namespace has status "Ready":"True"
	I0806 08:34:17.126606   79219 pod_ready.go:81] duration metric: took 5.788426ms for pod "etcd-embed-certs-324808" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:17.126619   79219 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-324808" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:18.136739   79219 pod_ready.go:92] pod "kube-apiserver-embed-certs-324808" in "kube-system" namespace has status "Ready":"True"
	I0806 08:34:18.136767   79219 pod_ready.go:81] duration metric: took 1.010139212s for pod "kube-apiserver-embed-certs-324808" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:18.136781   79219 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-324808" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:19.610165   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:19.610567   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | unable to find current IP address of domain old-k8s-version-409903 in network mk-old-k8s-version-409903
	I0806 08:34:19.610598   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | I0806 08:34:19.610542   80849 retry.go:31] will retry after 1.339464672s: waiting for machine to come up
	I0806 08:34:20.951505   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:20.952049   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | unable to find current IP address of domain old-k8s-version-409903 in network mk-old-k8s-version-409903
	I0806 08:34:20.952080   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | I0806 08:34:20.951975   80849 retry.go:31] will retry after 1.622574331s: waiting for machine to come up
	I0806 08:34:22.576150   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:22.576559   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | unable to find current IP address of domain old-k8s-version-409903 in network mk-old-k8s-version-409903
	I0806 08:34:22.576582   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | I0806 08:34:22.576508   80849 retry.go:31] will retry after 1.408827574s: waiting for machine to come up
	I0806 08:34:19.920581   79614 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.355705846s)
	I0806 08:34:19.920609   79614 crio.go:469] duration metric: took 2.355825462s to extract the tarball
	I0806 08:34:19.920619   79614 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0806 08:34:19.959118   79614 ssh_runner.go:195] Run: sudo crictl images --output json
	I0806 08:34:20.002413   79614 crio.go:514] all images are preloaded for cri-o runtime.
	I0806 08:34:20.002449   79614 cache_images.go:84] Images are preloaded, skipping loading
	I0806 08:34:20.002461   79614 kubeadm.go:934] updating node { 192.168.50.235 8444 v1.30.3 crio true true} ...
	I0806 08:34:20.002588   79614 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-990751 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.235
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-990751 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0806 08:34:20.002655   79614 ssh_runner.go:195] Run: crio config
	I0806 08:34:20.058943   79614 cni.go:84] Creating CNI manager for ""
	I0806 08:34:20.058970   79614 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0806 08:34:20.058981   79614 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0806 08:34:20.059010   79614 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.235 APIServerPort:8444 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-990751 NodeName:default-k8s-diff-port-990751 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.235"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.235 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0806 08:34:20.059247   79614 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.235
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-990751"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.235
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.235"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0806 08:34:20.059337   79614 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0806 08:34:20.072766   79614 binaries.go:44] Found k8s binaries, skipping transfer
	I0806 08:34:20.072876   79614 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0806 08:34:20.085730   79614 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0806 08:34:20.105230   79614 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0806 08:34:20.124800   79614 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0806 08:34:20.146547   79614 ssh_runner.go:195] Run: grep 192.168.50.235	control-plane.minikube.internal$ /etc/hosts
	I0806 08:34:20.150692   79614 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.235	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0806 08:34:20.164006   79614 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 08:34:20.286820   79614 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0806 08:34:20.304141   79614 certs.go:68] Setting up /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/default-k8s-diff-port-990751 for IP: 192.168.50.235
	I0806 08:34:20.304166   79614 certs.go:194] generating shared ca certs ...
	I0806 08:34:20.304187   79614 certs.go:226] acquiring lock for ca certs: {Name:mkf5c5aed91f4108054d9f2c742cad66c86afbed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 08:34:20.304386   79614 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19370-13057/.minikube/ca.key
	I0806 08:34:20.304448   79614 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.key
	I0806 08:34:20.304464   79614 certs.go:256] generating profile certs ...
	I0806 08:34:20.304567   79614 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/default-k8s-diff-port-990751/client.key
	I0806 08:34:20.304648   79614 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/default-k8s-diff-port-990751/apiserver.key.aa7eccb7
	I0806 08:34:20.304715   79614 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/default-k8s-diff-port-990751/proxy-client.key
	I0806 08:34:20.304872   79614 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/20246.pem (1338 bytes)
	W0806 08:34:20.304919   79614 certs.go:480] ignoring /home/jenkins/minikube-integration/19370-13057/.minikube/certs/20246_empty.pem, impossibly tiny 0 bytes
	I0806 08:34:20.304929   79614 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca-key.pem (1675 bytes)
	I0806 08:34:20.304960   79614 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem (1078 bytes)
	I0806 08:34:20.304989   79614 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/cert.pem (1123 bytes)
	I0806 08:34:20.305019   79614 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/key.pem (1675 bytes)
	I0806 08:34:20.305089   79614 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem (1708 bytes)
	I0806 08:34:20.305821   79614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0806 08:34:20.341201   79614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0806 08:34:20.375406   79614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0806 08:34:20.407409   79614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0806 08:34:20.449555   79614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/default-k8s-diff-port-990751/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0806 08:34:20.479479   79614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/default-k8s-diff-port-990751/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0806 08:34:20.518719   79614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/default-k8s-diff-port-990751/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0806 08:34:20.544212   79614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/default-k8s-diff-port-990751/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0806 08:34:20.569413   79614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0806 08:34:20.594573   79614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/certs/20246.pem --> /usr/share/ca-certificates/20246.pem (1338 bytes)
	I0806 08:34:20.620183   79614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem --> /usr/share/ca-certificates/202462.pem (1708 bytes)
	I0806 08:34:20.648311   79614 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0806 08:34:20.670845   79614 ssh_runner.go:195] Run: openssl version
	I0806 08:34:20.677437   79614 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202462.pem && ln -fs /usr/share/ca-certificates/202462.pem /etc/ssl/certs/202462.pem"
	I0806 08:34:20.688990   79614 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202462.pem
	I0806 08:34:20.694512   79614 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  6 07:17 /usr/share/ca-certificates/202462.pem
	I0806 08:34:20.694580   79614 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202462.pem
	I0806 08:34:20.701362   79614 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202462.pem /etc/ssl/certs/3ec20f2e.0"
	I0806 08:34:20.712966   79614 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0806 08:34:20.724455   79614 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0806 08:34:20.729111   79614 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  6 07:07 /usr/share/ca-certificates/minikubeCA.pem
	I0806 08:34:20.729165   79614 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0806 08:34:20.734973   79614 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0806 08:34:20.746387   79614 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20246.pem && ln -fs /usr/share/ca-certificates/20246.pem /etc/ssl/certs/20246.pem"
	I0806 08:34:20.757763   79614 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20246.pem
	I0806 08:34:20.762396   79614 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  6 07:17 /usr/share/ca-certificates/20246.pem
	I0806 08:34:20.762454   79614 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20246.pem
	I0806 08:34:20.768261   79614 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20246.pem /etc/ssl/certs/51391683.0"
	I0806 08:34:20.779553   79614 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0806 08:34:20.785854   79614 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0806 08:34:20.794054   79614 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0806 08:34:20.802652   79614 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0806 08:34:20.809929   79614 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0806 08:34:20.816886   79614 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0806 08:34:20.823667   79614 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0806 08:34:20.830355   79614 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-990751 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-990751 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.235 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 08:34:20.830456   79614 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0806 08:34:20.830527   79614 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0806 08:34:20.878368   79614 cri.go:89] found id: ""
	I0806 08:34:20.878455   79614 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0806 08:34:20.889175   79614 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0806 08:34:20.889196   79614 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0806 08:34:20.889250   79614 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0806 08:34:20.899842   79614 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0806 08:34:20.901037   79614 kubeconfig.go:125] found "default-k8s-diff-port-990751" server: "https://192.168.50.235:8444"
	I0806 08:34:20.902781   79614 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0806 08:34:20.914810   79614 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.235
	I0806 08:34:20.914847   79614 kubeadm.go:1160] stopping kube-system containers ...
	I0806 08:34:20.914859   79614 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0806 08:34:20.914905   79614 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0806 08:34:20.956719   79614 cri.go:89] found id: ""
	I0806 08:34:20.956772   79614 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0806 08:34:20.977367   79614 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0806 08:34:20.989491   79614 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0806 08:34:20.989513   79614 kubeadm.go:157] found existing configuration files:
	
	I0806 08:34:20.989565   79614 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0806 08:34:21.000759   79614 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0806 08:34:21.000820   79614 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0806 08:34:21.012404   79614 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0806 08:34:21.023449   79614 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0806 08:34:21.023536   79614 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0806 08:34:21.035354   79614 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0806 08:34:21.045285   79614 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0806 08:34:21.045347   79614 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0806 08:34:21.057182   79614 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0806 08:34:21.067431   79614 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0806 08:34:21.067497   79614 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0806 08:34:21.077955   79614 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0806 08:34:21.088824   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0806 08:34:21.209627   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0806 08:34:21.772624   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0806 08:34:22.022615   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0806 08:34:22.107829   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0806 08:34:22.235446   79614 api_server.go:52] waiting for apiserver process to appear ...
	I0806 08:34:22.235552   79614 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:22.735843   79614 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:23.236385   79614 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:23.314994   79614 api_server.go:72] duration metric: took 1.079547638s to wait for apiserver process to appear ...
	I0806 08:34:23.315025   79614 api_server.go:88] waiting for apiserver healthz status ...
	I0806 08:34:23.315071   79614 api_server.go:253] Checking apiserver healthz at https://192.168.50.235:8444/healthz ...
	I0806 08:34:23.315581   79614 api_server.go:269] stopped: https://192.168.50.235:8444/healthz: Get "https://192.168.50.235:8444/healthz": dial tcp 192.168.50.235:8444: connect: connection refused
	I0806 08:34:23.815428   79614 api_server.go:253] Checking apiserver healthz at https://192.168.50.235:8444/healthz ...
	I0806 08:34:20.145118   79219 pod_ready.go:102] pod "kube-controller-manager-embed-certs-324808" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:23.051556   79219 pod_ready.go:102] pod "kube-controller-manager-embed-certs-324808" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:23.077556   79219 pod_ready.go:92] pod "kube-controller-manager-embed-certs-324808" in "kube-system" namespace has status "Ready":"True"
	I0806 08:34:23.077581   79219 pod_ready.go:81] duration metric: took 4.94079164s for pod "kube-controller-manager-embed-certs-324808" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:23.077594   79219 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8lbzv" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:23.091745   79219 pod_ready.go:92] pod "kube-proxy-8lbzv" in "kube-system" namespace has status "Ready":"True"
	I0806 08:34:23.091773   79219 pod_ready.go:81] duration metric: took 14.170363ms for pod "kube-proxy-8lbzv" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:23.091789   79219 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-324808" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:23.105567   79219 pod_ready.go:92] pod "kube-scheduler-embed-certs-324808" in "kube-system" namespace has status "Ready":"True"
	I0806 08:34:23.105597   79219 pod_ready.go:81] duration metric: took 13.798762ms for pod "kube-scheduler-embed-certs-324808" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:23.105612   79219 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:23.987256   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:23.987658   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | unable to find current IP address of domain old-k8s-version-409903 in network mk-old-k8s-version-409903
	I0806 08:34:23.987683   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | I0806 08:34:23.987617   80849 retry.go:31] will retry after 2.000071561s: waiting for machine to come up
	I0806 08:34:25.989501   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:25.989880   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | unable to find current IP address of domain old-k8s-version-409903 in network mk-old-k8s-version-409903
	I0806 08:34:25.989906   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | I0806 08:34:25.989838   80849 retry.go:31] will retry after 2.508269501s: waiting for machine to come up
	I0806 08:34:28.501449   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:28.501842   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | unable to find current IP address of domain old-k8s-version-409903 in network mk-old-k8s-version-409903
	I0806 08:34:28.501874   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | I0806 08:34:28.501802   80849 retry.go:31] will retry after 2.7583106s: waiting for machine to come up
	I0806 08:34:26.696273   79614 api_server.go:279] https://192.168.50.235:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0806 08:34:26.696325   79614 api_server.go:103] status: https://192.168.50.235:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0806 08:34:26.696342   79614 api_server.go:253] Checking apiserver healthz at https://192.168.50.235:8444/healthz ...
	I0806 08:34:26.704047   79614 api_server.go:279] https://192.168.50.235:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0806 08:34:26.704078   79614 api_server.go:103] status: https://192.168.50.235:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0806 08:34:26.815341   79614 api_server.go:253] Checking apiserver healthz at https://192.168.50.235:8444/healthz ...
	I0806 08:34:26.819686   79614 api_server.go:279] https://192.168.50.235:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0806 08:34:26.819718   79614 api_server.go:103] status: https://192.168.50.235:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0806 08:34:27.315242   79614 api_server.go:253] Checking apiserver healthz at https://192.168.50.235:8444/healthz ...
	I0806 08:34:27.320753   79614 api_server.go:279] https://192.168.50.235:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0806 08:34:27.320789   79614 api_server.go:103] status: https://192.168.50.235:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0806 08:34:27.815164   79614 api_server.go:253] Checking apiserver healthz at https://192.168.50.235:8444/healthz ...
	I0806 08:34:27.821943   79614 api_server.go:279] https://192.168.50.235:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0806 08:34:27.821967   79614 api_server.go:103] status: https://192.168.50.235:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0806 08:34:28.315169   79614 api_server.go:253] Checking apiserver healthz at https://192.168.50.235:8444/healthz ...
	I0806 08:34:28.321921   79614 api_server.go:279] https://192.168.50.235:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0806 08:34:28.321945   79614 api_server.go:103] status: https://192.168.50.235:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0806 08:34:28.815717   79614 api_server.go:253] Checking apiserver healthz at https://192.168.50.235:8444/healthz ...
	I0806 08:34:28.819946   79614 api_server.go:279] https://192.168.50.235:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0806 08:34:28.819973   79614 api_server.go:103] status: https://192.168.50.235:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0806 08:34:25.112333   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:27.112488   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:29.315643   79614 api_server.go:253] Checking apiserver healthz at https://192.168.50.235:8444/healthz ...
	I0806 08:34:29.319784   79614 api_server.go:279] https://192.168.50.235:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0806 08:34:29.319813   79614 api_server.go:103] status: https://192.168.50.235:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0806 08:34:29.815945   79614 api_server.go:253] Checking apiserver healthz at https://192.168.50.235:8444/healthz ...
	I0806 08:34:29.821441   79614 api_server.go:279] https://192.168.50.235:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0806 08:34:29.821476   79614 api_server.go:103] status: https://192.168.50.235:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0806 08:34:30.315997   79614 api_server.go:253] Checking apiserver healthz at https://192.168.50.235:8444/healthz ...
	I0806 08:34:30.321179   79614 api_server.go:279] https://192.168.50.235:8444/healthz returned 200:
	ok
	I0806 08:34:30.327083   79614 api_server.go:141] control plane version: v1.30.3
	I0806 08:34:30.327113   79614 api_server.go:131] duration metric: took 7.012080775s to wait for apiserver health ...
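
The block above is the usual apiserver bring-up sequence: /healthz answers 403 while anonymous access is still forbidden, then 500 while post-start hooks such as rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes finish, and finally 200. A minimal sketch of that polling loop, assuming a hypothetical waitForHealthz helper, a made-up 500ms interval, and relaxed TLS (this is not the code behind api_server.go):

// Keep hitting /healthz until it answers 200 OK or the deadline passes.
package main

import (
	"context"
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(ctx context.Context, url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver presents a cluster-internal cert during bring-up, so
		// this sketch skips verification; real code would pin the cluster CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // the "returned 200: ok" case above
			}
			// 403 (RBAC not bootstrapped yet) and 500 (post-start hooks still
			// failing) are expected while the apiserver settles.
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()
	if err := waitForHealthz(ctx, "https://192.168.50.235:8444/healthz"); err != nil {
		fmt.Println("apiserver never became healthy:", err)
	}
}
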
	I0806 08:34:30.327122   79614 cni.go:84] Creating CNI manager for ""
	I0806 08:34:30.327128   79614 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0806 08:34:30.328860   79614 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0806 08:34:30.330232   79614 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0806 08:34:30.341069   79614 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
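
The 496-byte conflist itself is not reproduced in the log. For orientation only, a generic bridge-plus-portmap conflist of the shape CRI-O reads from /etc/cni/net.d looks roughly like the following; the field values are illustrative and not the exact file minikube writes:

{
  "cniVersion": "0.4.0",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}
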
	I0806 08:34:30.359470   79614 system_pods.go:43] waiting for kube-system pods to appear ...
	I0806 08:34:30.369165   79614 system_pods.go:59] 8 kube-system pods found
	I0806 08:34:30.369209   79614 system_pods.go:61] "coredns-7db6d8ff4d-gx8tt" [48f83ba8-a238-4baf-94af-88370fefcb02] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0806 08:34:30.369227   79614 system_pods.go:61] "etcd-default-k8s-diff-port-990751" [bf342ba4-1e11-40bf-80fd-02b402043ecf] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0806 08:34:30.369246   79614 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-990751" [11ee42c4-c9e9-41cf-ace2-deac0f30153a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0806 08:34:30.369256   79614 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-990751" [f17fca30-0afc-4ed8-a8cc-cb64e69723f3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0806 08:34:30.369265   79614 system_pods.go:61] "kube-proxy-lpf88" [866c10f0-2c44-4c03-b460-ff2194bd1ec6] Running
	I0806 08:34:30.369272   79614 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-990751" [2e591ea7-07bd-464f-bf0e-bc24bfcca016] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0806 08:34:30.369282   79614 system_pods.go:61] "metrics-server-569cc877fc-hnxd4" [f369ac70-49b7-448a-9852-89232fb66da9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0806 08:34:30.369290   79614 system_pods.go:61] "storage-provisioner" [0faff3fc-d34b-418e-972b-95a2e36b577a] Running
	I0806 08:34:30.369299   79614 system_pods.go:74] duration metric: took 9.80217ms to wait for pod list to return data ...
	I0806 08:34:30.369311   79614 node_conditions.go:102] verifying NodePressure condition ...
	I0806 08:34:30.372222   79614 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0806 08:34:30.372250   79614 node_conditions.go:123] node cpu capacity is 2
	I0806 08:34:30.372261   79614 node_conditions.go:105] duration metric: took 2.944552ms to run NodePressure ...
	I0806 08:34:30.372279   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0806 08:34:30.639677   79614 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0806 08:34:30.643923   79614 kubeadm.go:739] kubelet initialised
	I0806 08:34:30.643952   79614 kubeadm.go:740] duration metric: took 4.248408ms waiting for restarted kubelet to initialise ...
	I0806 08:34:30.643962   79614 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0806 08:34:30.648766   79614 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-gx8tt" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:30.653672   79614 pod_ready.go:97] node "default-k8s-diff-port-990751" hosting pod "coredns-7db6d8ff4d-gx8tt" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-990751" has status "Ready":"False"
	I0806 08:34:30.653703   79614 pod_ready.go:81] duration metric: took 4.90961ms for pod "coredns-7db6d8ff4d-gx8tt" in "kube-system" namespace to be "Ready" ...
	E0806 08:34:30.653714   79614 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-990751" hosting pod "coredns-7db6d8ff4d-gx8tt" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-990751" has status "Ready":"False"
	I0806 08:34:30.653723   79614 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-990751" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:30.658080   79614 pod_ready.go:97] node "default-k8s-diff-port-990751" hosting pod "etcd-default-k8s-diff-port-990751" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-990751" has status "Ready":"False"
	I0806 08:34:30.658106   79614 pod_ready.go:81] duration metric: took 4.370083ms for pod "etcd-default-k8s-diff-port-990751" in "kube-system" namespace to be "Ready" ...
	E0806 08:34:30.658119   79614 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-990751" hosting pod "etcd-default-k8s-diff-port-990751" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-990751" has status "Ready":"False"
	I0806 08:34:30.658125   79614 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-990751" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:30.662246   79614 pod_ready.go:97] node "default-k8s-diff-port-990751" hosting pod "kube-apiserver-default-k8s-diff-port-990751" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-990751" has status "Ready":"False"
	I0806 08:34:30.662271   79614 pod_ready.go:81] duration metric: took 4.139553ms for pod "kube-apiserver-default-k8s-diff-port-990751" in "kube-system" namespace to be "Ready" ...
	E0806 08:34:30.662280   79614 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-990751" hosting pod "kube-apiserver-default-k8s-diff-port-990751" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-990751" has status "Ready":"False"
	I0806 08:34:30.662286   79614 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-990751" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:30.763246   79614 pod_ready.go:97] node "default-k8s-diff-port-990751" hosting pod "kube-controller-manager-default-k8s-diff-port-990751" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-990751" has status "Ready":"False"
	I0806 08:34:30.763286   79614 pod_ready.go:81] duration metric: took 100.991212ms for pod "kube-controller-manager-default-k8s-diff-port-990751" in "kube-system" namespace to be "Ready" ...
	E0806 08:34:30.763302   79614 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-990751" hosting pod "kube-controller-manager-default-k8s-diff-port-990751" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-990751" has status "Ready":"False"
	I0806 08:34:30.763311   79614 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-lpf88" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:31.162972   79614 pod_ready.go:97] node "default-k8s-diff-port-990751" hosting pod "kube-proxy-lpf88" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-990751" has status "Ready":"False"
	I0806 08:34:31.162998   79614 pod_ready.go:81] duration metric: took 399.678401ms for pod "kube-proxy-lpf88" in "kube-system" namespace to be "Ready" ...
	E0806 08:34:31.163006   79614 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-990751" hosting pod "kube-proxy-lpf88" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-990751" has status "Ready":"False"
	I0806 08:34:31.163012   79614 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-990751" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:31.562538   79614 pod_ready.go:97] node "default-k8s-diff-port-990751" hosting pod "kube-scheduler-default-k8s-diff-port-990751" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-990751" has status "Ready":"False"
	I0806 08:34:31.562564   79614 pod_ready.go:81] duration metric: took 399.544055ms for pod "kube-scheduler-default-k8s-diff-port-990751" in "kube-system" namespace to be "Ready" ...
	E0806 08:34:31.562575   79614 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-990751" hosting pod "kube-scheduler-default-k8s-diff-port-990751" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-990751" has status "Ready":"False"
	I0806 08:34:31.562582   79614 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:31.963410   79614 pod_ready.go:97] node "default-k8s-diff-port-990751" hosting pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-990751" has status "Ready":"False"
	I0806 08:34:31.963438   79614 pod_ready.go:81] duration metric: took 400.849931ms for pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace to be "Ready" ...
	E0806 08:34:31.963449   79614 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-990751" hosting pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-990751" has status "Ready":"False"
	I0806 08:34:31.963456   79614 pod_ready.go:38] duration metric: took 1.319483278s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
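
pod_ready.go above waits for each system-critical pod to report the Ready condition and, as the "skipping!" lines show, gives up early on pods whose node is not yet Ready. A hedged client-go sketch of the basic wait, with hypothetical helper names and illustrative timeouts (it is not minikube's pod_ready.go and omits the node-status shortcut):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the PodReady condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

// waitForPod polls the pod until it is Ready or the context expires.
func waitForPod(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	for {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err == nil && podReady(pod) {
			return nil
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("pod %s/%s never became Ready: %w", ns, name, ctx.Err())
		case <-time.After(2 * time.Second):
		}
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	fmt.Println(waitForPod(ctx, cs, "kube-system", "etcd-default-k8s-diff-port-990751"))
}
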
	I0806 08:34:31.963472   79614 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0806 08:34:31.975982   79614 ops.go:34] apiserver oom_adj: -16
	I0806 08:34:31.976006   79614 kubeadm.go:597] duration metric: took 11.086801084s to restartPrimaryControlPlane
	I0806 08:34:31.976016   79614 kubeadm.go:394] duration metric: took 11.145668347s to StartCluster
	I0806 08:34:31.976035   79614 settings.go:142] acquiring lock: {Name:mkb0bfbcae3b4205ef43487a9de84cfd8afd2e5e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 08:34:31.976131   79614 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19370-13057/kubeconfig
	I0806 08:34:31.977626   79614 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-13057/kubeconfig: {Name:mk5c066914acf8e36556b29792f75b98076ca34a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 08:34:31.977871   79614 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.235 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0806 08:34:31.977927   79614 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0806 08:34:31.978004   79614 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-990751"
	I0806 08:34:31.978058   79614 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-990751"
	W0806 08:34:31.978070   79614 addons.go:243] addon storage-provisioner should already be in state true
	I0806 08:34:31.978060   79614 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-990751"
	I0806 08:34:31.978092   79614 config.go:182] Loaded profile config "default-k8s-diff-port-990751": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0806 08:34:31.978076   79614 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-990751"
	I0806 08:34:31.978124   79614 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-990751"
	I0806 08:34:31.978154   79614 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-990751"
	I0806 08:34:31.978111   79614 host.go:66] Checking if "default-k8s-diff-port-990751" exists ...
	W0806 08:34:31.978172   79614 addons.go:243] addon metrics-server should already be in state true
	I0806 08:34:31.978250   79614 host.go:66] Checking if "default-k8s-diff-port-990751" exists ...
	I0806 08:34:31.978595   79614 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:34:31.978651   79614 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:34:31.978602   79614 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:34:31.978691   79614 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:34:31.978650   79614 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:34:31.978806   79614 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:34:31.979749   79614 out.go:177] * Verifying Kubernetes components...
	I0806 08:34:31.981485   79614 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 08:34:31.994362   79614 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36289
	I0806 08:34:31.995043   79614 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:34:31.995594   79614 main.go:141] libmachine: Using API Version  1
	I0806 08:34:31.995610   79614 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:34:31.995940   79614 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:34:31.996152   79614 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36601
	I0806 08:34:31.996485   79614 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:34:31.996569   79614 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:34:31.996613   79614 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:34:31.996785   79614 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42273
	I0806 08:34:31.997043   79614 main.go:141] libmachine: Using API Version  1
	I0806 08:34:31.997062   79614 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:34:31.997335   79614 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:34:31.997424   79614 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:34:31.997601   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetState
	I0806 08:34:31.997759   79614 main.go:141] libmachine: Using API Version  1
	I0806 08:34:31.997775   79614 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:34:31.998047   79614 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:34:31.998511   79614 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:34:31.998546   79614 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:34:32.001381   79614 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-990751"
	W0806 08:34:32.001397   79614 addons.go:243] addon default-storageclass should already be in state true
	I0806 08:34:32.001427   79614 host.go:66] Checking if "default-k8s-diff-port-990751" exists ...
	I0806 08:34:32.001690   79614 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:34:32.001721   79614 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:34:32.015095   79614 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41633
	I0806 08:34:32.015597   79614 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:34:32.016197   79614 main.go:141] libmachine: Using API Version  1
	I0806 08:34:32.016220   79614 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:34:32.016245   79614 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33731
	I0806 08:34:32.016951   79614 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:34:32.016953   79614 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:34:32.017491   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetState
	I0806 08:34:32.017572   79614 main.go:141] libmachine: Using API Version  1
	I0806 08:34:32.017601   79614 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:34:32.017951   79614 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:34:32.018222   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetState
	I0806 08:34:32.020339   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .DriverName
	I0806 08:34:32.020347   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .DriverName
	I0806 08:34:32.021417   79614 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40949
	I0806 08:34:32.021814   79614 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:34:32.022232   79614 main.go:141] libmachine: Using API Version  1
	I0806 08:34:32.022254   79614 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:34:32.022609   79614 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:34:32.022690   79614 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0806 08:34:32.022690   79614 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0806 08:34:32.023365   79614 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:34:32.023410   79614 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:34:32.024304   79614 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0806 08:34:32.024326   79614 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0806 08:34:32.024343   79614 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0806 08:34:32.024358   79614 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0806 08:34:32.024387   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHHostname
	I0806 08:34:32.024346   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHHostname
	I0806 08:34:32.029046   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:32.029292   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:32.029490   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:51:bd", ip: ""} in network mk-default-k8s-diff-port-990751: {Iface:virbr3 ExpiryTime:2024-08-06 09:34:06 +0000 UTC Type:0 Mac:52:54:00:fb:51:bd Iaid: IPaddr:192.168.50.235 Prefix:24 Hostname:default-k8s-diff-port-990751 Clientid:01:52:54:00:fb:51:bd}
	I0806 08:34:32.029516   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined IP address 192.168.50.235 and MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:32.029672   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:51:bd", ip: ""} in network mk-default-k8s-diff-port-990751: {Iface:virbr3 ExpiryTime:2024-08-06 09:34:06 +0000 UTC Type:0 Mac:52:54:00:fb:51:bd Iaid: IPaddr:192.168.50.235 Prefix:24 Hostname:default-k8s-diff-port-990751 Clientid:01:52:54:00:fb:51:bd}
	I0806 08:34:32.029676   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHPort
	I0806 08:34:32.029694   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined IP address 192.168.50.235 and MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:32.029840   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHPort
	I0806 08:34:32.029866   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHKeyPath
	I0806 08:34:32.030000   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHKeyPath
	I0806 08:34:32.030122   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHUsername
	I0806 08:34:32.030124   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHUsername
	I0806 08:34:32.030265   79614 sshutil.go:53] new ssh client: &{IP:192.168.50.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/default-k8s-diff-port-990751/id_rsa Username:docker}
	I0806 08:34:32.030265   79614 sshutil.go:53] new ssh client: &{IP:192.168.50.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/default-k8s-diff-port-990751/id_rsa Username:docker}
	I0806 08:34:32.043523   79614 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40021
	I0806 08:34:32.043915   79614 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:34:32.044560   79614 main.go:141] libmachine: Using API Version  1
	I0806 08:34:32.044583   79614 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:34:32.044969   79614 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:34:32.045214   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetState
	I0806 08:34:32.046980   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .DriverName
	I0806 08:34:32.047261   79614 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0806 08:34:32.047277   79614 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0806 08:34:32.047296   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHHostname
	I0806 08:34:32.050654   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:32.050991   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:51:bd", ip: ""} in network mk-default-k8s-diff-port-990751: {Iface:virbr3 ExpiryTime:2024-08-06 09:34:06 +0000 UTC Type:0 Mac:52:54:00:fb:51:bd Iaid: IPaddr:192.168.50.235 Prefix:24 Hostname:default-k8s-diff-port-990751 Clientid:01:52:54:00:fb:51:bd}
	I0806 08:34:32.051020   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined IP address 192.168.50.235 and MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:32.051152   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHPort
	I0806 08:34:32.051329   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHKeyPath
	I0806 08:34:32.051495   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHUsername
	I0806 08:34:32.051613   79614 sshutil.go:53] new ssh client: &{IP:192.168.50.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/default-k8s-diff-port-990751/id_rsa Username:docker}
	I0806 08:34:32.192056   79614 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0806 08:34:32.231428   79614 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-990751" to be "Ready" ...
	I0806 08:34:32.278049   79614 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0806 08:34:32.378290   79614 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0806 08:34:32.378317   79614 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0806 08:34:32.399728   79614 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0806 08:34:32.419412   79614 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0806 08:34:32.419441   79614 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0806 08:34:32.470691   79614 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0806 08:34:32.470726   79614 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0806 08:34:32.514191   79614 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0806 08:34:32.590737   79614 main.go:141] libmachine: Making call to close driver server
	I0806 08:34:32.590772   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .Close
	I0806 08:34:32.591110   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | Closing plugin on server side
	I0806 08:34:32.591177   79614 main.go:141] libmachine: Successfully made call to close driver server
	I0806 08:34:32.591190   79614 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 08:34:32.591202   79614 main.go:141] libmachine: Making call to close driver server
	I0806 08:34:32.591212   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .Close
	I0806 08:34:32.591465   79614 main.go:141] libmachine: Successfully made call to close driver server
	I0806 08:34:32.591492   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | Closing plugin on server side
	I0806 08:34:32.591505   79614 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 08:34:32.597276   79614 main.go:141] libmachine: Making call to close driver server
	I0806 08:34:32.597297   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .Close
	I0806 08:34:32.597547   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | Closing plugin on server side
	I0806 08:34:32.597564   79614 main.go:141] libmachine: Successfully made call to close driver server
	I0806 08:34:32.597579   79614 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 08:34:33.401000   79614 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.001229332s)
	I0806 08:34:33.401045   79614 main.go:141] libmachine: Making call to close driver server
	I0806 08:34:33.401062   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .Close
	I0806 08:34:33.401346   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | Closing plugin on server side
	I0806 08:34:33.401402   79614 main.go:141] libmachine: Successfully made call to close driver server
	I0806 08:34:33.401426   79614 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 08:34:33.401443   79614 main.go:141] libmachine: Making call to close driver server
	I0806 08:34:33.401452   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .Close
	I0806 08:34:33.401692   79614 main.go:141] libmachine: Successfully made call to close driver server
	I0806 08:34:33.401720   79614 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 08:34:33.455633   79614 main.go:141] libmachine: Making call to close driver server
	I0806 08:34:33.455663   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .Close
	I0806 08:34:33.455947   79614 main.go:141] libmachine: Successfully made call to close driver server
	I0806 08:34:33.455964   79614 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 08:34:33.455964   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | Closing plugin on server side
	I0806 08:34:33.455972   79614 main.go:141] libmachine: Making call to close driver server
	I0806 08:34:33.455980   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .Close
	I0806 08:34:33.456288   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | Closing plugin on server side
	I0806 08:34:33.456286   79614 main.go:141] libmachine: Successfully made call to close driver server
	I0806 08:34:33.456328   79614 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 08:34:33.456347   79614 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-990751"
	I0806 08:34:33.459411   79614 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0806 08:34:31.262960   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:31.263282   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | unable to find current IP address of domain old-k8s-version-409903 in network mk-old-k8s-version-409903
	I0806 08:34:31.263304   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | I0806 08:34:31.263238   80849 retry.go:31] will retry after 4.76596973s: waiting for machine to come up
	I0806 08:34:33.460780   79614 addons.go:510] duration metric: took 1.482853001s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0806 08:34:29.611714   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:31.612385   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:34.111936   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:37.353538   78922 start.go:364] duration metric: took 58.142260904s to acquireMachinesLock for "no-preload-703340"
	I0806 08:34:37.353587   78922 start.go:96] Skipping create...Using existing machine configuration
	I0806 08:34:37.353598   78922 fix.go:54] fixHost starting: 
	I0806 08:34:37.354016   78922 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:34:37.354050   78922 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:34:37.373838   78922 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37209
	I0806 08:34:37.374283   78922 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:34:37.374874   78922 main.go:141] libmachine: Using API Version  1
	I0806 08:34:37.374910   78922 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:34:37.375250   78922 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:34:37.375448   78922 main.go:141] libmachine: (no-preload-703340) Calling .DriverName
	I0806 08:34:37.375584   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetState
	I0806 08:34:37.377251   78922 fix.go:112] recreateIfNeeded on no-preload-703340: state=Stopped err=<nil>
	I0806 08:34:37.377275   78922 main.go:141] libmachine: (no-preload-703340) Calling .DriverName
	W0806 08:34:37.377421   78922 fix.go:138] unexpected machine state, will restart: <nil>
	I0806 08:34:37.379343   78922 out.go:177] * Restarting existing kvm2 VM for "no-preload-703340" ...
	I0806 08:34:36.030512   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:36.030950   79927 main.go:141] libmachine: (old-k8s-version-409903) Found IP for machine: 192.168.61.158
	I0806 08:34:36.030972   79927 main.go:141] libmachine: (old-k8s-version-409903) Reserving static IP address...
	I0806 08:34:36.030989   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has current primary IP address 192.168.61.158 and MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:36.031349   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | found host DHCP lease matching {name: "old-k8s-version-409903", mac: "52:54:00:12:30:88", ip: "192.168.61.158"} in network mk-old-k8s-version-409903: {Iface:virbr1 ExpiryTime:2024-08-06 09:34:26 +0000 UTC Type:0 Mac:52:54:00:12:30:88 Iaid: IPaddr:192.168.61.158 Prefix:24 Hostname:old-k8s-version-409903 Clientid:01:52:54:00:12:30:88}
	I0806 08:34:36.031376   79927 main.go:141] libmachine: (old-k8s-version-409903) Reserved static IP address: 192.168.61.158
	I0806 08:34:36.031398   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | skip adding static IP to network mk-old-k8s-version-409903 - found existing host DHCP lease matching {name: "old-k8s-version-409903", mac: "52:54:00:12:30:88", ip: "192.168.61.158"}
	I0806 08:34:36.031416   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | Getting to WaitForSSH function...
	I0806 08:34:36.031428   79927 main.go:141] libmachine: (old-k8s-version-409903) Waiting for SSH to be available...
	I0806 08:34:36.033684   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:36.034094   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:30:88", ip: ""} in network mk-old-k8s-version-409903: {Iface:virbr1 ExpiryTime:2024-08-06 09:34:26 +0000 UTC Type:0 Mac:52:54:00:12:30:88 Iaid: IPaddr:192.168.61.158 Prefix:24 Hostname:old-k8s-version-409903 Clientid:01:52:54:00:12:30:88}
	I0806 08:34:36.034124   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined IP address 192.168.61.158 and MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:36.034327   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | Using SSH client type: external
	I0806 08:34:36.034366   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | Using SSH private key: /home/jenkins/minikube-integration/19370-13057/.minikube/machines/old-k8s-version-409903/id_rsa (-rw-------)
	I0806 08:34:36.034395   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.158 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19370-13057/.minikube/machines/old-k8s-version-409903/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0806 08:34:36.034407   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | About to run SSH command:
	I0806 08:34:36.034415   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | exit 0
	I0806 08:34:36.164605   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | SSH cmd err, output: <nil>: 
	I0806 08:34:36.164952   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetConfigRaw
	I0806 08:34:36.165509   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetIP
	I0806 08:34:36.168135   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:36.168471   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:30:88", ip: ""} in network mk-old-k8s-version-409903: {Iface:virbr1 ExpiryTime:2024-08-06 09:34:26 +0000 UTC Type:0 Mac:52:54:00:12:30:88 Iaid: IPaddr:192.168.61.158 Prefix:24 Hostname:old-k8s-version-409903 Clientid:01:52:54:00:12:30:88}
	I0806 08:34:36.168499   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined IP address 192.168.61.158 and MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:36.168756   79927 profile.go:143] Saving config to /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/old-k8s-version-409903/config.json ...
	I0806 08:34:36.168956   79927 machine.go:94] provisionDockerMachine start ...
	I0806 08:34:36.168974   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .DriverName
	I0806 08:34:36.169186   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHHostname
	I0806 08:34:36.171367   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:36.171702   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:30:88", ip: ""} in network mk-old-k8s-version-409903: {Iface:virbr1 ExpiryTime:2024-08-06 09:34:26 +0000 UTC Type:0 Mac:52:54:00:12:30:88 Iaid: IPaddr:192.168.61.158 Prefix:24 Hostname:old-k8s-version-409903 Clientid:01:52:54:00:12:30:88}
	I0806 08:34:36.171723   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined IP address 192.168.61.158 and MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:36.171902   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHPort
	I0806 08:34:36.172080   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHKeyPath
	I0806 08:34:36.172240   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHKeyPath
	I0806 08:34:36.172400   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHUsername
	I0806 08:34:36.172569   79927 main.go:141] libmachine: Using SSH client type: native
	I0806 08:34:36.172798   79927 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.158 22 <nil> <nil>}
	I0806 08:34:36.172812   79927 main.go:141] libmachine: About to run SSH command:
	hostname
	I0806 08:34:36.288738   79927 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0806 08:34:36.288770   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetMachineName
	I0806 08:34:36.289073   79927 buildroot.go:166] provisioning hostname "old-k8s-version-409903"
	I0806 08:34:36.289088   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetMachineName
	I0806 08:34:36.289272   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHHostname
	I0806 08:34:36.291958   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:36.292326   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:30:88", ip: ""} in network mk-old-k8s-version-409903: {Iface:virbr1 ExpiryTime:2024-08-06 09:34:26 +0000 UTC Type:0 Mac:52:54:00:12:30:88 Iaid: IPaddr:192.168.61.158 Prefix:24 Hostname:old-k8s-version-409903 Clientid:01:52:54:00:12:30:88}
	I0806 08:34:36.292354   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined IP address 192.168.61.158 and MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:36.292462   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHPort
	I0806 08:34:36.292662   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHKeyPath
	I0806 08:34:36.292807   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHKeyPath
	I0806 08:34:36.292953   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHUsername
	I0806 08:34:36.293122   79927 main.go:141] libmachine: Using SSH client type: native
	I0806 08:34:36.293287   79927 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.158 22 <nil> <nil>}
	I0806 08:34:36.293300   79927 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-409903 && echo "old-k8s-version-409903" | sudo tee /etc/hostname
	I0806 08:34:36.427591   79927 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-409903
	
	I0806 08:34:36.427620   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHHostname
	I0806 08:34:36.430395   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:36.430724   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:30:88", ip: ""} in network mk-old-k8s-version-409903: {Iface:virbr1 ExpiryTime:2024-08-06 09:34:26 +0000 UTC Type:0 Mac:52:54:00:12:30:88 Iaid: IPaddr:192.168.61.158 Prefix:24 Hostname:old-k8s-version-409903 Clientid:01:52:54:00:12:30:88}
	I0806 08:34:36.430758   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined IP address 192.168.61.158 and MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:36.430900   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHPort
	I0806 08:34:36.431133   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHKeyPath
	I0806 08:34:36.431311   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHKeyPath
	I0806 08:34:36.431476   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHUsername
	I0806 08:34:36.431662   79927 main.go:141] libmachine: Using SSH client type: native
	I0806 08:34:36.431876   79927 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.158 22 <nil> <nil>}
	I0806 08:34:36.431895   79927 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-409903' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-409903/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-409903' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0806 08:34:36.554542   79927 main.go:141] libmachine: SSH cmd err, output: <nil>: 
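
	For reference, the shell snippet run over SSH above performs an idempotent /etc/hosts update: if no entry already ends in the hostname, it either rewrites the existing 127.0.1.1 line or appends one. The Go program below is a minimal standalone sketch of that same logic against a local copy of the file (an illustration, not minikube's implementation); the hostname comes from the log, the file path is assumed.

	// ensure_hostname_entry.go - sketch of the idempotent /etc/hosts update logged above.
	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func ensureHostnameEntry(path, hostname string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		lines := strings.Split(string(data), "\n")

		// Equivalent of: grep -xq '.*\s<hostname>' /etc/hosts
		for _, l := range lines {
			fields := strings.Fields(l)
			if len(fields) >= 2 && fields[len(fields)-1] == hostname {
				return nil // entry already present, nothing to do
			}
		}

		// Equivalent of: sed -i 's/^127.0.1.1\s.*/127.0.1.1 <hostname>/' ... else append.
		replaced := false
		for i, l := range lines {
			if strings.HasPrefix(l, "127.0.1.1") {
				lines[i] = "127.0.1.1 " + hostname
				replaced = true
				break
			}
		}
		if !replaced {
			lines = append(lines, "127.0.1.1 "+hostname)
		}
		return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0644)
	}

	func main() {
		// Hostname taken from the log; "hosts.copy" is a local file used for illustration.
		if err := ensureHostnameEntry("hosts.copy", "old-k8s-version-409903"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}
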
	I0806 08:34:36.554574   79927 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19370-13057/.minikube CaCertPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19370-13057/.minikube}
	I0806 08:34:36.554613   79927 buildroot.go:174] setting up certificates
	I0806 08:34:36.554629   79927 provision.go:84] configureAuth start
	I0806 08:34:36.554646   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetMachineName
	I0806 08:34:36.554932   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetIP
	I0806 08:34:36.557899   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:36.558351   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:30:88", ip: ""} in network mk-old-k8s-version-409903: {Iface:virbr1 ExpiryTime:2024-08-06 09:34:26 +0000 UTC Type:0 Mac:52:54:00:12:30:88 Iaid: IPaddr:192.168.61.158 Prefix:24 Hostname:old-k8s-version-409903 Clientid:01:52:54:00:12:30:88}
	I0806 08:34:36.558373   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined IP address 192.168.61.158 and MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:36.558530   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHHostname
	I0806 08:34:36.560878   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:36.561153   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:30:88", ip: ""} in network mk-old-k8s-version-409903: {Iface:virbr1 ExpiryTime:2024-08-06 09:34:26 +0000 UTC Type:0 Mac:52:54:00:12:30:88 Iaid: IPaddr:192.168.61.158 Prefix:24 Hostname:old-k8s-version-409903 Clientid:01:52:54:00:12:30:88}
	I0806 08:34:36.561182   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined IP address 192.168.61.158 and MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:36.561339   79927 provision.go:143] copyHostCerts
	I0806 08:34:36.561400   79927 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-13057/.minikube/ca.pem, removing ...
	I0806 08:34:36.561410   79927 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-13057/.minikube/ca.pem
	I0806 08:34:36.561462   79927 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19370-13057/.minikube/ca.pem (1078 bytes)
	I0806 08:34:36.561563   79927 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-13057/.minikube/cert.pem, removing ...
	I0806 08:34:36.561572   79927 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-13057/.minikube/cert.pem
	I0806 08:34:36.561594   79927 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19370-13057/.minikube/cert.pem (1123 bytes)
	I0806 08:34:36.561647   79927 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-13057/.minikube/key.pem, removing ...
	I0806 08:34:36.561653   79927 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-13057/.minikube/key.pem
	I0806 08:34:36.561670   79927 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19370-13057/.minikube/key.pem (1675 bytes)
	I0806 08:34:36.561752   79927 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19370-13057/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-409903 san=[127.0.0.1 192.168.61.158 localhost minikube old-k8s-version-409903]
	I0806 08:34:36.646180   79927 provision.go:177] copyRemoteCerts
	I0806 08:34:36.646233   79927 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0806 08:34:36.646257   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHHostname
	I0806 08:34:36.649020   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:36.649380   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:30:88", ip: ""} in network mk-old-k8s-version-409903: {Iface:virbr1 ExpiryTime:2024-08-06 09:34:26 +0000 UTC Type:0 Mac:52:54:00:12:30:88 Iaid: IPaddr:192.168.61.158 Prefix:24 Hostname:old-k8s-version-409903 Clientid:01:52:54:00:12:30:88}
	I0806 08:34:36.649405   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined IP address 192.168.61.158 and MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:36.649564   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHPort
	I0806 08:34:36.649791   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHKeyPath
	I0806 08:34:36.649962   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHUsername
	I0806 08:34:36.650092   79927 sshutil.go:53] new ssh client: &{IP:192.168.61.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/old-k8s-version-409903/id_rsa Username:docker}
	I0806 08:34:36.738988   79927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0806 08:34:36.765104   79927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0806 08:34:36.791946   79927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0806 08:34:36.817128   79927 provision.go:87] duration metric: took 262.482885ms to configureAuth
	I0806 08:34:36.817157   79927 buildroot.go:189] setting minikube options for container-runtime
	I0806 08:34:36.817352   79927 config.go:182] Loaded profile config "old-k8s-version-409903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0806 08:34:36.817435   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHHostname
	I0806 08:34:36.820142   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:36.820480   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:30:88", ip: ""} in network mk-old-k8s-version-409903: {Iface:virbr1 ExpiryTime:2024-08-06 09:34:26 +0000 UTC Type:0 Mac:52:54:00:12:30:88 Iaid: IPaddr:192.168.61.158 Prefix:24 Hostname:old-k8s-version-409903 Clientid:01:52:54:00:12:30:88}
	I0806 08:34:36.820520   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined IP address 192.168.61.158 and MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:36.820659   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHPort
	I0806 08:34:36.820892   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHKeyPath
	I0806 08:34:36.821095   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHKeyPath
	I0806 08:34:36.821289   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHUsername
	I0806 08:34:36.821459   79927 main.go:141] libmachine: Using SSH client type: native
	I0806 08:34:36.821703   79927 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.158 22 <nil> <nil>}
	I0806 08:34:36.821722   79927 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0806 08:34:37.099949   79927 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0806 08:34:37.099979   79927 machine.go:97] duration metric: took 931.010538ms to provisionDockerMachine
	I0806 08:34:37.099993   79927 start.go:293] postStartSetup for "old-k8s-version-409903" (driver="kvm2")
	I0806 08:34:37.100007   79927 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0806 08:34:37.100028   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .DriverName
	I0806 08:34:37.100379   79927 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0806 08:34:37.100414   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHHostname
	I0806 08:34:37.103297   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:37.103691   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:30:88", ip: ""} in network mk-old-k8s-version-409903: {Iface:virbr1 ExpiryTime:2024-08-06 09:34:26 +0000 UTC Type:0 Mac:52:54:00:12:30:88 Iaid: IPaddr:192.168.61.158 Prefix:24 Hostname:old-k8s-version-409903 Clientid:01:52:54:00:12:30:88}
	I0806 08:34:37.103717   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined IP address 192.168.61.158 and MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:37.103887   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHPort
	I0806 08:34:37.104106   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHKeyPath
	I0806 08:34:37.104263   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHUsername
	I0806 08:34:37.104402   79927 sshutil.go:53] new ssh client: &{IP:192.168.61.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/old-k8s-version-409903/id_rsa Username:docker}
	I0806 08:34:37.192035   79927 ssh_runner.go:195] Run: cat /etc/os-release
	I0806 08:34:37.196693   79927 info.go:137] Remote host: Buildroot 2023.02.9
	I0806 08:34:37.196712   79927 filesync.go:126] Scanning /home/jenkins/minikube-integration/19370-13057/.minikube/addons for local assets ...
	I0806 08:34:37.196776   79927 filesync.go:126] Scanning /home/jenkins/minikube-integration/19370-13057/.minikube/files for local assets ...
	I0806 08:34:37.196861   79927 filesync.go:149] local asset: /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem -> 202462.pem in /etc/ssl/certs
	I0806 08:34:37.196957   79927 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0806 08:34:37.206906   79927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem --> /etc/ssl/certs/202462.pem (1708 bytes)
	I0806 08:34:37.232270   79927 start.go:296] duration metric: took 132.262362ms for postStartSetup
	I0806 08:34:37.232311   79927 fix.go:56] duration metric: took 22.878683419s for fixHost
	I0806 08:34:37.232332   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHHostname
	I0806 08:34:37.235278   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:37.235721   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:30:88", ip: ""} in network mk-old-k8s-version-409903: {Iface:virbr1 ExpiryTime:2024-08-06 09:34:26 +0000 UTC Type:0 Mac:52:54:00:12:30:88 Iaid: IPaddr:192.168.61.158 Prefix:24 Hostname:old-k8s-version-409903 Clientid:01:52:54:00:12:30:88}
	I0806 08:34:37.235751   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined IP address 192.168.61.158 and MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:37.235946   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHPort
	I0806 08:34:37.236168   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHKeyPath
	I0806 08:34:37.236337   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHKeyPath
	I0806 08:34:37.236491   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHUsername
	I0806 08:34:37.236640   79927 main.go:141] libmachine: Using SSH client type: native
	I0806 08:34:37.236793   79927 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.158 22 <nil> <nil>}
	I0806 08:34:37.236803   79927 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0806 08:34:37.353370   79927 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722933277.327882532
	
	I0806 08:34:37.353397   79927 fix.go:216] guest clock: 1722933277.327882532
	I0806 08:34:37.353408   79927 fix.go:229] Guest: 2024-08-06 08:34:37.327882532 +0000 UTC Remote: 2024-08-06 08:34:37.232315481 +0000 UTC m=+218.506631326 (delta=95.567051ms)
	I0806 08:34:37.353439   79927 fix.go:200] guest clock delta is within tolerance: 95.567051ms
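
	The fix.go lines above read the guest clock over SSH as unix seconds with nanoseconds ("1722933277.327882532"), compare it against the host time, and accept the machine when the absolute delta stays inside a tolerance. The sketch below reproduces only that arithmetic; the one-second tolerance is an assumption for illustration, not necessarily minikube's exact threshold.

	// clock_delta.go - sketch of the guest-vs-host clock comparison logged above.
	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	func parseGuestClock(s string) (time.Time, error) {
		parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		var nsec int64
		if len(parts) == 2 {
			// Right-pad the fractional part to nine digits before parsing nanoseconds.
			frac := (parts[1] + "000000000")[:9]
			if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
				return time.Time{}, err
			}
		}
		return time.Unix(sec, nsec), nil
	}

	func main() {
		guest, err := parseGuestClock("1722933277.327882532") // value taken from the log
		if err != nil {
			panic(err)
		}
		delta := time.Since(guest)
		if delta < 0 {
			delta = -delta
		}
		const tolerance = time.Second // illustrative tolerance, an assumption
		fmt.Printf("guest clock delta %v (within tolerance: %v)\n", delta, delta <= tolerance)
	}
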
	I0806 08:34:37.353451   79927 start.go:83] releasing machines lock for "old-k8s-version-409903", held for 22.999876259s
	I0806 08:34:37.353490   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .DriverName
	I0806 08:34:37.353775   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetIP
	I0806 08:34:37.356788   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:37.357197   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:30:88", ip: ""} in network mk-old-k8s-version-409903: {Iface:virbr1 ExpiryTime:2024-08-06 09:34:26 +0000 UTC Type:0 Mac:52:54:00:12:30:88 Iaid: IPaddr:192.168.61.158 Prefix:24 Hostname:old-k8s-version-409903 Clientid:01:52:54:00:12:30:88}
	I0806 08:34:37.357224   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined IP address 192.168.61.158 and MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:37.357434   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .DriverName
	I0806 08:34:37.358035   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .DriverName
	I0806 08:34:37.358238   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .DriverName
	I0806 08:34:37.358320   79927 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0806 08:34:37.358360   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHHostname
	I0806 08:34:37.358478   79927 ssh_runner.go:195] Run: cat /version.json
	I0806 08:34:37.358505   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHHostname
	I0806 08:34:37.361125   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:37.361429   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:37.361464   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:30:88", ip: ""} in network mk-old-k8s-version-409903: {Iface:virbr1 ExpiryTime:2024-08-06 09:34:26 +0000 UTC Type:0 Mac:52:54:00:12:30:88 Iaid: IPaddr:192.168.61.158 Prefix:24 Hostname:old-k8s-version-409903 Clientid:01:52:54:00:12:30:88}
	I0806 08:34:37.361490   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined IP address 192.168.61.158 and MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:37.361681   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHPort
	I0806 08:34:37.361846   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHKeyPath
	I0806 08:34:37.361889   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:30:88", ip: ""} in network mk-old-k8s-version-409903: {Iface:virbr1 ExpiryTime:2024-08-06 09:34:26 +0000 UTC Type:0 Mac:52:54:00:12:30:88 Iaid: IPaddr:192.168.61.158 Prefix:24 Hostname:old-k8s-version-409903 Clientid:01:52:54:00:12:30:88}
	I0806 08:34:37.361913   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined IP address 192.168.61.158 and MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:37.362018   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHUsername
	I0806 08:34:37.362078   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHPort
	I0806 08:34:37.362188   79927 sshutil.go:53] new ssh client: &{IP:192.168.61.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/old-k8s-version-409903/id_rsa Username:docker}
	I0806 08:34:37.362319   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHKeyPath
	I0806 08:34:37.362452   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHUsername
	I0806 08:34:37.362596   79927 sshutil.go:53] new ssh client: &{IP:192.168.61.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/old-k8s-version-409903/id_rsa Username:docker}
	I0806 08:34:37.468465   79927 ssh_runner.go:195] Run: systemctl --version
	I0806 08:34:37.475436   79927 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0806 08:34:37.627725   79927 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0806 08:34:37.635346   79927 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0806 08:34:37.635432   79927 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0806 08:34:37.652546   79927 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0806 08:34:37.652571   79927 start.go:495] detecting cgroup driver to use...
	I0806 08:34:37.652642   79927 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0806 08:34:37.671386   79927 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0806 08:34:37.686408   79927 docker.go:217] disabling cri-docker service (if available) ...
	I0806 08:34:37.686473   79927 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0806 08:34:37.701731   79927 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0806 08:34:37.717528   79927 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0806 08:34:37.845160   79927 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0806 08:34:37.999278   79927 docker.go:233] disabling docker service ...
	I0806 08:34:37.999361   79927 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0806 08:34:38.017101   79927 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0806 08:34:38.033189   79927 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0806 08:34:38.213620   79927 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0806 08:34:38.374790   79927 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0806 08:34:38.393394   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0806 08:34:38.419807   79927 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0806 08:34:38.419878   79927 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:34:38.434870   79927 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0806 08:34:38.434946   79927 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:34:38.449029   79927 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:34:38.460440   79927 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:34:38.473654   79927 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0806 08:34:38.486032   79927 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0806 08:34:38.495861   79927 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0806 08:34:38.495927   79927 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0806 08:34:38.511021   79927 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0806 08:34:38.521985   79927 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 08:34:38.651919   79927 ssh_runner.go:195] Run: sudo systemctl restart crio
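
	The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf to pin the pause image and force the cgroupfs cgroup manager before CRI-O is restarted. Below is a minimal standalone Go sketch (not minikube's code) that applies the same substitutions to a local copy of the file; the values are taken from the logged commands, the local path is assumed.

	// crio_conf_patch.go - sketch of the 02-crio.conf edits performed by the logged sed commands.
	package main

	import (
		"fmt"
		"os"
		"regexp"
	)

	func main() {
		const path = "02-crio.conf" // local copy; on the VM this is /etc/crio/crio.conf.d/02-crio.conf
		data, err := os.ReadFile(path)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		conf := string(data)

		// sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|'
		conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.2"`)

		// sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
		conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)

		// sed -i '/conmon_cgroup = .*/d' then re-add conmon_cgroup = "pod" after cgroup_manager.
		conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
		conf = regexp.MustCompile(`(?m)^cgroup_manager = "cgroupfs"$`).
			ReplaceAllString(conf, "cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"")

		if err := os.WriteFile(path, []byte(conf), 0644); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		// The test run then applies the change with: sudo systemctl restart crio
	}
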
	I0806 08:34:34.235598   79614 node_ready.go:53] node "default-k8s-diff-port-990751" has status "Ready":"False"
	I0806 08:34:36.734741   79614 node_ready.go:53] node "default-k8s-diff-port-990751" has status "Ready":"False"
	I0806 08:34:37.235469   79614 node_ready.go:49] node "default-k8s-diff-port-990751" has status "Ready":"True"
	I0806 08:34:37.235496   79614 node_ready.go:38] duration metric: took 5.004029432s for node "default-k8s-diff-port-990751" to be "Ready" ...
	I0806 08:34:37.235506   79614 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0806 08:34:37.242299   79614 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-gx8tt" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:37.247917   79614 pod_ready.go:92] pod "coredns-7db6d8ff4d-gx8tt" in "kube-system" namespace has status "Ready":"True"
	I0806 08:34:37.247936   79614 pod_ready.go:81] duration metric: took 5.613775ms for pod "coredns-7db6d8ff4d-gx8tt" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:37.247944   79614 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-990751" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:38.822367   79927 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0806 08:34:38.822444   79927 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0806 08:34:38.828148   79927 start.go:563] Will wait 60s for crictl version
	I0806 08:34:38.828211   79927 ssh_runner.go:195] Run: which crictl
	I0806 08:34:38.832084   79927 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0806 08:34:38.872571   79927 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0806 08:34:38.872658   79927 ssh_runner.go:195] Run: crio --version
	I0806 08:34:38.903507   79927 ssh_runner.go:195] Run: crio --version
	I0806 08:34:38.934955   79927 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0806 08:34:36.612552   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:38.614644   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:37.380808   78922 main.go:141] libmachine: (no-preload-703340) Calling .Start
	I0806 08:34:37.381005   78922 main.go:141] libmachine: (no-preload-703340) Ensuring networks are active...
	I0806 08:34:37.381749   78922 main.go:141] libmachine: (no-preload-703340) Ensuring network default is active
	I0806 08:34:37.382052   78922 main.go:141] libmachine: (no-preload-703340) Ensuring network mk-no-preload-703340 is active
	I0806 08:34:37.382556   78922 main.go:141] libmachine: (no-preload-703340) Getting domain xml...
	I0806 08:34:37.383386   78922 main.go:141] libmachine: (no-preload-703340) Creating domain...
	I0806 08:34:38.779397   78922 main.go:141] libmachine: (no-preload-703340) Waiting to get IP...
	I0806 08:34:38.780262   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:38.780737   78922 main.go:141] libmachine: (no-preload-703340) DBG | unable to find current IP address of domain no-preload-703340 in network mk-no-preload-703340
	I0806 08:34:38.780809   78922 main.go:141] libmachine: (no-preload-703340) DBG | I0806 08:34:38.780723   81046 retry.go:31] will retry after 194.059215ms: waiting for machine to come up
	I0806 08:34:38.976254   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:38.976734   78922 main.go:141] libmachine: (no-preload-703340) DBG | unable to find current IP address of domain no-preload-703340 in network mk-no-preload-703340
	I0806 08:34:38.976769   78922 main.go:141] libmachine: (no-preload-703340) DBG | I0806 08:34:38.976653   81046 retry.go:31] will retry after 288.140112ms: waiting for machine to come up
	I0806 08:34:39.266297   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:39.267031   78922 main.go:141] libmachine: (no-preload-703340) DBG | unable to find current IP address of domain no-preload-703340 in network mk-no-preload-703340
	I0806 08:34:39.267060   78922 main.go:141] libmachine: (no-preload-703340) DBG | I0806 08:34:39.266948   81046 retry.go:31] will retry after 354.804756ms: waiting for machine to come up
	I0806 08:34:39.623548   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:39.624083   78922 main.go:141] libmachine: (no-preload-703340) DBG | unable to find current IP address of domain no-preload-703340 in network mk-no-preload-703340
	I0806 08:34:39.624111   78922 main.go:141] libmachine: (no-preload-703340) DBG | I0806 08:34:39.623994   81046 retry.go:31] will retry after 418.851823ms: waiting for machine to come up
	I0806 08:34:40.044734   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:40.045142   78922 main.go:141] libmachine: (no-preload-703340) DBG | unable to find current IP address of domain no-preload-703340 in network mk-no-preload-703340
	I0806 08:34:40.045165   78922 main.go:141] libmachine: (no-preload-703340) DBG | I0806 08:34:40.045103   81046 retry.go:31] will retry after 525.29917ms: waiting for machine to come up
	I0806 08:34:40.571928   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:40.572395   78922 main.go:141] libmachine: (no-preload-703340) DBG | unable to find current IP address of domain no-preload-703340 in network mk-no-preload-703340
	I0806 08:34:40.572431   78922 main.go:141] libmachine: (no-preload-703340) DBG | I0806 08:34:40.572332   81046 retry.go:31] will retry after 609.965809ms: waiting for machine to come up
	I0806 08:34:41.184209   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:41.184793   78922 main.go:141] libmachine: (no-preload-703340) DBG | unable to find current IP address of domain no-preload-703340 in network mk-no-preload-703340
	I0806 08:34:41.184820   78922 main.go:141] libmachine: (no-preload-703340) DBG | I0806 08:34:41.184739   81046 retry.go:31] will retry after 1.120596s: waiting for machine to come up
	I0806 08:34:38.936206   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetIP
	I0806 08:34:38.939581   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:38.940049   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:30:88", ip: ""} in network mk-old-k8s-version-409903: {Iface:virbr1 ExpiryTime:2024-08-06 09:34:26 +0000 UTC Type:0 Mac:52:54:00:12:30:88 Iaid: IPaddr:192.168.61.158 Prefix:24 Hostname:old-k8s-version-409903 Clientid:01:52:54:00:12:30:88}
	I0806 08:34:38.940076   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined IP address 192.168.61.158 and MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:38.940336   79927 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0806 08:34:38.944801   79927 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0806 08:34:38.959454   79927 kubeadm.go:883] updating cluster {Name:old-k8s-version-409903 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-409903 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.158 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0806 08:34:38.959581   79927 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0806 08:34:38.959647   79927 ssh_runner.go:195] Run: sudo crictl images --output json
	I0806 08:34:39.014798   79927 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0806 08:34:39.014908   79927 ssh_runner.go:195] Run: which lz4
	I0806 08:34:39.019642   79927 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0806 08:34:39.025125   79927 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0806 08:34:39.025165   79927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0806 08:34:40.772902   79927 crio.go:462] duration metric: took 1.753273519s to copy over tarball
	I0806 08:34:40.772994   79927 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
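
	The run above copies the cached preload tarball to the VM and unpacks it into /var with tar and lz4. The short sketch below simply shells out with the same flags shown in the logged command (an illustration, not minikube's internal code); running it for real requires the tarball at that path, a tar build with --xattrs support, and lz4 on PATH.

	// preload_extract.go - sketch invoking the tar command the test run logs for the preload tarball.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("sudo", "tar",
			"--xattrs", "--xattrs-include", "security.capability",
			"-I", "lz4",
			"-C", "/var",
			"-xf", "/preloaded.tar.lz4")
		cmd.Stdout = os.Stdout
		cmd.Stderr = os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintln(os.Stderr, "extracting preload tarball:", err)
			os.Exit(1)
		}
	}
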
	I0806 08:34:39.256627   79614 pod_ready.go:102] pod "etcd-default-k8s-diff-port-990751" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:41.257158   79614 pod_ready.go:102] pod "etcd-default-k8s-diff-port-990751" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:42.754699   79614 pod_ready.go:92] pod "etcd-default-k8s-diff-port-990751" in "kube-system" namespace has status "Ready":"True"
	I0806 08:34:42.754729   79614 pod_ready.go:81] duration metric: took 5.506777292s for pod "etcd-default-k8s-diff-port-990751" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:42.754742   79614 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-990751" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:42.761260   79614 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-990751" in "kube-system" namespace has status "Ready":"True"
	I0806 08:34:42.761282   79614 pod_ready.go:81] duration metric: took 6.531767ms for pod "kube-apiserver-default-k8s-diff-port-990751" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:42.761292   79614 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-990751" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:42.769860   79614 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-990751" in "kube-system" namespace has status "Ready":"True"
	I0806 08:34:42.769884   79614 pod_ready.go:81] duration metric: took 8.585216ms for pod "kube-controller-manager-default-k8s-diff-port-990751" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:42.769898   79614 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-lpf88" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:42.775420   79614 pod_ready.go:92] pod "kube-proxy-lpf88" in "kube-system" namespace has status "Ready":"True"
	I0806 08:34:42.775445   79614 pod_ready.go:81] duration metric: took 5.540587ms for pod "kube-proxy-lpf88" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:42.775453   79614 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-990751" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:42.781317   79614 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-990751" in "kube-system" namespace has status "Ready":"True"
	I0806 08:34:42.781341   79614 pod_ready.go:81] duration metric: took 5.88019ms for pod "kube-scheduler-default-k8s-diff-port-990751" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:42.781353   79614 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:40.614686   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:43.114736   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:42.306869   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:42.307378   78922 main.go:141] libmachine: (no-preload-703340) DBG | unable to find current IP address of domain no-preload-703340 in network mk-no-preload-703340
	I0806 08:34:42.307408   78922 main.go:141] libmachine: (no-preload-703340) DBG | I0806 08:34:42.307341   81046 retry.go:31] will retry after 1.26489185s: waiting for machine to come up
	I0806 08:34:43.574072   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:43.574494   78922 main.go:141] libmachine: (no-preload-703340) DBG | unable to find current IP address of domain no-preload-703340 in network mk-no-preload-703340
	I0806 08:34:43.574517   78922 main.go:141] libmachine: (no-preload-703340) DBG | I0806 08:34:43.574440   81046 retry.go:31] will retry after 1.547737608s: waiting for machine to come up
	I0806 08:34:45.124236   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:45.124794   78922 main.go:141] libmachine: (no-preload-703340) DBG | unable to find current IP address of domain no-preload-703340 in network mk-no-preload-703340
	I0806 08:34:45.124822   78922 main.go:141] libmachine: (no-preload-703340) DBG | I0806 08:34:45.124745   81046 retry.go:31] will retry after 1.491615644s: waiting for machine to come up
	I0806 08:34:46.617663   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:46.618159   78922 main.go:141] libmachine: (no-preload-703340) DBG | unable to find current IP address of domain no-preload-703340 in network mk-no-preload-703340
	I0806 08:34:46.618183   78922 main.go:141] libmachine: (no-preload-703340) DBG | I0806 08:34:46.618117   81046 retry.go:31] will retry after 2.284217057s: waiting for machine to come up
	I0806 08:34:43.977375   79927 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.204351244s)
	I0806 08:34:43.977404   79927 crio.go:469] duration metric: took 3.204457213s to extract the tarball
	I0806 08:34:43.977412   79927 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0806 08:34:44.023819   79927 ssh_runner.go:195] Run: sudo crictl images --output json
	I0806 08:34:44.070991   79927 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0806 08:34:44.071016   79927 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0806 08:34:44.071089   79927 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0806 08:34:44.071129   79927 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0806 08:34:44.071138   79927 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0806 08:34:44.071090   79927 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0806 08:34:44.071184   79927 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0806 08:34:44.071192   79927 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0806 08:34:44.071210   79927 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0806 08:34:44.071255   79927 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0806 08:34:44.072641   79927 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0806 08:34:44.072728   79927 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0806 08:34:44.072743   79927 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0806 08:34:44.072642   79927 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0806 08:34:44.072749   79927 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0806 08:34:44.072811   79927 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0806 08:34:44.073085   79927 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0806 08:34:44.073298   79927 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0806 08:34:44.239872   79927 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0806 08:34:44.287412   79927 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0806 08:34:44.287458   79927 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0806 08:34:44.287504   79927 ssh_runner.go:195] Run: which crictl
	I0806 08:34:44.292050   79927 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0806 08:34:44.309229   79927 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0806 08:34:44.336611   79927 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0806 08:34:44.371389   79927 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0806 08:34:44.371447   79927 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0806 08:34:44.371500   79927 ssh_runner.go:195] Run: which crictl
	I0806 08:34:44.375821   79927 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0806 08:34:44.412139   79927 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0806 08:34:44.415159   79927 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0806 08:34:44.415339   79927 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0806 08:34:44.418296   79927 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0806 08:34:44.423718   79927 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0806 08:34:44.437866   79927 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0806 08:34:44.494716   79927 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0806 08:34:44.494765   79927 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0806 08:34:44.494819   79927 ssh_runner.go:195] Run: which crictl
	I0806 08:34:44.526849   79927 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0806 08:34:44.526898   79927 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0806 08:34:44.526961   79927 ssh_runner.go:195] Run: which crictl
	I0806 08:34:44.555530   79927 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0806 08:34:44.555587   79927 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0806 08:34:44.555620   79927 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0806 08:34:44.555634   79927 ssh_runner.go:195] Run: which crictl
	I0806 08:34:44.555655   79927 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0806 08:34:44.555692   79927 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0806 08:34:44.555707   79927 ssh_runner.go:195] Run: which crictl
	I0806 08:34:44.555714   79927 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0806 08:34:44.555743   79927 ssh_runner.go:195] Run: which crictl
	I0806 08:34:44.555748   79927 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0806 08:34:44.555792   79927 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0806 08:34:44.561072   79927 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0806 08:34:44.624216   79927 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0806 08:34:44.639877   79927 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0806 08:34:44.639895   79927 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0806 08:34:44.639979   79927 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0806 08:34:44.653684   79927 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0806 08:34:44.711774   79927 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0806 08:34:44.711804   79927 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0806 08:34:44.964733   79927 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0806 08:34:45.112727   79927 cache_images.go:92] duration metric: took 1.041682516s to LoadCachedImages
	W0806 08:34:45.112813   79927 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I0806 08:34:45.112837   79927 kubeadm.go:934] updating node { 192.168.61.158 8443 v1.20.0 crio true true} ...
	I0806 08:34:45.112964   79927 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-409903 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.158
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-409903 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0806 08:34:45.113047   79927 ssh_runner.go:195] Run: crio config
	I0806 08:34:45.165383   79927 cni.go:84] Creating CNI manager for ""
	I0806 08:34:45.165413   79927 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0806 08:34:45.165429   79927 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0806 08:34:45.165455   79927 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.158 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-409903 NodeName:old-k8s-version-409903 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.158"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.158 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0806 08:34:45.165687   79927 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.158
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-409903"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.158
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.158"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
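	Note: the %!"(MISSING), %!s(MISSING) and %!N(MISSING) tokens in this log (in the evictionHard values above, and later in the printf and date commands run during provisioning) are not part of the generated configuration or of the commands that were actually executed. They are Go fmt annotations produced when a string containing a literal % is passed as the format argument of a Printf-style logging call with no matching operand; the intended values are almost certainly "0%", "printf %s" and "date +%s.%N". A minimal Go reproduction of the effect (the exact minikube call site is an assumption, only the fmt behaviour is shown):

	package main

	import "fmt"

	func main() {
		line := `nodefs.available: "0%"` // data that happens to contain a literal %
		fmt.Printf("%s\n", line)         // correct: nodefs.available: "0%"
		fmt.Printf(line + "\n")          // buggy: nodefs.available: "0%!"(MISSING)
	}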
	I0806 08:34:45.165760   79927 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0806 08:34:45.177296   79927 binaries.go:44] Found k8s binaries, skipping transfer
	I0806 08:34:45.177371   79927 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0806 08:34:45.187299   79927 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0806 08:34:45.205986   79927 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0806 08:34:45.225072   79927 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0806 08:34:45.244222   79927 ssh_runner.go:195] Run: grep 192.168.61.158	control-plane.minikube.internal$ /etc/hosts
	I0806 08:34:45.248728   79927 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.158	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0806 08:34:45.263094   79927 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 08:34:45.399106   79927 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0806 08:34:45.417589   79927 certs.go:68] Setting up /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/old-k8s-version-409903 for IP: 192.168.61.158
	I0806 08:34:45.417611   79927 certs.go:194] generating shared ca certs ...
	I0806 08:34:45.417630   79927 certs.go:226] acquiring lock for ca certs: {Name:mkf5c5aed91f4108054d9f2c742cad66c86afbed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 08:34:45.417779   79927 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19370-13057/.minikube/ca.key
	I0806 08:34:45.417820   79927 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.key
	I0806 08:34:45.417829   79927 certs.go:256] generating profile certs ...
	I0806 08:34:45.417913   79927 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/old-k8s-version-409903/client.key
	I0806 08:34:45.417970   79927 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/old-k8s-version-409903/apiserver.key.1ffd6785
	I0806 08:34:45.418060   79927 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/old-k8s-version-409903/proxy-client.key
	I0806 08:34:45.418242   79927 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/20246.pem (1338 bytes)
	W0806 08:34:45.418273   79927 certs.go:480] ignoring /home/jenkins/minikube-integration/19370-13057/.minikube/certs/20246_empty.pem, impossibly tiny 0 bytes
	I0806 08:34:45.418282   79927 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca-key.pem (1675 bytes)
	I0806 08:34:45.418302   79927 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem (1078 bytes)
	I0806 08:34:45.418322   79927 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/cert.pem (1123 bytes)
	I0806 08:34:45.418350   79927 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/key.pem (1675 bytes)
	I0806 08:34:45.418389   79927 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem (1708 bytes)
	I0806 08:34:45.419018   79927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0806 08:34:45.464550   79927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0806 08:34:45.500764   79927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0806 08:34:45.549225   79927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0806 08:34:45.597480   79927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/old-k8s-version-409903/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0806 08:34:45.639834   79927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/old-k8s-version-409903/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0806 08:34:45.680789   79927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/old-k8s-version-409903/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0806 08:34:45.737986   79927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/old-k8s-version-409903/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0806 08:34:45.766471   79927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem --> /usr/share/ca-certificates/202462.pem (1708 bytes)
	I0806 08:34:45.797047   79927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0806 08:34:45.825789   79927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/certs/20246.pem --> /usr/share/ca-certificates/20246.pem (1338 bytes)
	I0806 08:34:45.860217   79927 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0806 08:34:45.883841   79927 ssh_runner.go:195] Run: openssl version
	I0806 08:34:45.891103   79927 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202462.pem && ln -fs /usr/share/ca-certificates/202462.pem /etc/ssl/certs/202462.pem"
	I0806 08:34:45.905260   79927 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202462.pem
	I0806 08:34:45.910759   79927 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  6 07:17 /usr/share/ca-certificates/202462.pem
	I0806 08:34:45.910837   79927 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202462.pem
	I0806 08:34:45.918440   79927 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202462.pem /etc/ssl/certs/3ec20f2e.0"
	I0806 08:34:45.932479   79927 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0806 08:34:45.945678   79927 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0806 08:34:45.951612   79927 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  6 07:07 /usr/share/ca-certificates/minikubeCA.pem
	I0806 08:34:45.951691   79927 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0806 08:34:45.957749   79927 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0806 08:34:45.969987   79927 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20246.pem && ln -fs /usr/share/ca-certificates/20246.pem /etc/ssl/certs/20246.pem"
	I0806 08:34:45.982878   79927 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20246.pem
	I0806 08:34:45.987953   79927 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  6 07:17 /usr/share/ca-certificates/20246.pem
	I0806 08:34:45.988032   79927 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20246.pem
	I0806 08:34:45.994219   79927 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20246.pem /etc/ssl/certs/51391683.0"
	I0806 08:34:46.006666   79927 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0806 08:34:46.011624   79927 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0806 08:34:46.018416   79927 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0806 08:34:46.025220   79927 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0806 08:34:46.031854   79927 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0806 08:34:46.038570   79927 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0806 08:34:46.045142   79927 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
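	The six openssl invocations above are the pre-flight certificate checks: openssl x509 -checkend 86400 exits non-zero when the certificate expires within the next 86400 seconds (24 hours), which is what would trigger regeneration. A rough Go equivalent of the same check using only the standard library (an illustrative sketch with paths taken from the log, not minikube's actual implementation):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM-encoded certificate at path expires
	// within d, mirroring `openssl x509 -checkend <seconds>`.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("%s: no PEM block found", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		for _, p := range []string{
			"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
			"/var/lib/minikube/certs/etcd/server.crt",
		} {
			expiring, err := expiresWithin(p, 24*time.Hour)
			fmt.Println(p, expiring, err)
		}
	}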
	I0806 08:34:46.052198   79927 kubeadm.go:392] StartCluster: {Name:old-k8s-version-409903 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-409903 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.158 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 08:34:46.052313   79927 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0806 08:34:46.052372   79927 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0806 08:34:46.098382   79927 cri.go:89] found id: ""
	I0806 08:34:46.098454   79927 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0806 08:34:46.110861   79927 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0806 08:34:46.110888   79927 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0806 08:34:46.110956   79927 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0806 08:34:46.123420   79927 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0806 08:34:46.124617   79927 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-409903" does not appear in /home/jenkins/minikube-integration/19370-13057/kubeconfig
	I0806 08:34:46.125562   79927 kubeconfig.go:62] /home/jenkins/minikube-integration/19370-13057/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-409903" cluster setting kubeconfig missing "old-k8s-version-409903" context setting]
	I0806 08:34:46.127017   79927 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-13057/kubeconfig: {Name:mk5c066914acf8e36556b29792f75b98076ca34a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 08:34:46.173767   79927 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0806 08:34:46.186472   79927 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.158
	I0806 08:34:46.186517   79927 kubeadm.go:1160] stopping kube-system containers ...
	I0806 08:34:46.186532   79927 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0806 08:34:46.186592   79927 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0806 08:34:46.227817   79927 cri.go:89] found id: ""
	I0806 08:34:46.227914   79927 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0806 08:34:46.248248   79927 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0806 08:34:46.259555   79927 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0806 08:34:46.259578   79927 kubeadm.go:157] found existing configuration files:
	
	I0806 08:34:46.259647   79927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0806 08:34:46.271246   79927 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0806 08:34:46.271311   79927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0806 08:34:46.283559   79927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0806 08:34:46.295584   79927 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0806 08:34:46.295660   79927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0806 08:34:46.307223   79927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0806 08:34:46.318046   79927 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0806 08:34:46.318136   79927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0806 08:34:46.328827   79927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0806 08:34:46.339577   79927 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0806 08:34:46.339665   79927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0806 08:34:46.352107   79927 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0806 08:34:46.369776   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0806 08:34:46.505082   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0806 08:34:47.036627   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0806 08:34:47.285779   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0806 08:34:47.397777   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0806 08:34:47.517913   79927 api_server.go:52] waiting for apiserver process to appear ...
	I0806 08:34:47.518028   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:48.018399   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:48.518708   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:44.788404   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:46.789746   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:45.115046   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:47.611343   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:48.905056   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:48.905522   78922 main.go:141] libmachine: (no-preload-703340) DBG | unable to find current IP address of domain no-preload-703340 in network mk-no-preload-703340
	I0806 08:34:48.905548   78922 main.go:141] libmachine: (no-preload-703340) DBG | I0806 08:34:48.905484   81046 retry.go:31] will retry after 2.647234658s: waiting for machine to come up
	I0806 08:34:51.556304   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:51.556689   78922 main.go:141] libmachine: (no-preload-703340) DBG | unable to find current IP address of domain no-preload-703340 in network mk-no-preload-703340
	I0806 08:34:51.556712   78922 main.go:141] libmachine: (no-preload-703340) DBG | I0806 08:34:51.556663   81046 retry.go:31] will retry after 4.275379711s: waiting for machine to come up
	I0806 08:34:49.018572   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:49.518975   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:50.018355   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:50.518866   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:51.018156   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:51.518855   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:52.018270   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:52.518584   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:53.018606   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:53.518118   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
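	The repeated "sudo pgrep -xnf kube-apiserver.*minikube.*" lines above are the restart path polling for the apiserver process to appear after the kubeadm init phases, retrying roughly every 500ms. A minimal sketch of that kind of poll loop in Go (the pgrep pattern is taken from the log; the timeout and cadence are assumptions, and this is not minikube's actual code):

	package main

	import (
		"context"
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// waitForProcess polls `pgrep -xnf pattern` until it reports a PID or the
	// context expires, mirroring the retry cadence visible in the log.
	func waitForProcess(ctx context.Context, pattern string) (string, error) {
		ticker := time.NewTicker(500 * time.Millisecond)
		defer ticker.Stop()
		for {
			out, err := exec.CommandContext(ctx, "sudo", "pgrep", "-xnf", pattern).Output()
			if err == nil {
				return strings.TrimSpace(string(out)), nil
			}
			select {
			case <-ctx.Done():
				return "", fmt.Errorf("timed out waiting for %q: %w", pattern, ctx.Err())
			case <-ticker.C:
				// try again on the next tick
			}
		}
	}

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
		defer cancel()
		fmt.Println(waitForProcess(ctx, "kube-apiserver.*minikube.*"))
	}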
	I0806 08:34:49.288691   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:51.787435   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:53.787824   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:49.612621   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:52.112274   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:55.834423   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:55.834961   78922 main.go:141] libmachine: (no-preload-703340) Found IP for machine: 192.168.72.126
	I0806 08:34:55.834980   78922 main.go:141] libmachine: (no-preload-703340) Reserving static IP address...
	I0806 08:34:55.834991   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has current primary IP address 192.168.72.126 and MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:55.835314   78922 main.go:141] libmachine: (no-preload-703340) DBG | found host DHCP lease matching {name: "no-preload-703340", mac: "52:54:00:a1:fa:d3", ip: "192.168.72.126"} in network mk-no-preload-703340: {Iface:virbr4 ExpiryTime:2024-08-06 09:34:49 +0000 UTC Type:0 Mac:52:54:00:a1:fa:d3 Iaid: IPaddr:192.168.72.126 Prefix:24 Hostname:no-preload-703340 Clientid:01:52:54:00:a1:fa:d3}
	I0806 08:34:55.835348   78922 main.go:141] libmachine: (no-preload-703340) Reserved static IP address: 192.168.72.126
	I0806 08:34:55.835363   78922 main.go:141] libmachine: (no-preload-703340) DBG | skip adding static IP to network mk-no-preload-703340 - found existing host DHCP lease matching {name: "no-preload-703340", mac: "52:54:00:a1:fa:d3", ip: "192.168.72.126"}
	I0806 08:34:55.835376   78922 main.go:141] libmachine: (no-preload-703340) DBG | Getting to WaitForSSH function...
	I0806 08:34:55.835388   78922 main.go:141] libmachine: (no-preload-703340) Waiting for SSH to be available...
	I0806 08:34:55.837645   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:55.838001   78922 main.go:141] libmachine: (no-preload-703340) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fa:d3", ip: ""} in network mk-no-preload-703340: {Iface:virbr4 ExpiryTime:2024-08-06 09:34:49 +0000 UTC Type:0 Mac:52:54:00:a1:fa:d3 Iaid: IPaddr:192.168.72.126 Prefix:24 Hostname:no-preload-703340 Clientid:01:52:54:00:a1:fa:d3}
	I0806 08:34:55.838030   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined IP address 192.168.72.126 and MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:55.838167   78922 main.go:141] libmachine: (no-preload-703340) DBG | Using SSH client type: external
	I0806 08:34:55.838193   78922 main.go:141] libmachine: (no-preload-703340) DBG | Using SSH private key: /home/jenkins/minikube-integration/19370-13057/.minikube/machines/no-preload-703340/id_rsa (-rw-------)
	I0806 08:34:55.838234   78922 main.go:141] libmachine: (no-preload-703340) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.126 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19370-13057/.minikube/machines/no-preload-703340/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0806 08:34:55.838256   78922 main.go:141] libmachine: (no-preload-703340) DBG | About to run SSH command:
	I0806 08:34:55.838272   78922 main.go:141] libmachine: (no-preload-703340) DBG | exit 0
	I0806 08:34:55.968557   78922 main.go:141] libmachine: (no-preload-703340) DBG | SSH cmd err, output: <nil>: 
	I0806 08:34:55.968893   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetConfigRaw
	I0806 08:34:55.969485   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetIP
	I0806 08:34:55.971998   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:55.972457   78922 main.go:141] libmachine: (no-preload-703340) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fa:d3", ip: ""} in network mk-no-preload-703340: {Iface:virbr4 ExpiryTime:2024-08-06 09:34:49 +0000 UTC Type:0 Mac:52:54:00:a1:fa:d3 Iaid: IPaddr:192.168.72.126 Prefix:24 Hostname:no-preload-703340 Clientid:01:52:54:00:a1:fa:d3}
	I0806 08:34:55.972487   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined IP address 192.168.72.126 and MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:55.972694   78922 profile.go:143] Saving config to /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/no-preload-703340/config.json ...
	I0806 08:34:55.972889   78922 machine.go:94] provisionDockerMachine start ...
	I0806 08:34:55.972908   78922 main.go:141] libmachine: (no-preload-703340) Calling .DriverName
	I0806 08:34:55.973115   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHHostname
	I0806 08:34:55.975361   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:55.975734   78922 main.go:141] libmachine: (no-preload-703340) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fa:d3", ip: ""} in network mk-no-preload-703340: {Iface:virbr4 ExpiryTime:2024-08-06 09:34:49 +0000 UTC Type:0 Mac:52:54:00:a1:fa:d3 Iaid: IPaddr:192.168.72.126 Prefix:24 Hostname:no-preload-703340 Clientid:01:52:54:00:a1:fa:d3}
	I0806 08:34:55.975753   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined IP address 192.168.72.126 and MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:55.975905   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHPort
	I0806 08:34:55.976051   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHKeyPath
	I0806 08:34:55.976265   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHKeyPath
	I0806 08:34:55.976424   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHUsername
	I0806 08:34:55.976635   78922 main.go:141] libmachine: Using SSH client type: native
	I0806 08:34:55.976891   78922 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.126 22 <nil> <nil>}
	I0806 08:34:55.976905   78922 main.go:141] libmachine: About to run SSH command:
	hostname
	I0806 08:34:56.084844   78922 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0806 08:34:56.084905   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetMachineName
	I0806 08:34:56.085226   78922 buildroot.go:166] provisioning hostname "no-preload-703340"
	I0806 08:34:56.085253   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetMachineName
	I0806 08:34:56.085477   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHHostname
	I0806 08:34:56.088358   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:56.088775   78922 main.go:141] libmachine: (no-preload-703340) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fa:d3", ip: ""} in network mk-no-preload-703340: {Iface:virbr4 ExpiryTime:2024-08-06 09:34:49 +0000 UTC Type:0 Mac:52:54:00:a1:fa:d3 Iaid: IPaddr:192.168.72.126 Prefix:24 Hostname:no-preload-703340 Clientid:01:52:54:00:a1:fa:d3}
	I0806 08:34:56.088806   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined IP address 192.168.72.126 and MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:56.088904   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHPort
	I0806 08:34:56.089070   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHKeyPath
	I0806 08:34:56.089262   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHKeyPath
	I0806 08:34:56.089417   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHUsername
	I0806 08:34:56.089600   78922 main.go:141] libmachine: Using SSH client type: native
	I0806 08:34:56.089810   78922 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.126 22 <nil> <nil>}
	I0806 08:34:56.089840   78922 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-703340 && echo "no-preload-703340" | sudo tee /etc/hostname
	I0806 08:34:56.216217   78922 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-703340
	
	I0806 08:34:56.216257   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHHostname
	I0806 08:34:56.218949   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:56.219274   78922 main.go:141] libmachine: (no-preload-703340) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fa:d3", ip: ""} in network mk-no-preload-703340: {Iface:virbr4 ExpiryTime:2024-08-06 09:34:49 +0000 UTC Type:0 Mac:52:54:00:a1:fa:d3 Iaid: IPaddr:192.168.72.126 Prefix:24 Hostname:no-preload-703340 Clientid:01:52:54:00:a1:fa:d3}
	I0806 08:34:56.219309   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined IP address 192.168.72.126 and MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:56.219488   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHPort
	I0806 08:34:56.219699   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHKeyPath
	I0806 08:34:56.219899   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHKeyPath
	I0806 08:34:56.220054   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHUsername
	I0806 08:34:56.220231   78922 main.go:141] libmachine: Using SSH client type: native
	I0806 08:34:56.220513   78922 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.126 22 <nil> <nil>}
	I0806 08:34:56.220544   78922 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-703340' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-703340/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-703340' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0806 08:34:56.337357   78922 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0806 08:34:56.337390   78922 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19370-13057/.minikube CaCertPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19370-13057/.minikube}
	I0806 08:34:56.337428   78922 buildroot.go:174] setting up certificates
	I0806 08:34:56.337439   78922 provision.go:84] configureAuth start
	I0806 08:34:56.337454   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetMachineName
	I0806 08:34:56.337736   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetIP
	I0806 08:34:56.340297   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:56.340655   78922 main.go:141] libmachine: (no-preload-703340) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fa:d3", ip: ""} in network mk-no-preload-703340: {Iface:virbr4 ExpiryTime:2024-08-06 09:34:49 +0000 UTC Type:0 Mac:52:54:00:a1:fa:d3 Iaid: IPaddr:192.168.72.126 Prefix:24 Hostname:no-preload-703340 Clientid:01:52:54:00:a1:fa:d3}
	I0806 08:34:56.340680   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined IP address 192.168.72.126 and MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:56.340898   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHHostname
	I0806 08:34:56.343013   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:56.343356   78922 main.go:141] libmachine: (no-preload-703340) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fa:d3", ip: ""} in network mk-no-preload-703340: {Iface:virbr4 ExpiryTime:2024-08-06 09:34:49 +0000 UTC Type:0 Mac:52:54:00:a1:fa:d3 Iaid: IPaddr:192.168.72.126 Prefix:24 Hostname:no-preload-703340 Clientid:01:52:54:00:a1:fa:d3}
	I0806 08:34:56.343395   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined IP address 192.168.72.126 and MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:56.343544   78922 provision.go:143] copyHostCerts
	I0806 08:34:56.343613   78922 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-13057/.minikube/ca.pem, removing ...
	I0806 08:34:56.343626   78922 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-13057/.minikube/ca.pem
	I0806 08:34:56.343692   78922 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19370-13057/.minikube/ca.pem (1078 bytes)
	I0806 08:34:56.343806   78922 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-13057/.minikube/cert.pem, removing ...
	I0806 08:34:56.343815   78922 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-13057/.minikube/cert.pem
	I0806 08:34:56.343838   78922 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19370-13057/.minikube/cert.pem (1123 bytes)
	I0806 08:34:56.343915   78922 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-13057/.minikube/key.pem, removing ...
	I0806 08:34:56.343929   78922 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-13057/.minikube/key.pem
	I0806 08:34:56.343948   78922 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19370-13057/.minikube/key.pem (1675 bytes)
	I0806 08:34:56.344044   78922 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19370-13057/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca-key.pem org=jenkins.no-preload-703340 san=[127.0.0.1 192.168.72.126 localhost minikube no-preload-703340]
	I0806 08:34:56.441531   78922 provision.go:177] copyRemoteCerts
	I0806 08:34:56.441588   78922 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0806 08:34:56.441610   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHHostname
	I0806 08:34:56.444202   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:56.444549   78922 main.go:141] libmachine: (no-preload-703340) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fa:d3", ip: ""} in network mk-no-preload-703340: {Iface:virbr4 ExpiryTime:2024-08-06 09:34:49 +0000 UTC Type:0 Mac:52:54:00:a1:fa:d3 Iaid: IPaddr:192.168.72.126 Prefix:24 Hostname:no-preload-703340 Clientid:01:52:54:00:a1:fa:d3}
	I0806 08:34:56.444574   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined IP address 192.168.72.126 and MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:56.444786   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHPort
	I0806 08:34:56.444977   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHKeyPath
	I0806 08:34:56.445125   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHUsername
	I0806 08:34:56.445327   78922 sshutil.go:53] new ssh client: &{IP:192.168.72.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/no-preload-703340/id_rsa Username:docker}
	I0806 08:34:56.530980   78922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0806 08:34:56.557880   78922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0806 08:34:56.584108   78922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0806 08:34:56.608415   78922 provision.go:87] duration metric: took 270.961494ms to configureAuth
	I0806 08:34:56.608449   78922 buildroot.go:189] setting minikube options for container-runtime
	I0806 08:34:56.608675   78922 config.go:182] Loaded profile config "no-preload-703340": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-rc.0
	I0806 08:34:56.608794   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHHostname
	I0806 08:34:56.611937   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:56.612383   78922 main.go:141] libmachine: (no-preload-703340) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fa:d3", ip: ""} in network mk-no-preload-703340: {Iface:virbr4 ExpiryTime:2024-08-06 09:34:49 +0000 UTC Type:0 Mac:52:54:00:a1:fa:d3 Iaid: IPaddr:192.168.72.126 Prefix:24 Hostname:no-preload-703340 Clientid:01:52:54:00:a1:fa:d3}
	I0806 08:34:56.612409   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined IP address 192.168.72.126 and MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:56.612585   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHPort
	I0806 08:34:56.612774   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHKeyPath
	I0806 08:34:56.612953   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHKeyPath
	I0806 08:34:56.613134   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHUsername
	I0806 08:34:56.613282   78922 main.go:141] libmachine: Using SSH client type: native
	I0806 08:34:56.613436   78922 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.126 22 <nil> <nil>}
	I0806 08:34:56.613450   78922 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0806 08:34:56.904446   78922 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0806 08:34:56.904474   78922 machine.go:97] duration metric: took 931.572346ms to provisionDockerMachine
	I0806 08:34:56.904484   78922 start.go:293] postStartSetup for "no-preload-703340" (driver="kvm2")
	I0806 08:34:56.904496   78922 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0806 08:34:56.904517   78922 main.go:141] libmachine: (no-preload-703340) Calling .DriverName
	I0806 08:34:56.905043   78922 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0806 08:34:56.905076   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHHostname
	I0806 08:34:56.907707   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:56.908101   78922 main.go:141] libmachine: (no-preload-703340) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fa:d3", ip: ""} in network mk-no-preload-703340: {Iface:virbr4 ExpiryTime:2024-08-06 09:34:49 +0000 UTC Type:0 Mac:52:54:00:a1:fa:d3 Iaid: IPaddr:192.168.72.126 Prefix:24 Hostname:no-preload-703340 Clientid:01:52:54:00:a1:fa:d3}
	I0806 08:34:56.908124   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined IP address 192.168.72.126 and MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:56.908356   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHPort
	I0806 08:34:56.908586   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHKeyPath
	I0806 08:34:56.908753   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHUsername
	I0806 08:34:56.908924   78922 sshutil.go:53] new ssh client: &{IP:192.168.72.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/no-preload-703340/id_rsa Username:docker}
	I0806 08:34:56.995555   78922 ssh_runner.go:195] Run: cat /etc/os-release
	I0806 08:34:56.999971   78922 info.go:137] Remote host: Buildroot 2023.02.9
	I0806 08:34:57.000004   78922 filesync.go:126] Scanning /home/jenkins/minikube-integration/19370-13057/.minikube/addons for local assets ...
	I0806 08:34:57.000079   78922 filesync.go:126] Scanning /home/jenkins/minikube-integration/19370-13057/.minikube/files for local assets ...
	I0806 08:34:57.000150   78922 filesync.go:149] local asset: /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem -> 202462.pem in /etc/ssl/certs
	I0806 08:34:57.000243   78922 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0806 08:34:57.010178   78922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem --> /etc/ssl/certs/202462.pem (1708 bytes)
	I0806 08:34:57.035704   78922 start.go:296] duration metric: took 131.206204ms for postStartSetup
	I0806 08:34:57.035751   78922 fix.go:56] duration metric: took 19.682153204s for fixHost
	I0806 08:34:57.035775   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHHostname
	I0806 08:34:57.038230   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:57.038565   78922 main.go:141] libmachine: (no-preload-703340) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fa:d3", ip: ""} in network mk-no-preload-703340: {Iface:virbr4 ExpiryTime:2024-08-06 09:34:49 +0000 UTC Type:0 Mac:52:54:00:a1:fa:d3 Iaid: IPaddr:192.168.72.126 Prefix:24 Hostname:no-preload-703340 Clientid:01:52:54:00:a1:fa:d3}
	I0806 08:34:57.038596   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined IP address 192.168.72.126 and MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:57.038809   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHPort
	I0806 08:34:57.039016   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHKeyPath
	I0806 08:34:57.039197   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHKeyPath
	I0806 08:34:57.039308   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHUsername
	I0806 08:34:57.039486   78922 main.go:141] libmachine: Using SSH client type: native
	I0806 08:34:57.039697   78922 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.126 22 <nil> <nil>}
	I0806 08:34:57.039712   78922 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0806 08:34:57.149272   78922 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722933297.121300330
	
	I0806 08:34:57.149291   78922 fix.go:216] guest clock: 1722933297.121300330
	I0806 08:34:57.149298   78922 fix.go:229] Guest: 2024-08-06 08:34:57.12130033 +0000 UTC Remote: 2024-08-06 08:34:57.035756139 +0000 UTC m=+360.415626547 (delta=85.544191ms)
	I0806 08:34:57.149319   78922 fix.go:200] guest clock delta is within tolerance: 85.544191ms
	I0806 08:34:57.149326   78922 start.go:83] releasing machines lock for "no-preload-703340", held for 19.795759576s
	I0806 08:34:57.149348   78922 main.go:141] libmachine: (no-preload-703340) Calling .DriverName
	I0806 08:34:57.149624   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetIP
	I0806 08:34:57.152286   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:57.152699   78922 main.go:141] libmachine: (no-preload-703340) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fa:d3", ip: ""} in network mk-no-preload-703340: {Iface:virbr4 ExpiryTime:2024-08-06 09:34:49 +0000 UTC Type:0 Mac:52:54:00:a1:fa:d3 Iaid: IPaddr:192.168.72.126 Prefix:24 Hostname:no-preload-703340 Clientid:01:52:54:00:a1:fa:d3}
	I0806 08:34:57.152731   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined IP address 192.168.72.126 and MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:57.152905   78922 main.go:141] libmachine: (no-preload-703340) Calling .DriverName
	I0806 08:34:57.153488   78922 main.go:141] libmachine: (no-preload-703340) Calling .DriverName
	I0806 08:34:57.153680   78922 main.go:141] libmachine: (no-preload-703340) Calling .DriverName
	I0806 08:34:57.153764   78922 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0806 08:34:57.153803   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHHostname
	I0806 08:34:57.154068   78922 ssh_runner.go:195] Run: cat /version.json
	I0806 08:34:57.154090   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHHostname
	I0806 08:34:57.156678   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:57.157026   78922 main.go:141] libmachine: (no-preload-703340) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fa:d3", ip: ""} in network mk-no-preload-703340: {Iface:virbr4 ExpiryTime:2024-08-06 09:34:49 +0000 UTC Type:0 Mac:52:54:00:a1:fa:d3 Iaid: IPaddr:192.168.72.126 Prefix:24 Hostname:no-preload-703340 Clientid:01:52:54:00:a1:fa:d3}
	I0806 08:34:57.157053   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined IP address 192.168.72.126 and MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:57.157123   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:57.157327   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHPort
	I0806 08:34:57.157492   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHKeyPath
	I0806 08:34:57.157580   78922 main.go:141] libmachine: (no-preload-703340) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fa:d3", ip: ""} in network mk-no-preload-703340: {Iface:virbr4 ExpiryTime:2024-08-06 09:34:49 +0000 UTC Type:0 Mac:52:54:00:a1:fa:d3 Iaid: IPaddr:192.168.72.126 Prefix:24 Hostname:no-preload-703340 Clientid:01:52:54:00:a1:fa:d3}
	I0806 08:34:57.157621   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHUsername
	I0806 08:34:57.157602   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined IP address 192.168.72.126 and MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:57.157701   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHPort
	I0806 08:34:57.157919   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHKeyPath
	I0806 08:34:57.157933   78922 sshutil.go:53] new ssh client: &{IP:192.168.72.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/no-preload-703340/id_rsa Username:docker}
	I0806 08:34:57.158089   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHUsername
	I0806 08:34:57.158243   78922 sshutil.go:53] new ssh client: &{IP:192.168.72.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/no-preload-703340/id_rsa Username:docker}
	I0806 08:34:57.237574   78922 ssh_runner.go:195] Run: systemctl --version
	I0806 08:34:57.260498   78922 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0806 08:34:57.403619   78922 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0806 08:34:57.410198   78922 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0806 08:34:57.410267   78922 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0806 08:34:57.426321   78922 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0806 08:34:57.426345   78922 start.go:495] detecting cgroup driver to use...
	I0806 08:34:57.426414   78922 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0806 08:34:57.442340   78922 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0806 08:34:57.458008   78922 docker.go:217] disabling cri-docker service (if available) ...
	I0806 08:34:57.458061   78922 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0806 08:34:57.471516   78922 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0806 08:34:57.484996   78922 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0806 08:34:57.604605   78922 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0806 08:34:57.753590   78922 docker.go:233] disabling docker service ...
	I0806 08:34:57.753651   78922 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0806 08:34:57.768534   78922 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0806 08:34:57.782187   78922 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0806 08:34:57.936046   78922 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0806 08:34:58.076402   78922 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0806 08:34:58.091386   78922 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0806 08:34:58.112810   78922 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0806 08:34:58.112873   78922 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:34:58.123376   78922 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0806 08:34:58.123438   78922 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:34:58.133981   78922 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:34:58.144891   78922 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:34:58.156276   78922 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0806 08:34:58.166927   78922 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:34:58.176956   78922 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:34:58.193969   78922 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:34:58.204122   78922 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0806 08:34:58.213745   78922 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0806 08:34:58.213859   78922 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0806 08:34:58.226556   78922 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0806 08:34:58.236413   78922 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 08:34:58.359776   78922 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0806 08:34:58.505947   78922 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0806 08:34:58.506016   78922 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0806 08:34:58.511591   78922 start.go:563] Will wait 60s for crictl version
	I0806 08:34:58.511649   78922 ssh_runner.go:195] Run: which crictl
	I0806 08:34:58.516259   78922 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0806 08:34:58.568580   78922 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0806 08:34:58.568658   78922 ssh_runner.go:195] Run: crio --version
	I0806 08:34:58.607964   78922 ssh_runner.go:195] Run: crio --version
	I0806 08:34:58.642564   78922 out.go:177] * Preparing Kubernetes v1.31.0-rc.0 on CRI-O 1.29.1 ...
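[editor's note] The lines above show the runtime bring-up pattern: restart crio, then "Will wait 60s for socket path /var/run/crio/crio.sock" before probing crictl. Below is a minimal, self-contained Go sketch of that wait step, assuming the same path and timeout as the log; it is an illustration of the polling idea, not minikube's actual implementation, and the helper name is hypothetical.

// wait_for_socket.go - poll until the CRI socket exists or the deadline passes.
package main

import (
	"fmt"
	"os"
	"time"
)

func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// The socket file appearing is the signal that crio finished restarting.
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("crio socket is ready")
}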
	I0806 08:34:54.018818   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:54.518987   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:55.018584   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:55.518848   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:56.018094   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:56.518052   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:57.018765   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:57.518128   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:58.018977   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:58.518233   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:55.789183   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:58.288863   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:54.612520   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:56.614634   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:59.115966   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:58.643802   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetIP
	I0806 08:34:58.646395   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:58.646714   78922 main.go:141] libmachine: (no-preload-703340) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fa:d3", ip: ""} in network mk-no-preload-703340: {Iface:virbr4 ExpiryTime:2024-08-06 09:34:49 +0000 UTC Type:0 Mac:52:54:00:a1:fa:d3 Iaid: IPaddr:192.168.72.126 Prefix:24 Hostname:no-preload-703340 Clientid:01:52:54:00:a1:fa:d3}
	I0806 08:34:58.646744   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined IP address 192.168.72.126 and MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:58.646948   78922 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0806 08:34:58.651136   78922 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0806 08:34:58.664347   78922 kubeadm.go:883] updating cluster {Name:no-preload-703340 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0-rc.0 ClusterName:no-preload-703340 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.126 Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m
0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0806 08:34:58.664479   78922 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime crio
	I0806 08:34:58.664511   78922 ssh_runner.go:195] Run: sudo crictl images --output json
	I0806 08:34:58.705043   78922 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-rc.0". assuming images are not preloaded.
	I0806 08:34:58.705080   78922 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0-rc.0 registry.k8s.io/kube-controller-manager:v1.31.0-rc.0 registry.k8s.io/kube-scheduler:v1.31.0-rc.0 registry.k8s.io/kube-proxy:v1.31.0-rc.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0806 08:34:58.705157   78922 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0806 08:34:58.705170   78922 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0806 08:34:58.705177   78922 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-rc.0
	I0806 08:34:58.705245   78922 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-rc.0
	I0806 08:34:58.705272   78922 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-rc.0
	I0806 08:34:58.705281   78922 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0806 08:34:58.705160   78922 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-rc.0
	I0806 08:34:58.705469   78922 image.go:134] retrieving image: registry.k8s.io/pause:3.10
	I0806 08:34:58.706962   78922 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-rc.0
	I0806 08:34:58.706993   78922 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0806 08:34:58.706986   78922 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-rc.0
	I0806 08:34:58.707018   78922 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-rc.0
	I0806 08:34:58.706968   78922 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0806 08:34:58.706989   78922 image.go:177] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0806 08:34:58.707066   78922 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0806 08:34:58.707315   78922 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-rc.0
	I0806 08:34:58.835957   78922 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0-rc.0
	I0806 08:34:58.839572   78922 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0806 08:34:58.846955   78922 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0806 08:34:58.865707   78922 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0-rc.0
	I0806 08:34:58.872729   78922 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0806 08:34:58.878484   78922 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0-rc.0
	I0806 08:34:58.898935   78922 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0-rc.0
	I0806 08:34:58.919621   78922 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0-rc.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0-rc.0" does not exist at hash "fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c" in container runtime
	I0806 08:34:58.919654   78922 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0-rc.0
	I0806 08:34:58.919691   78922 ssh_runner.go:195] Run: which crictl
	I0806 08:34:58.949493   78922 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0806 08:34:58.949544   78922 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0806 08:34:58.949597   78922 ssh_runner.go:195] Run: which crictl
	I0806 08:34:58.971429   78922 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0806 08:34:58.971476   78922 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0806 08:34:58.971522   78922 ssh_runner.go:195] Run: which crictl
	I0806 08:34:59.015321   78922 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0-rc.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0-rc.0" does not exist at hash "0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c" in container runtime
	I0806 08:34:59.015359   78922 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0-rc.0
	I0806 08:34:59.015397   78922 ssh_runner.go:195] Run: which crictl
	I0806 08:34:59.107718   78922 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0-rc.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0-rc.0" does not exist at hash "41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318" in container runtime
	I0806 08:34:59.107761   78922 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0-rc.0
	I0806 08:34:59.107797   78922 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0-rc.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0-rc.0" does not exist at hash "c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0" in container runtime
	I0806 08:34:59.107867   78922 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0-rc.0
	I0806 08:34:59.107904   78922 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0806 08:34:59.107987   78922 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0-rc.0
	I0806 08:34:59.107809   78922 ssh_runner.go:195] Run: which crictl
	I0806 08:34:59.108037   78922 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-rc.0
	I0806 08:34:59.107994   78922 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0806 08:34:59.108037   78922 ssh_runner.go:195] Run: which crictl
	I0806 08:34:59.120323   78922 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-rc.0
	I0806 08:34:59.218864   78922 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-rc.0
	I0806 08:34:59.218982   78922 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.31.0-rc.0
	I0806 08:34:59.222696   78922 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0-rc.0
	I0806 08:34:59.222735   78922 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0806 08:34:59.222794   78922 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-rc.0
	I0806 08:34:59.222820   78922 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0806 08:34:59.222854   78922 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0806 08:34:59.222882   78922 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.31.0-rc.0
	I0806 08:34:59.222900   78922 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-rc.0
	I0806 08:34:59.222933   78922 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.15-0
	I0806 08:34:59.222964   78922 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.31.0-rc.0
	I0806 08:34:59.227260   78922 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0-rc.0 (exists)
	I0806 08:34:59.227280   78922 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0-rc.0
	I0806 08:34:59.227319   78922 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-rc.0
	I0806 08:34:59.268220   78922 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0-rc.0 (exists)
	I0806 08:34:59.268240   78922 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-rc.0
	I0806 08:34:59.268291   78922 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0806 08:34:59.268325   78922 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0806 08:34:59.268332   78922 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.31.0-rc.0
	I0806 08:34:59.268349   78922 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0-rc.0 (exists)
	I0806 08:34:59.613734   78922 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0806 08:35:01.623115   78922 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-rc.0: (2.395772878s)
	I0806 08:35:01.623140   78922 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.31.0-rc.0: (2.354795538s)
	I0806 08:35:01.623152   78922 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-rc.0 from cache
	I0806 08:35:01.623161   78922 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0-rc.0 (exists)
	I0806 08:35:01.623168   78922 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0-rc.0
	I0806 08:35:01.623217   78922 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-rc.0
	I0806 08:35:01.623218   78922 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.009459709s)
	I0806 08:35:01.623310   78922 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0806 08:35:01.623340   78922 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0806 08:35:01.623366   78922 ssh_runner.go:195] Run: which crictl
	I0806 08:34:59.019117   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:59.518121   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:00.018102   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:00.518402   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:01.019125   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:01.518141   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:02.018063   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:02.518834   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:03.018074   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:03.518904   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:00.789181   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:02.790704   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:01.612449   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:03.655402   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:03.781865   78922 ssh_runner.go:235] Completed: which crictl: (2.158478864s)
	I0806 08:35:03.781942   78922 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0806 08:35:03.782003   78922 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-rc.0: (2.158736373s)
	I0806 08:35:03.782023   78922 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-rc.0 from cache
	I0806 08:35:03.782042   78922 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0-rc.0
	I0806 08:35:03.782112   78922 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-rc.0
	I0806 08:35:05.247976   78922 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.466007397s)
	I0806 08:35:05.248028   78922 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-rc.0: (1.465889339s)
	I0806 08:35:05.248049   78922 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0806 08:35:05.248054   78922 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-rc.0 from cache
	I0806 08:35:05.248065   78922 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0806 08:35:05.248117   78922 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0806 08:35:05.248166   78922 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0806 08:35:04.018047   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:04.518891   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:05.018447   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:05.518506   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:06.019133   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:06.518070   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:07.018362   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:07.518320   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:08.019034   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:08.518740   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:05.288137   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:07.288969   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:06.113473   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:08.612266   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:09.135551   78922 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.887407478s)
	I0806 08:35:09.135591   78922 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0806 08:35:09.135592   78922 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (3.887405091s)
	I0806 08:35:09.135605   78922 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0806 08:35:09.135618   78922 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0806 08:35:09.135645   78922 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0806 08:35:11.131984   78922 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.996319459s)
	I0806 08:35:11.132013   78922 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0806 08:35:11.132024   78922 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0-rc.0
	I0806 08:35:11.132070   78922 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-rc.0
	I0806 08:35:09.018349   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:09.518189   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:10.018824   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:10.518260   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:11.018914   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:11.518709   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:12.018349   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:12.518719   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:13.019022   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:13.518318   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:09.787576   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:11.789165   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:10.614185   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:13.112391   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:13.395512   78922 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-rc.0: (2.263417409s)
	I0806 08:35:13.395548   78922 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-rc.0 from cache
	I0806 08:35:13.395560   78922 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0806 08:35:13.395608   78922 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0806 08:35:14.150250   78922 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0806 08:35:14.150301   78922 cache_images.go:123] Successfully loaded all cached images
	I0806 08:35:14.150309   78922 cache_images.go:92] duration metric: took 15.445214072s to LoadCachedImages
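[editor's note] The cache-loading sequence above follows one pattern per image: stat the tarball under /var/lib/minikube/images, skip the scp when it already exists ("copy: skipping ... (exists)"), then `sudo podman load -i` it. The sketch below shows that final load step in Go, assuming podman is on PATH and running it locally rather than over SSH as minikube does; the tarball path is a placeholder.

// load_cached_image.go - load one pre-staged image tarball into the container runtime.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func loadCachedImage(tarball string) error {
	if _, err := os.Stat(tarball); err != nil {
		return fmt.Errorf("tarball missing, it would need to be transferred first: %w", err)
	}
	// Equivalent of the log's: sudo podman load -i /var/lib/minikube/images/<image>
	out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput()
	if err != nil {
		return fmt.Errorf("podman load failed: %v\n%s", err, out)
	}
	return nil
}

func main() {
	if err := loadCachedImage("/var/lib/minikube/images/etcd_3.5.15-0"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("image loaded")
}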
	I0806 08:35:14.150322   78922 kubeadm.go:934] updating node { 192.168.72.126 8443 v1.31.0-rc.0 crio true true} ...
	I0806 08:35:14.150540   78922 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-rc.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-703340 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.126
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-rc.0 ClusterName:no-preload-703340 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0806 08:35:14.150640   78922 ssh_runner.go:195] Run: crio config
	I0806 08:35:14.210763   78922 cni.go:84] Creating CNI manager for ""
	I0806 08:35:14.210794   78922 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0806 08:35:14.210806   78922 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0806 08:35:14.210842   78922 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.126 APIServerPort:8443 KubernetesVersion:v1.31.0-rc.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-703340 NodeName:no-preload-703340 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.126"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.126 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticP
odPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0806 08:35:14.210979   78922 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.126
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-703340"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.126
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.126"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-rc.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0806 08:35:14.211036   78922 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-rc.0
	I0806 08:35:14.222106   78922 binaries.go:44] Found k8s binaries, skipping transfer
	I0806 08:35:14.222189   78922 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0806 08:35:14.231944   78922 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I0806 08:35:14.250491   78922 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0806 08:35:14.268955   78922 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2166 bytes)
	I0806 08:35:14.289661   78922 ssh_runner.go:195] Run: grep 192.168.72.126	control-plane.minikube.internal$ /etc/hosts
	I0806 08:35:14.294005   78922 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.126	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0806 08:35:14.307700   78922 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 08:35:14.451462   78922 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0806 08:35:14.478155   78922 certs.go:68] Setting up /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/no-preload-703340 for IP: 192.168.72.126
	I0806 08:35:14.478177   78922 certs.go:194] generating shared ca certs ...
	I0806 08:35:14.478191   78922 certs.go:226] acquiring lock for ca certs: {Name:mkf5c5aed91f4108054d9f2c742cad66c86afbed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 08:35:14.478365   78922 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19370-13057/.minikube/ca.key
	I0806 08:35:14.478420   78922 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.key
	I0806 08:35:14.478434   78922 certs.go:256] generating profile certs ...
	I0806 08:35:14.478529   78922 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/no-preload-703340/client.key
	I0806 08:35:14.478615   78922 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/no-preload-703340/apiserver.key.0b170e14
	I0806 08:35:14.478665   78922 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/no-preload-703340/proxy-client.key
	I0806 08:35:14.478822   78922 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/20246.pem (1338 bytes)
	W0806 08:35:14.478871   78922 certs.go:480] ignoring /home/jenkins/minikube-integration/19370-13057/.minikube/certs/20246_empty.pem, impossibly tiny 0 bytes
	I0806 08:35:14.478885   78922 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca-key.pem (1675 bytes)
	I0806 08:35:14.478941   78922 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem (1078 bytes)
	I0806 08:35:14.478978   78922 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/cert.pem (1123 bytes)
	I0806 08:35:14.479014   78922 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/key.pem (1675 bytes)
	I0806 08:35:14.479078   78922 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem (1708 bytes)
	I0806 08:35:14.479773   78922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0806 08:35:14.527838   78922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0806 08:35:14.579788   78922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0806 08:35:14.614739   78922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0806 08:35:14.657559   78922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/no-preload-703340/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0806 08:35:14.699354   78922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/no-preload-703340/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0806 08:35:14.724875   78922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/no-preload-703340/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0806 08:35:14.751020   78922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/no-preload-703340/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0806 08:35:14.778300   78922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem --> /usr/share/ca-certificates/202462.pem (1708 bytes)
	I0806 08:35:14.805018   78922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0806 08:35:14.830791   78922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/certs/20246.pem --> /usr/share/ca-certificates/20246.pem (1338 bytes)
	I0806 08:35:14.856120   78922 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0806 08:35:14.874909   78922 ssh_runner.go:195] Run: openssl version
	I0806 08:35:14.880931   78922 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202462.pem && ln -fs /usr/share/ca-certificates/202462.pem /etc/ssl/certs/202462.pem"
	I0806 08:35:14.892741   78922 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202462.pem
	I0806 08:35:14.897430   78922 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  6 07:17 /usr/share/ca-certificates/202462.pem
	I0806 08:35:14.897489   78922 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202462.pem
	I0806 08:35:14.903324   78922 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202462.pem /etc/ssl/certs/3ec20f2e.0"
	I0806 08:35:14.914505   78922 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0806 08:35:14.925465   78922 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0806 08:35:14.930490   78922 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  6 07:07 /usr/share/ca-certificates/minikubeCA.pem
	I0806 08:35:14.930559   78922 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0806 08:35:14.936384   78922 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0806 08:35:14.947836   78922 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20246.pem && ln -fs /usr/share/ca-certificates/20246.pem /etc/ssl/certs/20246.pem"
	I0806 08:35:14.959376   78922 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20246.pem
	I0806 08:35:14.964118   78922 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  6 07:17 /usr/share/ca-certificates/20246.pem
	I0806 08:35:14.964168   78922 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20246.pem
	I0806 08:35:14.970266   78922 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20246.pem /etc/ssl/certs/51391683.0"
	I0806 08:35:14.982244   78922 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0806 08:35:14.987367   78922 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0806 08:35:14.993508   78922 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0806 08:35:14.999400   78922 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0806 08:35:15.005751   78922 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0806 08:35:15.011588   78922 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0806 08:35:15.017766   78922 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
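[editor's note] Each `openssl x509 -noout -checkend 86400` run above asks one question: will this certificate expire within the next 24 hours (86400 seconds)? A minimal Go equivalent of that check is sketched below; the input path is a placeholder, and this is only an illustration of the -checkend semantics, not minikube's code.

// cert_checkend.go - report whether a PEM certificate expires within a window.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(pemBytes []byte, window time.Duration) (bool, error) {
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		return false, fmt.Errorf("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// True when now+window is past NotAfter, i.e. the cert expires inside the window.
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	data, err := os.ReadFile("apiserver.crt") // placeholder path
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	soon, err := expiresWithin(data, 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}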
	I0806 08:35:15.023853   78922 kubeadm.go:392] StartCluster: {Name:no-preload-703340 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.0-rc.0 ClusterName:no-preload-703340 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.126 Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s
Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 08:35:15.024060   78922 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0806 08:35:15.024113   78922 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0806 08:35:15.072318   78922 cri.go:89] found id: ""
	I0806 08:35:15.072415   78922 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0806 08:35:15.083158   78922 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0806 08:35:15.083182   78922 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0806 08:35:15.083225   78922 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0806 08:35:15.093301   78922 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0806 08:35:15.094369   78922 kubeconfig.go:125] found "no-preload-703340" server: "https://192.168.72.126:8443"
	I0806 08:35:15.096545   78922 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0806 08:35:15.106190   78922 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.126
	I0806 08:35:15.106226   78922 kubeadm.go:1160] stopping kube-system containers ...
	I0806 08:35:15.106237   78922 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0806 08:35:15.106288   78922 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0806 08:35:15.145974   78922 cri.go:89] found id: ""
	I0806 08:35:15.146060   78922 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0806 08:35:15.163821   78922 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0806 08:35:15.174473   78922 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0806 08:35:15.174498   78922 kubeadm.go:157] found existing configuration files:
	
	I0806 08:35:15.174562   78922 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0806 08:35:15.185227   78922 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0806 08:35:15.185309   78922 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0806 08:35:15.195614   78922 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0806 08:35:15.205197   78922 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0806 08:35:15.205261   78922 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0806 08:35:15.215511   78922 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0806 08:35:15.225354   78922 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0806 08:35:15.225412   78922 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0806 08:35:15.234912   78922 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0806 08:35:15.244638   78922 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0806 08:35:15.244703   78922 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0806 08:35:15.255080   78922 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0806 08:35:15.265861   78922 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0806 08:35:15.399192   78922 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0806 08:35:16.182816   78922 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0806 08:35:16.397386   78922 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0806 08:35:16.463980   78922 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
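	The five kubeadm invocations above rebuild the control plane piece by piece (certs, kubeconfigs, kubelet start, static-pod manifests, local etcd) rather than running a full `kubeadm init`. A hedged sketch of that phase sequence, assuming kubeadm is on PATH and using the config path shown in the log:

```go
// Illustrative only: runs the same kubeadm init phases as the log above.
package main

import (
	"log"
	"os/exec"
)

func main() {
	cfg := "/var/tmp/minikube/kubeadm.yaml"
	phases := [][]string{
		{"init", "phase", "certs", "all", "--config", cfg},
		{"init", "phase", "kubeconfig", "all", "--config", cfg},
		{"init", "phase", "kubelet-start", "--config", cfg},
		{"init", "phase", "control-plane", "all", "--config", cfg},
		{"init", "phase", "etcd", "local", "--config", cfg},
	}
	for _, args := range phases {
		if out, err := exec.Command("kubeadm", args...).CombinedOutput(); err != nil {
			log.Fatalf("kubeadm %v failed: %v\n%s", args, err, out)
		}
	}
}
```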
	I0806 08:35:16.550229   78922 api_server.go:52] waiting for apiserver process to appear ...
	I0806 08:35:16.550347   78922 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:14.018930   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:14.518251   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:15.018981   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:15.518764   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:16.018114   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:16.518157   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:17.018124   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:17.518604   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:18.018742   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:18.518755   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:14.290050   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:16.788472   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:18.788526   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:15.112664   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:17.113214   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:17.050447   78922 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:17.551220   78922 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:17.603538   78922 api_server.go:72] duration metric: took 1.053308623s to wait for apiserver process to appear ...
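	Both restart paths poll `sudo pgrep -xnf kube-apiserver.*minikube.*` at a fixed interval until an apiserver process exists: process 78922 succeeds after about a second here, while process 79927 keeps polling every 500ms for minutes before falling back to diagnostics. A minimal sketch of such a poll loop, with a hypothetical timeout and local execution instead of SSH:

```go
// Sketch of the apiserver-process wait: poll pgrep every 500ms until it
// succeeds or the (hypothetical) deadline passes.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func waitForAPIServerProcess(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// pgrep exits 0 once a matching kube-apiserver process exists.
		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("kube-apiserver process did not appear within %s", timeout)
}

func main() {
	if err := waitForAPIServerProcess(2 * time.Minute); err != nil {
		fmt.Println(err)
	}
}
```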
	I0806 08:35:17.603574   78922 api_server.go:88] waiting for apiserver healthz status ...
	I0806 08:35:17.603595   78922 api_server.go:253] Checking apiserver healthz at https://192.168.72.126:8443/healthz ...
	I0806 08:35:17.604064   78922 api_server.go:269] stopped: https://192.168.72.126:8443/healthz: Get "https://192.168.72.126:8443/healthz": dial tcp 192.168.72.126:8443: connect: connection refused
	I0806 08:35:18.104696   78922 api_server.go:253] Checking apiserver healthz at https://192.168.72.126:8443/healthz ...
	I0806 08:35:20.458841   78922 api_server.go:279] https://192.168.72.126:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0806 08:35:20.458876   78922 api_server.go:103] status: https://192.168.72.126:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0806 08:35:20.458893   78922 api_server.go:253] Checking apiserver healthz at https://192.168.72.126:8443/healthz ...
	I0806 08:35:20.495570   78922 api_server.go:279] https://192.168.72.126:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0806 08:35:20.495605   78922 api_server.go:103] status: https://192.168.72.126:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0806 08:35:20.603728   78922 api_server.go:253] Checking apiserver healthz at https://192.168.72.126:8443/healthz ...
	I0806 08:35:20.628351   78922 api_server.go:279] https://192.168.72.126:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0806 08:35:20.628409   78922 api_server.go:103] status: https://192.168.72.126:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0806 08:35:21.103717   78922 api_server.go:253] Checking apiserver healthz at https://192.168.72.126:8443/healthz ...
	I0806 08:35:21.109031   78922 api_server.go:279] https://192.168.72.126:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0806 08:35:21.109064   78922 api_server.go:103] status: https://192.168.72.126:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0806 08:35:21.604400   78922 api_server.go:253] Checking apiserver healthz at https://192.168.72.126:8443/healthz ...
	I0806 08:35:21.613667   78922 api_server.go:279] https://192.168.72.126:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0806 08:35:21.613706   78922 api_server.go:103] status: https://192.168.72.126:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0806 08:35:22.103921   78922 api_server.go:253] Checking apiserver healthz at https://192.168.72.126:8443/healthz ...
	I0806 08:35:22.111933   78922 api_server.go:279] https://192.168.72.126:8443/healthz returned 200:
	ok
	I0806 08:35:22.118406   78922 api_server.go:141] control plane version: v1.31.0-rc.0
	I0806 08:35:22.118430   78922 api_server.go:131] duration metric: took 4.514848902s to wait for apiserver health ...
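	The wait above polls `/healthz` over TLS and treats anything other than a 200 with body `ok` as not ready: first connection refused, then 403 while anonymous RBAC is not yet bootstrapped, then 500 while post-start hooks finish, and finally 200 after roughly 4.5s. A hedged sketch of that readiness loop; the anonymous client and InsecureSkipVerify are simplifications for illustration, not a claim about how minikube's checker authenticates:

```go
// Minimal sketch: poll the apiserver /healthz endpoint until it reports "ok".
// Assumptions: anonymous requests and InsecureSkipVerify for brevity; a real
// checker would trust the cluster CA instead.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			// 403 and 500 (post-start hooks still failing) both mean "keep waiting".
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s did not become healthy within %s", url, timeout)
}

func main() {
	if err := waitHealthz("https://192.168.72.126:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}
```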
	I0806 08:35:22.118438   78922 cni.go:84] Creating CNI manager for ""
	I0806 08:35:22.118444   78922 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0806 08:35:22.119689   78922 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0806 08:35:19.018302   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:19.518693   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:20.018804   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:20.518878   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:21.018974   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:21.518747   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:22.018530   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:22.519082   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:23.019088   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:23.518932   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:20.788766   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:23.288382   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:19.614878   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:22.113841   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:22.121321   78922 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0806 08:35:22.132760   78922 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
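	Choosing the bridge CNI means a single conflist is copied into /etc/cni/net.d (the 496-byte 1-k8s.conflist above). The log does not reproduce the file itself; the snippet below writes a representative bridge + portmap conflist of the kind CRI-O would load, purely as an illustration of the shape of that config:

```go
// Illustration only: a representative CNI bridge conflist; the actual file
// minikube generates is not shown in this log.
package main

import "os"

const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
`

func main() {
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0644); err != nil {
		panic(err)
	}
}
```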
	I0806 08:35:22.151770   78922 system_pods.go:43] waiting for kube-system pods to appear ...
	I0806 08:35:22.161775   78922 system_pods.go:59] 8 kube-system pods found
	I0806 08:35:22.161806   78922 system_pods.go:61] "coredns-6f6b679f8f-ft9vg" [0f9ea981-764b-4777-bf78-8798570b20d8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0806 08:35:22.161818   78922 system_pods.go:61] "etcd-no-preload-703340" [6adb80b2-0d33-4d53-a868-f83226cc7c26] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0806 08:35:22.161826   78922 system_pods.go:61] "kube-apiserver-no-preload-703340" [4a415305-2f94-42c5-901d-49f0b0114cf6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0806 08:35:22.161832   78922 system_pods.go:61] "kube-controller-manager-no-preload-703340" [0351f8f4-5046-4068-ada3-103394ee9ae2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0806 08:35:22.161837   78922 system_pods.go:61] "kube-proxy-7t2sd" [27f3aa37-eca2-450b-a98d-b3246a7e48ac] Running
	I0806 08:35:22.161841   78922 system_pods.go:61] "kube-scheduler-no-preload-703340" [ad4f6239-f5e1-4a76-91ea-8da5478e8992] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0806 08:35:22.161845   78922 system_pods.go:61] "metrics-server-6867b74b74-qz4vx" [800676b3-4940-4ae2-ad37-7adbd67ea8fa] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0806 08:35:22.161849   78922 system_pods.go:61] "storage-provisioner" [8fa414bf-3923-4c20-acd7-a48fba6f4044] Running
	I0806 08:35:22.161855   78922 system_pods.go:74] duration metric: took 10.062633ms to wait for pod list to return data ...
	I0806 08:35:22.161865   78922 node_conditions.go:102] verifying NodePressure condition ...
	I0806 08:35:22.165604   78922 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0806 08:35:22.165626   78922 node_conditions.go:123] node cpu capacity is 2
	I0806 08:35:22.165637   78922 node_conditions.go:105] duration metric: took 3.767899ms to run NodePressure ...
	I0806 08:35:22.165656   78922 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0806 08:35:22.445769   78922 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0806 08:35:22.451056   78922 kubeadm.go:739] kubelet initialised
	I0806 08:35:22.451078   78922 kubeadm.go:740] duration metric: took 5.279835ms waiting for restarted kubelet to initialise ...
	I0806 08:35:22.451085   78922 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0806 08:35:22.458199   78922 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6f6b679f8f-ft9vg" in "kube-system" namespace to be "Ready" ...
	I0806 08:35:22.463913   78922 pod_ready.go:97] node "no-preload-703340" hosting pod "coredns-6f6b679f8f-ft9vg" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-703340" has status "Ready":"False"
	I0806 08:35:22.463936   78922 pod_ready.go:81] duration metric: took 5.712258ms for pod "coredns-6f6b679f8f-ft9vg" in "kube-system" namespace to be "Ready" ...
	E0806 08:35:22.463945   78922 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-703340" hosting pod "coredns-6f6b679f8f-ft9vg" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-703340" has status "Ready":"False"
	I0806 08:35:22.463952   78922 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-703340" in "kube-system" namespace to be "Ready" ...
	I0806 08:35:22.471896   78922 pod_ready.go:97] node "no-preload-703340" hosting pod "etcd-no-preload-703340" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-703340" has status "Ready":"False"
	I0806 08:35:22.471921   78922 pod_ready.go:81] duration metric: took 7.962236ms for pod "etcd-no-preload-703340" in "kube-system" namespace to be "Ready" ...
	E0806 08:35:22.471930   78922 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-703340" hosting pod "etcd-no-preload-703340" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-703340" has status "Ready":"False"
	I0806 08:35:22.471936   78922 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-703340" in "kube-system" namespace to be "Ready" ...
	I0806 08:35:22.478365   78922 pod_ready.go:97] node "no-preload-703340" hosting pod "kube-apiserver-no-preload-703340" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-703340" has status "Ready":"False"
	I0806 08:35:22.478394   78922 pod_ready.go:81] duration metric: took 6.451038ms for pod "kube-apiserver-no-preload-703340" in "kube-system" namespace to be "Ready" ...
	E0806 08:35:22.478405   78922 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-703340" hosting pod "kube-apiserver-no-preload-703340" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-703340" has status "Ready":"False"
	I0806 08:35:22.478414   78922 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-703340" in "kube-system" namespace to be "Ready" ...
	I0806 08:35:22.559022   78922 pod_ready.go:97] node "no-preload-703340" hosting pod "kube-controller-manager-no-preload-703340" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-703340" has status "Ready":"False"
	I0806 08:35:22.559058   78922 pod_ready.go:81] duration metric: took 80.632686ms for pod "kube-controller-manager-no-preload-703340" in "kube-system" namespace to be "Ready" ...
	E0806 08:35:22.559071   78922 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-703340" hosting pod "kube-controller-manager-no-preload-703340" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-703340" has status "Ready":"False"
	I0806 08:35:22.559080   78922 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-7t2sd" in "kube-system" namespace to be "Ready" ...
	I0806 08:35:22.955121   78922 pod_ready.go:92] pod "kube-proxy-7t2sd" in "kube-system" namespace has status "Ready":"True"
	I0806 08:35:22.955142   78922 pod_ready.go:81] duration metric: took 396.053517ms for pod "kube-proxy-7t2sd" in "kube-system" namespace to be "Ready" ...
	I0806 08:35:22.955152   78922 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-703340" in "kube-system" namespace to be "Ready" ...
	I0806 08:35:24.961266   78922 pod_ready.go:102] pod "kube-scheduler-no-preload-703340" in "kube-system" namespace has status "Ready":"False"
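	Each pod_ready wait above keys off the pod's Ready condition: while the node itself reports Ready=False the per-pod wait is skipped with an error, and otherwise the loop re-checks until the condition turns True (the scheduler pod takes about 12s here, the metrics-server pods never get there). A small sketch of the condition check, assuming the k8s.io/api/core/v1 types are available:

```go
// Sketch, assuming k8s.io/api/core/v1: report whether a pod's Ready
// condition is True, which is what the pod_ready waits above poll for.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func isPodReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	pod := &corev1.Pod{Status: corev1.PodStatus{Conditions: []corev1.PodCondition{
		{Type: corev1.PodReady, Status: corev1.ConditionFalse},
	}}}
	fmt.Println("ready:", isPodReady(pod)) // ready: false
}
```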
	I0806 08:35:24.018764   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:24.519010   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:25.019090   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:25.518260   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:26.018784   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:26.518903   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:27.019011   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:27.518740   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:28.018299   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:28.518391   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:25.787771   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:27.788719   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:24.612790   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:26.612992   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:29.112518   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:26.962028   78922 pod_ready.go:102] pod "kube-scheduler-no-preload-703340" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:29.461941   78922 pod_ready.go:102] pod "kube-scheduler-no-preload-703340" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:29.018449   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:29.518446   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:30.018179   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:30.518270   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:31.018913   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:31.518140   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:32.018621   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:32.518573   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:33.018512   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:33.518604   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:29.790220   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:32.287668   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:31.112680   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:33.612484   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:31.961716   78922 pod_ready.go:102] pod "kube-scheduler-no-preload-703340" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:34.461368   78922 pod_ready.go:102] pod "kube-scheduler-no-preload-703340" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:34.961460   78922 pod_ready.go:92] pod "kube-scheduler-no-preload-703340" in "kube-system" namespace has status "Ready":"True"
	I0806 08:35:34.961483   78922 pod_ready.go:81] duration metric: took 12.006324097s for pod "kube-scheduler-no-preload-703340" in "kube-system" namespace to be "Ready" ...
	I0806 08:35:34.961495   78922 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace to be "Ready" ...
	I0806 08:35:34.018132   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:34.518476   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:35.018797   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:35.518507   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:36.018976   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:36.518062   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:37.018463   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:37.518648   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:38.018861   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:38.518236   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:34.288032   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:36.787530   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:38.789235   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:35.612530   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:38.114833   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:36.967646   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:38.969106   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:41.469056   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:39.018424   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:39.518065   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:40.018381   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:40.518979   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:41.018310   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:41.518277   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:42.018881   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:42.518157   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:43.018166   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:43.518740   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:41.287534   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:43.288666   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:40.115187   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:42.611895   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:43.967648   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:45.968277   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:44.018075   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:44.518545   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:45.018342   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:45.519066   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:46.018796   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:46.519031   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:47.018876   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:47.518649   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:35:47.518746   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:35:47.559141   79927 cri.go:89] found id: ""
	I0806 08:35:47.559175   79927 logs.go:276] 0 containers: []
	W0806 08:35:47.559188   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:35:47.559195   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:35:47.559253   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:35:47.596840   79927 cri.go:89] found id: ""
	I0806 08:35:47.596900   79927 logs.go:276] 0 containers: []
	W0806 08:35:47.596913   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:35:47.596921   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:35:47.596995   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:35:47.634932   79927 cri.go:89] found id: ""
	I0806 08:35:47.634961   79927 logs.go:276] 0 containers: []
	W0806 08:35:47.634969   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:35:47.634975   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:35:47.635031   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:35:47.672170   79927 cri.go:89] found id: ""
	I0806 08:35:47.672202   79927 logs.go:276] 0 containers: []
	W0806 08:35:47.672213   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:35:47.672220   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:35:47.672284   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:35:47.709643   79927 cri.go:89] found id: ""
	I0806 08:35:47.709672   79927 logs.go:276] 0 containers: []
	W0806 08:35:47.709683   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:35:47.709690   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:35:47.709752   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:35:47.750036   79927 cri.go:89] found id: ""
	I0806 08:35:47.750067   79927 logs.go:276] 0 containers: []
	W0806 08:35:47.750076   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:35:47.750082   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:35:47.750149   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:35:47.791837   79927 cri.go:89] found id: ""
	I0806 08:35:47.791862   79927 logs.go:276] 0 containers: []
	W0806 08:35:47.791872   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:35:47.791879   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:35:47.791938   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:35:47.831972   79927 cri.go:89] found id: ""
	I0806 08:35:47.831997   79927 logs.go:276] 0 containers: []
	W0806 08:35:47.832005   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:35:47.832014   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:35:47.832028   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:35:47.962228   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:35:47.962252   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:35:47.962267   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:35:48.031276   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:35:48.031314   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:35:48.075869   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:35:48.075912   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:35:48.131817   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:35:48.131859   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
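	Because the old-k8s-version run never finds a running apiserver, it falls back to gathering diagnostics: a crictl listing per control-plane component (all empty here), `kubectl describe nodes` (which fails while :8443 is refusing connections), and journalctl output for crio and the kubelet. A hedged sketch of the listing step, assuming crictl is installed locally:

```go
// Illustrative fallback diagnostics: list containers per component with crictl,
// mirroring the "listing CRI containers" loop in the log above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for _, name := range components {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		ids := strings.Fields(string(out))
		if err != nil || len(ids) == 0 {
			fmt.Printf("no container found matching %q\n", name)
			continue
		}
		fmt.Printf("%s: %v\n", name, ids)
	}
}
```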
	I0806 08:35:45.788042   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:47.789501   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:44.612796   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:47.112009   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:48.468289   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:50.469101   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:50.647351   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:50.661916   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:35:50.662002   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:35:50.707294   79927 cri.go:89] found id: ""
	I0806 08:35:50.707323   79927 logs.go:276] 0 containers: []
	W0806 08:35:50.707335   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:35:50.707342   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:35:50.707396   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:35:50.744111   79927 cri.go:89] found id: ""
	I0806 08:35:50.744133   79927 logs.go:276] 0 containers: []
	W0806 08:35:50.744149   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:35:50.744160   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:35:50.744205   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:35:50.781482   79927 cri.go:89] found id: ""
	I0806 08:35:50.781513   79927 logs.go:276] 0 containers: []
	W0806 08:35:50.781524   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:35:50.781531   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:35:50.781591   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:35:50.828788   79927 cri.go:89] found id: ""
	I0806 08:35:50.828817   79927 logs.go:276] 0 containers: []
	W0806 08:35:50.828827   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:35:50.828834   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:35:50.828896   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:35:50.865168   79927 cri.go:89] found id: ""
	I0806 08:35:50.865199   79927 logs.go:276] 0 containers: []
	W0806 08:35:50.865211   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:35:50.865219   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:35:50.865273   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:35:50.902814   79927 cri.go:89] found id: ""
	I0806 08:35:50.902842   79927 logs.go:276] 0 containers: []
	W0806 08:35:50.902851   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:35:50.902856   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:35:50.902907   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:35:50.939861   79927 cri.go:89] found id: ""
	I0806 08:35:50.939891   79927 logs.go:276] 0 containers: []
	W0806 08:35:50.939899   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:35:50.939905   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:35:50.939961   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:35:50.984743   79927 cri.go:89] found id: ""
	I0806 08:35:50.984773   79927 logs.go:276] 0 containers: []
	W0806 08:35:50.984782   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:35:50.984793   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:35:50.984809   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:35:50.998476   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:35:50.998515   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:35:51.081100   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:35:51.081126   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:35:51.081139   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:35:51.157634   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:35:51.157679   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:35:51.210685   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:35:51.210716   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:35:50.288343   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:52.804477   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:49.612661   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:52.113950   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:54.114200   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:52.969251   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:55.468041   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:53.778620   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:53.793090   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:35:53.793154   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:35:53.837558   79927 cri.go:89] found id: ""
	I0806 08:35:53.837584   79927 logs.go:276] 0 containers: []
	W0806 08:35:53.837594   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:35:53.837599   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:35:53.837670   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:35:53.878855   79927 cri.go:89] found id: ""
	I0806 08:35:53.878884   79927 logs.go:276] 0 containers: []
	W0806 08:35:53.878893   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:35:53.878898   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:35:53.878945   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:35:53.918330   79927 cri.go:89] found id: ""
	I0806 08:35:53.918357   79927 logs.go:276] 0 containers: []
	W0806 08:35:53.918366   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:35:53.918371   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:35:53.918422   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:35:53.959156   79927 cri.go:89] found id: ""
	I0806 08:35:53.959186   79927 logs.go:276] 0 containers: []
	W0806 08:35:53.959196   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:35:53.959204   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:35:53.959309   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:35:53.996775   79927 cri.go:89] found id: ""
	I0806 08:35:53.996802   79927 logs.go:276] 0 containers: []
	W0806 08:35:53.996810   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:35:53.996816   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:35:53.996874   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:35:54.033047   79927 cri.go:89] found id: ""
	I0806 08:35:54.033072   79927 logs.go:276] 0 containers: []
	W0806 08:35:54.033082   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:35:54.033089   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:35:54.033152   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:35:54.069426   79927 cri.go:89] found id: ""
	I0806 08:35:54.069458   79927 logs.go:276] 0 containers: []
	W0806 08:35:54.069468   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:35:54.069475   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:35:54.069536   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:35:54.109114   79927 cri.go:89] found id: ""
	I0806 08:35:54.109143   79927 logs.go:276] 0 containers: []
	W0806 08:35:54.109153   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:35:54.109164   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:35:54.109179   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:35:54.161264   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:35:54.161298   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:35:54.175216   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:35:54.175247   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:35:54.254970   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:35:54.254999   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:35:54.255014   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:35:54.331561   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:35:54.331596   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:35:56.873110   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:56.889302   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:35:56.889378   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:35:56.931875   79927 cri.go:89] found id: ""
	I0806 08:35:56.931904   79927 logs.go:276] 0 containers: []
	W0806 08:35:56.931913   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:35:56.931919   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:35:56.931975   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:35:56.995549   79927 cri.go:89] found id: ""
	I0806 08:35:56.995587   79927 logs.go:276] 0 containers: []
	W0806 08:35:56.995601   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:35:56.995608   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:35:56.995675   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:35:57.042708   79927 cri.go:89] found id: ""
	I0806 08:35:57.042735   79927 logs.go:276] 0 containers: []
	W0806 08:35:57.042743   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:35:57.042749   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:35:57.042833   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:35:57.079160   79927 cri.go:89] found id: ""
	I0806 08:35:57.079187   79927 logs.go:276] 0 containers: []
	W0806 08:35:57.079197   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:35:57.079203   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:35:57.079260   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:35:57.116358   79927 cri.go:89] found id: ""
	I0806 08:35:57.116409   79927 logs.go:276] 0 containers: []
	W0806 08:35:57.116421   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:35:57.116429   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:35:57.116489   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:35:57.151051   79927 cri.go:89] found id: ""
	I0806 08:35:57.151079   79927 logs.go:276] 0 containers: []
	W0806 08:35:57.151087   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:35:57.151096   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:35:57.151148   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:35:57.189747   79927 cri.go:89] found id: ""
	I0806 08:35:57.189791   79927 logs.go:276] 0 containers: []
	W0806 08:35:57.189800   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:35:57.189806   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:35:57.189871   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:35:57.225251   79927 cri.go:89] found id: ""
	I0806 08:35:57.225277   79927 logs.go:276] 0 containers: []
	W0806 08:35:57.225287   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:35:57.225298   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:35:57.225314   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:35:57.238746   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:35:57.238773   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:35:57.312931   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:35:57.312959   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:35:57.312976   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:35:57.394104   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:35:57.394139   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:35:57.435993   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:35:57.436028   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:35:55.288712   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:57.789512   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:56.612875   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:58.613571   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:57.468429   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:59.969618   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:59.989556   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:36:00.004129   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:36:00.004205   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:36:00.042192   79927 cri.go:89] found id: ""
	I0806 08:36:00.042215   79927 logs.go:276] 0 containers: []
	W0806 08:36:00.042223   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:36:00.042228   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:36:00.042273   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:36:00.080921   79927 cri.go:89] found id: ""
	I0806 08:36:00.080956   79927 logs.go:276] 0 containers: []
	W0806 08:36:00.080966   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:36:00.080973   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:36:00.081033   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:36:00.123186   79927 cri.go:89] found id: ""
	I0806 08:36:00.123206   79927 logs.go:276] 0 containers: []
	W0806 08:36:00.123216   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:36:00.123223   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:36:00.123274   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:36:00.165252   79927 cri.go:89] found id: ""
	I0806 08:36:00.165281   79927 logs.go:276] 0 containers: []
	W0806 08:36:00.165289   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:36:00.165294   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:36:00.165340   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:36:00.201601   79927 cri.go:89] found id: ""
	I0806 08:36:00.201639   79927 logs.go:276] 0 containers: []
	W0806 08:36:00.201653   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:36:00.201662   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:36:00.201730   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:36:00.236933   79927 cri.go:89] found id: ""
	I0806 08:36:00.236964   79927 logs.go:276] 0 containers: []
	W0806 08:36:00.236975   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:36:00.236983   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:36:00.237043   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:36:00.273215   79927 cri.go:89] found id: ""
	I0806 08:36:00.273247   79927 logs.go:276] 0 containers: []
	W0806 08:36:00.273256   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:36:00.273261   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:36:00.273319   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:36:00.308075   79927 cri.go:89] found id: ""
	I0806 08:36:00.308105   79927 logs.go:276] 0 containers: []
	W0806 08:36:00.308116   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:36:00.308126   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:36:00.308141   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:36:00.360096   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:36:00.360146   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:36:00.374484   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:36:00.374521   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:36:00.454297   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:36:00.454318   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:36:00.454334   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:36:00.542665   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:36:00.542702   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:36:03.084297   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:36:03.098661   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:36:03.098737   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:36:03.134751   79927 cri.go:89] found id: ""
	I0806 08:36:03.134786   79927 logs.go:276] 0 containers: []
	W0806 08:36:03.134797   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:36:03.134805   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:36:03.134873   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:36:03.177412   79927 cri.go:89] found id: ""
	I0806 08:36:03.177442   79927 logs.go:276] 0 containers: []
	W0806 08:36:03.177453   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:36:03.177461   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:36:03.177516   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:36:03.213177   79927 cri.go:89] found id: ""
	I0806 08:36:03.213209   79927 logs.go:276] 0 containers: []
	W0806 08:36:03.213221   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:36:03.213229   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:36:03.213291   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:36:03.254569   79927 cri.go:89] found id: ""
	I0806 08:36:03.254594   79927 logs.go:276] 0 containers: []
	W0806 08:36:03.254603   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:36:03.254608   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:36:03.254656   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:36:03.292564   79927 cri.go:89] found id: ""
	I0806 08:36:03.292590   79927 logs.go:276] 0 containers: []
	W0806 08:36:03.292597   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:36:03.292610   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:36:03.292671   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:36:03.331572   79927 cri.go:89] found id: ""
	I0806 08:36:03.331596   79927 logs.go:276] 0 containers: []
	W0806 08:36:03.331605   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:36:03.331610   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:36:03.331658   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:36:03.366368   79927 cri.go:89] found id: ""
	I0806 08:36:03.366398   79927 logs.go:276] 0 containers: []
	W0806 08:36:03.366409   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:36:03.366417   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:36:03.366469   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:36:03.404345   79927 cri.go:89] found id: ""
	I0806 08:36:03.404387   79927 logs.go:276] 0 containers: []
	W0806 08:36:03.404399   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:36:03.404411   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:36:03.404425   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:36:03.454825   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:36:03.454864   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:36:03.469437   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:36:03.469466   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:36:03.553247   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:36:03.553274   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:36:03.553290   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:36:03.635461   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:36:03.635509   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:36:00.287469   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:02.787634   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:01.111573   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:03.613750   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:02.468084   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:04.468169   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:06.469036   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:06.175216   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:36:06.190083   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:36:06.190158   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:36:06.226428   79927 cri.go:89] found id: ""
	I0806 08:36:06.226459   79927 logs.go:276] 0 containers: []
	W0806 08:36:06.226468   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:36:06.226473   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:36:06.226529   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:36:06.263781   79927 cri.go:89] found id: ""
	I0806 08:36:06.263805   79927 logs.go:276] 0 containers: []
	W0806 08:36:06.263813   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:36:06.263819   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:36:06.263895   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:36:06.304335   79927 cri.go:89] found id: ""
	I0806 08:36:06.304386   79927 logs.go:276] 0 containers: []
	W0806 08:36:06.304397   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:36:06.304404   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:36:06.304466   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:36:06.341138   79927 cri.go:89] found id: ""
	I0806 08:36:06.341166   79927 logs.go:276] 0 containers: []
	W0806 08:36:06.341177   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:36:06.341184   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:36:06.341248   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:36:06.381547   79927 cri.go:89] found id: ""
	I0806 08:36:06.381577   79927 logs.go:276] 0 containers: []
	W0806 08:36:06.381587   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:36:06.381594   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:36:06.381662   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:36:06.417606   79927 cri.go:89] found id: ""
	I0806 08:36:06.417630   79927 logs.go:276] 0 containers: []
	W0806 08:36:06.417637   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:36:06.417643   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:36:06.417700   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:36:06.456258   79927 cri.go:89] found id: ""
	I0806 08:36:06.456287   79927 logs.go:276] 0 containers: []
	W0806 08:36:06.456298   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:36:06.456307   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:36:06.456404   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:36:06.494268   79927 cri.go:89] found id: ""
	I0806 08:36:06.494309   79927 logs.go:276] 0 containers: []
	W0806 08:36:06.494321   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:36:06.494332   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:36:06.494346   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:36:06.547670   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:36:06.547710   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:36:06.562020   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:36:06.562049   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:36:06.632691   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:36:06.632717   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:36:06.632729   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:36:06.721737   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:36:06.721780   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:36:04.787931   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:07.287463   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:06.111545   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:08.113637   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:08.968767   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:11.468847   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:09.265154   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:36:09.278664   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:36:09.278743   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:36:09.313464   79927 cri.go:89] found id: ""
	I0806 08:36:09.313494   79927 logs.go:276] 0 containers: []
	W0806 08:36:09.313506   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:36:09.313513   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:36:09.313561   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:36:09.352107   79927 cri.go:89] found id: ""
	I0806 08:36:09.352133   79927 logs.go:276] 0 containers: []
	W0806 08:36:09.352146   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:36:09.352153   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:36:09.352215   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:36:09.387547   79927 cri.go:89] found id: ""
	I0806 08:36:09.387581   79927 logs.go:276] 0 containers: []
	W0806 08:36:09.387594   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:36:09.387601   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:36:09.387684   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:36:09.422157   79927 cri.go:89] found id: ""
	I0806 08:36:09.422193   79927 logs.go:276] 0 containers: []
	W0806 08:36:09.422202   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:36:09.422207   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:36:09.422275   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:36:09.463065   79927 cri.go:89] found id: ""
	I0806 08:36:09.463106   79927 logs.go:276] 0 containers: []
	W0806 08:36:09.463122   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:36:09.463145   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:36:09.463209   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:36:09.499450   79927 cri.go:89] found id: ""
	I0806 08:36:09.499474   79927 logs.go:276] 0 containers: []
	W0806 08:36:09.499484   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:36:09.499491   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:36:09.499550   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:36:09.536411   79927 cri.go:89] found id: ""
	I0806 08:36:09.536442   79927 logs.go:276] 0 containers: []
	W0806 08:36:09.536453   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:36:09.536460   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:36:09.536533   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:36:09.572044   79927 cri.go:89] found id: ""
	I0806 08:36:09.572070   79927 logs.go:276] 0 containers: []
	W0806 08:36:09.572079   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:36:09.572087   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:36:09.572138   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:36:09.612385   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:36:09.612418   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:36:09.666790   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:36:09.666852   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:36:09.680707   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:36:09.680753   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:36:09.753446   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:36:09.753473   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:36:09.753491   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:36:12.336824   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:36:12.352545   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:36:12.352626   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:36:12.393158   79927 cri.go:89] found id: ""
	I0806 08:36:12.393188   79927 logs.go:276] 0 containers: []
	W0806 08:36:12.393199   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:36:12.393206   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:36:12.393255   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:36:12.428501   79927 cri.go:89] found id: ""
	I0806 08:36:12.428525   79927 logs.go:276] 0 containers: []
	W0806 08:36:12.428534   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:36:12.428539   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:36:12.428596   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:36:12.463033   79927 cri.go:89] found id: ""
	I0806 08:36:12.463062   79927 logs.go:276] 0 containers: []
	W0806 08:36:12.463072   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:36:12.463079   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:36:12.463136   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:36:12.500350   79927 cri.go:89] found id: ""
	I0806 08:36:12.500402   79927 logs.go:276] 0 containers: []
	W0806 08:36:12.500413   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:36:12.500421   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:36:12.500477   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:36:12.537323   79927 cri.go:89] found id: ""
	I0806 08:36:12.537350   79927 logs.go:276] 0 containers: []
	W0806 08:36:12.537360   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:36:12.537366   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:36:12.537414   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:36:12.575525   79927 cri.go:89] found id: ""
	I0806 08:36:12.575551   79927 logs.go:276] 0 containers: []
	W0806 08:36:12.575561   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:36:12.575568   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:36:12.575629   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:36:12.615375   79927 cri.go:89] found id: ""
	I0806 08:36:12.615436   79927 logs.go:276] 0 containers: []
	W0806 08:36:12.615451   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:36:12.615459   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:36:12.615531   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:36:12.655269   79927 cri.go:89] found id: ""
	I0806 08:36:12.655300   79927 logs.go:276] 0 containers: []
	W0806 08:36:12.655311   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:36:12.655324   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:36:12.655342   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:36:12.709097   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:36:12.709134   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:36:12.724330   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:36:12.724380   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:36:12.801075   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:36:12.801099   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:36:12.801111   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:36:12.885670   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:36:12.885704   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:36:09.288192   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:11.789794   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:10.613121   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:12.613812   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:13.969367   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:16.468276   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:15.427222   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:36:15.442092   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:36:15.442206   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:36:15.485634   79927 cri.go:89] found id: ""
	I0806 08:36:15.485657   79927 logs.go:276] 0 containers: []
	W0806 08:36:15.485665   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:36:15.485671   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:36:15.485715   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:36:15.520791   79927 cri.go:89] found id: ""
	I0806 08:36:15.520820   79927 logs.go:276] 0 containers: []
	W0806 08:36:15.520830   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:36:15.520837   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:36:15.520897   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:36:15.554041   79927 cri.go:89] found id: ""
	I0806 08:36:15.554065   79927 logs.go:276] 0 containers: []
	W0806 08:36:15.554073   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:36:15.554079   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:36:15.554143   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:36:15.587550   79927 cri.go:89] found id: ""
	I0806 08:36:15.587579   79927 logs.go:276] 0 containers: []
	W0806 08:36:15.587591   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:36:15.587598   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:36:15.587659   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:36:15.628657   79927 cri.go:89] found id: ""
	I0806 08:36:15.628681   79927 logs.go:276] 0 containers: []
	W0806 08:36:15.628689   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:36:15.628695   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:36:15.628750   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:36:15.665212   79927 cri.go:89] found id: ""
	I0806 08:36:15.665239   79927 logs.go:276] 0 containers: []
	W0806 08:36:15.665247   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:36:15.665252   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:36:15.665298   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:36:15.705675   79927 cri.go:89] found id: ""
	I0806 08:36:15.705701   79927 logs.go:276] 0 containers: []
	W0806 08:36:15.705709   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:36:15.705714   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:36:15.705762   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:36:15.739835   79927 cri.go:89] found id: ""
	I0806 08:36:15.739862   79927 logs.go:276] 0 containers: []
	W0806 08:36:15.739871   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:36:15.739879   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:36:15.739892   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:36:15.795719   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:36:15.795749   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:36:15.809940   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:36:15.809967   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:36:15.888710   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:36:15.888737   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:36:15.888753   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:36:15.972728   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:36:15.972772   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:36:18.513565   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:36:18.527480   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:36:18.527538   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:36:18.562783   79927 cri.go:89] found id: ""
	I0806 08:36:18.562812   79927 logs.go:276] 0 containers: []
	W0806 08:36:18.562824   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:36:18.562832   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:36:18.562896   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:36:18.600613   79927 cri.go:89] found id: ""
	I0806 08:36:18.600645   79927 logs.go:276] 0 containers: []
	W0806 08:36:18.600659   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:36:18.600668   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:36:18.600735   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:36:18.638074   79927 cri.go:89] found id: ""
	I0806 08:36:18.638104   79927 logs.go:276] 0 containers: []
	W0806 08:36:18.638114   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:36:18.638119   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:36:18.638181   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:36:18.679795   79927 cri.go:89] found id: ""
	I0806 08:36:18.679825   79927 logs.go:276] 0 containers: []
	W0806 08:36:18.679836   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:36:18.679844   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:36:18.679907   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:36:18.715535   79927 cri.go:89] found id: ""
	I0806 08:36:18.715568   79927 logs.go:276] 0 containers: []
	W0806 08:36:18.715577   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:36:18.715585   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:36:18.715650   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:36:18.752705   79927 cri.go:89] found id: ""
	I0806 08:36:18.752736   79927 logs.go:276] 0 containers: []
	W0806 08:36:18.752748   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:36:18.752759   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:36:18.752821   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:36:14.288048   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:16.288422   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:18.788928   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:15.112857   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:17.114585   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:18.468697   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:20.967985   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:18.791653   79927 cri.go:89] found id: ""
	I0806 08:36:18.791678   79927 logs.go:276] 0 containers: []
	W0806 08:36:18.791689   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:36:18.791696   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:36:18.791763   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:36:18.828328   79927 cri.go:89] found id: ""
	I0806 08:36:18.828357   79927 logs.go:276] 0 containers: []
	W0806 08:36:18.828378   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:36:18.828389   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:36:18.828408   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:36:18.888111   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:36:18.888148   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:36:18.902718   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:36:18.902750   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:36:18.986327   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:36:18.986352   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:36:18.986365   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:36:19.072858   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:36:19.072899   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:36:21.615497   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:36:21.631434   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:36:21.631492   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:36:21.671446   79927 cri.go:89] found id: ""
	I0806 08:36:21.671471   79927 logs.go:276] 0 containers: []
	W0806 08:36:21.671479   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:36:21.671484   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:36:21.671540   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:36:21.707598   79927 cri.go:89] found id: ""
	I0806 08:36:21.707624   79927 logs.go:276] 0 containers: []
	W0806 08:36:21.707634   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:36:21.707640   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:36:21.707686   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:36:21.753647   79927 cri.go:89] found id: ""
	I0806 08:36:21.753675   79927 logs.go:276] 0 containers: []
	W0806 08:36:21.753686   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:36:21.753692   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:36:21.753749   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:36:21.795284   79927 cri.go:89] found id: ""
	I0806 08:36:21.795311   79927 logs.go:276] 0 containers: []
	W0806 08:36:21.795321   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:36:21.795329   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:36:21.795388   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:36:21.832436   79927 cri.go:89] found id: ""
	I0806 08:36:21.832466   79927 logs.go:276] 0 containers: []
	W0806 08:36:21.832476   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:36:21.832483   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:36:21.832550   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:36:21.870295   79927 cri.go:89] found id: ""
	I0806 08:36:21.870327   79927 logs.go:276] 0 containers: []
	W0806 08:36:21.870338   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:36:21.870345   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:36:21.870412   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:36:21.908210   79927 cri.go:89] found id: ""
	I0806 08:36:21.908242   79927 logs.go:276] 0 containers: []
	W0806 08:36:21.908253   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:36:21.908261   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:36:21.908322   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:36:21.944055   79927 cri.go:89] found id: ""
	I0806 08:36:21.944084   79927 logs.go:276] 0 containers: []
	W0806 08:36:21.944095   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:36:21.944104   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:36:21.944125   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:36:22.000467   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:36:22.000506   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:36:22.014590   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:36:22.014629   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:36:22.083873   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:36:22.083907   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:36:22.083927   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:36:22.175993   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:36:22.176056   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:36:21.288435   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:23.788424   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:19.612021   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:21.612092   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:23.612557   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:22.968059   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:24.968111   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:24.720476   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:36:24.734469   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:36:24.734551   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:36:24.770663   79927 cri.go:89] found id: ""
	I0806 08:36:24.770692   79927 logs.go:276] 0 containers: []
	W0806 08:36:24.770705   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:36:24.770712   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:36:24.770768   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:36:24.807861   79927 cri.go:89] found id: ""
	I0806 08:36:24.807889   79927 logs.go:276] 0 containers: []
	W0806 08:36:24.807897   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:36:24.807908   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:36:24.807957   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:36:24.849827   79927 cri.go:89] found id: ""
	I0806 08:36:24.849857   79927 logs.go:276] 0 containers: []
	W0806 08:36:24.849869   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:36:24.849877   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:36:24.849937   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:36:24.885260   79927 cri.go:89] found id: ""
	I0806 08:36:24.885291   79927 logs.go:276] 0 containers: []
	W0806 08:36:24.885303   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:36:24.885315   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:36:24.885381   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:36:24.919918   79927 cri.go:89] found id: ""
	I0806 08:36:24.919949   79927 logs.go:276] 0 containers: []
	W0806 08:36:24.919957   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:36:24.919964   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:36:24.920026   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:36:24.956010   79927 cri.go:89] found id: ""
	I0806 08:36:24.956036   79927 logs.go:276] 0 containers: []
	W0806 08:36:24.956045   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:36:24.956050   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:36:24.956109   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:36:24.993126   79927 cri.go:89] found id: ""
	I0806 08:36:24.993154   79927 logs.go:276] 0 containers: []
	W0806 08:36:24.993165   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:36:24.993173   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:36:24.993240   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:36:25.026732   79927 cri.go:89] found id: ""
	I0806 08:36:25.026756   79927 logs.go:276] 0 containers: []
	W0806 08:36:25.026763   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:36:25.026771   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:36:25.026781   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:36:25.064076   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:36:25.064113   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:36:25.118820   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:36:25.118856   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:36:25.133640   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:36:25.133671   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:36:25.201686   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:36:25.201714   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:36:25.201729   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:36:27.801947   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:36:27.815472   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:36:27.815539   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:36:27.851073   79927 cri.go:89] found id: ""
	I0806 08:36:27.851103   79927 logs.go:276] 0 containers: []
	W0806 08:36:27.851119   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:36:27.851127   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:36:27.851184   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:36:27.886121   79927 cri.go:89] found id: ""
	I0806 08:36:27.886151   79927 logs.go:276] 0 containers: []
	W0806 08:36:27.886162   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:36:27.886169   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:36:27.886231   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:36:27.920950   79927 cri.go:89] found id: ""
	I0806 08:36:27.920979   79927 logs.go:276] 0 containers: []
	W0806 08:36:27.920990   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:36:27.920997   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:36:27.921055   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:36:27.956529   79927 cri.go:89] found id: ""
	I0806 08:36:27.956562   79927 logs.go:276] 0 containers: []
	W0806 08:36:27.956574   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:36:27.956581   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:36:27.956631   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:36:27.992272   79927 cri.go:89] found id: ""
	I0806 08:36:27.992298   79927 logs.go:276] 0 containers: []
	W0806 08:36:27.992306   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:36:27.992322   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:36:27.992405   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:36:28.027242   79927 cri.go:89] found id: ""
	I0806 08:36:28.027269   79927 logs.go:276] 0 containers: []
	W0806 08:36:28.027277   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:36:28.027283   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:36:28.027335   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:36:28.063043   79927 cri.go:89] found id: ""
	I0806 08:36:28.063068   79927 logs.go:276] 0 containers: []
	W0806 08:36:28.063076   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:36:28.063083   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:36:28.063148   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:36:28.099743   79927 cri.go:89] found id: ""
	I0806 08:36:28.099769   79927 logs.go:276] 0 containers: []
	W0806 08:36:28.099780   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:36:28.099791   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:36:28.099806   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:36:28.151354   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:36:28.151392   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:36:28.166704   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:36:28.166731   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:36:28.242621   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:36:28.242642   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:36:28.242656   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:36:28.324352   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:36:28.324405   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:36:25.789745   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:28.290656   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:26.112264   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:28.112769   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:27.466228   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:29.469998   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:30.869063   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:36:30.882309   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:36:30.882377   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:36:30.916264   79927 cri.go:89] found id: ""
	I0806 08:36:30.916296   79927 logs.go:276] 0 containers: []
	W0806 08:36:30.916306   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:36:30.916313   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:36:30.916390   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:36:30.951995   79927 cri.go:89] found id: ""
	I0806 08:36:30.952021   79927 logs.go:276] 0 containers: []
	W0806 08:36:30.952028   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:36:30.952034   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:36:30.952111   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:36:31.005229   79927 cri.go:89] found id: ""
	I0806 08:36:31.005270   79927 logs.go:276] 0 containers: []
	W0806 08:36:31.005281   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:36:31.005288   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:36:31.005350   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:36:31.040439   79927 cri.go:89] found id: ""
	I0806 08:36:31.040465   79927 logs.go:276] 0 containers: []
	W0806 08:36:31.040476   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:36:31.040482   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:36:31.040534   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:36:31.078880   79927 cri.go:89] found id: ""
	I0806 08:36:31.078910   79927 logs.go:276] 0 containers: []
	W0806 08:36:31.078920   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:36:31.078934   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:36:31.078996   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:36:31.115520   79927 cri.go:89] found id: ""
	I0806 08:36:31.115551   79927 logs.go:276] 0 containers: []
	W0806 08:36:31.115562   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:36:31.115569   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:36:31.115615   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:36:31.150450   79927 cri.go:89] found id: ""
	I0806 08:36:31.150483   79927 logs.go:276] 0 containers: []
	W0806 08:36:31.150497   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:36:31.150506   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:36:31.150565   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:36:31.187449   79927 cri.go:89] found id: ""
	I0806 08:36:31.187476   79927 logs.go:276] 0 containers: []
	W0806 08:36:31.187485   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:36:31.187493   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:36:31.187507   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:36:31.246232   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:36:31.246272   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:36:31.260691   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:36:31.260720   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:36:31.342356   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:36:31.342383   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:36:31.342401   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:36:31.426820   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:36:31.426865   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:36:30.787370   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:32.787711   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:30.611466   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:32.611550   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:31.968048   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:34.469062   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:33.974768   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:36:33.988377   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:36:33.988442   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:36:34.023284   79927 cri.go:89] found id: ""
	I0806 08:36:34.023316   79927 logs.go:276] 0 containers: []
	W0806 08:36:34.023328   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:36:34.023336   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:36:34.023397   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:36:34.058633   79927 cri.go:89] found id: ""
	I0806 08:36:34.058676   79927 logs.go:276] 0 containers: []
	W0806 08:36:34.058695   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:36:34.058704   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:36:34.058767   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:36:34.098343   79927 cri.go:89] found id: ""
	I0806 08:36:34.098379   79927 logs.go:276] 0 containers: []
	W0806 08:36:34.098390   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:36:34.098398   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:36:34.098471   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:36:34.134638   79927 cri.go:89] found id: ""
	I0806 08:36:34.134667   79927 logs.go:276] 0 containers: []
	W0806 08:36:34.134677   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:36:34.134683   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:36:34.134729   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:36:34.170118   79927 cri.go:89] found id: ""
	I0806 08:36:34.170144   79927 logs.go:276] 0 containers: []
	W0806 08:36:34.170153   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:36:34.170159   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:36:34.170207   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:36:34.204789   79927 cri.go:89] found id: ""
	I0806 08:36:34.204815   79927 logs.go:276] 0 containers: []
	W0806 08:36:34.204823   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:36:34.204829   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:36:34.204877   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:36:34.245170   79927 cri.go:89] found id: ""
	I0806 08:36:34.245194   79927 logs.go:276] 0 containers: []
	W0806 08:36:34.245203   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:36:34.245208   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:36:34.245264   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:36:34.280026   79927 cri.go:89] found id: ""
	I0806 08:36:34.280063   79927 logs.go:276] 0 containers: []
	W0806 08:36:34.280076   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:36:34.280086   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:36:34.280101   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:36:34.335363   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:36:34.335408   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:36:34.350201   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:36:34.350226   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:36:34.425205   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:36:34.425231   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:36:34.425244   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:36:34.506385   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:36:34.506419   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:36:37.047263   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:36:37.060179   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:36:37.060254   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:36:37.093168   79927 cri.go:89] found id: ""
	I0806 08:36:37.093201   79927 logs.go:276] 0 containers: []
	W0806 08:36:37.093209   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:36:37.093216   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:36:37.093264   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:36:37.131315   79927 cri.go:89] found id: ""
	I0806 08:36:37.131343   79927 logs.go:276] 0 containers: []
	W0806 08:36:37.131353   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:36:37.131360   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:36:37.131426   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:36:37.169799   79927 cri.go:89] found id: ""
	I0806 08:36:37.169829   79927 logs.go:276] 0 containers: []
	W0806 08:36:37.169839   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:36:37.169845   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:36:37.169898   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:36:37.205852   79927 cri.go:89] found id: ""
	I0806 08:36:37.205878   79927 logs.go:276] 0 containers: []
	W0806 08:36:37.205885   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:36:37.205891   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:36:37.205938   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:36:37.242469   79927 cri.go:89] found id: ""
	I0806 08:36:37.242493   79927 logs.go:276] 0 containers: []
	W0806 08:36:37.242501   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:36:37.242507   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:36:37.242554   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:36:37.278515   79927 cri.go:89] found id: ""
	I0806 08:36:37.278545   79927 logs.go:276] 0 containers: []
	W0806 08:36:37.278556   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:36:37.278563   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:36:37.278627   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:36:37.316157   79927 cri.go:89] found id: ""
	I0806 08:36:37.316181   79927 logs.go:276] 0 containers: []
	W0806 08:36:37.316189   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:36:37.316195   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:36:37.316249   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:36:37.353752   79927 cri.go:89] found id: ""
	I0806 08:36:37.353778   79927 logs.go:276] 0 containers: []
	W0806 08:36:37.353785   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:36:37.353796   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:36:37.353807   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:36:37.439435   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:36:37.439471   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:36:37.488866   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:36:37.488903   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:36:37.540031   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:36:37.540072   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:36:37.556109   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:36:37.556140   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:36:37.629023   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:36:35.288565   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:37.788336   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:35.113182   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:37.615523   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:36.967665   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:38.973884   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:41.467445   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:40.129579   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:36:40.143971   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:36:40.144050   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:36:40.180490   79927 cri.go:89] found id: ""
	I0806 08:36:40.180515   79927 logs.go:276] 0 containers: []
	W0806 08:36:40.180524   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:36:40.180530   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:36:40.180586   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:36:40.219812   79927 cri.go:89] found id: ""
	I0806 08:36:40.219860   79927 logs.go:276] 0 containers: []
	W0806 08:36:40.219869   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:36:40.219877   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:36:40.219939   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:36:40.256048   79927 cri.go:89] found id: ""
	I0806 08:36:40.256076   79927 logs.go:276] 0 containers: []
	W0806 08:36:40.256088   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:36:40.256106   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:36:40.256166   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:36:40.294693   79927 cri.go:89] found id: ""
	I0806 08:36:40.294720   79927 logs.go:276] 0 containers: []
	W0806 08:36:40.294729   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:36:40.294734   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:36:40.294792   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:36:40.331246   79927 cri.go:89] found id: ""
	I0806 08:36:40.331276   79927 logs.go:276] 0 containers: []
	W0806 08:36:40.331287   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:36:40.331295   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:36:40.331351   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:36:40.373039   79927 cri.go:89] found id: ""
	I0806 08:36:40.373068   79927 logs.go:276] 0 containers: []
	W0806 08:36:40.373095   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:36:40.373103   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:36:40.373163   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:36:40.408137   79927 cri.go:89] found id: ""
	I0806 08:36:40.408167   79927 logs.go:276] 0 containers: []
	W0806 08:36:40.408177   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:36:40.408184   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:36:40.408249   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:36:40.442996   79927 cri.go:89] found id: ""
	I0806 08:36:40.443035   79927 logs.go:276] 0 containers: []
	W0806 08:36:40.443043   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:36:40.443051   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:36:40.443063   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:36:40.499559   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:36:40.499599   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:36:40.514687   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:36:40.514716   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:36:40.586034   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:36:40.586059   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:36:40.586071   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:36:40.669756   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:36:40.669796   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:36:43.213555   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:36:43.229683   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:36:43.229751   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:36:43.267107   79927 cri.go:89] found id: ""
	I0806 08:36:43.267140   79927 logs.go:276] 0 containers: []
	W0806 08:36:43.267153   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:36:43.267161   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:36:43.267235   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:36:43.303405   79927 cri.go:89] found id: ""
	I0806 08:36:43.303435   79927 logs.go:276] 0 containers: []
	W0806 08:36:43.303443   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:36:43.303449   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:36:43.303509   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:36:43.338583   79927 cri.go:89] found id: ""
	I0806 08:36:43.338610   79927 logs.go:276] 0 containers: []
	W0806 08:36:43.338619   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:36:43.338628   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:36:43.338692   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:36:43.377413   79927 cri.go:89] found id: ""
	I0806 08:36:43.377441   79927 logs.go:276] 0 containers: []
	W0806 08:36:43.377449   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:36:43.377454   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:36:43.377501   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:36:43.413036   79927 cri.go:89] found id: ""
	I0806 08:36:43.413133   79927 logs.go:276] 0 containers: []
	W0806 08:36:43.413152   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:36:43.413160   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:36:43.413221   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:36:43.450216   79927 cri.go:89] found id: ""
	I0806 08:36:43.450245   79927 logs.go:276] 0 containers: []
	W0806 08:36:43.450253   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:36:43.450259   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:36:43.450305   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:36:43.485811   79927 cri.go:89] found id: ""
	I0806 08:36:43.485856   79927 logs.go:276] 0 containers: []
	W0806 08:36:43.485864   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:36:43.485870   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:36:43.485919   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:36:43.521586   79927 cri.go:89] found id: ""
	I0806 08:36:43.521617   79927 logs.go:276] 0 containers: []
	W0806 08:36:43.521628   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:36:43.521650   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:36:43.521673   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:36:43.561842   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:36:43.561889   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:36:43.617652   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:36:43.617683   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:36:43.631239   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:36:43.631267   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:36:43.707297   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:36:43.707317   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:36:43.707330   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:36:39.789019   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:42.287817   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:40.111420   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:42.112503   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:43.468602   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:45.974699   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:46.287309   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:36:46.301917   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:36:46.301991   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:36:46.335040   79927 cri.go:89] found id: ""
	I0806 08:36:46.335069   79927 logs.go:276] 0 containers: []
	W0806 08:36:46.335080   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:36:46.335088   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:36:46.335177   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:36:46.371902   79927 cri.go:89] found id: ""
	I0806 08:36:46.371930   79927 logs.go:276] 0 containers: []
	W0806 08:36:46.371939   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:36:46.371944   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:36:46.372000   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:36:46.411945   79927 cri.go:89] found id: ""
	I0806 08:36:46.412119   79927 logs.go:276] 0 containers: []
	W0806 08:36:46.412134   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:36:46.412145   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:36:46.412248   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:36:46.448578   79927 cri.go:89] found id: ""
	I0806 08:36:46.448612   79927 logs.go:276] 0 containers: []
	W0806 08:36:46.448624   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:36:46.448631   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:36:46.448691   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:36:46.492231   79927 cri.go:89] found id: ""
	I0806 08:36:46.492263   79927 logs.go:276] 0 containers: []
	W0806 08:36:46.492276   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:36:46.492283   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:36:46.492338   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:36:46.538502   79927 cri.go:89] found id: ""
	I0806 08:36:46.538528   79927 logs.go:276] 0 containers: []
	W0806 08:36:46.538547   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:36:46.538555   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:36:46.538610   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:36:46.577750   79927 cri.go:89] found id: ""
	I0806 08:36:46.577778   79927 logs.go:276] 0 containers: []
	W0806 08:36:46.577785   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:36:46.577790   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:36:46.577872   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:36:46.611717   79927 cri.go:89] found id: ""
	I0806 08:36:46.611742   79927 logs.go:276] 0 containers: []
	W0806 08:36:46.611750   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:36:46.611758   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:36:46.611770   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:36:46.629144   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:36:46.629178   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:36:46.714703   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:36:46.714723   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:36:46.714736   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:36:46.798978   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:36:46.799025   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:36:46.837373   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:36:46.837407   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:36:44.288387   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:46.288470   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:48.788762   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:44.611739   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:46.615083   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:49.113041   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:48.469024   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:50.967906   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:49.393231   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:36:49.407372   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:36:49.407436   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:36:49.444210   79927 cri.go:89] found id: ""
	I0806 08:36:49.444240   79927 logs.go:276] 0 containers: []
	W0806 08:36:49.444249   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:36:49.444254   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:36:49.444313   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:36:49.484165   79927 cri.go:89] found id: ""
	I0806 08:36:49.484190   79927 logs.go:276] 0 containers: []
	W0806 08:36:49.484200   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:36:49.484206   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:36:49.484278   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:36:49.519880   79927 cri.go:89] found id: ""
	I0806 08:36:49.519921   79927 logs.go:276] 0 containers: []
	W0806 08:36:49.519932   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:36:49.519939   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:36:49.519999   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:36:49.554954   79927 cri.go:89] found id: ""
	I0806 08:36:49.554979   79927 logs.go:276] 0 containers: []
	W0806 08:36:49.554988   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:36:49.554995   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:36:49.555045   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:36:49.593332   79927 cri.go:89] found id: ""
	I0806 08:36:49.593359   79927 logs.go:276] 0 containers: []
	W0806 08:36:49.593367   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:36:49.593373   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:36:49.593432   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:36:49.630736   79927 cri.go:89] found id: ""
	I0806 08:36:49.630766   79927 logs.go:276] 0 containers: []
	W0806 08:36:49.630774   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:36:49.630780   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:36:49.630839   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:36:49.667460   79927 cri.go:89] found id: ""
	I0806 08:36:49.667487   79927 logs.go:276] 0 containers: []
	W0806 08:36:49.667499   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:36:49.667505   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:36:49.667566   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:36:49.700161   79927 cri.go:89] found id: ""
	I0806 08:36:49.700190   79927 logs.go:276] 0 containers: []
	W0806 08:36:49.700199   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:36:49.700207   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:36:49.700218   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:36:49.786656   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:36:49.786704   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:36:49.825711   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:36:49.825745   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:36:49.879696   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:36:49.879731   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:36:49.893336   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:36:49.893369   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:36:49.969462   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:36:52.469939   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:36:52.483906   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:36:52.483974   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:36:52.523575   79927 cri.go:89] found id: ""
	I0806 08:36:52.523603   79927 logs.go:276] 0 containers: []
	W0806 08:36:52.523615   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:36:52.523622   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:36:52.523684   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:36:52.561590   79927 cri.go:89] found id: ""
	I0806 08:36:52.561620   79927 logs.go:276] 0 containers: []
	W0806 08:36:52.561630   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:36:52.561639   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:36:52.561698   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:36:52.598204   79927 cri.go:89] found id: ""
	I0806 08:36:52.598227   79927 logs.go:276] 0 containers: []
	W0806 08:36:52.598236   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:36:52.598241   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:36:52.598302   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:36:52.636993   79927 cri.go:89] found id: ""
	I0806 08:36:52.637019   79927 logs.go:276] 0 containers: []
	W0806 08:36:52.637027   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:36:52.637034   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:36:52.637085   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:36:52.675090   79927 cri.go:89] found id: ""
	I0806 08:36:52.675118   79927 logs.go:276] 0 containers: []
	W0806 08:36:52.675130   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:36:52.675138   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:36:52.675199   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:36:52.711045   79927 cri.go:89] found id: ""
	I0806 08:36:52.711072   79927 logs.go:276] 0 containers: []
	W0806 08:36:52.711081   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:36:52.711086   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:36:52.711151   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:36:52.749001   79927 cri.go:89] found id: ""
	I0806 08:36:52.749029   79927 logs.go:276] 0 containers: []
	W0806 08:36:52.749039   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:36:52.749046   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:36:52.749113   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:36:52.783692   79927 cri.go:89] found id: ""
	I0806 08:36:52.783720   79927 logs.go:276] 0 containers: []
	W0806 08:36:52.783737   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:36:52.783748   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:36:52.783762   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:36:52.863240   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:36:52.863281   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:36:52.902471   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:36:52.902496   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:36:52.953299   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:36:52.953341   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:36:52.968786   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:36:52.968818   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:36:53.045904   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:36:51.288280   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:53.289276   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:51.612945   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:54.112574   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:52.968486   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:55.468458   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:55.546407   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:36:55.560166   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:36:55.560243   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:36:55.592961   79927 cri.go:89] found id: ""
	I0806 08:36:55.592992   79927 logs.go:276] 0 containers: []
	W0806 08:36:55.593004   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:36:55.593012   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:36:55.593075   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:36:55.629546   79927 cri.go:89] found id: ""
	I0806 08:36:55.629568   79927 logs.go:276] 0 containers: []
	W0806 08:36:55.629578   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:36:55.629585   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:36:55.629639   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:36:55.668677   79927 cri.go:89] found id: ""
	I0806 08:36:55.668707   79927 logs.go:276] 0 containers: []
	W0806 08:36:55.668718   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:36:55.668726   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:36:55.668784   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:36:55.704141   79927 cri.go:89] found id: ""
	I0806 08:36:55.704173   79927 logs.go:276] 0 containers: []
	W0806 08:36:55.704183   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:36:55.704188   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:36:55.704257   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:36:55.740125   79927 cri.go:89] found id: ""
	I0806 08:36:55.740152   79927 logs.go:276] 0 containers: []
	W0806 08:36:55.740160   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:36:55.740166   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:36:55.740210   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:36:55.773709   79927 cri.go:89] found id: ""
	I0806 08:36:55.773739   79927 logs.go:276] 0 containers: []
	W0806 08:36:55.773750   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:36:55.773759   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:36:55.773827   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:36:55.813913   79927 cri.go:89] found id: ""
	I0806 08:36:55.813940   79927 logs.go:276] 0 containers: []
	W0806 08:36:55.813953   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:36:55.813959   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:36:55.814004   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:36:55.850807   79927 cri.go:89] found id: ""
	I0806 08:36:55.850841   79927 logs.go:276] 0 containers: []
	W0806 08:36:55.850853   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:36:55.850864   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:36:55.850878   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:36:55.908202   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:36:55.908247   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:36:55.922278   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:36:55.922311   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:36:55.996448   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:36:55.996476   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:36:55.996489   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:36:56.076612   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:36:56.076647   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:36:58.625610   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:36:58.640939   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:36:58.641008   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:36:58.675638   79927 cri.go:89] found id: ""
	I0806 08:36:58.675666   79927 logs.go:276] 0 containers: []
	W0806 08:36:58.675674   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:36:58.675680   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:36:58.675728   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:36:58.712020   79927 cri.go:89] found id: ""
	I0806 08:36:58.712055   79927 logs.go:276] 0 containers: []
	W0806 08:36:58.712065   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:36:58.712072   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:36:58.712126   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:36:58.749683   79927 cri.go:89] found id: ""
	I0806 08:36:58.749712   79927 logs.go:276] 0 containers: []
	W0806 08:36:58.749724   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:36:58.749731   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:36:58.749792   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:36:55.788883   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:58.288825   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:56.613051   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:59.113176   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:57.968553   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:59.968580   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:58.790005   79927 cri.go:89] found id: ""
	I0806 08:36:58.790026   79927 logs.go:276] 0 containers: []
	W0806 08:36:58.790033   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:36:58.790039   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:36:58.790089   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:36:58.828781   79927 cri.go:89] found id: ""
	I0806 08:36:58.828806   79927 logs.go:276] 0 containers: []
	W0806 08:36:58.828815   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:36:58.828821   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:36:58.828873   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:36:58.867039   79927 cri.go:89] found id: ""
	I0806 08:36:58.867067   79927 logs.go:276] 0 containers: []
	W0806 08:36:58.867075   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:36:58.867081   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:36:58.867125   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:36:58.901362   79927 cri.go:89] found id: ""
	I0806 08:36:58.901396   79927 logs.go:276] 0 containers: []
	W0806 08:36:58.901415   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:36:58.901423   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:36:58.901472   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:36:58.940351   79927 cri.go:89] found id: ""
	I0806 08:36:58.940388   79927 logs.go:276] 0 containers: []
	W0806 08:36:58.940399   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:36:58.940410   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:36:58.940425   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:36:58.995737   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:36:58.995776   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:36:59.009191   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:36:59.009222   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:36:59.085109   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:36:59.085133   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:36:59.085150   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:36:59.164778   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:36:59.164819   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:37:01.712282   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:37:01.725701   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:37:01.725759   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:37:01.762005   79927 cri.go:89] found id: ""
	I0806 08:37:01.762029   79927 logs.go:276] 0 containers: []
	W0806 08:37:01.762038   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:37:01.762044   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:37:01.762092   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:37:01.799277   79927 cri.go:89] found id: ""
	I0806 08:37:01.799304   79927 logs.go:276] 0 containers: []
	W0806 08:37:01.799316   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:37:01.799322   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:37:01.799391   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:37:01.845100   79927 cri.go:89] found id: ""
	I0806 08:37:01.845133   79927 logs.go:276] 0 containers: []
	W0806 08:37:01.845147   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:37:01.845154   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:37:01.845214   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:37:01.882019   79927 cri.go:89] found id: ""
	I0806 08:37:01.882050   79927 logs.go:276] 0 containers: []
	W0806 08:37:01.882061   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:37:01.882069   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:37:01.882160   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:37:01.919224   79927 cri.go:89] found id: ""
	I0806 08:37:01.919251   79927 logs.go:276] 0 containers: []
	W0806 08:37:01.919262   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:37:01.919269   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:37:01.919327   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:37:01.957300   79927 cri.go:89] found id: ""
	I0806 08:37:01.957327   79927 logs.go:276] 0 containers: []
	W0806 08:37:01.957335   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:37:01.957342   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:37:01.957389   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:37:02.000717   79927 cri.go:89] found id: ""
	I0806 08:37:02.000740   79927 logs.go:276] 0 containers: []
	W0806 08:37:02.000749   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:37:02.000754   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:37:02.000801   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:37:02.034710   79927 cri.go:89] found id: ""
	I0806 08:37:02.034739   79927 logs.go:276] 0 containers: []
	W0806 08:37:02.034753   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:37:02.034763   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:37:02.034776   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:37:02.090553   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:37:02.090593   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:37:02.104247   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:37:02.104276   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:37:02.181990   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:37:02.182015   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:37:02.182030   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:37:02.268127   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:37:02.268167   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:37:00.789157   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:03.288510   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:01.611956   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:04.111772   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:02.467718   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:04.968898   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:04.814144   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:37:04.829120   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:37:04.829194   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:37:04.873503   79927 cri.go:89] found id: ""
	I0806 08:37:04.873537   79927 logs.go:276] 0 containers: []
	W0806 08:37:04.873548   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:37:04.873557   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:37:04.873614   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:37:04.913878   79927 cri.go:89] found id: ""
	I0806 08:37:04.913910   79927 logs.go:276] 0 containers: []
	W0806 08:37:04.913918   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:37:04.913924   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:37:04.913984   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:37:04.950273   79927 cri.go:89] found id: ""
	I0806 08:37:04.950303   79927 logs.go:276] 0 containers: []
	W0806 08:37:04.950316   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:37:04.950323   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:37:04.950388   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:37:04.985108   79927 cri.go:89] found id: ""
	I0806 08:37:04.985135   79927 logs.go:276] 0 containers: []
	W0806 08:37:04.985145   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:37:04.985151   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:37:04.985215   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:37:05.021033   79927 cri.go:89] found id: ""
	I0806 08:37:05.021056   79927 logs.go:276] 0 containers: []
	W0806 08:37:05.021064   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:37:05.021069   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:37:05.021120   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:37:05.058416   79927 cri.go:89] found id: ""
	I0806 08:37:05.058440   79927 logs.go:276] 0 containers: []
	W0806 08:37:05.058448   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:37:05.058454   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:37:05.058510   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:37:05.093709   79927 cri.go:89] found id: ""
	I0806 08:37:05.093737   79927 logs.go:276] 0 containers: []
	W0806 08:37:05.093745   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:37:05.093751   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:37:05.093815   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:37:05.129391   79927 cri.go:89] found id: ""
	I0806 08:37:05.129416   79927 logs.go:276] 0 containers: []
	W0806 08:37:05.129424   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:37:05.129432   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:37:05.129444   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:37:05.172149   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:37:05.172176   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:37:05.224745   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:37:05.224781   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:37:05.238967   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:37:05.238997   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:37:05.314834   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:37:05.314868   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:37:05.314883   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:37:07.903732   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:37:07.918551   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:37:07.918619   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:37:07.966068   79927 cri.go:89] found id: ""
	I0806 08:37:07.966093   79927 logs.go:276] 0 containers: []
	W0806 08:37:07.966104   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:37:07.966112   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:37:07.966180   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:37:08.001557   79927 cri.go:89] found id: ""
	I0806 08:37:08.001586   79927 logs.go:276] 0 containers: []
	W0806 08:37:08.001595   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:37:08.001602   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:37:08.001663   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:37:08.034214   79927 cri.go:89] found id: ""
	I0806 08:37:08.034239   79927 logs.go:276] 0 containers: []
	W0806 08:37:08.034249   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:37:08.034256   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:37:08.034316   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:37:08.070645   79927 cri.go:89] found id: ""
	I0806 08:37:08.070673   79927 logs.go:276] 0 containers: []
	W0806 08:37:08.070683   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:37:08.070689   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:37:08.070761   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:37:08.110537   79927 cri.go:89] found id: ""
	I0806 08:37:08.110565   79927 logs.go:276] 0 containers: []
	W0806 08:37:08.110574   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:37:08.110581   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:37:08.110635   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:37:08.146179   79927 cri.go:89] found id: ""
	I0806 08:37:08.146203   79927 logs.go:276] 0 containers: []
	W0806 08:37:08.146211   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:37:08.146224   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:37:08.146272   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:37:08.182924   79927 cri.go:89] found id: ""
	I0806 08:37:08.182953   79927 logs.go:276] 0 containers: []
	W0806 08:37:08.182962   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:37:08.182968   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:37:08.183024   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:37:08.219547   79927 cri.go:89] found id: ""
	I0806 08:37:08.219570   79927 logs.go:276] 0 containers: []
	W0806 08:37:08.219576   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:37:08.219585   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:37:08.219599   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:37:08.298919   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:37:08.298956   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:37:08.338475   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:37:08.338512   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:37:08.390859   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:37:08.390898   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:37:08.404854   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:37:08.404891   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:37:08.473553   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:37:05.788534   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:08.288635   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:06.112204   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:08.612960   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:06.969444   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:09.468423   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:11.469280   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:10.973824   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:37:10.987720   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:37:10.987777   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:37:11.022637   79927 cri.go:89] found id: ""
	I0806 08:37:11.022667   79927 logs.go:276] 0 containers: []
	W0806 08:37:11.022677   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:37:11.022683   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:37:11.022739   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:37:11.062625   79927 cri.go:89] found id: ""
	I0806 08:37:11.062655   79927 logs.go:276] 0 containers: []
	W0806 08:37:11.062673   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:37:11.062681   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:37:11.062736   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:37:11.099906   79927 cri.go:89] found id: ""
	I0806 08:37:11.099937   79927 logs.go:276] 0 containers: []
	W0806 08:37:11.099945   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:37:11.099951   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:37:11.100019   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:37:11.139461   79927 cri.go:89] found id: ""
	I0806 08:37:11.139485   79927 logs.go:276] 0 containers: []
	W0806 08:37:11.139493   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:37:11.139498   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:37:11.139541   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:37:11.177916   79927 cri.go:89] found id: ""
	I0806 08:37:11.177940   79927 logs.go:276] 0 containers: []
	W0806 08:37:11.177948   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:37:11.177954   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:37:11.178007   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:37:11.216335   79927 cri.go:89] found id: ""
	I0806 08:37:11.216377   79927 logs.go:276] 0 containers: []
	W0806 08:37:11.216389   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:37:11.216398   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:37:11.216453   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:37:11.278457   79927 cri.go:89] found id: ""
	I0806 08:37:11.278479   79927 logs.go:276] 0 containers: []
	W0806 08:37:11.278487   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:37:11.278493   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:37:11.278541   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:37:11.316325   79927 cri.go:89] found id: ""
	I0806 08:37:11.316348   79927 logs.go:276] 0 containers: []
	W0806 08:37:11.316356   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:37:11.316381   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:37:11.316397   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:37:11.357435   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:37:11.357469   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:37:11.413151   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:37:11.413184   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:37:11.427149   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:37:11.427182   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:37:11.500691   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:37:11.500723   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:37:11.500738   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:37:10.787717   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:12.788263   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:11.113471   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:13.612216   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:13.968959   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:16.468099   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:14.077817   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:37:14.090888   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:37:14.090960   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:37:14.126454   79927 cri.go:89] found id: ""
	I0806 08:37:14.126476   79927 logs.go:276] 0 containers: []
	W0806 08:37:14.126484   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:37:14.126489   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:37:14.126535   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:37:14.163910   79927 cri.go:89] found id: ""
	I0806 08:37:14.163939   79927 logs.go:276] 0 containers: []
	W0806 08:37:14.163950   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:37:14.163957   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:37:14.164025   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:37:14.202918   79927 cri.go:89] found id: ""
	I0806 08:37:14.202942   79927 logs.go:276] 0 containers: []
	W0806 08:37:14.202950   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:37:14.202956   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:37:14.202999   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:37:14.240997   79927 cri.go:89] found id: ""
	I0806 08:37:14.241021   79927 logs.go:276] 0 containers: []
	W0806 08:37:14.241030   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:37:14.241036   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:37:14.241083   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:37:14.277424   79927 cri.go:89] found id: ""
	I0806 08:37:14.277448   79927 logs.go:276] 0 containers: []
	W0806 08:37:14.277465   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:37:14.277473   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:37:14.277545   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:37:14.317402   79927 cri.go:89] found id: ""
	I0806 08:37:14.317426   79927 logs.go:276] 0 containers: []
	W0806 08:37:14.317435   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:37:14.317440   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:37:14.317487   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:37:14.364310   79927 cri.go:89] found id: ""
	I0806 08:37:14.364341   79927 logs.go:276] 0 containers: []
	W0806 08:37:14.364351   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:37:14.364358   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:37:14.364445   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:37:14.398836   79927 cri.go:89] found id: ""
	I0806 08:37:14.398866   79927 logs.go:276] 0 containers: []
	W0806 08:37:14.398874   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:37:14.398891   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:37:14.398904   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:37:14.450550   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:37:14.450586   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:37:14.467190   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:37:14.467220   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:37:14.538786   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:37:14.538809   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:37:14.538824   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:37:14.615049   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:37:14.615080   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:37:17.155906   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:37:17.170401   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:37:17.170479   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:37:17.208276   79927 cri.go:89] found id: ""
	I0806 08:37:17.208311   79927 logs.go:276] 0 containers: []
	W0806 08:37:17.208322   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:37:17.208330   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:37:17.208403   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:37:17.258671   79927 cri.go:89] found id: ""
	I0806 08:37:17.258696   79927 logs.go:276] 0 containers: []
	W0806 08:37:17.258705   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:37:17.258711   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:37:17.258771   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:37:17.296999   79927 cri.go:89] found id: ""
	I0806 08:37:17.297027   79927 logs.go:276] 0 containers: []
	W0806 08:37:17.297035   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:37:17.297041   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:37:17.297098   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:37:17.335818   79927 cri.go:89] found id: ""
	I0806 08:37:17.335841   79927 logs.go:276] 0 containers: []
	W0806 08:37:17.335849   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:37:17.335855   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:37:17.335903   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:37:17.371650   79927 cri.go:89] found id: ""
	I0806 08:37:17.371674   79927 logs.go:276] 0 containers: []
	W0806 08:37:17.371682   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:37:17.371688   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:37:17.371737   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:37:17.408413   79927 cri.go:89] found id: ""
	I0806 08:37:17.408440   79927 logs.go:276] 0 containers: []
	W0806 08:37:17.408448   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:37:17.408454   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:37:17.408502   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:37:17.443877   79927 cri.go:89] found id: ""
	I0806 08:37:17.443909   79927 logs.go:276] 0 containers: []
	W0806 08:37:17.443920   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:37:17.443928   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:37:17.443988   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:37:17.484292   79927 cri.go:89] found id: ""
	I0806 08:37:17.484321   79927 logs.go:276] 0 containers: []
	W0806 08:37:17.484331   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:37:17.484341   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:37:17.484356   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:37:17.541365   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:37:17.541418   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:37:17.555961   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:37:17.555993   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:37:17.625904   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:37:17.625923   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:37:17.625941   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:37:17.708997   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:37:17.709045   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:37:14.788822   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:17.288567   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:16.112034   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:18.612595   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:18.468580   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:20.967605   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:20.250351   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:37:20.264560   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:37:20.264625   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:37:20.301216   79927 cri.go:89] found id: ""
	I0806 08:37:20.301244   79927 logs.go:276] 0 containers: []
	W0806 08:37:20.301252   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:37:20.301257   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:37:20.301317   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:37:20.345820   79927 cri.go:89] found id: ""
	I0806 08:37:20.345849   79927 logs.go:276] 0 containers: []
	W0806 08:37:20.345860   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:37:20.345867   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:37:20.345923   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:37:20.382333   79927 cri.go:89] found id: ""
	I0806 08:37:20.382357   79927 logs.go:276] 0 containers: []
	W0806 08:37:20.382365   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:37:20.382371   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:37:20.382426   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:37:20.420193   79927 cri.go:89] found id: ""
	I0806 08:37:20.420228   79927 logs.go:276] 0 containers: []
	W0806 08:37:20.420238   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:37:20.420249   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:37:20.420316   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:37:20.456200   79927 cri.go:89] found id: ""
	I0806 08:37:20.456227   79927 logs.go:276] 0 containers: []
	W0806 08:37:20.456235   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:37:20.456241   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:37:20.456298   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:37:20.493321   79927 cri.go:89] found id: ""
	I0806 08:37:20.493349   79927 logs.go:276] 0 containers: []
	W0806 08:37:20.493356   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:37:20.493362   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:37:20.493414   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:37:20.531007   79927 cri.go:89] found id: ""
	I0806 08:37:20.531034   79927 logs.go:276] 0 containers: []
	W0806 08:37:20.531044   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:37:20.531051   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:37:20.531118   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:37:20.572014   79927 cri.go:89] found id: ""
	I0806 08:37:20.572044   79927 logs.go:276] 0 containers: []
	W0806 08:37:20.572062   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:37:20.572071   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:37:20.572083   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:37:20.623368   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:37:20.623399   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:37:20.638941   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:37:20.638984   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:37:20.717287   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:37:20.717313   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:37:20.717324   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:37:20.800157   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:37:20.800192   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:37:23.342061   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:37:23.356954   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:37:23.357026   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:37:23.391415   79927 cri.go:89] found id: ""
	I0806 08:37:23.391445   79927 logs.go:276] 0 containers: []
	W0806 08:37:23.391456   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:37:23.391463   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:37:23.391526   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:37:23.426460   79927 cri.go:89] found id: ""
	I0806 08:37:23.426482   79927 logs.go:276] 0 containers: []
	W0806 08:37:23.426491   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:37:23.426496   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:37:23.426541   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:37:23.461078   79927 cri.go:89] found id: ""
	I0806 08:37:23.461111   79927 logs.go:276] 0 containers: []
	W0806 08:37:23.461122   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:37:23.461129   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:37:23.461183   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:37:23.496566   79927 cri.go:89] found id: ""
	I0806 08:37:23.496594   79927 logs.go:276] 0 containers: []
	W0806 08:37:23.496602   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:37:23.496607   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:37:23.496665   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:37:23.529097   79927 cri.go:89] found id: ""
	I0806 08:37:23.529123   79927 logs.go:276] 0 containers: []
	W0806 08:37:23.529134   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:37:23.529153   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:37:23.529243   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:37:23.563690   79927 cri.go:89] found id: ""
	I0806 08:37:23.563720   79927 logs.go:276] 0 containers: []
	W0806 08:37:23.563730   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:37:23.563737   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:37:23.563796   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:37:23.598787   79927 cri.go:89] found id: ""
	I0806 08:37:23.598861   79927 logs.go:276] 0 containers: []
	W0806 08:37:23.598875   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:37:23.598884   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:37:23.598942   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:37:23.655675   79927 cri.go:89] found id: ""
	I0806 08:37:23.655705   79927 logs.go:276] 0 containers: []
	W0806 08:37:23.655715   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:37:23.655723   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:37:23.655734   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:37:23.715853   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:37:23.715887   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:37:23.735805   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:37:23.735844   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 08:37:19.288768   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:21.788292   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:23.788808   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:20.612865   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:22.613432   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:22.968064   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:24.968171   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	W0806 08:37:23.815238   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:37:23.815261   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:37:23.815274   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:37:23.899436   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:37:23.899479   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:37:26.441837   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:37:26.456410   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:37:26.456472   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:37:26.497707   79927 cri.go:89] found id: ""
	I0806 08:37:26.497733   79927 logs.go:276] 0 containers: []
	W0806 08:37:26.497741   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:37:26.497746   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:37:26.497795   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:37:26.536101   79927 cri.go:89] found id: ""
	I0806 08:37:26.536126   79927 logs.go:276] 0 containers: []
	W0806 08:37:26.536133   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:37:26.536145   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:37:26.536190   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:37:26.573067   79927 cri.go:89] found id: ""
	I0806 08:37:26.573091   79927 logs.go:276] 0 containers: []
	W0806 08:37:26.573102   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:37:26.573110   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:37:26.573162   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:37:26.609820   79927 cri.go:89] found id: ""
	I0806 08:37:26.609846   79927 logs.go:276] 0 containers: []
	W0806 08:37:26.609854   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:37:26.609860   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:37:26.609914   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:37:26.645721   79927 cri.go:89] found id: ""
	I0806 08:37:26.645748   79927 logs.go:276] 0 containers: []
	W0806 08:37:26.645758   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:37:26.645764   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:37:26.645825   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:37:26.679941   79927 cri.go:89] found id: ""
	I0806 08:37:26.679988   79927 logs.go:276] 0 containers: []
	W0806 08:37:26.679999   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:37:26.680006   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:37:26.680068   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:37:26.717247   79927 cri.go:89] found id: ""
	I0806 08:37:26.717277   79927 logs.go:276] 0 containers: []
	W0806 08:37:26.717288   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:37:26.717295   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:37:26.717382   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:37:26.760157   79927 cri.go:89] found id: ""
	I0806 08:37:26.760184   79927 logs.go:276] 0 containers: []
	W0806 08:37:26.760194   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:37:26.760205   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:37:26.760221   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:37:26.816034   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:37:26.816071   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:37:26.830141   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:37:26.830170   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:37:26.900078   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:37:26.900103   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:37:26.900121   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:37:26.984460   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:37:26.984501   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:37:26.287692   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:28.288990   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:25.112666   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:27.611170   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:27.466925   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:29.469989   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:29.527184   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:37:29.541974   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:37:29.542052   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:37:29.576062   79927 cri.go:89] found id: ""
	I0806 08:37:29.576095   79927 logs.go:276] 0 containers: []
	W0806 08:37:29.576107   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:37:29.576114   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:37:29.576176   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:37:29.610841   79927 cri.go:89] found id: ""
	I0806 08:37:29.610865   79927 logs.go:276] 0 containers: []
	W0806 08:37:29.610881   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:37:29.610892   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:37:29.610948   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:37:29.650326   79927 cri.go:89] found id: ""
	I0806 08:37:29.650350   79927 logs.go:276] 0 containers: []
	W0806 08:37:29.650360   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:37:29.650367   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:37:29.650434   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:37:29.689382   79927 cri.go:89] found id: ""
	I0806 08:37:29.689418   79927 logs.go:276] 0 containers: []
	W0806 08:37:29.689429   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:37:29.689436   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:37:29.689495   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:37:29.725893   79927 cri.go:89] found id: ""
	I0806 08:37:29.725918   79927 logs.go:276] 0 containers: []
	W0806 08:37:29.725926   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:37:29.725932   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:37:29.725980   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:37:29.767354   79927 cri.go:89] found id: ""
	I0806 08:37:29.767379   79927 logs.go:276] 0 containers: []
	W0806 08:37:29.767386   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:37:29.767392   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:37:29.767441   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:37:29.811139   79927 cri.go:89] found id: ""
	I0806 08:37:29.811161   79927 logs.go:276] 0 containers: []
	W0806 08:37:29.811169   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:37:29.811174   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:37:29.811219   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:37:29.848252   79927 cri.go:89] found id: ""
	I0806 08:37:29.848285   79927 logs.go:276] 0 containers: []
	W0806 08:37:29.848294   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:37:29.848303   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:37:29.848315   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:37:29.899850   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:37:29.899884   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:37:29.913744   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:37:29.913774   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:37:29.988016   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:37:29.988039   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:37:29.988054   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:37:30.069701   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:37:30.069735   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:37:32.606958   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:37:32.622621   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:37:32.622684   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:37:32.658816   79927 cri.go:89] found id: ""
	I0806 08:37:32.658844   79927 logs.go:276] 0 containers: []
	W0806 08:37:32.658855   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:37:32.658862   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:37:32.658928   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:37:32.697734   79927 cri.go:89] found id: ""
	I0806 08:37:32.697769   79927 logs.go:276] 0 containers: []
	W0806 08:37:32.697780   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:37:32.697786   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:37:32.697868   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:37:32.735540   79927 cri.go:89] found id: ""
	I0806 08:37:32.735572   79927 logs.go:276] 0 containers: []
	W0806 08:37:32.735584   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:37:32.735591   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:37:32.735658   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:37:32.771319   79927 cri.go:89] found id: ""
	I0806 08:37:32.771343   79927 logs.go:276] 0 containers: []
	W0806 08:37:32.771357   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:37:32.771365   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:37:32.771421   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:37:32.808137   79927 cri.go:89] found id: ""
	I0806 08:37:32.808162   79927 logs.go:276] 0 containers: []
	W0806 08:37:32.808172   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:37:32.808179   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:37:32.808234   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:37:32.843307   79927 cri.go:89] found id: ""
	I0806 08:37:32.843333   79927 logs.go:276] 0 containers: []
	W0806 08:37:32.843343   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:37:32.843350   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:37:32.843404   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:37:32.883855   79927 cri.go:89] found id: ""
	I0806 08:37:32.883922   79927 logs.go:276] 0 containers: []
	W0806 08:37:32.883935   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:37:32.883944   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:37:32.884014   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:37:32.920965   79927 cri.go:89] found id: ""
	I0806 08:37:32.921008   79927 logs.go:276] 0 containers: []
	W0806 08:37:32.921019   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:37:32.921027   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:37:32.921039   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:37:32.974933   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:37:32.974966   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:37:32.989326   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:37:32.989352   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:37:33.058274   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:37:33.058302   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:37:33.058314   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:37:33.136632   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:37:33.136670   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:37:30.789771   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:33.289001   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:29.613189   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:32.111086   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:34.112398   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:31.973697   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:34.469358   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:35.680315   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:37:35.694438   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:37:35.694497   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:37:35.729541   79927 cri.go:89] found id: ""
	I0806 08:37:35.729569   79927 logs.go:276] 0 containers: []
	W0806 08:37:35.729580   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:37:35.729587   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:37:35.729650   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:37:35.765783   79927 cri.go:89] found id: ""
	I0806 08:37:35.765813   79927 logs.go:276] 0 containers: []
	W0806 08:37:35.765822   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:37:35.765827   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:37:35.765885   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:37:35.802975   79927 cri.go:89] found id: ""
	I0806 08:37:35.803002   79927 logs.go:276] 0 containers: []
	W0806 08:37:35.803010   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:37:35.803016   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:37:35.803062   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:37:35.838588   79927 cri.go:89] found id: ""
	I0806 08:37:35.838614   79927 logs.go:276] 0 containers: []
	W0806 08:37:35.838623   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:37:35.838629   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:37:35.838675   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:37:35.874413   79927 cri.go:89] found id: ""
	I0806 08:37:35.874442   79927 logs.go:276] 0 containers: []
	W0806 08:37:35.874452   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:37:35.874460   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:37:35.874528   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:37:35.913816   79927 cri.go:89] found id: ""
	I0806 08:37:35.913852   79927 logs.go:276] 0 containers: []
	W0806 08:37:35.913871   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:37:35.913885   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:37:35.913963   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:37:35.951719   79927 cri.go:89] found id: ""
	I0806 08:37:35.951750   79927 logs.go:276] 0 containers: []
	W0806 08:37:35.951761   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:37:35.951770   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:37:35.951828   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:37:35.987720   79927 cri.go:89] found id: ""
	I0806 08:37:35.987749   79927 logs.go:276] 0 containers: []
	W0806 08:37:35.987760   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:37:35.987769   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:37:35.987780   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:37:36.043300   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:37:36.043339   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:37:36.057138   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:37:36.057169   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:37:36.126382   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:37:36.126399   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:37:36.126415   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:37:36.205243   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:37:36.205284   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:37:38.747978   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:37:35.289346   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:37.788785   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:36.114392   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:38.612174   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:36.967654   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:38.969141   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:40.969752   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:38.762123   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:37:38.762190   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:37:38.805074   79927 cri.go:89] found id: ""
	I0806 08:37:38.805101   79927 logs.go:276] 0 containers: []
	W0806 08:37:38.805111   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:37:38.805117   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:37:38.805185   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:37:38.839305   79927 cri.go:89] found id: ""
	I0806 08:37:38.839330   79927 logs.go:276] 0 containers: []
	W0806 08:37:38.839338   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:37:38.839344   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:37:38.839406   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:37:38.874174   79927 cri.go:89] found id: ""
	I0806 08:37:38.874203   79927 logs.go:276] 0 containers: []
	W0806 08:37:38.874214   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:37:38.874221   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:37:38.874277   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:37:38.909607   79927 cri.go:89] found id: ""
	I0806 08:37:38.909630   79927 logs.go:276] 0 containers: []
	W0806 08:37:38.909639   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:37:38.909645   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:37:38.909702   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:37:38.945584   79927 cri.go:89] found id: ""
	I0806 08:37:38.945615   79927 logs.go:276] 0 containers: []
	W0806 08:37:38.945626   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:37:38.945634   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:37:38.945696   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:37:38.988993   79927 cri.go:89] found id: ""
	I0806 08:37:38.989022   79927 logs.go:276] 0 containers: []
	W0806 08:37:38.989033   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:37:38.989041   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:37:38.989100   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:37:39.028902   79927 cri.go:89] found id: ""
	I0806 08:37:39.028934   79927 logs.go:276] 0 containers: []
	W0806 08:37:39.028942   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:37:39.028948   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:37:39.029005   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:37:39.070163   79927 cri.go:89] found id: ""
	I0806 08:37:39.070192   79927 logs.go:276] 0 containers: []
	W0806 08:37:39.070202   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:37:39.070213   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:37:39.070229   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:37:39.124891   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:37:39.124927   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:37:39.139898   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:37:39.139926   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:37:39.216157   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:37:39.216181   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:37:39.216196   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:37:39.301089   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:37:39.301136   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:37:41.840165   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:37:41.855659   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:37:41.855792   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:37:41.892620   79927 cri.go:89] found id: ""
	I0806 08:37:41.892648   79927 logs.go:276] 0 containers: []
	W0806 08:37:41.892660   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:37:41.892668   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:37:41.892728   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:37:41.929064   79927 cri.go:89] found id: ""
	I0806 08:37:41.929097   79927 logs.go:276] 0 containers: []
	W0806 08:37:41.929107   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:37:41.929114   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:37:41.929165   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:37:41.968334   79927 cri.go:89] found id: ""
	I0806 08:37:41.968371   79927 logs.go:276] 0 containers: []
	W0806 08:37:41.968382   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:37:41.968389   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:37:41.968450   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:37:42.008566   79927 cri.go:89] found id: ""
	I0806 08:37:42.008587   79927 logs.go:276] 0 containers: []
	W0806 08:37:42.008595   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:37:42.008601   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:37:42.008649   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:37:42.044281   79927 cri.go:89] found id: ""
	I0806 08:37:42.044308   79927 logs.go:276] 0 containers: []
	W0806 08:37:42.044315   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:37:42.044321   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:37:42.044392   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:37:42.081987   79927 cri.go:89] found id: ""
	I0806 08:37:42.082012   79927 logs.go:276] 0 containers: []
	W0806 08:37:42.082020   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:37:42.082026   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:37:42.082072   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:37:42.120818   79927 cri.go:89] found id: ""
	I0806 08:37:42.120847   79927 logs.go:276] 0 containers: []
	W0806 08:37:42.120857   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:37:42.120865   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:37:42.120926   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:37:42.160685   79927 cri.go:89] found id: ""
	I0806 08:37:42.160719   79927 logs.go:276] 0 containers: []
	W0806 08:37:42.160731   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:37:42.160742   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:37:42.160759   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:37:42.236322   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:37:42.236344   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:37:42.236357   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:37:42.328488   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:37:42.328526   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:37:42.369520   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:37:42.369545   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:37:42.425449   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:37:42.425490   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:37:39.788990   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:41.789236   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:43.789318   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:40.614326   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:43.112061   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:43.469284   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:45.469725   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:44.941557   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:37:44.954984   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:37:44.955043   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:37:44.992186   79927 cri.go:89] found id: ""
	I0806 08:37:44.992216   79927 logs.go:276] 0 containers: []
	W0806 08:37:44.992228   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:37:44.992234   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:37:44.992285   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:37:45.027114   79927 cri.go:89] found id: ""
	I0806 08:37:45.027143   79927 logs.go:276] 0 containers: []
	W0806 08:37:45.027154   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:37:45.027161   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:37:45.027222   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:37:45.062629   79927 cri.go:89] found id: ""
	I0806 08:37:45.062660   79927 logs.go:276] 0 containers: []
	W0806 08:37:45.062672   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:37:45.062679   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:37:45.062741   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:37:45.097499   79927 cri.go:89] found id: ""
	I0806 08:37:45.097530   79927 logs.go:276] 0 containers: []
	W0806 08:37:45.097541   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:37:45.097548   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:37:45.097605   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:37:45.137130   79927 cri.go:89] found id: ""
	I0806 08:37:45.137162   79927 logs.go:276] 0 containers: []
	W0806 08:37:45.137184   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:37:45.137191   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:37:45.137264   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:37:45.170617   79927 cri.go:89] found id: ""
	I0806 08:37:45.170642   79927 logs.go:276] 0 containers: []
	W0806 08:37:45.170659   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:37:45.170667   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:37:45.170726   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:37:45.205729   79927 cri.go:89] found id: ""
	I0806 08:37:45.205758   79927 logs.go:276] 0 containers: []
	W0806 08:37:45.205768   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:37:45.205774   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:37:45.205827   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:37:45.242345   79927 cri.go:89] found id: ""
	I0806 08:37:45.242373   79927 logs.go:276] 0 containers: []
	W0806 08:37:45.242384   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:37:45.242397   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:37:45.242414   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:37:45.283778   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:37:45.283810   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:37:45.341421   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:37:45.341457   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:37:45.356581   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:37:45.356606   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:37:45.428512   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:37:45.428541   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:37:45.428559   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:37:48.011744   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:37:48.025452   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:37:48.025533   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:37:48.064315   79927 cri.go:89] found id: ""
	I0806 08:37:48.064339   79927 logs.go:276] 0 containers: []
	W0806 08:37:48.064348   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:37:48.064353   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:37:48.064430   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:37:48.103855   79927 cri.go:89] found id: ""
	I0806 08:37:48.103891   79927 logs.go:276] 0 containers: []
	W0806 08:37:48.103902   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:37:48.103909   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:37:48.103968   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:37:48.141402   79927 cri.go:89] found id: ""
	I0806 08:37:48.141427   79927 logs.go:276] 0 containers: []
	W0806 08:37:48.141435   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:37:48.141440   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:37:48.141491   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:37:48.177449   79927 cri.go:89] found id: ""
	I0806 08:37:48.177479   79927 logs.go:276] 0 containers: []
	W0806 08:37:48.177490   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:37:48.177497   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:37:48.177566   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:37:48.214613   79927 cri.go:89] found id: ""
	I0806 08:37:48.214637   79927 logs.go:276] 0 containers: []
	W0806 08:37:48.214644   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:37:48.214650   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:37:48.214695   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:37:48.254049   79927 cri.go:89] found id: ""
	I0806 08:37:48.254075   79927 logs.go:276] 0 containers: []
	W0806 08:37:48.254083   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:37:48.254089   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:37:48.254147   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:37:48.296206   79927 cri.go:89] found id: ""
	I0806 08:37:48.296230   79927 logs.go:276] 0 containers: []
	W0806 08:37:48.296240   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:37:48.296246   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:37:48.296306   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:37:48.337614   79927 cri.go:89] found id: ""
	I0806 08:37:48.337641   79927 logs.go:276] 0 containers: []
	W0806 08:37:48.337651   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:37:48.337662   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:37:48.337679   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:37:48.391932   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:37:48.391971   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:37:48.405422   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:37:48.405450   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:37:48.477374   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:37:48.477402   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:37:48.477419   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:37:48.559252   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:37:48.559288   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:37:46.287167   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:48.288652   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:45.113308   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:47.611780   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:47.968109   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:50.469370   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:51.104881   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:37:51.120113   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:37:51.120181   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:37:51.155703   79927 cri.go:89] found id: ""
	I0806 08:37:51.155733   79927 logs.go:276] 0 containers: []
	W0806 08:37:51.155741   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:37:51.155747   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:37:51.155792   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:37:51.191211   79927 cri.go:89] found id: ""
	I0806 08:37:51.191240   79927 logs.go:276] 0 containers: []
	W0806 08:37:51.191250   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:37:51.191257   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:37:51.191324   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:37:51.226740   79927 cri.go:89] found id: ""
	I0806 08:37:51.226771   79927 logs.go:276] 0 containers: []
	W0806 08:37:51.226783   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:37:51.226790   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:37:51.226852   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:37:51.269101   79927 cri.go:89] found id: ""
	I0806 08:37:51.269132   79927 logs.go:276] 0 containers: []
	W0806 08:37:51.269141   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:37:51.269146   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:37:51.269196   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:37:51.307176   79927 cri.go:89] found id: ""
	I0806 08:37:51.307206   79927 logs.go:276] 0 containers: []
	W0806 08:37:51.307216   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:37:51.307224   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:37:51.307290   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:37:51.342824   79927 cri.go:89] found id: ""
	I0806 08:37:51.342854   79927 logs.go:276] 0 containers: []
	W0806 08:37:51.342864   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:37:51.342871   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:37:51.342935   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:37:51.383309   79927 cri.go:89] found id: ""
	I0806 08:37:51.383337   79927 logs.go:276] 0 containers: []
	W0806 08:37:51.383348   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:37:51.383355   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:37:51.383420   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:37:51.424851   79927 cri.go:89] found id: ""
	I0806 08:37:51.424896   79927 logs.go:276] 0 containers: []
	W0806 08:37:51.424908   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:37:51.424918   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:37:51.424950   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:37:51.507286   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:37:51.507307   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:37:51.507318   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:37:51.590512   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:37:51.590556   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:37:51.640453   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:37:51.640482   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:37:51.694699   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:37:51.694740   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:37:50.788163   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:52.788396   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:49.616347   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:52.112323   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:54.112911   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:52.470121   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:54.473264   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:54.211106   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:37:54.225858   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:37:54.225936   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:37:54.267219   79927 cri.go:89] found id: ""
	I0806 08:37:54.267247   79927 logs.go:276] 0 containers: []
	W0806 08:37:54.267258   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:37:54.267266   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:37:54.267323   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:37:54.306734   79927 cri.go:89] found id: ""
	I0806 08:37:54.306763   79927 logs.go:276] 0 containers: []
	W0806 08:37:54.306774   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:37:54.306781   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:37:54.306839   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:37:54.342257   79927 cri.go:89] found id: ""
	I0806 08:37:54.342294   79927 logs.go:276] 0 containers: []
	W0806 08:37:54.342306   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:37:54.342314   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:37:54.342383   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:37:54.380661   79927 cri.go:89] found id: ""
	I0806 08:37:54.380685   79927 logs.go:276] 0 containers: []
	W0806 08:37:54.380693   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:37:54.380699   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:37:54.380743   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:37:54.416503   79927 cri.go:89] found id: ""
	I0806 08:37:54.416525   79927 logs.go:276] 0 containers: []
	W0806 08:37:54.416533   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:37:54.416539   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:37:54.416589   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:37:54.453470   79927 cri.go:89] found id: ""
	I0806 08:37:54.453508   79927 logs.go:276] 0 containers: []
	W0806 08:37:54.453520   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:37:54.453528   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:37:54.453591   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:37:54.497630   79927 cri.go:89] found id: ""
	I0806 08:37:54.497671   79927 logs.go:276] 0 containers: []
	W0806 08:37:54.497682   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:37:54.497691   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:37:54.497750   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:37:54.536516   79927 cri.go:89] found id: ""
	I0806 08:37:54.536546   79927 logs.go:276] 0 containers: []
	W0806 08:37:54.536557   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:37:54.536568   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:37:54.536587   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:37:54.592997   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:37:54.593032   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:37:54.607568   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:37:54.607598   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:37:54.679369   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:37:54.679402   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:37:54.679423   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:37:54.760001   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:37:54.760041   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:37:57.305405   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:37:57.319427   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:37:57.319493   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:37:57.357465   79927 cri.go:89] found id: ""
	I0806 08:37:57.357493   79927 logs.go:276] 0 containers: []
	W0806 08:37:57.357504   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:37:57.357513   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:37:57.357571   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:37:57.390131   79927 cri.go:89] found id: ""
	I0806 08:37:57.390155   79927 logs.go:276] 0 containers: []
	W0806 08:37:57.390165   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:37:57.390172   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:37:57.390238   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:37:57.428571   79927 cri.go:89] found id: ""
	I0806 08:37:57.428600   79927 logs.go:276] 0 containers: []
	W0806 08:37:57.428608   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:37:57.428613   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:37:57.428675   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:37:57.462688   79927 cri.go:89] found id: ""
	I0806 08:37:57.462717   79927 logs.go:276] 0 containers: []
	W0806 08:37:57.462727   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:37:57.462733   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:37:57.462791   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:37:57.525008   79927 cri.go:89] found id: ""
	I0806 08:37:57.525033   79927 logs.go:276] 0 containers: []
	W0806 08:37:57.525041   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:37:57.525046   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:37:57.525108   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:37:57.565992   79927 cri.go:89] found id: ""
	I0806 08:37:57.566024   79927 logs.go:276] 0 containers: []
	W0806 08:37:57.566035   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:37:57.566045   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:37:57.566123   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:37:57.610170   79927 cri.go:89] found id: ""
	I0806 08:37:57.610198   79927 logs.go:276] 0 containers: []
	W0806 08:37:57.610208   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:37:57.610215   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:37:57.610262   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:37:57.647054   79927 cri.go:89] found id: ""
	I0806 08:37:57.647085   79927 logs.go:276] 0 containers: []
	W0806 08:37:57.647096   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:37:57.647106   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:37:57.647121   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:37:57.698177   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:37:57.698214   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:37:57.712244   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:37:57.712271   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:37:57.781683   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:37:57.781707   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:37:57.781723   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:37:57.861097   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:37:57.861133   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:37:54.789102   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:57.287564   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:56.613662   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:59.112248   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:56.967534   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:59.469122   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:00.401186   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:38:00.414546   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:38:00.414602   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:38:00.454163   79927 cri.go:89] found id: ""
	I0806 08:38:00.454190   79927 logs.go:276] 0 containers: []
	W0806 08:38:00.454201   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:38:00.454208   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:38:00.454269   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:38:00.497427   79927 cri.go:89] found id: ""
	I0806 08:38:00.497455   79927 logs.go:276] 0 containers: []
	W0806 08:38:00.497466   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:38:00.497473   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:38:00.497538   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:38:00.537590   79927 cri.go:89] found id: ""
	I0806 08:38:00.537619   79927 logs.go:276] 0 containers: []
	W0806 08:38:00.537627   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:38:00.537633   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:38:00.537695   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:38:00.572337   79927 cri.go:89] found id: ""
	I0806 08:38:00.572383   79927 logs.go:276] 0 containers: []
	W0806 08:38:00.572396   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:38:00.572403   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:38:00.572460   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:38:00.607729   79927 cri.go:89] found id: ""
	I0806 08:38:00.607772   79927 logs.go:276] 0 containers: []
	W0806 08:38:00.607783   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:38:00.607791   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:38:00.607852   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:38:00.644777   79927 cri.go:89] found id: ""
	I0806 08:38:00.644843   79927 logs.go:276] 0 containers: []
	W0806 08:38:00.644856   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:38:00.644864   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:38:00.644935   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:38:00.684698   79927 cri.go:89] found id: ""
	I0806 08:38:00.684727   79927 logs.go:276] 0 containers: []
	W0806 08:38:00.684739   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:38:00.684747   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:38:00.684863   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:38:00.718472   79927 cri.go:89] found id: ""
	I0806 08:38:00.718506   79927 logs.go:276] 0 containers: []
	W0806 08:38:00.718518   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:38:00.718529   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:38:00.718546   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:38:00.770265   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:38:00.770309   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:38:00.787438   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:38:00.787469   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:38:00.854538   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:38:00.854558   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:38:00.854570   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:38:00.936810   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:38:00.936847   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:38:03.477109   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:38:03.494379   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:38:03.494439   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:38:03.536166   79927 cri.go:89] found id: ""
	I0806 08:38:03.536193   79927 logs.go:276] 0 containers: []
	W0806 08:38:03.536202   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:38:03.536208   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:38:03.536255   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:38:03.577497   79927 cri.go:89] found id: ""
	I0806 08:38:03.577522   79927 logs.go:276] 0 containers: []
	W0806 08:38:03.577531   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:38:03.577536   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:38:03.577585   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:38:03.614534   79927 cri.go:89] found id: ""
	I0806 08:38:03.614560   79927 logs.go:276] 0 containers: []
	W0806 08:38:03.614569   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:38:03.614576   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:38:03.614634   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:38:03.655824   79927 cri.go:89] found id: ""
	I0806 08:38:03.655860   79927 logs.go:276] 0 containers: []
	W0806 08:38:03.655871   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:38:03.655878   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:38:03.655952   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:38:03.699843   79927 cri.go:89] found id: ""
	I0806 08:38:03.699875   79927 logs.go:276] 0 containers: []
	W0806 08:38:03.699888   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:38:03.699896   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:38:03.699957   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:38:03.740131   79927 cri.go:89] found id: ""
	I0806 08:38:03.740159   79927 logs.go:276] 0 containers: []
	W0806 08:38:03.740168   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:38:03.740174   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:38:03.740223   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:37:59.787634   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:01.788942   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:03.790134   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:01.112582   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:03.113004   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:01.972073   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:04.468917   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:03.774103   79927 cri.go:89] found id: ""
	I0806 08:38:03.774149   79927 logs.go:276] 0 containers: []
	W0806 08:38:03.774163   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:38:03.774170   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:38:03.774230   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:38:03.814051   79927 cri.go:89] found id: ""
	I0806 08:38:03.814075   79927 logs.go:276] 0 containers: []
	W0806 08:38:03.814083   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:38:03.814091   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:38:03.814102   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:38:03.854869   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:38:03.854898   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:38:03.908284   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:38:03.908318   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:38:03.923265   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:38:03.923303   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:38:04.001785   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:38:04.001806   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:38:04.001819   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:38:06.583809   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:38:06.596764   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:38:06.596827   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:38:06.636611   79927 cri.go:89] found id: ""
	I0806 08:38:06.636639   79927 logs.go:276] 0 containers: []
	W0806 08:38:06.636650   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:38:06.636657   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:38:06.636707   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:38:06.675392   79927 cri.go:89] found id: ""
	I0806 08:38:06.675419   79927 logs.go:276] 0 containers: []
	W0806 08:38:06.675430   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:38:06.675438   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:38:06.675492   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:38:06.711270   79927 cri.go:89] found id: ""
	I0806 08:38:06.711302   79927 logs.go:276] 0 containers: []
	W0806 08:38:06.711313   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:38:06.711320   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:38:06.711388   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:38:06.747252   79927 cri.go:89] found id: ""
	I0806 08:38:06.747280   79927 logs.go:276] 0 containers: []
	W0806 08:38:06.747291   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:38:06.747297   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:38:06.747360   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:38:06.781999   79927 cri.go:89] found id: ""
	I0806 08:38:06.782028   79927 logs.go:276] 0 containers: []
	W0806 08:38:06.782038   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:38:06.782043   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:38:06.782095   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:38:06.818810   79927 cri.go:89] found id: ""
	I0806 08:38:06.818847   79927 logs.go:276] 0 containers: []
	W0806 08:38:06.818860   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:38:06.818869   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:38:06.818935   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:38:06.857937   79927 cri.go:89] found id: ""
	I0806 08:38:06.857972   79927 logs.go:276] 0 containers: []
	W0806 08:38:06.857983   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:38:06.857989   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:38:06.858049   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:38:06.897920   79927 cri.go:89] found id: ""
	I0806 08:38:06.897949   79927 logs.go:276] 0 containers: []
	W0806 08:38:06.897957   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:38:06.897965   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:38:06.897979   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:38:06.912199   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:38:06.912228   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:38:06.983129   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:38:06.983157   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:38:06.983173   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:38:07.067783   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:38:07.067853   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:38:07.107756   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:38:07.107787   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:38:06.286856   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:08.287070   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:05.113664   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:07.611129   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:06.973447   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:09.468163   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:11.469075   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:09.657958   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:38:09.670726   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:38:09.670807   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:38:09.709184   79927 cri.go:89] found id: ""
	I0806 08:38:09.709220   79927 logs.go:276] 0 containers: []
	W0806 08:38:09.709232   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:38:09.709239   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:38:09.709299   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:38:09.747032   79927 cri.go:89] found id: ""
	I0806 08:38:09.747061   79927 logs.go:276] 0 containers: []
	W0806 08:38:09.747073   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:38:09.747080   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:38:09.747141   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:38:09.785134   79927 cri.go:89] found id: ""
	I0806 08:38:09.785170   79927 logs.go:276] 0 containers: []
	W0806 08:38:09.785180   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:38:09.785188   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:38:09.785248   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:38:09.825136   79927 cri.go:89] found id: ""
	I0806 08:38:09.825168   79927 logs.go:276] 0 containers: []
	W0806 08:38:09.825179   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:38:09.825186   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:38:09.825249   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:38:09.865292   79927 cri.go:89] found id: ""
	I0806 08:38:09.865327   79927 logs.go:276] 0 containers: []
	W0806 08:38:09.865336   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:38:09.865345   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:38:09.865415   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:38:09.911145   79927 cri.go:89] found id: ""
	I0806 08:38:09.911172   79927 logs.go:276] 0 containers: []
	W0806 08:38:09.911183   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:38:09.911190   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:38:09.911247   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:38:09.961259   79927 cri.go:89] found id: ""
	I0806 08:38:09.961289   79927 logs.go:276] 0 containers: []
	W0806 08:38:09.961299   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:38:09.961307   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:38:09.961364   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:38:10.009572   79927 cri.go:89] found id: ""
	I0806 08:38:10.009594   79927 logs.go:276] 0 containers: []
	W0806 08:38:10.009602   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:38:10.009610   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:38:10.009623   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:38:10.063733   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:38:10.063770   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:38:10.080004   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:38:10.080044   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:38:10.158792   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:38:10.158823   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:38:10.158840   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:38:10.242126   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:38:10.242162   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:38:12.781497   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:38:12.796715   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:38:12.796804   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:38:12.831223   79927 cri.go:89] found id: ""
	I0806 08:38:12.831254   79927 logs.go:276] 0 containers: []
	W0806 08:38:12.831265   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:38:12.831273   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:38:12.831332   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:38:12.872202   79927 cri.go:89] found id: ""
	I0806 08:38:12.872224   79927 logs.go:276] 0 containers: []
	W0806 08:38:12.872231   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:38:12.872236   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:38:12.872284   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:38:12.913457   79927 cri.go:89] found id: ""
	I0806 08:38:12.913486   79927 logs.go:276] 0 containers: []
	W0806 08:38:12.913497   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:38:12.913505   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:38:12.913563   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:38:12.950382   79927 cri.go:89] found id: ""
	I0806 08:38:12.950413   79927 logs.go:276] 0 containers: []
	W0806 08:38:12.950424   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:38:12.950432   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:38:12.950497   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:38:12.988956   79927 cri.go:89] found id: ""
	I0806 08:38:12.988984   79927 logs.go:276] 0 containers: []
	W0806 08:38:12.988992   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:38:12.988998   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:38:12.989054   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:38:13.025308   79927 cri.go:89] found id: ""
	I0806 08:38:13.025334   79927 logs.go:276] 0 containers: []
	W0806 08:38:13.025342   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:38:13.025348   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:38:13.025401   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:38:13.060156   79927 cri.go:89] found id: ""
	I0806 08:38:13.060184   79927 logs.go:276] 0 containers: []
	W0806 08:38:13.060196   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:38:13.060204   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:38:13.060268   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:38:13.096302   79927 cri.go:89] found id: ""
	I0806 08:38:13.096331   79927 logs.go:276] 0 containers: []
	W0806 08:38:13.096344   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:38:13.096360   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:38:13.096387   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:38:13.149567   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:38:13.149603   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:38:13.164680   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:38:13.164715   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:38:13.236695   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:38:13.236718   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:38:13.236733   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:38:13.323043   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:38:13.323081   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:38:10.288582   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:12.788556   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:09.611677   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:12.111364   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:14.112265   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:13.469940   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:15.969381   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:15.867128   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:38:15.885504   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:38:15.885579   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:38:15.929060   79927 cri.go:89] found id: ""
	I0806 08:38:15.929087   79927 logs.go:276] 0 containers: []
	W0806 08:38:15.929098   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:38:15.929106   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:38:15.929157   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:38:15.996919   79927 cri.go:89] found id: ""
	I0806 08:38:15.996946   79927 logs.go:276] 0 containers: []
	W0806 08:38:15.996954   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:38:15.996959   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:38:15.997013   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:38:16.033299   79927 cri.go:89] found id: ""
	I0806 08:38:16.033323   79927 logs.go:276] 0 containers: []
	W0806 08:38:16.033331   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:38:16.033338   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:38:16.033399   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:38:16.070754   79927 cri.go:89] found id: ""
	I0806 08:38:16.070781   79927 logs.go:276] 0 containers: []
	W0806 08:38:16.070789   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:38:16.070795   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:38:16.070853   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:38:16.107906   79927 cri.go:89] found id: ""
	I0806 08:38:16.107931   79927 logs.go:276] 0 containers: []
	W0806 08:38:16.107943   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:38:16.107951   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:38:16.108011   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:38:16.145814   79927 cri.go:89] found id: ""
	I0806 08:38:16.145844   79927 logs.go:276] 0 containers: []
	W0806 08:38:16.145853   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:38:16.145861   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:38:16.145921   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:38:16.183752   79927 cri.go:89] found id: ""
	I0806 08:38:16.183782   79927 logs.go:276] 0 containers: []
	W0806 08:38:16.183793   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:38:16.183828   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:38:16.183890   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:38:16.220973   79927 cri.go:89] found id: ""
	I0806 08:38:16.221000   79927 logs.go:276] 0 containers: []
	W0806 08:38:16.221010   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:38:16.221020   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:38:16.221034   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:38:16.274514   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:38:16.274555   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:38:16.291387   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:38:16.291417   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:38:16.365737   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:38:16.365769   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:38:16.365785   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:38:16.446578   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:38:16.446617   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:38:14.789440   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:17.288354   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:16.112334   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:18.612150   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:18.468700   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:20.469520   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:18.989406   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:38:19.003113   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:38:19.003174   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:38:19.038794   79927 cri.go:89] found id: ""
	I0806 08:38:19.038821   79927 logs.go:276] 0 containers: []
	W0806 08:38:19.038832   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:38:19.038839   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:38:19.038899   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:38:19.072521   79927 cri.go:89] found id: ""
	I0806 08:38:19.072546   79927 logs.go:276] 0 containers: []
	W0806 08:38:19.072554   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:38:19.072561   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:38:19.072606   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:38:19.105949   79927 cri.go:89] found id: ""
	I0806 08:38:19.105982   79927 logs.go:276] 0 containers: []
	W0806 08:38:19.105994   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:38:19.106002   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:38:19.106053   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:38:19.141277   79927 cri.go:89] found id: ""
	I0806 08:38:19.141303   79927 logs.go:276] 0 containers: []
	W0806 08:38:19.141311   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:38:19.141317   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:38:19.141373   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:38:19.179144   79927 cri.go:89] found id: ""
	I0806 08:38:19.179173   79927 logs.go:276] 0 containers: []
	W0806 08:38:19.179184   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:38:19.179191   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:38:19.179252   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:38:19.218970   79927 cri.go:89] found id: ""
	I0806 08:38:19.218998   79927 logs.go:276] 0 containers: []
	W0806 08:38:19.219010   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:38:19.219022   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:38:19.219084   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:38:19.257623   79927 cri.go:89] found id: ""
	I0806 08:38:19.257652   79927 logs.go:276] 0 containers: []
	W0806 08:38:19.257663   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:38:19.257671   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:38:19.257740   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:38:19.294222   79927 cri.go:89] found id: ""
	I0806 08:38:19.294250   79927 logs.go:276] 0 containers: []
	W0806 08:38:19.294258   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:38:19.294267   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:38:19.294279   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:38:19.374327   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:38:19.374352   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:38:19.374368   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:38:19.458570   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:38:19.458604   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:38:19.504329   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:38:19.504382   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:38:19.554199   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:38:19.554237   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:38:22.069574   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:38:22.082919   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:38:22.082981   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:38:22.115952   79927 cri.go:89] found id: ""
	I0806 08:38:22.115974   79927 logs.go:276] 0 containers: []
	W0806 08:38:22.115981   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:38:22.115987   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:38:22.116031   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:38:22.158322   79927 cri.go:89] found id: ""
	I0806 08:38:22.158349   79927 logs.go:276] 0 containers: []
	W0806 08:38:22.158359   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:38:22.158366   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:38:22.158425   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:38:22.203375   79927 cri.go:89] found id: ""
	I0806 08:38:22.203403   79927 logs.go:276] 0 containers: []
	W0806 08:38:22.203412   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:38:22.203418   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:38:22.203473   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:38:22.240808   79927 cri.go:89] found id: ""
	I0806 08:38:22.240837   79927 logs.go:276] 0 containers: []
	W0806 08:38:22.240849   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:38:22.240857   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:38:22.240913   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:38:22.277885   79927 cri.go:89] found id: ""
	I0806 08:38:22.277914   79927 logs.go:276] 0 containers: []
	W0806 08:38:22.277925   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:38:22.277932   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:38:22.277993   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:38:22.314990   79927 cri.go:89] found id: ""
	I0806 08:38:22.315020   79927 logs.go:276] 0 containers: []
	W0806 08:38:22.315031   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:38:22.315039   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:38:22.315098   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:38:22.352159   79927 cri.go:89] found id: ""
	I0806 08:38:22.352189   79927 logs.go:276] 0 containers: []
	W0806 08:38:22.352198   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:38:22.352204   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:38:22.352267   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:38:22.387349   79927 cri.go:89] found id: ""
	I0806 08:38:22.387382   79927 logs.go:276] 0 containers: []
	W0806 08:38:22.387393   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:38:22.387404   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:38:22.387416   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:38:22.425834   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:38:22.425871   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:38:22.479680   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:38:22.479712   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:38:22.495104   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:38:22.495140   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:38:22.567013   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:38:22.567030   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:38:22.567043   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:38:19.288896   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:21.291455   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:23.789282   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:21.111745   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:23.111513   79219 pod_ready.go:81] duration metric: took 4m0.005888372s for pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace to be "Ready" ...
	E0806 08:38:23.111545   79219 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0806 08:38:23.111555   79219 pod_ready.go:38] duration metric: took 4m6.006750463s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0806 08:38:23.111579   79219 api_server.go:52] waiting for apiserver process to appear ...
	I0806 08:38:23.111610   79219 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:38:23.111669   79219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:38:23.182880   79219 cri.go:89] found id: "f7edb6bece3347d88a263663a1d6549165dafdcb67ad723494b937766cba6da6"
	I0806 08:38:23.182907   79219 cri.go:89] found id: ""
	I0806 08:38:23.182918   79219 logs.go:276] 1 containers: [f7edb6bece3347d88a263663a1d6549165dafdcb67ad723494b937766cba6da6]
	I0806 08:38:23.182974   79219 ssh_runner.go:195] Run: which crictl
	I0806 08:38:23.188039   79219 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:38:23.188103   79219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:38:23.235090   79219 cri.go:89] found id: "df2efdb62a4af10f89a1b37599d4258ac3b72660353795f8c2141f3ef70c1f65"
	I0806 08:38:23.235113   79219 cri.go:89] found id: ""
	I0806 08:38:23.235135   79219 logs.go:276] 1 containers: [df2efdb62a4af10f89a1b37599d4258ac3b72660353795f8c2141f3ef70c1f65]
	I0806 08:38:23.235195   79219 ssh_runner.go:195] Run: which crictl
	I0806 08:38:23.240260   79219 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:38:23.240335   79219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:38:23.280619   79219 cri.go:89] found id: "9e1c91bc3a920b6f7db4477b8219f13fa092c68ff53abe2390325a68dad253e5"
	I0806 08:38:23.280645   79219 cri.go:89] found id: ""
	I0806 08:38:23.280655   79219 logs.go:276] 1 containers: [9e1c91bc3a920b6f7db4477b8219f13fa092c68ff53abe2390325a68dad253e5]
	I0806 08:38:23.280715   79219 ssh_runner.go:195] Run: which crictl
	I0806 08:38:23.286155   79219 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:38:23.286234   79219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:38:23.329583   79219 cri.go:89] found id: "a25c5660a3a2b7226bf308b44307a3912efa7a9a73e79ec63f83133b80a44683"
	I0806 08:38:23.329609   79219 cri.go:89] found id: ""
	I0806 08:38:23.329618   79219 logs.go:276] 1 containers: [a25c5660a3a2b7226bf308b44307a3912efa7a9a73e79ec63f83133b80a44683]
	I0806 08:38:23.329678   79219 ssh_runner.go:195] Run: which crictl
	I0806 08:38:23.334342   79219 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:38:23.334407   79219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:38:23.372955   79219 cri.go:89] found id: "612fc6a8f2c1208097a6d6287c5fecb20026608a278af5d0d4efc9662e065eb9"
	I0806 08:38:23.372974   79219 cri.go:89] found id: ""
	I0806 08:38:23.372981   79219 logs.go:276] 1 containers: [612fc6a8f2c1208097a6d6287c5fecb20026608a278af5d0d4efc9662e065eb9]
	I0806 08:38:23.373037   79219 ssh_runner.go:195] Run: which crictl
	I0806 08:38:23.377637   79219 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:38:23.377692   79219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:38:23.416165   79219 cri.go:89] found id: "d8651a7ed6615e826b950eab003841c2ec872b48160f207e77dd32a684125a23"
	I0806 08:38:23.416188   79219 cri.go:89] found id: ""
	I0806 08:38:23.416197   79219 logs.go:276] 1 containers: [d8651a7ed6615e826b950eab003841c2ec872b48160f207e77dd32a684125a23]
	I0806 08:38:23.416248   79219 ssh_runner.go:195] Run: which crictl
	I0806 08:38:23.420970   79219 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:38:23.421020   79219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:38:23.459247   79219 cri.go:89] found id: ""
	I0806 08:38:23.459277   79219 logs.go:276] 0 containers: []
	W0806 08:38:23.459287   79219 logs.go:278] No container was found matching "kindnet"
	I0806 08:38:23.459295   79219 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0806 08:38:23.459354   79219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0806 08:38:23.503329   79219 cri.go:89] found id: "367eb6487187a37acafb1fa74364a2d9ec5d9241888ed2e871fb14f2af161975"
	I0806 08:38:23.503354   79219 cri.go:89] found id: "78abe2efa92d2d6c9171ed65bd4d941693be3207554f9318a7cefb91544eeb20"
	I0806 08:38:23.503360   79219 cri.go:89] found id: ""
	I0806 08:38:23.503369   79219 logs.go:276] 2 containers: [367eb6487187a37acafb1fa74364a2d9ec5d9241888ed2e871fb14f2af161975 78abe2efa92d2d6c9171ed65bd4d941693be3207554f9318a7cefb91544eeb20]
	I0806 08:38:23.503431   79219 ssh_runner.go:195] Run: which crictl
	I0806 08:38:23.508375   79219 ssh_runner.go:195] Run: which crictl
	I0806 08:38:23.512355   79219 logs.go:123] Gathering logs for kube-scheduler [a25c5660a3a2b7226bf308b44307a3912efa7a9a73e79ec63f83133b80a44683] ...
	I0806 08:38:23.512401   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a25c5660a3a2b7226bf308b44307a3912efa7a9a73e79ec63f83133b80a44683"
	I0806 08:38:23.548545   79219 logs.go:123] Gathering logs for kube-controller-manager [d8651a7ed6615e826b950eab003841c2ec872b48160f207e77dd32a684125a23] ...
	I0806 08:38:23.548574   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8651a7ed6615e826b950eab003841c2ec872b48160f207e77dd32a684125a23"
	I0806 08:38:23.607970   79219 logs.go:123] Gathering logs for storage-provisioner [78abe2efa92d2d6c9171ed65bd4d941693be3207554f9318a7cefb91544eeb20] ...
	I0806 08:38:23.608004   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 78abe2efa92d2d6c9171ed65bd4d941693be3207554f9318a7cefb91544eeb20"
	I0806 08:38:23.646553   79219 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:38:23.646582   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:38:24.180521   79219 logs.go:123] Gathering logs for container status ...
	I0806 08:38:24.180558   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:38:24.228240   79219 logs.go:123] Gathering logs for kubelet ...
	I0806 08:38:24.228273   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:38:24.285975   79219 logs.go:123] Gathering logs for dmesg ...
	I0806 08:38:24.286007   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:38:24.301117   79219 logs.go:123] Gathering logs for etcd [df2efdb62a4af10f89a1b37599d4258ac3b72660353795f8c2141f3ef70c1f65] ...
	I0806 08:38:24.301152   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 df2efdb62a4af10f89a1b37599d4258ac3b72660353795f8c2141f3ef70c1f65"
	I0806 08:38:22.968463   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:25.469138   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:25.146383   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:38:25.160013   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:38:25.160078   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:38:25.200981   79927 cri.go:89] found id: ""
	I0806 08:38:25.201006   79927 logs.go:276] 0 containers: []
	W0806 08:38:25.201018   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:38:25.201026   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:38:25.201091   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:38:25.239852   79927 cri.go:89] found id: ""
	I0806 08:38:25.239895   79927 logs.go:276] 0 containers: []
	W0806 08:38:25.239907   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:38:25.239914   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:38:25.239975   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:38:25.275521   79927 cri.go:89] found id: ""
	I0806 08:38:25.275549   79927 logs.go:276] 0 containers: []
	W0806 08:38:25.275561   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:38:25.275568   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:38:25.275630   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:38:25.311921   79927 cri.go:89] found id: ""
	I0806 08:38:25.311952   79927 logs.go:276] 0 containers: []
	W0806 08:38:25.311962   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:38:25.311969   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:38:25.312036   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:38:25.353047   79927 cri.go:89] found id: ""
	I0806 08:38:25.353081   79927 logs.go:276] 0 containers: []
	W0806 08:38:25.353093   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:38:25.353102   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:38:25.353177   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:38:25.389442   79927 cri.go:89] found id: ""
	I0806 08:38:25.389470   79927 logs.go:276] 0 containers: []
	W0806 08:38:25.389479   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:38:25.389485   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:38:25.389529   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:38:25.426435   79927 cri.go:89] found id: ""
	I0806 08:38:25.426464   79927 logs.go:276] 0 containers: []
	W0806 08:38:25.426474   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:38:25.426481   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:38:25.426541   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:38:25.460095   79927 cri.go:89] found id: ""
	I0806 08:38:25.460132   79927 logs.go:276] 0 containers: []
	W0806 08:38:25.460157   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:38:25.460169   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:38:25.460192   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:38:25.511363   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:38:25.511398   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:38:25.525953   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:38:25.525980   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:38:25.598690   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:38:25.598711   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:38:25.598723   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:38:25.674996   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:38:25.675034   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:38:28.216533   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:38:28.230080   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:38:28.230171   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:38:28.269239   79927 cri.go:89] found id: ""
	I0806 08:38:28.269274   79927 logs.go:276] 0 containers: []
	W0806 08:38:28.269288   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:38:28.269296   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:38:28.269361   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:38:28.311773   79927 cri.go:89] found id: ""
	I0806 08:38:28.311809   79927 logs.go:276] 0 containers: []
	W0806 08:38:28.311820   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:38:28.311828   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:38:28.311896   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:38:28.351492   79927 cri.go:89] found id: ""
	I0806 08:38:28.351522   79927 logs.go:276] 0 containers: []
	W0806 08:38:28.351530   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:38:28.351535   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:38:28.351581   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:38:28.389039   79927 cri.go:89] found id: ""
	I0806 08:38:28.389067   79927 logs.go:276] 0 containers: []
	W0806 08:38:28.389079   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:38:28.389086   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:38:28.389175   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:38:28.432249   79927 cri.go:89] found id: ""
	I0806 08:38:28.432277   79927 logs.go:276] 0 containers: []
	W0806 08:38:28.432287   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:38:28.432296   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:38:28.432354   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:38:28.478277   79927 cri.go:89] found id: ""
	I0806 08:38:28.478303   79927 logs.go:276] 0 containers: []
	W0806 08:38:28.478313   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:38:28.478321   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:38:28.478380   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:38:28.516167   79927 cri.go:89] found id: ""
	I0806 08:38:28.516199   79927 logs.go:276] 0 containers: []
	W0806 08:38:28.516211   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:38:28.516219   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:38:28.516286   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:38:28.555493   79927 cri.go:89] found id: ""
	I0806 08:38:28.555523   79927 logs.go:276] 0 containers: []
	W0806 08:38:28.555534   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:38:28.555545   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:38:28.555560   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:38:28.627101   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:38:28.627124   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:38:28.627147   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:38:28.716655   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:38:28.716709   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:38:28.755469   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:38:28.755510   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
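	(Editor's note) The 79927 run above repeatedly enumerates CRI containers by shelling out to crictl with a name filter and treating empty output as "0 containers". Below is a minimal, hedged Go sketch of that pattern — it is not minikube's cri.go, and the sudo invocation and component names are assumptions taken from the log; it relies only on crictl's documented `ps -a --quiet --name=<filter>` flags.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listContainerIDs mimics the log pattern above: run
	//   sudo crictl ps -a --quiet --name=<component>
	// and treat each non-empty output line as a container ID.
	func listContainerIDs(component string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
		if err != nil {
			return nil, err
		}
		var ids []string
		for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
			if line != "" {
				ids = append(ids, line)
			}
		}
		return ids, nil
	}

	func main() {
		for _, c := range []string{"kube-apiserver", "etcd", "coredns"} {
			ids, err := listContainerIDs(c)
			if err != nil {
				fmt.Printf("listing %q failed: %v\n", c, err)
				continue
			}
			// An empty result corresponds to the "0 containers: []" lines in the log.
			fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
		}
	}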
	I0806 08:38:26.286925   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:28.288654   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:24.355686   79219 logs.go:123] Gathering logs for coredns [9e1c91bc3a920b6f7db4477b8219f13fa092c68ff53abe2390325a68dad253e5] ...
	I0806 08:38:24.355729   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e1c91bc3a920b6f7db4477b8219f13fa092c68ff53abe2390325a68dad253e5"
	I0806 08:38:24.396350   79219 logs.go:123] Gathering logs for kube-proxy [612fc6a8f2c1208097a6d6287c5fecb20026608a278af5d0d4efc9662e065eb9] ...
	I0806 08:38:24.396402   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 612fc6a8f2c1208097a6d6287c5fecb20026608a278af5d0d4efc9662e065eb9"
	I0806 08:38:24.435066   79219 logs.go:123] Gathering logs for storage-provisioner [367eb6487187a37acafb1fa74364a2d9ec5d9241888ed2e871fb14f2af161975] ...
	I0806 08:38:24.435098   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 367eb6487187a37acafb1fa74364a2d9ec5d9241888ed2e871fb14f2af161975"
	I0806 08:38:24.474159   79219 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:38:24.474188   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 08:38:24.601081   79219 logs.go:123] Gathering logs for kube-apiserver [f7edb6bece3347d88a263663a1d6549165dafdcb67ad723494b937766cba6da6] ...
	I0806 08:38:24.601112   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f7edb6bece3347d88a263663a1d6549165dafdcb67ad723494b937766cba6da6"
	I0806 08:38:27.170209   79219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:38:27.187573   79219 api_server.go:72] duration metric: took 4m15.285938182s to wait for apiserver process to appear ...
	I0806 08:38:27.187597   79219 api_server.go:88] waiting for apiserver healthz status ...
	I0806 08:38:27.187626   79219 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:38:27.187673   79219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:38:27.224204   79219 cri.go:89] found id: "f7edb6bece3347d88a263663a1d6549165dafdcb67ad723494b937766cba6da6"
	I0806 08:38:27.224227   79219 cri.go:89] found id: ""
	I0806 08:38:27.224237   79219 logs.go:276] 1 containers: [f7edb6bece3347d88a263663a1d6549165dafdcb67ad723494b937766cba6da6]
	I0806 08:38:27.224284   79219 ssh_runner.go:195] Run: which crictl
	I0806 08:38:27.228883   79219 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:38:27.228957   79219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:38:27.275914   79219 cri.go:89] found id: "df2efdb62a4af10f89a1b37599d4258ac3b72660353795f8c2141f3ef70c1f65"
	I0806 08:38:27.275939   79219 cri.go:89] found id: ""
	I0806 08:38:27.275948   79219 logs.go:276] 1 containers: [df2efdb62a4af10f89a1b37599d4258ac3b72660353795f8c2141f3ef70c1f65]
	I0806 08:38:27.276009   79219 ssh_runner.go:195] Run: which crictl
	I0806 08:38:27.280979   79219 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:38:27.281037   79219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:38:27.326292   79219 cri.go:89] found id: "9e1c91bc3a920b6f7db4477b8219f13fa092c68ff53abe2390325a68dad253e5"
	I0806 08:38:27.326312   79219 cri.go:89] found id: ""
	I0806 08:38:27.326319   79219 logs.go:276] 1 containers: [9e1c91bc3a920b6f7db4477b8219f13fa092c68ff53abe2390325a68dad253e5]
	I0806 08:38:27.326365   79219 ssh_runner.go:195] Run: which crictl
	I0806 08:38:27.331249   79219 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:38:27.331326   79219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:38:27.374229   79219 cri.go:89] found id: "a25c5660a3a2b7226bf308b44307a3912efa7a9a73e79ec63f83133b80a44683"
	I0806 08:38:27.374250   79219 cri.go:89] found id: ""
	I0806 08:38:27.374257   79219 logs.go:276] 1 containers: [a25c5660a3a2b7226bf308b44307a3912efa7a9a73e79ec63f83133b80a44683]
	I0806 08:38:27.374300   79219 ssh_runner.go:195] Run: which crictl
	I0806 08:38:27.378875   79219 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:38:27.378957   79219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:38:27.416159   79219 cri.go:89] found id: "612fc6a8f2c1208097a6d6287c5fecb20026608a278af5d0d4efc9662e065eb9"
	I0806 08:38:27.416183   79219 cri.go:89] found id: ""
	I0806 08:38:27.416192   79219 logs.go:276] 1 containers: [612fc6a8f2c1208097a6d6287c5fecb20026608a278af5d0d4efc9662e065eb9]
	I0806 08:38:27.416243   79219 ssh_runner.go:195] Run: which crictl
	I0806 08:38:27.420463   79219 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:38:27.420531   79219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:38:27.461673   79219 cri.go:89] found id: "d8651a7ed6615e826b950eab003841c2ec872b48160f207e77dd32a684125a23"
	I0806 08:38:27.461700   79219 cri.go:89] found id: ""
	I0806 08:38:27.461710   79219 logs.go:276] 1 containers: [d8651a7ed6615e826b950eab003841c2ec872b48160f207e77dd32a684125a23]
	I0806 08:38:27.461762   79219 ssh_runner.go:195] Run: which crictl
	I0806 08:38:27.466404   79219 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:38:27.466474   79219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:38:27.507684   79219 cri.go:89] found id: ""
	I0806 08:38:27.507708   79219 logs.go:276] 0 containers: []
	W0806 08:38:27.507716   79219 logs.go:278] No container was found matching "kindnet"
	I0806 08:38:27.507722   79219 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0806 08:38:27.507787   79219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0806 08:38:27.547812   79219 cri.go:89] found id: "367eb6487187a37acafb1fa74364a2d9ec5d9241888ed2e871fb14f2af161975"
	I0806 08:38:27.547844   79219 cri.go:89] found id: "78abe2efa92d2d6c9171ed65bd4d941693be3207554f9318a7cefb91544eeb20"
	I0806 08:38:27.547850   79219 cri.go:89] found id: ""
	I0806 08:38:27.547859   79219 logs.go:276] 2 containers: [367eb6487187a37acafb1fa74364a2d9ec5d9241888ed2e871fb14f2af161975 78abe2efa92d2d6c9171ed65bd4d941693be3207554f9318a7cefb91544eeb20]
	I0806 08:38:27.547929   79219 ssh_runner.go:195] Run: which crictl
	I0806 08:38:27.552639   79219 ssh_runner.go:195] Run: which crictl
	I0806 08:38:27.556993   79219 logs.go:123] Gathering logs for kube-proxy [612fc6a8f2c1208097a6d6287c5fecb20026608a278af5d0d4efc9662e065eb9] ...
	I0806 08:38:27.557022   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 612fc6a8f2c1208097a6d6287c5fecb20026608a278af5d0d4efc9662e065eb9"
	I0806 08:38:27.599317   79219 logs.go:123] Gathering logs for storage-provisioner [78abe2efa92d2d6c9171ed65bd4d941693be3207554f9318a7cefb91544eeb20] ...
	I0806 08:38:27.599345   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 78abe2efa92d2d6c9171ed65bd4d941693be3207554f9318a7cefb91544eeb20"
	I0806 08:38:27.643812   79219 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:38:27.643845   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:38:28.119873   79219 logs.go:123] Gathering logs for container status ...
	I0806 08:38:28.119909   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:38:28.160698   79219 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:38:28.160744   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 08:38:28.275895   79219 logs.go:123] Gathering logs for coredns [9e1c91bc3a920b6f7db4477b8219f13fa092c68ff53abe2390325a68dad253e5] ...
	I0806 08:38:28.275927   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e1c91bc3a920b6f7db4477b8219f13fa092c68ff53abe2390325a68dad253e5"
	I0806 08:38:28.324525   79219 logs.go:123] Gathering logs for kube-scheduler [a25c5660a3a2b7226bf308b44307a3912efa7a9a73e79ec63f83133b80a44683] ...
	I0806 08:38:28.324559   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a25c5660a3a2b7226bf308b44307a3912efa7a9a73e79ec63f83133b80a44683"
	I0806 08:38:28.372860   79219 logs.go:123] Gathering logs for etcd [df2efdb62a4af10f89a1b37599d4258ac3b72660353795f8c2141f3ef70c1f65] ...
	I0806 08:38:28.372891   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 df2efdb62a4af10f89a1b37599d4258ac3b72660353795f8c2141f3ef70c1f65"
	I0806 08:38:28.438909   79219 logs.go:123] Gathering logs for kube-controller-manager [d8651a7ed6615e826b950eab003841c2ec872b48160f207e77dd32a684125a23] ...
	I0806 08:38:28.438943   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8651a7ed6615e826b950eab003841c2ec872b48160f207e77dd32a684125a23"
	I0806 08:38:28.514238   79219 logs.go:123] Gathering logs for storage-provisioner [367eb6487187a37acafb1fa74364a2d9ec5d9241888ed2e871fb14f2af161975] ...
	I0806 08:38:28.514279   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 367eb6487187a37acafb1fa74364a2d9ec5d9241888ed2e871fb14f2af161975"
	I0806 08:38:28.552420   79219 logs.go:123] Gathering logs for kubelet ...
	I0806 08:38:28.552451   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:38:28.605414   79219 logs.go:123] Gathering logs for dmesg ...
	I0806 08:38:28.605452   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:38:28.621364   79219 logs.go:123] Gathering logs for kube-apiserver [f7edb6bece3347d88a263663a1d6549165dafdcb67ad723494b937766cba6da6] ...
	I0806 08:38:28.621404   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f7edb6bece3347d88a263663a1d6549165dafdcb67ad723494b937766cba6da6"
	I0806 08:38:27.469521   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:29.967734   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:28.812203   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:38:28.812237   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:38:31.327103   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:38:31.340745   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:38:31.340826   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:38:31.382470   79927 cri.go:89] found id: ""
	I0806 08:38:31.382503   79927 logs.go:276] 0 containers: []
	W0806 08:38:31.382515   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:38:31.382523   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:38:31.382593   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:38:31.423319   79927 cri.go:89] found id: ""
	I0806 08:38:31.423343   79927 logs.go:276] 0 containers: []
	W0806 08:38:31.423353   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:38:31.423359   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:38:31.423423   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:38:31.463073   79927 cri.go:89] found id: ""
	I0806 08:38:31.463099   79927 logs.go:276] 0 containers: []
	W0806 08:38:31.463108   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:38:31.463115   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:38:31.463165   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:38:31.499169   79927 cri.go:89] found id: ""
	I0806 08:38:31.499209   79927 logs.go:276] 0 containers: []
	W0806 08:38:31.499220   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:38:31.499228   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:38:31.499284   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:38:31.535030   79927 cri.go:89] found id: ""
	I0806 08:38:31.535060   79927 logs.go:276] 0 containers: []
	W0806 08:38:31.535071   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:38:31.535079   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:38:31.535145   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:38:31.575872   79927 cri.go:89] found id: ""
	I0806 08:38:31.575894   79927 logs.go:276] 0 containers: []
	W0806 08:38:31.575902   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:38:31.575907   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:38:31.575952   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:38:31.614766   79927 cri.go:89] found id: ""
	I0806 08:38:31.614787   79927 logs.go:276] 0 containers: []
	W0806 08:38:31.614797   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:38:31.614804   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:38:31.614860   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:38:31.655929   79927 cri.go:89] found id: ""
	I0806 08:38:31.655954   79927 logs.go:276] 0 containers: []
	W0806 08:38:31.655964   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:38:31.655992   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:38:31.656009   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:38:31.728563   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:38:31.728611   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:38:31.745445   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:38:31.745479   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:38:31.827965   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:38:31.827998   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:38:31.828014   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:38:31.935051   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:38:31.935093   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:38:30.788021   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:32.788768   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:31.179830   79219 api_server.go:253] Checking apiserver healthz at https://192.168.39.190:8443/healthz ...
	I0806 08:38:31.184262   79219 api_server.go:279] https://192.168.39.190:8443/healthz returned 200:
	ok
	I0806 08:38:31.185245   79219 api_server.go:141] control plane version: v1.30.3
	I0806 08:38:31.185267   79219 api_server.go:131] duration metric: took 3.997664057s to wait for apiserver health ...
	I0806 08:38:31.185274   79219 system_pods.go:43] waiting for kube-system pods to appear ...
	I0806 08:38:31.185297   79219 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:38:31.185339   79219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:38:31.222960   79219 cri.go:89] found id: "f7edb6bece3347d88a263663a1d6549165dafdcb67ad723494b937766cba6da6"
	I0806 08:38:31.222986   79219 cri.go:89] found id: ""
	I0806 08:38:31.222993   79219 logs.go:276] 1 containers: [f7edb6bece3347d88a263663a1d6549165dafdcb67ad723494b937766cba6da6]
	I0806 08:38:31.223042   79219 ssh_runner.go:195] Run: which crictl
	I0806 08:38:31.227448   79219 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:38:31.227514   79219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:38:31.265326   79219 cri.go:89] found id: "df2efdb62a4af10f89a1b37599d4258ac3b72660353795f8c2141f3ef70c1f65"
	I0806 08:38:31.265349   79219 cri.go:89] found id: ""
	I0806 08:38:31.265356   79219 logs.go:276] 1 containers: [df2efdb62a4af10f89a1b37599d4258ac3b72660353795f8c2141f3ef70c1f65]
	I0806 08:38:31.265402   79219 ssh_runner.go:195] Run: which crictl
	I0806 08:38:31.269672   79219 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:38:31.269732   79219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:38:31.310202   79219 cri.go:89] found id: "9e1c91bc3a920b6f7db4477b8219f13fa092c68ff53abe2390325a68dad253e5"
	I0806 08:38:31.310228   79219 cri.go:89] found id: ""
	I0806 08:38:31.310237   79219 logs.go:276] 1 containers: [9e1c91bc3a920b6f7db4477b8219f13fa092c68ff53abe2390325a68dad253e5]
	I0806 08:38:31.310283   79219 ssh_runner.go:195] Run: which crictl
	I0806 08:38:31.314486   79219 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:38:31.314540   79219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:38:31.352612   79219 cri.go:89] found id: "a25c5660a3a2b7226bf308b44307a3912efa7a9a73e79ec63f83133b80a44683"
	I0806 08:38:31.352637   79219 cri.go:89] found id: ""
	I0806 08:38:31.352647   79219 logs.go:276] 1 containers: [a25c5660a3a2b7226bf308b44307a3912efa7a9a73e79ec63f83133b80a44683]
	I0806 08:38:31.352707   79219 ssh_runner.go:195] Run: which crictl
	I0806 08:38:31.358230   79219 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:38:31.358300   79219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:38:31.402435   79219 cri.go:89] found id: "612fc6a8f2c1208097a6d6287c5fecb20026608a278af5d0d4efc9662e065eb9"
	I0806 08:38:31.402462   79219 cri.go:89] found id: ""
	I0806 08:38:31.402472   79219 logs.go:276] 1 containers: [612fc6a8f2c1208097a6d6287c5fecb20026608a278af5d0d4efc9662e065eb9]
	I0806 08:38:31.402530   79219 ssh_runner.go:195] Run: which crictl
	I0806 08:38:31.407723   79219 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:38:31.407793   79219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:38:31.455484   79219 cri.go:89] found id: "d8651a7ed6615e826b950eab003841c2ec872b48160f207e77dd32a684125a23"
	I0806 08:38:31.455511   79219 cri.go:89] found id: ""
	I0806 08:38:31.455520   79219 logs.go:276] 1 containers: [d8651a7ed6615e826b950eab003841c2ec872b48160f207e77dd32a684125a23]
	I0806 08:38:31.455582   79219 ssh_runner.go:195] Run: which crictl
	I0806 08:38:31.461155   79219 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:38:31.461231   79219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:38:31.501317   79219 cri.go:89] found id: ""
	I0806 08:38:31.501342   79219 logs.go:276] 0 containers: []
	W0806 08:38:31.501352   79219 logs.go:278] No container was found matching "kindnet"
	I0806 08:38:31.501359   79219 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0806 08:38:31.501418   79219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0806 08:38:31.552712   79219 cri.go:89] found id: "367eb6487187a37acafb1fa74364a2d9ec5d9241888ed2e871fb14f2af161975"
	I0806 08:38:31.552742   79219 cri.go:89] found id: "78abe2efa92d2d6c9171ed65bd4d941693be3207554f9318a7cefb91544eeb20"
	I0806 08:38:31.552748   79219 cri.go:89] found id: ""
	I0806 08:38:31.552758   79219 logs.go:276] 2 containers: [367eb6487187a37acafb1fa74364a2d9ec5d9241888ed2e871fb14f2af161975 78abe2efa92d2d6c9171ed65bd4d941693be3207554f9318a7cefb91544eeb20]
	I0806 08:38:31.552820   79219 ssh_runner.go:195] Run: which crictl
	I0806 08:38:31.561505   79219 ssh_runner.go:195] Run: which crictl
	I0806 08:38:31.566613   79219 logs.go:123] Gathering logs for storage-provisioner [367eb6487187a37acafb1fa74364a2d9ec5d9241888ed2e871fb14f2af161975] ...
	I0806 08:38:31.566640   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 367eb6487187a37acafb1fa74364a2d9ec5d9241888ed2e871fb14f2af161975"
	I0806 08:38:31.613033   79219 logs.go:123] Gathering logs for storage-provisioner [78abe2efa92d2d6c9171ed65bd4d941693be3207554f9318a7cefb91544eeb20] ...
	I0806 08:38:31.613073   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 78abe2efa92d2d6c9171ed65bd4d941693be3207554f9318a7cefb91544eeb20"
	I0806 08:38:31.655353   79219 logs.go:123] Gathering logs for kube-scheduler [a25c5660a3a2b7226bf308b44307a3912efa7a9a73e79ec63f83133b80a44683] ...
	I0806 08:38:31.655389   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a25c5660a3a2b7226bf308b44307a3912efa7a9a73e79ec63f83133b80a44683"
	I0806 08:38:31.703934   79219 logs.go:123] Gathering logs for kube-controller-manager [d8651a7ed6615e826b950eab003841c2ec872b48160f207e77dd32a684125a23] ...
	I0806 08:38:31.703960   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8651a7ed6615e826b950eab003841c2ec872b48160f207e77dd32a684125a23"
	I0806 08:38:31.768880   79219 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:38:31.768916   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 08:38:31.896045   79219 logs.go:123] Gathering logs for kube-apiserver [f7edb6bece3347d88a263663a1d6549165dafdcb67ad723494b937766cba6da6] ...
	I0806 08:38:31.896079   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f7edb6bece3347d88a263663a1d6549165dafdcb67ad723494b937766cba6da6"
	I0806 08:38:31.966646   79219 logs.go:123] Gathering logs for etcd [df2efdb62a4af10f89a1b37599d4258ac3b72660353795f8c2141f3ef70c1f65] ...
	I0806 08:38:31.966682   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 df2efdb62a4af10f89a1b37599d4258ac3b72660353795f8c2141f3ef70c1f65"
	I0806 08:38:32.029471   79219 logs.go:123] Gathering logs for coredns [9e1c91bc3a920b6f7db4477b8219f13fa092c68ff53abe2390325a68dad253e5] ...
	I0806 08:38:32.029509   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e1c91bc3a920b6f7db4477b8219f13fa092c68ff53abe2390325a68dad253e5"
	I0806 08:38:32.071406   79219 logs.go:123] Gathering logs for kube-proxy [612fc6a8f2c1208097a6d6287c5fecb20026608a278af5d0d4efc9662e065eb9] ...
	I0806 08:38:32.071435   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 612fc6a8f2c1208097a6d6287c5fecb20026608a278af5d0d4efc9662e065eb9"
	I0806 08:38:32.122333   79219 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:38:32.122368   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:38:32.478494   79219 logs.go:123] Gathering logs for kubelet ...
	I0806 08:38:32.478540   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:38:32.530649   79219 logs.go:123] Gathering logs for dmesg ...
	I0806 08:38:32.530696   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:38:32.545866   79219 logs.go:123] Gathering logs for container status ...
	I0806 08:38:32.545895   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:38:35.101370   79219 system_pods.go:59] 8 kube-system pods found
	I0806 08:38:35.101397   79219 system_pods.go:61] "coredns-7db6d8ff4d-4xxwm" [eeb47dea-32e9-458f-9ad9-2af6d4e68cc0] Running
	I0806 08:38:35.101402   79219 system_pods.go:61] "etcd-embed-certs-324808" [1e0a82cd-6704-4da3-8992-15c68a7aaf70] Running
	I0806 08:38:35.101405   79219 system_pods.go:61] "kube-apiserver-embed-certs-324808" [a33c54d9-1fc5-4243-8fe9-d535c40908c3] Running
	I0806 08:38:35.101409   79219 system_pods.go:61] "kube-controller-manager-embed-certs-324808" [b1c519b8-9c96-4c87-90f3-cc277cfe7763] Running
	I0806 08:38:35.101412   79219 system_pods.go:61] "kube-proxy-8lbzv" [0c45e6c0-9b7f-4c48-bff5-38d4a5949bec] Running
	I0806 08:38:35.101415   79219 system_pods.go:61] "kube-scheduler-embed-certs-324808" [78c2827f-341a-4441-bf7e-eff947fc3ea8] Running
	I0806 08:38:35.101421   79219 system_pods.go:61] "metrics-server-569cc877fc-8xb9k" [7a5fb326-11a7-48fa-bcde-a22026a1dccb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0806 08:38:35.101424   79219 system_pods.go:61] "storage-provisioner" [15c7d89a-e994-4cca-931c-f646157a15e8] Running
	I0806 08:38:35.101431   79219 system_pods.go:74] duration metric: took 3.916151772s to wait for pod list to return data ...
	I0806 08:38:35.101438   79219 default_sa.go:34] waiting for default service account to be created ...
	I0806 08:38:35.103518   79219 default_sa.go:45] found service account: "default"
	I0806 08:38:35.103536   79219 default_sa.go:55] duration metric: took 2.092582ms for default service account to be created ...
	I0806 08:38:35.103546   79219 system_pods.go:116] waiting for k8s-apps to be running ...
	I0806 08:38:35.108383   79219 system_pods.go:86] 8 kube-system pods found
	I0806 08:38:35.108409   79219 system_pods.go:89] "coredns-7db6d8ff4d-4xxwm" [eeb47dea-32e9-458f-9ad9-2af6d4e68cc0] Running
	I0806 08:38:35.108417   79219 system_pods.go:89] "etcd-embed-certs-324808" [1e0a82cd-6704-4da3-8992-15c68a7aaf70] Running
	I0806 08:38:35.108423   79219 system_pods.go:89] "kube-apiserver-embed-certs-324808" [a33c54d9-1fc5-4243-8fe9-d535c40908c3] Running
	I0806 08:38:35.108427   79219 system_pods.go:89] "kube-controller-manager-embed-certs-324808" [b1c519b8-9c96-4c87-90f3-cc277cfe7763] Running
	I0806 08:38:35.108431   79219 system_pods.go:89] "kube-proxy-8lbzv" [0c45e6c0-9b7f-4c48-bff5-38d4a5949bec] Running
	I0806 08:38:35.108436   79219 system_pods.go:89] "kube-scheduler-embed-certs-324808" [78c2827f-341a-4441-bf7e-eff947fc3ea8] Running
	I0806 08:38:35.108442   79219 system_pods.go:89] "metrics-server-569cc877fc-8xb9k" [7a5fb326-11a7-48fa-bcde-a22026a1dccb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0806 08:38:35.108447   79219 system_pods.go:89] "storage-provisioner" [15c7d89a-e994-4cca-931c-f646157a15e8] Running
	I0806 08:38:35.108453   79219 system_pods.go:126] duration metric: took 4.901954ms to wait for k8s-apps to be running ...
	I0806 08:38:35.108463   79219 system_svc.go:44] waiting for kubelet service to be running ....
	I0806 08:38:35.108505   79219 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0806 08:38:35.126018   79219 system_svc.go:56] duration metric: took 17.549117ms WaitForService to wait for kubelet
	I0806 08:38:35.126045   79219 kubeadm.go:582] duration metric: took 4m23.224415251s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0806 08:38:35.126065   79219 node_conditions.go:102] verifying NodePressure condition ...
	I0806 08:38:35.129214   79219 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0806 08:38:35.129231   79219 node_conditions.go:123] node cpu capacity is 2
	I0806 08:38:35.129241   79219 node_conditions.go:105] duration metric: took 3.172893ms to run NodePressure ...
	I0806 08:38:35.129252   79219 start.go:241] waiting for startup goroutines ...
	I0806 08:38:35.129259   79219 start.go:246] waiting for cluster config update ...
	I0806 08:38:35.129273   79219 start.go:255] writing updated cluster config ...
	I0806 08:38:35.129540   79219 ssh_runner.go:195] Run: rm -f paused
	I0806 08:38:35.178651   79219 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0806 08:38:35.180921   79219 out.go:177] * Done! kubectl is now configured to use "embed-certs-324808" cluster and "default" namespace by default
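	(Editor's note) Before declaring the embed-certs-324808 start complete, the 79219 run polls the apiserver's /healthz endpoint until it answers 200 with body "ok" (see the 08:38:31.18 lines above). The Go sketch below illustrates such a poll under stated assumptions — it is not minikube's api_server.go; the URL is copied from the log, and the timeout and interval are placeholders. TLS verification is skipped only because the sketch does not load the cluster CA.

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForHealthz polls url until it returns HTTP 200 with body "ok",
	// or the deadline passes — mirroring the "Checking apiserver healthz at
	// ... returned 200: ok" lines in the log above.
	func waitForHealthz(url string, deadline time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only: no cluster CA loaded
			},
		}
		stop := time.Now().Add(deadline)
		for time.Now().Before(stop) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK && string(body) == "ok" {
					return nil
				}
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("apiserver at %s not healthy within %s", url, deadline)
	}

	func main() {
		// Endpoint taken from the log above; adjust for your cluster.
		if err := waitForHealthz("https://192.168.39.190:8443/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}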
	I0806 08:38:31.968170   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:34.468911   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
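	(Editor's note) The pod_ready.go lines interleaved through this run keep reporting that the metrics-server pod's "Ready" condition is "False" until a 4-minute budget is exhausted. As an illustration only — not minikube's pod_ready.go, and using kubectl JSONPath instead of a client-go watch; the kubeconfig path is a placeholder and the pod name is copied from the log — the same condition can be read like this:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// podReady asks kubectl for the pod's "Ready" condition status, mirroring
	// the pod_ready.go `has status "Ready":"False"` lines above.
	func podReady(kubeconfig, namespace, pod string) (bool, error) {
		out, err := exec.Command("kubectl", "--kubeconfig", kubeconfig,
			"-n", namespace, "get", "pod", pod,
			"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
		if err != nil {
			return false, err
		}
		return strings.TrimSpace(string(out)) == "True", nil
	}

	func main() {
		// Placeholder kubeconfig path; the namespace and pod name come from the log.
		ready, err := podReady("/root/.kube/config", "kube-system", "metrics-server-6867b74b74-qz4vx")
		if err != nil {
			fmt.Println("check failed:", err)
			return
		}
		fmt.Println("Ready:", ready)
	}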
	I0806 08:38:34.478890   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:38:34.493573   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:38:34.493650   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:38:34.529054   79927 cri.go:89] found id: ""
	I0806 08:38:34.529083   79927 logs.go:276] 0 containers: []
	W0806 08:38:34.529094   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:38:34.529101   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:38:34.529171   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:38:34.570214   79927 cri.go:89] found id: ""
	I0806 08:38:34.570245   79927 logs.go:276] 0 containers: []
	W0806 08:38:34.570256   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:38:34.570264   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:38:34.570325   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:38:34.607198   79927 cri.go:89] found id: ""
	I0806 08:38:34.607232   79927 logs.go:276] 0 containers: []
	W0806 08:38:34.607243   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:38:34.607250   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:38:34.607296   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:38:34.646238   79927 cri.go:89] found id: ""
	I0806 08:38:34.646266   79927 logs.go:276] 0 containers: []
	W0806 08:38:34.646278   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:38:34.646287   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:38:34.646354   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:38:34.682137   79927 cri.go:89] found id: ""
	I0806 08:38:34.682161   79927 logs.go:276] 0 containers: []
	W0806 08:38:34.682169   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:38:34.682175   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:38:34.682227   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:38:34.721211   79927 cri.go:89] found id: ""
	I0806 08:38:34.721238   79927 logs.go:276] 0 containers: []
	W0806 08:38:34.721246   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:38:34.721252   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:38:34.721306   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:38:34.759693   79927 cri.go:89] found id: ""
	I0806 08:38:34.759728   79927 logs.go:276] 0 containers: []
	W0806 08:38:34.759737   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:38:34.759743   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:38:34.759793   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:38:34.799224   79927 cri.go:89] found id: ""
	I0806 08:38:34.799246   79927 logs.go:276] 0 containers: []
	W0806 08:38:34.799254   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:38:34.799262   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:38:34.799273   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:38:34.852682   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:38:34.852727   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:38:34.866456   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:38:34.866494   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:38:34.933029   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:38:34.933055   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:38:34.933076   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:38:35.017215   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:38:35.017253   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:38:37.567662   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:38:37.582220   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:38:37.582284   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:38:37.622112   79927 cri.go:89] found id: ""
	I0806 08:38:37.622145   79927 logs.go:276] 0 containers: []
	W0806 08:38:37.622153   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:38:37.622164   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:38:37.622228   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:38:37.657767   79927 cri.go:89] found id: ""
	I0806 08:38:37.657791   79927 logs.go:276] 0 containers: []
	W0806 08:38:37.657798   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:38:37.657804   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:38:37.657849   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:38:37.691442   79927 cri.go:89] found id: ""
	I0806 08:38:37.691471   79927 logs.go:276] 0 containers: []
	W0806 08:38:37.691480   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:38:37.691485   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:38:37.691539   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:38:37.725810   79927 cri.go:89] found id: ""
	I0806 08:38:37.725837   79927 logs.go:276] 0 containers: []
	W0806 08:38:37.725859   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:38:37.725866   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:38:37.725922   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:38:37.764708   79927 cri.go:89] found id: ""
	I0806 08:38:37.764737   79927 logs.go:276] 0 containers: []
	W0806 08:38:37.764748   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:38:37.764755   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:38:37.764818   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:38:37.802915   79927 cri.go:89] found id: ""
	I0806 08:38:37.802939   79927 logs.go:276] 0 containers: []
	W0806 08:38:37.802947   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:38:37.802953   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:38:37.803008   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:38:37.838492   79927 cri.go:89] found id: ""
	I0806 08:38:37.838525   79927 logs.go:276] 0 containers: []
	W0806 08:38:37.838536   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:38:37.838544   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:38:37.838607   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:38:37.873937   79927 cri.go:89] found id: ""
	I0806 08:38:37.873969   79927 logs.go:276] 0 containers: []
	W0806 08:38:37.873979   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:38:37.873990   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:38:37.874005   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:38:37.912169   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:38:37.912193   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:38:37.968269   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:38:37.968305   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:38:37.983414   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:38:37.983440   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:38:38.055031   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:38:38.055049   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:38:38.055062   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:38:35.289081   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:37.789220   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:36.968687   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:39.468051   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:41.468488   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:40.635449   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:38:40.649962   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:38:40.650045   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:38:40.688975   79927 cri.go:89] found id: ""
	I0806 08:38:40.689006   79927 logs.go:276] 0 containers: []
	W0806 08:38:40.689017   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:38:40.689029   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:38:40.689093   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:38:40.726740   79927 cri.go:89] found id: ""
	I0806 08:38:40.726770   79927 logs.go:276] 0 containers: []
	W0806 08:38:40.726780   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:38:40.726788   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:38:40.726886   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:38:40.767402   79927 cri.go:89] found id: ""
	I0806 08:38:40.767432   79927 logs.go:276] 0 containers: []
	W0806 08:38:40.767445   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:38:40.767453   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:38:40.767510   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:38:40.809994   79927 cri.go:89] found id: ""
	I0806 08:38:40.810023   79927 logs.go:276] 0 containers: []
	W0806 08:38:40.810033   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:38:40.810042   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:38:40.810102   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:38:40.845624   79927 cri.go:89] found id: ""
	I0806 08:38:40.845653   79927 logs.go:276] 0 containers: []
	W0806 08:38:40.845662   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:38:40.845668   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:38:40.845715   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:38:40.883830   79927 cri.go:89] found id: ""
	I0806 08:38:40.883856   79927 logs.go:276] 0 containers: []
	W0806 08:38:40.883871   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:38:40.883879   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:38:40.883937   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:38:40.919930   79927 cri.go:89] found id: ""
	I0806 08:38:40.919958   79927 logs.go:276] 0 containers: []
	W0806 08:38:40.919966   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:38:40.919971   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:38:40.920018   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:38:40.954563   79927 cri.go:89] found id: ""
	I0806 08:38:40.954594   79927 logs.go:276] 0 containers: []
	W0806 08:38:40.954605   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:38:40.954616   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:38:40.954629   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:38:41.008780   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:38:41.008813   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:38:41.022292   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:38:41.022321   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:38:41.090241   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:38:41.090273   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:38:41.090294   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:38:41.171739   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:38:41.171779   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:38:43.712410   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:38:43.726676   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:38:43.726747   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:38:40.288192   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:42.289039   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:42.789083   79614 pod_ready.go:81] duration metric: took 4m0.00771712s for pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace to be "Ready" ...
	E0806 08:38:42.789107   79614 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0806 08:38:42.789117   79614 pod_ready.go:38] duration metric: took 4m5.553596655s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0806 08:38:42.789135   79614 api_server.go:52] waiting for apiserver process to appear ...
	I0806 08:38:42.789163   79614 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:38:42.789215   79614 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:38:42.837092   79614 cri.go:89] found id: "49843be03d16addb301ffd26e517770ff29b00f049efb524a2139b8b2153325c"
	I0806 08:38:42.837119   79614 cri.go:89] found id: ""
	I0806 08:38:42.837128   79614 logs.go:276] 1 containers: [49843be03d16addb301ffd26e517770ff29b00f049efb524a2139b8b2153325c]
	I0806 08:38:42.837188   79614 ssh_runner.go:195] Run: which crictl
	I0806 08:38:42.842270   79614 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:38:42.842334   79614 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:38:42.878018   79614 cri.go:89] found id: "b5fa6fe8f1db6b59f47ea637515efb2864e0408ce1df0e6851b3bda0e240f5d3"
	I0806 08:38:42.878044   79614 cri.go:89] found id: ""
	I0806 08:38:42.878053   79614 logs.go:276] 1 containers: [b5fa6fe8f1db6b59f47ea637515efb2864e0408ce1df0e6851b3bda0e240f5d3]
	I0806 08:38:42.878115   79614 ssh_runner.go:195] Run: which crictl
	I0806 08:38:42.882437   79614 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:38:42.882509   79614 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:38:42.925635   79614 cri.go:89] found id: "59f63eeae644f7c8944cdb7b1d05d9e49fe84eb28e9ccbad2d2dd79846d7eb52"
	I0806 08:38:42.925655   79614 cri.go:89] found id: ""
	I0806 08:38:42.925662   79614 logs.go:276] 1 containers: [59f63eeae644f7c8944cdb7b1d05d9e49fe84eb28e9ccbad2d2dd79846d7eb52]
	I0806 08:38:42.925704   79614 ssh_runner.go:195] Run: which crictl
	I0806 08:38:42.930525   79614 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:38:42.930583   79614 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:38:42.972716   79614 cri.go:89] found id: "e596abcad3dc13d27548701f515beebba051e04a9e20f8ce1d018731d880ff1c"
	I0806 08:38:42.972736   79614 cri.go:89] found id: ""
	I0806 08:38:42.972744   79614 logs.go:276] 1 containers: [e596abcad3dc13d27548701f515beebba051e04a9e20f8ce1d018731d880ff1c]
	I0806 08:38:42.972791   79614 ssh_runner.go:195] Run: which crictl
	I0806 08:38:42.977840   79614 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:38:42.977894   79614 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:38:43.017316   79614 cri.go:89] found id: "75990046864b4a4db9f8c02f44de4eefbb0cf45584880b053bce4ca0b677ce4e"
	I0806 08:38:43.017339   79614 cri.go:89] found id: ""
	I0806 08:38:43.017346   79614 logs.go:276] 1 containers: [75990046864b4a4db9f8c02f44de4eefbb0cf45584880b053bce4ca0b677ce4e]
	I0806 08:38:43.017396   79614 ssh_runner.go:195] Run: which crictl
	I0806 08:38:43.022341   79614 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:38:43.022416   79614 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:38:43.067113   79614 cri.go:89] found id: "05bbffa4c989e42bd020ebe6b943459ec1d7b31a6c5e5ba9a37056da15542658"
	I0806 08:38:43.067145   79614 cri.go:89] found id: ""
	I0806 08:38:43.067155   79614 logs.go:276] 1 containers: [05bbffa4c989e42bd020ebe6b943459ec1d7b31a6c5e5ba9a37056da15542658]
	I0806 08:38:43.067211   79614 ssh_runner.go:195] Run: which crictl
	I0806 08:38:43.071674   79614 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:38:43.071748   79614 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:38:43.114123   79614 cri.go:89] found id: ""
	I0806 08:38:43.114146   79614 logs.go:276] 0 containers: []
	W0806 08:38:43.114154   79614 logs.go:278] No container was found matching "kindnet"
	I0806 08:38:43.114159   79614 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0806 08:38:43.114211   79614 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0806 08:38:43.155927   79614 cri.go:89] found id: "e668e4a5fa59e279bc227b6110983cc4660b639d5b1e1291ae60d361f682d82e"
	I0806 08:38:43.155955   79614 cri.go:89] found id: "abd677ac30faba3fef9e9d562601ad0012d7f1b48852e08b8f2684efda6b0f2e"
	I0806 08:38:43.155961   79614 cri.go:89] found id: ""
	I0806 08:38:43.155969   79614 logs.go:276] 2 containers: [e668e4a5fa59e279bc227b6110983cc4660b639d5b1e1291ae60d361f682d82e abd677ac30faba3fef9e9d562601ad0012d7f1b48852e08b8f2684efda6b0f2e]
	I0806 08:38:43.156017   79614 ssh_runner.go:195] Run: which crictl
	I0806 08:38:43.165735   79614 ssh_runner.go:195] Run: which crictl
	I0806 08:38:43.171834   79614 logs.go:123] Gathering logs for kubelet ...
	I0806 08:38:43.171861   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:38:43.232588   79614 logs.go:123] Gathering logs for dmesg ...
	I0806 08:38:43.232628   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:38:43.248033   79614 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:38:43.248080   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 08:38:43.383729   79614 logs.go:123] Gathering logs for etcd [b5fa6fe8f1db6b59f47ea637515efb2864e0408ce1df0e6851b3bda0e240f5d3] ...
	I0806 08:38:43.383760   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b5fa6fe8f1db6b59f47ea637515efb2864e0408ce1df0e6851b3bda0e240f5d3"
	I0806 08:38:43.428082   79614 logs.go:123] Gathering logs for coredns [59f63eeae644f7c8944cdb7b1d05d9e49fe84eb28e9ccbad2d2dd79846d7eb52] ...
	I0806 08:38:43.428116   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 59f63eeae644f7c8944cdb7b1d05d9e49fe84eb28e9ccbad2d2dd79846d7eb52"
	I0806 08:38:43.467054   79614 logs.go:123] Gathering logs for storage-provisioner [e668e4a5fa59e279bc227b6110983cc4660b639d5b1e1291ae60d361f682d82e] ...
	I0806 08:38:43.467081   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e668e4a5fa59e279bc227b6110983cc4660b639d5b1e1291ae60d361f682d82e"
	I0806 08:38:43.502900   79614 logs.go:123] Gathering logs for storage-provisioner [abd677ac30faba3fef9e9d562601ad0012d7f1b48852e08b8f2684efda6b0f2e] ...
	I0806 08:38:43.502928   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 abd677ac30faba3fef9e9d562601ad0012d7f1b48852e08b8f2684efda6b0f2e"
	I0806 08:38:43.539338   79614 logs.go:123] Gathering logs for kube-apiserver [49843be03d16addb301ffd26e517770ff29b00f049efb524a2139b8b2153325c] ...
	I0806 08:38:43.539369   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 49843be03d16addb301ffd26e517770ff29b00f049efb524a2139b8b2153325c"
	I0806 08:38:43.585813   79614 logs.go:123] Gathering logs for kube-scheduler [e596abcad3dc13d27548701f515beebba051e04a9e20f8ce1d018731d880ff1c] ...
	I0806 08:38:43.585855   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e596abcad3dc13d27548701f515beebba051e04a9e20f8ce1d018731d880ff1c"
	I0806 08:38:43.627141   79614 logs.go:123] Gathering logs for kube-proxy [75990046864b4a4db9f8c02f44de4eefbb0cf45584880b053bce4ca0b677ce4e] ...
	I0806 08:38:43.627173   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 75990046864b4a4db9f8c02f44de4eefbb0cf45584880b053bce4ca0b677ce4e"
	I0806 08:38:43.663754   79614 logs.go:123] Gathering logs for kube-controller-manager [05bbffa4c989e42bd020ebe6b943459ec1d7b31a6c5e5ba9a37056da15542658] ...
	I0806 08:38:43.663783   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 05bbffa4c989e42bd020ebe6b943459ec1d7b31a6c5e5ba9a37056da15542658"
	I0806 08:38:43.717594   79614 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:38:43.717631   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:38:43.469134   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:45.967796   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:43.765343   79927 cri.go:89] found id: ""
	I0806 08:38:43.765379   79927 logs.go:276] 0 containers: []
	W0806 08:38:43.765390   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:38:43.765398   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:38:43.765457   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:38:43.803813   79927 cri.go:89] found id: ""
	I0806 08:38:43.803846   79927 logs.go:276] 0 containers: []
	W0806 08:38:43.803859   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:38:43.803868   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:38:43.803933   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:38:43.841307   79927 cri.go:89] found id: ""
	I0806 08:38:43.841339   79927 logs.go:276] 0 containers: []
	W0806 08:38:43.841349   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:38:43.841356   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:38:43.841418   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:38:43.876488   79927 cri.go:89] found id: ""
	I0806 08:38:43.876521   79927 logs.go:276] 0 containers: []
	W0806 08:38:43.876534   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:38:43.876544   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:38:43.876606   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:38:43.910972   79927 cri.go:89] found id: ""
	I0806 08:38:43.911000   79927 logs.go:276] 0 containers: []
	W0806 08:38:43.911010   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:38:43.911017   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:38:43.911077   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:38:43.945541   79927 cri.go:89] found id: ""
	I0806 08:38:43.945573   79927 logs.go:276] 0 containers: []
	W0806 08:38:43.945584   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:38:43.945592   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:38:43.945655   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:38:43.981366   79927 cri.go:89] found id: ""
	I0806 08:38:43.981399   79927 logs.go:276] 0 containers: []
	W0806 08:38:43.981408   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:38:43.981413   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:38:43.981463   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:38:44.015349   79927 cri.go:89] found id: ""
	I0806 08:38:44.015375   79927 logs.go:276] 0 containers: []
	W0806 08:38:44.015384   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:38:44.015394   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:38:44.015408   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:38:44.068089   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:38:44.068129   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:38:44.082496   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:38:44.082524   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:38:44.157765   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:38:44.157788   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:38:44.157805   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:38:44.242842   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:38:44.242885   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:38:46.789280   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:38:46.803490   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:38:46.803562   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:38:46.844164   79927 cri.go:89] found id: ""
	I0806 08:38:46.844192   79927 logs.go:276] 0 containers: []
	W0806 08:38:46.844200   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:38:46.844206   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:38:46.844247   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:38:46.880968   79927 cri.go:89] found id: ""
	I0806 08:38:46.881003   79927 logs.go:276] 0 containers: []
	W0806 08:38:46.881015   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:38:46.881022   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:38:46.881082   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:38:46.919291   79927 cri.go:89] found id: ""
	I0806 08:38:46.919311   79927 logs.go:276] 0 containers: []
	W0806 08:38:46.919319   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:38:46.919324   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:38:46.919369   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:38:46.960591   79927 cri.go:89] found id: ""
	I0806 08:38:46.960616   79927 logs.go:276] 0 containers: []
	W0806 08:38:46.960626   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:38:46.960632   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:38:46.960681   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:38:46.995913   79927 cri.go:89] found id: ""
	I0806 08:38:46.995943   79927 logs.go:276] 0 containers: []
	W0806 08:38:46.995954   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:38:46.995961   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:38:46.996045   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:38:47.035185   79927 cri.go:89] found id: ""
	I0806 08:38:47.035213   79927 logs.go:276] 0 containers: []
	W0806 08:38:47.035225   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:38:47.035236   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:38:47.035292   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:38:47.079605   79927 cri.go:89] found id: ""
	I0806 08:38:47.079634   79927 logs.go:276] 0 containers: []
	W0806 08:38:47.079646   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:38:47.079653   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:38:47.079701   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:38:47.118819   79927 cri.go:89] found id: ""
	I0806 08:38:47.118849   79927 logs.go:276] 0 containers: []
	W0806 08:38:47.118861   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:38:47.118873   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:38:47.118888   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:38:47.181357   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:38:47.181394   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:38:47.197676   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:38:47.197707   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:38:47.270304   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:38:47.270331   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:38:47.270348   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:38:47.354145   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:38:47.354176   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:38:44.291978   79614 logs.go:123] Gathering logs for container status ...
	I0806 08:38:44.292026   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:38:46.843357   79614 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:38:46.860793   79614 api_server.go:72] duration metric: took 4m14.882891742s to wait for apiserver process to appear ...
	I0806 08:38:46.860820   79614 api_server.go:88] waiting for apiserver healthz status ...
	I0806 08:38:46.860858   79614 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:38:46.860923   79614 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:38:46.909044   79614 cri.go:89] found id: "49843be03d16addb301ffd26e517770ff29b00f049efb524a2139b8b2153325c"
	I0806 08:38:46.909068   79614 cri.go:89] found id: ""
	I0806 08:38:46.909076   79614 logs.go:276] 1 containers: [49843be03d16addb301ffd26e517770ff29b00f049efb524a2139b8b2153325c]
	I0806 08:38:46.909134   79614 ssh_runner.go:195] Run: which crictl
	I0806 08:38:46.913443   79614 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:38:46.913522   79614 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:38:46.955246   79614 cri.go:89] found id: "b5fa6fe8f1db6b59f47ea637515efb2864e0408ce1df0e6851b3bda0e240f5d3"
	I0806 08:38:46.955273   79614 cri.go:89] found id: ""
	I0806 08:38:46.955281   79614 logs.go:276] 1 containers: [b5fa6fe8f1db6b59f47ea637515efb2864e0408ce1df0e6851b3bda0e240f5d3]
	I0806 08:38:46.955347   79614 ssh_runner.go:195] Run: which crictl
	I0806 08:38:46.959914   79614 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:38:46.959994   79614 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:38:47.001639   79614 cri.go:89] found id: "59f63eeae644f7c8944cdb7b1d05d9e49fe84eb28e9ccbad2d2dd79846d7eb52"
	I0806 08:38:47.001665   79614 cri.go:89] found id: ""
	I0806 08:38:47.001673   79614 logs.go:276] 1 containers: [59f63eeae644f7c8944cdb7b1d05d9e49fe84eb28e9ccbad2d2dd79846d7eb52]
	I0806 08:38:47.001717   79614 ssh_runner.go:195] Run: which crictl
	I0806 08:38:47.006260   79614 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:38:47.006344   79614 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:38:47.049226   79614 cri.go:89] found id: "e596abcad3dc13d27548701f515beebba051e04a9e20f8ce1d018731d880ff1c"
	I0806 08:38:47.049252   79614 cri.go:89] found id: ""
	I0806 08:38:47.049261   79614 logs.go:276] 1 containers: [e596abcad3dc13d27548701f515beebba051e04a9e20f8ce1d018731d880ff1c]
	I0806 08:38:47.049317   79614 ssh_runner.go:195] Run: which crictl
	I0806 08:38:47.053544   79614 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:38:47.053626   79614 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:38:47.095805   79614 cri.go:89] found id: "75990046864b4a4db9f8c02f44de4eefbb0cf45584880b053bce4ca0b677ce4e"
	I0806 08:38:47.095832   79614 cri.go:89] found id: ""
	I0806 08:38:47.095841   79614 logs.go:276] 1 containers: [75990046864b4a4db9f8c02f44de4eefbb0cf45584880b053bce4ca0b677ce4e]
	I0806 08:38:47.095895   79614 ssh_runner.go:195] Run: which crictl
	I0806 08:38:47.100272   79614 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:38:47.100329   79614 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:38:47.146108   79614 cri.go:89] found id: "05bbffa4c989e42bd020ebe6b943459ec1d7b31a6c5e5ba9a37056da15542658"
	I0806 08:38:47.146132   79614 cri.go:89] found id: ""
	I0806 08:38:47.146142   79614 logs.go:276] 1 containers: [05bbffa4c989e42bd020ebe6b943459ec1d7b31a6c5e5ba9a37056da15542658]
	I0806 08:38:47.146200   79614 ssh_runner.go:195] Run: which crictl
	I0806 08:38:47.150808   79614 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:38:47.150863   79614 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:38:47.194122   79614 cri.go:89] found id: ""
	I0806 08:38:47.194157   79614 logs.go:276] 0 containers: []
	W0806 08:38:47.194169   79614 logs.go:278] No container was found matching "kindnet"
	I0806 08:38:47.194177   79614 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0806 08:38:47.194241   79614 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0806 08:38:47.237872   79614 cri.go:89] found id: "e668e4a5fa59e279bc227b6110983cc4660b639d5b1e1291ae60d361f682d82e"
	I0806 08:38:47.237906   79614 cri.go:89] found id: "abd677ac30faba3fef9e9d562601ad0012d7f1b48852e08b8f2684efda6b0f2e"
	I0806 08:38:47.237915   79614 cri.go:89] found id: ""
	I0806 08:38:47.237925   79614 logs.go:276] 2 containers: [e668e4a5fa59e279bc227b6110983cc4660b639d5b1e1291ae60d361f682d82e abd677ac30faba3fef9e9d562601ad0012d7f1b48852e08b8f2684efda6b0f2e]
	I0806 08:38:47.237988   79614 ssh_runner.go:195] Run: which crictl
	I0806 08:38:47.242528   79614 ssh_runner.go:195] Run: which crictl
	I0806 08:38:47.247317   79614 logs.go:123] Gathering logs for kube-proxy [75990046864b4a4db9f8c02f44de4eefbb0cf45584880b053bce4ca0b677ce4e] ...
	I0806 08:38:47.247341   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 75990046864b4a4db9f8c02f44de4eefbb0cf45584880b053bce4ca0b677ce4e"
	I0806 08:38:47.289612   79614 logs.go:123] Gathering logs for kube-controller-manager [05bbffa4c989e42bd020ebe6b943459ec1d7b31a6c5e5ba9a37056da15542658] ...
	I0806 08:38:47.289644   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 05bbffa4c989e42bd020ebe6b943459ec1d7b31a6c5e5ba9a37056da15542658"
	I0806 08:38:47.345619   79614 logs.go:123] Gathering logs for storage-provisioner [e668e4a5fa59e279bc227b6110983cc4660b639d5b1e1291ae60d361f682d82e] ...
	I0806 08:38:47.345653   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e668e4a5fa59e279bc227b6110983cc4660b639d5b1e1291ae60d361f682d82e"
	I0806 08:38:47.401281   79614 logs.go:123] Gathering logs for storage-provisioner [abd677ac30faba3fef9e9d562601ad0012d7f1b48852e08b8f2684efda6b0f2e] ...
	I0806 08:38:47.401306   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 abd677ac30faba3fef9e9d562601ad0012d7f1b48852e08b8f2684efda6b0f2e"
	I0806 08:38:47.438335   79614 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:38:47.438364   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 08:38:47.537334   79614 logs.go:123] Gathering logs for etcd [b5fa6fe8f1db6b59f47ea637515efb2864e0408ce1df0e6851b3bda0e240f5d3] ...
	I0806 08:38:47.537360   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b5fa6fe8f1db6b59f47ea637515efb2864e0408ce1df0e6851b3bda0e240f5d3"
	I0806 08:38:47.592101   79614 logs.go:123] Gathering logs for coredns [59f63eeae644f7c8944cdb7b1d05d9e49fe84eb28e9ccbad2d2dd79846d7eb52] ...
	I0806 08:38:47.592132   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 59f63eeae644f7c8944cdb7b1d05d9e49fe84eb28e9ccbad2d2dd79846d7eb52"
	I0806 08:38:47.626714   79614 logs.go:123] Gathering logs for kube-scheduler [e596abcad3dc13d27548701f515beebba051e04a9e20f8ce1d018731d880ff1c] ...
	I0806 08:38:47.626738   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e596abcad3dc13d27548701f515beebba051e04a9e20f8ce1d018731d880ff1c"
	I0806 08:38:47.663232   79614 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:38:47.663262   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:38:48.133308   79614 logs.go:123] Gathering logs for kubelet ...
	I0806 08:38:48.133344   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:38:48.189669   79614 logs.go:123] Gathering logs for dmesg ...
	I0806 08:38:48.189709   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:38:48.208530   79614 logs.go:123] Gathering logs for kube-apiserver [49843be03d16addb301ffd26e517770ff29b00f049efb524a2139b8b2153325c] ...
	I0806 08:38:48.208557   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 49843be03d16addb301ffd26e517770ff29b00f049efb524a2139b8b2153325c"
	I0806 08:38:48.264499   79614 logs.go:123] Gathering logs for container status ...
	I0806 08:38:48.264535   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:38:49.901481   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:38:49.914862   79927 kubeadm.go:597] duration metric: took 4m3.80396481s to restartPrimaryControlPlane
	W0806 08:38:49.914926   79927 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0806 08:38:49.914952   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0806 08:38:50.394829   79927 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0806 08:38:50.409425   79927 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0806 08:38:50.419349   79927 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0806 08:38:50.429688   79927 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0806 08:38:50.429711   79927 kubeadm.go:157] found existing configuration files:
	
	I0806 08:38:50.429756   79927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0806 08:38:50.440055   79927 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0806 08:38:50.440126   79927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0806 08:38:50.450734   79927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0806 08:38:50.461860   79927 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0806 08:38:50.461943   79927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0806 08:38:50.473324   79927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0806 08:38:50.483381   79927 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0806 08:38:50.483440   79927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0806 08:38:50.493247   79927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0806 08:38:50.502800   79927 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0806 08:38:50.502861   79927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0806 08:38:50.512897   79927 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0806 08:38:50.585658   79927 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0806 08:38:50.585795   79927 kubeadm.go:310] [preflight] Running pre-flight checks
	I0806 08:38:50.740936   79927 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0806 08:38:50.741079   79927 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0806 08:38:50.741200   79927 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0806 08:38:50.938086   79927 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0806 08:38:48.468228   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:50.968702   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:50.940205   79927 out.go:204]   - Generating certificates and keys ...
	I0806 08:38:50.940315   79927 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0806 08:38:50.940428   79927 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0806 08:38:50.940543   79927 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0806 08:38:50.940621   79927 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0806 08:38:50.940723   79927 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0806 08:38:50.940800   79927 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0806 08:38:50.940881   79927 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0806 08:38:50.941150   79927 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0806 08:38:50.941954   79927 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0806 08:38:50.942765   79927 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0806 08:38:50.942910   79927 kubeadm.go:310] [certs] Using the existing "sa" key
	I0806 08:38:50.942993   79927 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0806 08:38:51.039558   79927 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0806 08:38:51.265243   79927 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0806 08:38:51.348501   79927 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0806 08:38:51.509558   79927 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0806 08:38:51.530725   79927 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0806 08:38:51.534108   79927 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0806 08:38:51.534297   79927 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0806 08:38:51.680149   79927 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0806 08:38:51.682195   79927 out.go:204]   - Booting up control plane ...
	I0806 08:38:51.682331   79927 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0806 08:38:51.693736   79927 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0806 08:38:51.695315   79927 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0806 08:38:51.697107   79927 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0806 08:38:51.700906   79927 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0806 08:38:50.822580   79614 api_server.go:253] Checking apiserver healthz at https://192.168.50.235:8444/healthz ...
	I0806 08:38:50.827121   79614 api_server.go:279] https://192.168.50.235:8444/healthz returned 200:
	ok
	I0806 08:38:50.828327   79614 api_server.go:141] control plane version: v1.30.3
	I0806 08:38:50.828355   79614 api_server.go:131] duration metric: took 3.967524972s to wait for apiserver health ...
	I0806 08:38:50.828381   79614 system_pods.go:43] waiting for kube-system pods to appear ...
	I0806 08:38:50.828406   79614 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:38:50.828460   79614 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:38:50.875518   79614 cri.go:89] found id: "49843be03d16addb301ffd26e517770ff29b00f049efb524a2139b8b2153325c"
	I0806 08:38:50.875545   79614 cri.go:89] found id: ""
	I0806 08:38:50.875555   79614 logs.go:276] 1 containers: [49843be03d16addb301ffd26e517770ff29b00f049efb524a2139b8b2153325c]
	I0806 08:38:50.875611   79614 ssh_runner.go:195] Run: which crictl
	I0806 08:38:50.881260   79614 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:38:50.881321   79614 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:38:50.923252   79614 cri.go:89] found id: "b5fa6fe8f1db6b59f47ea637515efb2864e0408ce1df0e6851b3bda0e240f5d3"
	I0806 08:38:50.923281   79614 cri.go:89] found id: ""
	I0806 08:38:50.923292   79614 logs.go:276] 1 containers: [b5fa6fe8f1db6b59f47ea637515efb2864e0408ce1df0e6851b3bda0e240f5d3]
	I0806 08:38:50.923350   79614 ssh_runner.go:195] Run: which crictl
	I0806 08:38:50.928115   79614 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:38:50.928176   79614 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:38:50.971427   79614 cri.go:89] found id: "59f63eeae644f7c8944cdb7b1d05d9e49fe84eb28e9ccbad2d2dd79846d7eb52"
	I0806 08:38:50.971453   79614 cri.go:89] found id: ""
	I0806 08:38:50.971463   79614 logs.go:276] 1 containers: [59f63eeae644f7c8944cdb7b1d05d9e49fe84eb28e9ccbad2d2dd79846d7eb52]
	I0806 08:38:50.971520   79614 ssh_runner.go:195] Run: which crictl
	I0806 08:38:50.975665   79614 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:38:50.975789   79614 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:38:51.014601   79614 cri.go:89] found id: "e596abcad3dc13d27548701f515beebba051e04a9e20f8ce1d018731d880ff1c"
	I0806 08:38:51.014633   79614 cri.go:89] found id: ""
	I0806 08:38:51.014644   79614 logs.go:276] 1 containers: [e596abcad3dc13d27548701f515beebba051e04a9e20f8ce1d018731d880ff1c]
	I0806 08:38:51.014722   79614 ssh_runner.go:195] Run: which crictl
	I0806 08:38:51.019197   79614 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:38:51.019266   79614 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:38:51.057063   79614 cri.go:89] found id: "75990046864b4a4db9f8c02f44de4eefbb0cf45584880b053bce4ca0b677ce4e"
	I0806 08:38:51.057089   79614 cri.go:89] found id: ""
	I0806 08:38:51.057098   79614 logs.go:276] 1 containers: [75990046864b4a4db9f8c02f44de4eefbb0cf45584880b053bce4ca0b677ce4e]
	I0806 08:38:51.057159   79614 ssh_runner.go:195] Run: which crictl
	I0806 08:38:51.061504   79614 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:38:51.061571   79614 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:38:51.106496   79614 cri.go:89] found id: "05bbffa4c989e42bd020ebe6b943459ec1d7b31a6c5e5ba9a37056da15542658"
	I0806 08:38:51.106523   79614 cri.go:89] found id: ""
	I0806 08:38:51.106532   79614 logs.go:276] 1 containers: [05bbffa4c989e42bd020ebe6b943459ec1d7b31a6c5e5ba9a37056da15542658]
	I0806 08:38:51.106590   79614 ssh_runner.go:195] Run: which crictl
	I0806 08:38:51.111040   79614 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:38:51.111109   79614 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:38:51.156042   79614 cri.go:89] found id: ""
	I0806 08:38:51.156072   79614 logs.go:276] 0 containers: []
	W0806 08:38:51.156083   79614 logs.go:278] No container was found matching "kindnet"
	I0806 08:38:51.156090   79614 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0806 08:38:51.156152   79614 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0806 08:38:51.208712   79614 cri.go:89] found id: "e668e4a5fa59e279bc227b6110983cc4660b639d5b1e1291ae60d361f682d82e"
	I0806 08:38:51.208733   79614 cri.go:89] found id: "abd677ac30faba3fef9e9d562601ad0012d7f1b48852e08b8f2684efda6b0f2e"
	I0806 08:38:51.208737   79614 cri.go:89] found id: ""
	I0806 08:38:51.208744   79614 logs.go:276] 2 containers: [e668e4a5fa59e279bc227b6110983cc4660b639d5b1e1291ae60d361f682d82e abd677ac30faba3fef9e9d562601ad0012d7f1b48852e08b8f2684efda6b0f2e]
	I0806 08:38:51.208796   79614 ssh_runner.go:195] Run: which crictl
	I0806 08:38:51.213508   79614 ssh_runner.go:195] Run: which crictl
	I0806 08:38:51.219082   79614 logs.go:123] Gathering logs for etcd [b5fa6fe8f1db6b59f47ea637515efb2864e0408ce1df0e6851b3bda0e240f5d3] ...
	I0806 08:38:51.219104   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b5fa6fe8f1db6b59f47ea637515efb2864e0408ce1df0e6851b3bda0e240f5d3"
	I0806 08:38:51.266352   79614 logs.go:123] Gathering logs for storage-provisioner [e668e4a5fa59e279bc227b6110983cc4660b639d5b1e1291ae60d361f682d82e] ...
	I0806 08:38:51.266380   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e668e4a5fa59e279bc227b6110983cc4660b639d5b1e1291ae60d361f682d82e"
	I0806 08:38:51.315128   79614 logs.go:123] Gathering logs for storage-provisioner [abd677ac30faba3fef9e9d562601ad0012d7f1b48852e08b8f2684efda6b0f2e] ...
	I0806 08:38:51.315168   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 abd677ac30faba3fef9e9d562601ad0012d7f1b48852e08b8f2684efda6b0f2e"
	I0806 08:38:51.357872   79614 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:38:51.357914   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:38:51.768288   79614 logs.go:123] Gathering logs for container status ...
	I0806 08:38:51.768328   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:38:51.825304   79614 logs.go:123] Gathering logs for kubelet ...
	I0806 08:38:51.825341   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:38:51.888532   79614 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:38:51.888567   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 08:38:51.998271   79614 logs.go:123] Gathering logs for coredns [59f63eeae644f7c8944cdb7b1d05d9e49fe84eb28e9ccbad2d2dd79846d7eb52] ...
	I0806 08:38:51.998307   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 59f63eeae644f7c8944cdb7b1d05d9e49fe84eb28e9ccbad2d2dd79846d7eb52"
	I0806 08:38:52.034549   79614 logs.go:123] Gathering logs for kube-scheduler [e596abcad3dc13d27548701f515beebba051e04a9e20f8ce1d018731d880ff1c] ...
	I0806 08:38:52.034574   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e596abcad3dc13d27548701f515beebba051e04a9e20f8ce1d018731d880ff1c"
	I0806 08:38:52.079289   79614 logs.go:123] Gathering logs for kube-proxy [75990046864b4a4db9f8c02f44de4eefbb0cf45584880b053bce4ca0b677ce4e] ...
	I0806 08:38:52.079317   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 75990046864b4a4db9f8c02f44de4eefbb0cf45584880b053bce4ca0b677ce4e"
	I0806 08:38:52.124134   79614 logs.go:123] Gathering logs for kube-controller-manager [05bbffa4c989e42bd020ebe6b943459ec1d7b31a6c5e5ba9a37056da15542658] ...
	I0806 08:38:52.124163   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 05bbffa4c989e42bd020ebe6b943459ec1d7b31a6c5e5ba9a37056da15542658"
	I0806 08:38:52.191877   79614 logs.go:123] Gathering logs for dmesg ...
	I0806 08:38:52.191914   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:38:52.210712   79614 logs.go:123] Gathering logs for kube-apiserver [49843be03d16addb301ffd26e517770ff29b00f049efb524a2139b8b2153325c] ...
	I0806 08:38:52.210740   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 49843be03d16addb301ffd26e517770ff29b00f049efb524a2139b8b2153325c"
	I0806 08:38:54.770330   79614 system_pods.go:59] 8 kube-system pods found
	I0806 08:38:54.770356   79614 system_pods.go:61] "coredns-7db6d8ff4d-gx8tt" [48f83ba8-a238-4baf-94af-88370fefcb02] Running
	I0806 08:38:54.770361   79614 system_pods.go:61] "etcd-default-k8s-diff-port-990751" [bf342ba4-1e11-40bf-80fd-02b402043ecf] Running
	I0806 08:38:54.770365   79614 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-990751" [11ee42c4-c9e9-41cf-ace2-deac0f30153a] Running
	I0806 08:38:54.770369   79614 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-990751" [f17fca30-0afc-4ed8-a8cc-cb64e69723f3] Running
	I0806 08:38:54.770373   79614 system_pods.go:61] "kube-proxy-lpf88" [866c10f0-2c44-4c03-b460-ff2194bd1ec6] Running
	I0806 08:38:54.770377   79614 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-990751" [2e591ea7-07bd-464f-bf0e-bc24bfcca016] Running
	I0806 08:38:54.770383   79614 system_pods.go:61] "metrics-server-569cc877fc-hnxd4" [f369ac70-49b7-448a-9852-89232fb66da9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0806 08:38:54.770388   79614 system_pods.go:61] "storage-provisioner" [0faff3fc-d34b-418e-972b-95a2e36b577a] Running
	I0806 08:38:54.770395   79614 system_pods.go:74] duration metric: took 3.942006529s to wait for pod list to return data ...
	I0806 08:38:54.770401   79614 default_sa.go:34] waiting for default service account to be created ...
	I0806 08:38:54.773935   79614 default_sa.go:45] found service account: "default"
	I0806 08:38:54.773956   79614 default_sa.go:55] duration metric: took 3.549307ms for default service account to be created ...
	I0806 08:38:54.773963   79614 system_pods.go:116] waiting for k8s-apps to be running ...
	I0806 08:38:54.779748   79614 system_pods.go:86] 8 kube-system pods found
	I0806 08:38:54.779773   79614 system_pods.go:89] "coredns-7db6d8ff4d-gx8tt" [48f83ba8-a238-4baf-94af-88370fefcb02] Running
	I0806 08:38:54.779780   79614 system_pods.go:89] "etcd-default-k8s-diff-port-990751" [bf342ba4-1e11-40bf-80fd-02b402043ecf] Running
	I0806 08:38:54.779787   79614 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-990751" [11ee42c4-c9e9-41cf-ace2-deac0f30153a] Running
	I0806 08:38:54.779794   79614 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-990751" [f17fca30-0afc-4ed8-a8cc-cb64e69723f3] Running
	I0806 08:38:54.779801   79614 system_pods.go:89] "kube-proxy-lpf88" [866c10f0-2c44-4c03-b460-ff2194bd1ec6] Running
	I0806 08:38:54.779807   79614 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-990751" [2e591ea7-07bd-464f-bf0e-bc24bfcca016] Running
	I0806 08:38:54.779818   79614 system_pods.go:89] "metrics-server-569cc877fc-hnxd4" [f369ac70-49b7-448a-9852-89232fb66da9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0806 08:38:54.779825   79614 system_pods.go:89] "storage-provisioner" [0faff3fc-d34b-418e-972b-95a2e36b577a] Running
	I0806 08:38:54.779834   79614 system_pods.go:126] duration metric: took 5.864748ms to wait for k8s-apps to be running ...
	I0806 08:38:54.779842   79614 system_svc.go:44] waiting for kubelet service to be running ....
	I0806 08:38:54.779898   79614 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0806 08:38:54.796325   79614 system_svc.go:56] duration metric: took 16.472895ms WaitForService to wait for kubelet
	I0806 08:38:54.796353   79614 kubeadm.go:582] duration metric: took 4m22.818456744s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0806 08:38:54.796397   79614 node_conditions.go:102] verifying NodePressure condition ...
	I0806 08:38:54.800536   79614 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0806 08:38:54.800564   79614 node_conditions.go:123] node cpu capacity is 2
	I0806 08:38:54.800577   79614 node_conditions.go:105] duration metric: took 4.173499ms to run NodePressure ...
	I0806 08:38:54.800591   79614 start.go:241] waiting for startup goroutines ...
	I0806 08:38:54.800600   79614 start.go:246] waiting for cluster config update ...
	I0806 08:38:54.800613   79614 start.go:255] writing updated cluster config ...
	I0806 08:38:54.800936   79614 ssh_runner.go:195] Run: rm -f paused
	I0806 08:38:54.850703   79614 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0806 08:38:54.852771   79614 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-990751" cluster and "default" namespace by default
	I0806 08:38:53.469241   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:55.968335   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:58.469143   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:39:00.470785   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:39:02.967983   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:39:05.468164   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:39:07.469643   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:39:09.971774   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:39:12.469590   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:39:14.968935   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:39:17.468202   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:39:19.968322   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:39:21.968455   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:39:24.467291   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:39:26.475005   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:39:28.968164   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:39:31.467876   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:39:31.701723   79927 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0806 08:39:31.702058   79927 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0806 08:39:31.702242   79927 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0806 08:39:33.468293   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:39:34.961921   78922 pod_ready.go:81] duration metric: took 4m0.000413299s for pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace to be "Ready" ...
	E0806 08:39:34.961944   78922 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace to be "Ready" (will not retry!)
	I0806 08:39:34.961967   78922 pod_ready.go:38] duration metric: took 4m12.510868918s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0806 08:39:34.962010   78922 kubeadm.go:597] duration metric: took 4m19.878821661s to restartPrimaryControlPlane
	W0806 08:39:34.962072   78922 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0806 08:39:34.962105   78922 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0806 08:39:36.702773   79927 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0806 08:39:36.703067   79927 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0806 08:39:46.703330   79927 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0806 08:39:46.703594   79927 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0806 08:40:01.383913   78922 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.421778806s)
	I0806 08:40:01.383994   78922 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0806 08:40:01.409726   78922 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0806 08:40:01.432284   78922 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0806 08:40:01.445162   78922 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0806 08:40:01.445187   78922 kubeadm.go:157] found existing configuration files:
	
	I0806 08:40:01.445242   78922 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0806 08:40:01.466287   78922 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0806 08:40:01.466344   78922 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0806 08:40:01.481355   78922 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0806 08:40:01.499933   78922 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0806 08:40:01.500003   78922 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0806 08:40:01.518094   78922 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0806 08:40:01.527345   78922 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0806 08:40:01.527408   78922 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0806 08:40:01.539328   78922 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0806 08:40:01.549752   78922 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0806 08:40:01.549863   78922 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0806 08:40:01.561309   78922 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0806 08:40:01.609660   78922 kubeadm.go:310] W0806 08:40:01.587147    2983 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0806 08:40:01.610476   78922 kubeadm.go:310] W0806 08:40:01.588016    2983 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0806 08:40:01.749335   78922 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0806 08:40:06.704889   79927 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0806 08:40:06.705136   79927 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0806 08:40:10.616781   78922 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0-rc.0
	I0806 08:40:10.616868   78922 kubeadm.go:310] [preflight] Running pre-flight checks
	I0806 08:40:10.616966   78922 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0806 08:40:10.617110   78922 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0806 08:40:10.617234   78922 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0806 08:40:10.617332   78922 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0806 08:40:10.618729   78922 out.go:204]   - Generating certificates and keys ...
	I0806 08:40:10.618805   78922 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0806 08:40:10.618886   78922 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0806 08:40:10.618961   78922 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0806 08:40:10.619025   78922 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0806 08:40:10.619127   78922 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0806 08:40:10.619200   78922 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0806 08:40:10.619289   78922 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0806 08:40:10.619373   78922 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0806 08:40:10.619485   78922 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0806 08:40:10.619596   78922 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0806 08:40:10.619644   78922 kubeadm.go:310] [certs] Using the existing "sa" key
	I0806 08:40:10.619721   78922 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0806 08:40:10.619787   78922 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0806 08:40:10.619871   78922 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0806 08:40:10.619916   78922 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0806 08:40:10.619968   78922 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0806 08:40:10.620028   78922 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0806 08:40:10.620097   78922 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0806 08:40:10.620153   78922 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0806 08:40:10.621538   78922 out.go:204]   - Booting up control plane ...
	I0806 08:40:10.621632   78922 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0806 08:40:10.621739   78922 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0806 08:40:10.621802   78922 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0806 08:40:10.621898   78922 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0806 08:40:10.621979   78922 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0806 08:40:10.622011   78922 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0806 08:40:10.622117   78922 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0806 08:40:10.622225   78922 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0806 08:40:10.622291   78922 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.639662ms
	I0806 08:40:10.622351   78922 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0806 08:40:10.622399   78922 kubeadm.go:310] [api-check] The API server is healthy after 5.503393689s
	I0806 08:40:10.622505   78922 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0806 08:40:10.622639   78922 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0806 08:40:10.622692   78922 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0806 08:40:10.622843   78922 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-703340 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0806 08:40:10.622895   78922 kubeadm.go:310] [bootstrap-token] Using token: 5r8726.myu0qo9tvx4sxjd3
	I0806 08:40:10.624254   78922 out.go:204]   - Configuring RBAC rules ...
	I0806 08:40:10.624381   78922 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0806 08:40:10.624463   78922 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0806 08:40:10.624613   78922 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0806 08:40:10.624813   78922 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0806 08:40:10.624990   78922 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0806 08:40:10.625114   78922 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0806 08:40:10.625253   78922 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0806 08:40:10.625312   78922 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0806 08:40:10.625378   78922 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0806 08:40:10.625387   78922 kubeadm.go:310] 
	I0806 08:40:10.625464   78922 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0806 08:40:10.625472   78922 kubeadm.go:310] 
	I0806 08:40:10.625580   78922 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0806 08:40:10.625595   78922 kubeadm.go:310] 
	I0806 08:40:10.625615   78922 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0806 08:40:10.625668   78922 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0806 08:40:10.625708   78922 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0806 08:40:10.625712   78922 kubeadm.go:310] 
	I0806 08:40:10.625822   78922 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0806 08:40:10.625846   78922 kubeadm.go:310] 
	I0806 08:40:10.625891   78922 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0806 08:40:10.625898   78922 kubeadm.go:310] 
	I0806 08:40:10.625940   78922 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0806 08:40:10.626007   78922 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0806 08:40:10.626068   78922 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0806 08:40:10.626074   78922 kubeadm.go:310] 
	I0806 08:40:10.626173   78922 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0806 08:40:10.626272   78922 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0806 08:40:10.626282   78922 kubeadm.go:310] 
	I0806 08:40:10.626381   78922 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 5r8726.myu0qo9tvx4sxjd3 \
	I0806 08:40:10.626499   78922 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d36e5c1dfe989c7d561c809d7c06e0fc13926ac8f6d664cc5d6865ea5f79931b \
	I0806 08:40:10.626529   78922 kubeadm.go:310] 	--control-plane 
	I0806 08:40:10.626538   78922 kubeadm.go:310] 
	I0806 08:40:10.626639   78922 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0806 08:40:10.626647   78922 kubeadm.go:310] 
	I0806 08:40:10.626745   78922 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 5r8726.myu0qo9tvx4sxjd3 \
	I0806 08:40:10.626893   78922 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d36e5c1dfe989c7d561c809d7c06e0fc13926ac8f6d664cc5d6865ea5f79931b 
	I0806 08:40:10.626920   78922 cni.go:84] Creating CNI manager for ""
	I0806 08:40:10.626929   78922 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0806 08:40:10.628682   78922 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0806 08:40:10.630096   78922 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0806 08:40:10.642492   78922 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
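The log above only records that a 496-byte bridge CNI config was copied to /etc/cni/net.d/1-k8s.conflist; the file's actual contents are not shown. As a rough illustration, a minimal bridge+portmap conflist of the kind the bridge CNI plugin accepts looks like the sketch below (the cniVersion, bridge name and subnet are assumptions, not the real 1-k8s.conflist from this run):

    # Hypothetical sketch: write a minimal bridge CNI config of the kind
    # minikube ships. Values here are illustrative assumptions.
    sudo mkdir -p /etc/cni/net.d
    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.4.0",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF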
	I0806 08:40:10.667673   78922 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0806 08:40:10.667763   78922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 08:40:10.667833   78922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-703340 minikube.k8s.io/updated_at=2024_08_06T08_40_10_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=e92cb06692f5ea1ba801d10d148e5e92e807f9c8 minikube.k8s.io/name=no-preload-703340 minikube.k8s.io/primary=true
	I0806 08:40:10.705177   78922 ops.go:34] apiserver oom_adj: -16
	I0806 08:40:10.901976   78922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 08:40:11.401990   78922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 08:40:11.902906   78922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 08:40:12.402461   78922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 08:40:12.902909   78922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 08:40:13.402018   78922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 08:40:13.902098   78922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 08:40:14.402869   78922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 08:40:14.902034   78922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 08:40:14.999571   78922 kubeadm.go:1113] duration metric: took 4.331874752s to wait for elevateKubeSystemPrivileges
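The elevateKubeSystemPrivileges step above creates the minikube-rbac clusterrolebinding and then polls "kubectl get sa default" until the default service account exists. A hand-run approximation of the same two steps, assuming kubectl already points at the cluster's admin kubeconfig, would be:

    # Sketch only: grant kube-system:default cluster-admin, then wait for
    # the default service account to appear, as the repeated
    # "kubectl get sa default" calls above do.
    kubectl create clusterrolebinding minikube-rbac \
      --clusterrole=cluster-admin \
      --serviceaccount=kube-system:default
    until kubectl get serviceaccount default -n default >/dev/null 2>&1; do
      sleep 0.5
    done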
	I0806 08:40:14.999615   78922 kubeadm.go:394] duration metric: took 4m59.975766448s to StartCluster
	I0806 08:40:14.999636   78922 settings.go:142] acquiring lock: {Name:mkb0bfbcae3b4205ef43487a9de84cfd8afd2e5e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 08:40:14.999722   78922 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19370-13057/kubeconfig
	I0806 08:40:15.001429   78922 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-13057/kubeconfig: {Name:mk5c066914acf8e36556b29792f75b98076ca34a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 08:40:15.001696   78922 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.126 Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0806 08:40:15.001753   78922 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0806 08:40:15.001864   78922 addons.go:69] Setting storage-provisioner=true in profile "no-preload-703340"
	I0806 08:40:15.001909   78922 addons.go:234] Setting addon storage-provisioner=true in "no-preload-703340"
	W0806 08:40:15.001921   78922 addons.go:243] addon storage-provisioner should already be in state true
	I0806 08:40:15.001919   78922 addons.go:69] Setting default-storageclass=true in profile "no-preload-703340"
	I0806 08:40:15.001945   78922 addons.go:69] Setting metrics-server=true in profile "no-preload-703340"
	I0806 08:40:15.001968   78922 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-703340"
	I0806 08:40:15.001984   78922 addons.go:234] Setting addon metrics-server=true in "no-preload-703340"
	W0806 08:40:15.001997   78922 addons.go:243] addon metrics-server should already be in state true
	I0806 08:40:15.001958   78922 host.go:66] Checking if "no-preload-703340" exists ...
	I0806 08:40:15.002030   78922 host.go:66] Checking if "no-preload-703340" exists ...
	I0806 08:40:15.001918   78922 config.go:182] Loaded profile config "no-preload-703340": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-rc.0
	I0806 08:40:15.002284   78922 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:40:15.002295   78922 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:40:15.002310   78922 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:40:15.002319   78922 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:40:15.002354   78922 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:40:15.002312   78922 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:40:15.003644   78922 out.go:177] * Verifying Kubernetes components...
	I0806 08:40:15.005157   78922 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 08:40:15.018061   78922 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35339
	I0806 08:40:15.018548   78922 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:40:15.018866   78922 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44375
	I0806 08:40:15.019201   78922 main.go:141] libmachine: Using API Version  1
	I0806 08:40:15.019226   78922 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:40:15.019270   78922 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:40:15.019582   78922 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:40:15.019743   78922 main.go:141] libmachine: Using API Version  1
	I0806 08:40:15.019748   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetState
	I0806 08:40:15.019765   78922 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:40:15.020089   78922 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:40:15.020617   78922 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:40:15.020664   78922 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:40:15.020815   78922 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32773
	I0806 08:40:15.021303   78922 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:40:15.021775   78922 main.go:141] libmachine: Using API Version  1
	I0806 08:40:15.021796   78922 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:40:15.022125   78922 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:40:15.022549   78922 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:40:15.022577   78922 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:40:15.023653   78922 addons.go:234] Setting addon default-storageclass=true in "no-preload-703340"
	W0806 08:40:15.023678   78922 addons.go:243] addon default-storageclass should already be in state true
	I0806 08:40:15.023708   78922 host.go:66] Checking if "no-preload-703340" exists ...
	I0806 08:40:15.024066   78922 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:40:15.024120   78922 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:40:15.036138   78922 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44447
	I0806 08:40:15.036586   78922 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:40:15.037132   78922 main.go:141] libmachine: Using API Version  1
	I0806 08:40:15.037153   78922 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:40:15.037609   78922 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:40:15.037921   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetState
	I0806 08:40:15.038640   78922 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42341
	I0806 08:40:15.038777   78922 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34793
	I0806 08:40:15.039003   78922 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:40:15.039098   78922 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:40:15.039473   78922 main.go:141] libmachine: Using API Version  1
	I0806 08:40:15.039489   78922 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:40:15.039607   78922 main.go:141] libmachine: Using API Version  1
	I0806 08:40:15.039623   78922 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:40:15.039820   78922 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:40:15.040041   78922 main.go:141] libmachine: (no-preload-703340) Calling .DriverName
	I0806 08:40:15.040181   78922 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:40:15.040414   78922 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:40:15.040456   78922 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:40:15.040477   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetState
	I0806 08:40:15.042137   78922 main.go:141] libmachine: (no-preload-703340) Calling .DriverName
	I0806 08:40:15.042223   78922 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0806 08:40:15.043522   78922 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0806 08:40:15.043585   78922 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0806 08:40:15.043601   78922 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0806 08:40:15.043619   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHHostname
	I0806 08:40:15.044812   78922 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0806 08:40:15.044829   78922 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0806 08:40:15.044854   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHHostname
	I0806 08:40:15.047376   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:40:15.047978   78922 main.go:141] libmachine: (no-preload-703340) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fa:d3", ip: ""} in network mk-no-preload-703340: {Iface:virbr4 ExpiryTime:2024-08-06 09:34:49 +0000 UTC Type:0 Mac:52:54:00:a1:fa:d3 Iaid: IPaddr:192.168.72.126 Prefix:24 Hostname:no-preload-703340 Clientid:01:52:54:00:a1:fa:d3}
	I0806 08:40:15.048001   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined IP address 192.168.72.126 and MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:40:15.048228   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:40:15.048269   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHPort
	I0806 08:40:15.048485   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHKeyPath
	I0806 08:40:15.048617   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHUsername
	I0806 08:40:15.048667   78922 main.go:141] libmachine: (no-preload-703340) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fa:d3", ip: ""} in network mk-no-preload-703340: {Iface:virbr4 ExpiryTime:2024-08-06 09:34:49 +0000 UTC Type:0 Mac:52:54:00:a1:fa:d3 Iaid: IPaddr:192.168.72.126 Prefix:24 Hostname:no-preload-703340 Clientid:01:52:54:00:a1:fa:d3}
	I0806 08:40:15.048688   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined IP address 192.168.72.126 and MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:40:15.048800   78922 sshutil.go:53] new ssh client: &{IP:192.168.72.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/no-preload-703340/id_rsa Username:docker}
	I0806 08:40:15.049107   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHPort
	I0806 08:40:15.049267   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHKeyPath
	I0806 08:40:15.049403   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHUsername
	I0806 08:40:15.049518   78922 sshutil.go:53] new ssh client: &{IP:192.168.72.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/no-preload-703340/id_rsa Username:docker}
	I0806 08:40:15.059440   78922 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33377
	I0806 08:40:15.059910   78922 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:40:15.060423   78922 main.go:141] libmachine: Using API Version  1
	I0806 08:40:15.060441   78922 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:40:15.060741   78922 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:40:15.060922   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetState
	I0806 08:40:15.062303   78922 main.go:141] libmachine: (no-preload-703340) Calling .DriverName
	I0806 08:40:15.062491   78922 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0806 08:40:15.062503   78922 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0806 08:40:15.062516   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHHostname
	I0806 08:40:15.065588   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:40:15.066008   78922 main.go:141] libmachine: (no-preload-703340) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fa:d3", ip: ""} in network mk-no-preload-703340: {Iface:virbr4 ExpiryTime:2024-08-06 09:34:49 +0000 UTC Type:0 Mac:52:54:00:a1:fa:d3 Iaid: IPaddr:192.168.72.126 Prefix:24 Hostname:no-preload-703340 Clientid:01:52:54:00:a1:fa:d3}
	I0806 08:40:15.066024   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined IP address 192.168.72.126 and MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:40:15.066147   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHPort
	I0806 08:40:15.066302   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHKeyPath
	I0806 08:40:15.066407   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHUsername
	I0806 08:40:15.066500   78922 sshutil.go:53] new ssh client: &{IP:192.168.72.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/no-preload-703340/id_rsa Username:docker}
	I0806 08:40:15.209763   78922 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0806 08:40:15.231626   78922 node_ready.go:35] waiting up to 6m0s for node "no-preload-703340" to be "Ready" ...
	I0806 08:40:15.239924   78922 node_ready.go:49] node "no-preload-703340" has status "Ready":"True"
	I0806 08:40:15.239952   78922 node_ready.go:38] duration metric: took 8.292674ms for node "no-preload-703340" to be "Ready" ...
	I0806 08:40:15.239964   78922 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0806 08:40:15.247043   78922 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6f6b679f8f-9dt4q" in "kube-system" namespace to be "Ready" ...
	I0806 08:40:15.320295   78922 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0806 08:40:15.343811   78922 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0806 08:40:15.343838   78922 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0806 08:40:15.367466   78922 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0806 08:40:15.392170   78922 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0806 08:40:15.392200   78922 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0806 08:40:15.440711   78922 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0806 08:40:15.440741   78922 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0806 08:40:15.483982   78922 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0806 08:40:16.105714   78922 main.go:141] libmachine: Making call to close driver server
	I0806 08:40:16.105739   78922 main.go:141] libmachine: (no-preload-703340) Calling .Close
	I0806 08:40:16.105768   78922 main.go:141] libmachine: Making call to close driver server
	I0806 08:40:16.105789   78922 main.go:141] libmachine: (no-preload-703340) Calling .Close
	I0806 08:40:16.106051   78922 main.go:141] libmachine: (no-preload-703340) DBG | Closing plugin on server side
	I0806 08:40:16.106086   78922 main.go:141] libmachine: (no-preload-703340) DBG | Closing plugin on server side
	I0806 08:40:16.106097   78922 main.go:141] libmachine: Successfully made call to close driver server
	I0806 08:40:16.106113   78922 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 08:40:16.106133   78922 main.go:141] libmachine: Making call to close driver server
	I0806 08:40:16.106144   78922 main.go:141] libmachine: (no-preload-703340) Calling .Close
	I0806 08:40:16.106176   78922 main.go:141] libmachine: Successfully made call to close driver server
	I0806 08:40:16.106254   78922 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 08:40:16.106280   78922 main.go:141] libmachine: Making call to close driver server
	I0806 08:40:16.106287   78922 main.go:141] libmachine: (no-preload-703340) Calling .Close
	I0806 08:40:16.107761   78922 main.go:141] libmachine: Successfully made call to close driver server
	I0806 08:40:16.107779   78922 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 08:40:16.107767   78922 main.go:141] libmachine: (no-preload-703340) DBG | Closing plugin on server side
	I0806 08:40:16.107805   78922 main.go:141] libmachine: Successfully made call to close driver server
	I0806 08:40:16.107820   78922 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 08:40:16.107874   78922 main.go:141] libmachine: (no-preload-703340) DBG | Closing plugin on server side
	I0806 08:40:16.133295   78922 main.go:141] libmachine: Making call to close driver server
	I0806 08:40:16.133318   78922 main.go:141] libmachine: (no-preload-703340) Calling .Close
	I0806 08:40:16.133642   78922 main.go:141] libmachine: Successfully made call to close driver server
	I0806 08:40:16.133660   78922 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 08:40:16.133680   78922 main.go:141] libmachine: (no-preload-703340) DBG | Closing plugin on server side
	I0806 08:40:16.628763   78922 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.144732374s)
	I0806 08:40:16.628864   78922 main.go:141] libmachine: Making call to close driver server
	I0806 08:40:16.628891   78922 main.go:141] libmachine: (no-preload-703340) Calling .Close
	I0806 08:40:16.629348   78922 main.go:141] libmachine: Successfully made call to close driver server
	I0806 08:40:16.629375   78922 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 08:40:16.629385   78922 main.go:141] libmachine: Making call to close driver server
	I0806 08:40:16.629396   78922 main.go:141] libmachine: (no-preload-703340) Calling .Close
	I0806 08:40:16.629406   78922 main.go:141] libmachine: (no-preload-703340) DBG | Closing plugin on server side
	I0806 08:40:16.629673   78922 main.go:141] libmachine: Successfully made call to close driver server
	I0806 08:40:16.629690   78922 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 08:40:16.629702   78922 addons.go:475] Verifying addon metrics-server=true in "no-preload-703340"
	I0806 08:40:16.631683   78922 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0806 08:40:16.632723   78922 addons.go:510] duration metric: took 1.630970284s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
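The metrics-server pod is still Pending in the pod listings further down, which is what the failing MetricsServer tests elsewhere in this report hinge on. One way to check whether the addon ever becomes functional (standard kubectl commands, not taken from this log) is:

    # Sketch: verify the metrics-server addon is actually serving metrics.
    kubectl -n kube-system get deployment metrics-server
    kubectl get apiservice v1beta1.metrics.k8s.io
    # Once the APIService reports Available=True, resource metrics work:
    kubectl top nodes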
	I0806 08:40:17.264829   78922 pod_ready.go:102] pod "coredns-6f6b679f8f-9dt4q" in "kube-system" namespace has status "Ready":"False"
	I0806 08:40:19.754652   78922 pod_ready.go:102] pod "coredns-6f6b679f8f-9dt4q" in "kube-system" namespace has status "Ready":"False"
	I0806 08:40:21.753116   78922 pod_ready.go:92] pod "coredns-6f6b679f8f-9dt4q" in "kube-system" namespace has status "Ready":"True"
	I0806 08:40:21.753139   78922 pod_ready.go:81] duration metric: took 6.506065964s for pod "coredns-6f6b679f8f-9dt4q" in "kube-system" namespace to be "Ready" ...
	I0806 08:40:21.753149   78922 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6f6b679f8f-mpfwj" in "kube-system" namespace to be "Ready" ...
	I0806 08:40:21.757521   78922 pod_ready.go:92] pod "coredns-6f6b679f8f-mpfwj" in "kube-system" namespace has status "Ready":"True"
	I0806 08:40:21.757542   78922 pod_ready.go:81] duration metric: took 4.3867ms for pod "coredns-6f6b679f8f-mpfwj" in "kube-system" namespace to be "Ready" ...
	I0806 08:40:21.757553   78922 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-703340" in "kube-system" namespace to be "Ready" ...
	I0806 08:40:21.762161   78922 pod_ready.go:92] pod "etcd-no-preload-703340" in "kube-system" namespace has status "Ready":"True"
	I0806 08:40:21.762179   78922 pod_ready.go:81] duration metric: took 4.619377ms for pod "etcd-no-preload-703340" in "kube-system" namespace to be "Ready" ...
	I0806 08:40:21.762188   78922 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-703340" in "kube-system" namespace to be "Ready" ...
	I0806 08:40:21.766394   78922 pod_ready.go:92] pod "kube-apiserver-no-preload-703340" in "kube-system" namespace has status "Ready":"True"
	I0806 08:40:21.766411   78922 pod_ready.go:81] duration metric: took 4.216869ms for pod "kube-apiserver-no-preload-703340" in "kube-system" namespace to be "Ready" ...
	I0806 08:40:21.766421   78922 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-703340" in "kube-system" namespace to be "Ready" ...
	I0806 08:40:21.770569   78922 pod_ready.go:92] pod "kube-controller-manager-no-preload-703340" in "kube-system" namespace has status "Ready":"True"
	I0806 08:40:21.770592   78922 pod_ready.go:81] duration metric: took 4.160997ms for pod "kube-controller-manager-no-preload-703340" in "kube-system" namespace to be "Ready" ...
	I0806 08:40:21.770600   78922 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zsv9x" in "kube-system" namespace to be "Ready" ...
	I0806 08:40:22.151795   78922 pod_ready.go:92] pod "kube-proxy-zsv9x" in "kube-system" namespace has status "Ready":"True"
	I0806 08:40:22.151819   78922 pod_ready.go:81] duration metric: took 381.212951ms for pod "kube-proxy-zsv9x" in "kube-system" namespace to be "Ready" ...
	I0806 08:40:22.151829   78922 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-703340" in "kube-system" namespace to be "Ready" ...
	I0806 08:40:22.550798   78922 pod_ready.go:92] pod "kube-scheduler-no-preload-703340" in "kube-system" namespace has status "Ready":"True"
	I0806 08:40:22.550821   78922 pod_ready.go:81] duration metric: took 398.985533ms for pod "kube-scheduler-no-preload-703340" in "kube-system" namespace to be "Ready" ...
	I0806 08:40:22.550830   78922 pod_ready.go:38] duration metric: took 7.310852309s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
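The pod_ready loop above checks coredns, etcd and the other control-plane pods one by one. The same readiness gate can be expressed with kubectl wait; this is only a sketch of equivalent behaviour, not the command minikube runs:

    # Sketch: wait for the same system-critical pods with kubectl instead
    # of minikube's internal pod_ready loop.
    for selector in k8s-app=kube-dns component=etcd component=kube-apiserver \
                    component=kube-controller-manager k8s-app=kube-proxy \
                    component=kube-scheduler; do
      kubectl -n kube-system wait pod -l "$selector" \
        --for=condition=Ready --timeout=6m
    done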
	I0806 08:40:22.550842   78922 api_server.go:52] waiting for apiserver process to appear ...
	I0806 08:40:22.550888   78922 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:40:22.567410   78922 api_server.go:72] duration metric: took 7.565683505s to wait for apiserver process to appear ...
	I0806 08:40:22.567441   78922 api_server.go:88] waiting for apiserver healthz status ...
	I0806 08:40:22.567464   78922 api_server.go:253] Checking apiserver healthz at https://192.168.72.126:8443/healthz ...
	I0806 08:40:22.572514   78922 api_server.go:279] https://192.168.72.126:8443/healthz returned 200:
	ok
	I0806 08:40:22.573572   78922 api_server.go:141] control plane version: v1.31.0-rc.0
	I0806 08:40:22.573596   78922 api_server.go:131] duration metric: took 6.147703ms to wait for apiserver health ...
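The healthz probe against https://192.168.72.126:8443/healthz returned 200/ok. From a workstation holding the cluster's kubeconfig, the same health endpoints can be queried without addressing the node IP directly (an illustrative alternative, not what the test harness does):

    # Sketch: query the apiserver health endpoints through kubectl.
    kubectl get --raw='/healthz'
    kubectl get --raw='/readyz?verbose'   # per-check breakdown on newer apiservers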
	I0806 08:40:22.573606   78922 system_pods.go:43] waiting for kube-system pods to appear ...
	I0806 08:40:22.754910   78922 system_pods.go:59] 9 kube-system pods found
	I0806 08:40:22.754937   78922 system_pods.go:61] "coredns-6f6b679f8f-9dt4q" [a62f9c28-2e06-4072-b459-3a879cba65d6] Running
	I0806 08:40:22.754942   78922 system_pods.go:61] "coredns-6f6b679f8f-mpfwj" [9da40ee8-73c2-4479-a2aa-32a2dce7e21d] Running
	I0806 08:40:22.754945   78922 system_pods.go:61] "etcd-no-preload-703340" [2b89a18a-69f6-429b-8616-b84bf9e666b4] Running
	I0806 08:40:22.754950   78922 system_pods.go:61] "kube-apiserver-no-preload-703340" [8b863a60-03ad-4d18-855a-12d29dc63612] Running
	I0806 08:40:22.754953   78922 system_pods.go:61] "kube-controller-manager-no-preload-703340" [8490ec19-da74-4c39-9a69-17918506ea10] Running
	I0806 08:40:22.754956   78922 system_pods.go:61] "kube-proxy-zsv9x" [2859a7f2-f880-4be4-806e-163eb9caf535] Running
	I0806 08:40:22.754959   78922 system_pods.go:61] "kube-scheduler-no-preload-703340" [e003ccd3-48e7-4e4e-9f05-166003935575] Running
	I0806 08:40:22.754964   78922 system_pods.go:61] "metrics-server-6867b74b74-tqnjm" [4ab2bdd5-bd70-445a-84ac-6a45bdaf06a7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0806 08:40:22.754972   78922 system_pods.go:61] "storage-provisioner" [5497ddbc-8ed5-4b42-80e8-81aa7c1b8fa7] Running
	I0806 08:40:22.754980   78922 system_pods.go:74] duration metric: took 181.367063ms to wait for pod list to return data ...
	I0806 08:40:22.754990   78922 default_sa.go:34] waiting for default service account to be created ...
	I0806 08:40:22.951215   78922 default_sa.go:45] found service account: "default"
	I0806 08:40:22.951242   78922 default_sa.go:55] duration metric: took 196.244443ms for default service account to be created ...
	I0806 08:40:22.951250   78922 system_pods.go:116] waiting for k8s-apps to be running ...
	I0806 08:40:23.154822   78922 system_pods.go:86] 9 kube-system pods found
	I0806 08:40:23.154852   78922 system_pods.go:89] "coredns-6f6b679f8f-9dt4q" [a62f9c28-2e06-4072-b459-3a879cba65d6] Running
	I0806 08:40:23.154858   78922 system_pods.go:89] "coredns-6f6b679f8f-mpfwj" [9da40ee8-73c2-4479-a2aa-32a2dce7e21d] Running
	I0806 08:40:23.154862   78922 system_pods.go:89] "etcd-no-preload-703340" [2b89a18a-69f6-429b-8616-b84bf9e666b4] Running
	I0806 08:40:23.154865   78922 system_pods.go:89] "kube-apiserver-no-preload-703340" [8b863a60-03ad-4d18-855a-12d29dc63612] Running
	I0806 08:40:23.154870   78922 system_pods.go:89] "kube-controller-manager-no-preload-703340" [8490ec19-da74-4c39-9a69-17918506ea10] Running
	I0806 08:40:23.154875   78922 system_pods.go:89] "kube-proxy-zsv9x" [2859a7f2-f880-4be4-806e-163eb9caf535] Running
	I0806 08:40:23.154879   78922 system_pods.go:89] "kube-scheduler-no-preload-703340" [e003ccd3-48e7-4e4e-9f05-166003935575] Running
	I0806 08:40:23.154886   78922 system_pods.go:89] "metrics-server-6867b74b74-tqnjm" [4ab2bdd5-bd70-445a-84ac-6a45bdaf06a7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0806 08:40:23.154890   78922 system_pods.go:89] "storage-provisioner" [5497ddbc-8ed5-4b42-80e8-81aa7c1b8fa7] Running
	I0806 08:40:23.154898   78922 system_pods.go:126] duration metric: took 203.642884ms to wait for k8s-apps to be running ...
	I0806 08:40:23.154905   78922 system_svc.go:44] waiting for kubelet service to be running ....
	I0806 08:40:23.154948   78922 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0806 08:40:23.170577   78922 system_svc.go:56] duration metric: took 15.661926ms WaitForService to wait for kubelet
	I0806 08:40:23.170609   78922 kubeadm.go:582] duration metric: took 8.168885934s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0806 08:40:23.170631   78922 node_conditions.go:102] verifying NodePressure condition ...
	I0806 08:40:23.352727   78922 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0806 08:40:23.352753   78922 node_conditions.go:123] node cpu capacity is 2
	I0806 08:40:23.352763   78922 node_conditions.go:105] duration metric: took 182.12736ms to run NodePressure ...
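The NodePressure check above reads the node's conditions and capacity (17734596Ki ephemeral storage, 2 CPUs). The same information can be pulled by hand; the commands below are a generic sketch using the node name from this run:

    # Sketch: inspect node conditions and capacity for the same data the
    # NodePressure check reads.
    kubectl describe node no-preload-703340 | sed -n '/Conditions:/,/Addresses:/p'
    kubectl get node no-preload-703340 -o jsonpath='{.status.capacity}'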
	I0806 08:40:23.352774   78922 start.go:241] waiting for startup goroutines ...
	I0806 08:40:23.352780   78922 start.go:246] waiting for cluster config update ...
	I0806 08:40:23.352790   78922 start.go:255] writing updated cluster config ...
	I0806 08:40:23.353072   78922 ssh_runner.go:195] Run: rm -f paused
	I0806 08:40:23.401609   78922 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-rc.0 (minor skew: 1)
	I0806 08:40:23.403639   78922 out.go:177] * Done! kubectl is now configured to use "no-preload-703340" cluster and "default" namespace by default
	I0806 08:40:46.706828   79927 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0806 08:40:46.707130   79927 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0806 08:40:46.707156   79927 kubeadm.go:310] 
	I0806 08:40:46.707218   79927 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0806 08:40:46.707278   79927 kubeadm.go:310] 		timed out waiting for the condition
	I0806 08:40:46.707304   79927 kubeadm.go:310] 
	I0806 08:40:46.707381   79927 kubeadm.go:310] 	This error is likely caused by:
	I0806 08:40:46.707439   79927 kubeadm.go:310] 		- The kubelet is not running
	I0806 08:40:46.707569   79927 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0806 08:40:46.707580   79927 kubeadm.go:310] 
	I0806 08:40:46.707705   79927 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0806 08:40:46.707756   79927 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0806 08:40:46.707805   79927 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0806 08:40:46.707814   79927 kubeadm.go:310] 
	I0806 08:40:46.707974   79927 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0806 08:40:46.708118   79927 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0806 08:40:46.708130   79927 kubeadm.go:310] 
	I0806 08:40:46.708297   79927 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0806 08:40:46.708448   79927 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0806 08:40:46.708577   79927 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0806 08:40:46.708673   79927 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0806 08:40:46.708684   79927 kubeadm.go:310] 
	I0806 08:40:46.709441   79927 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0806 08:40:46.709593   79927 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0806 08:40:46.709791   79927 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
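For the v1.20.0 cluster being driven by process 79927, the kubelet never answered on port 10248, so kubeadm timed out in wait-control-plane. The checks kubeadm's own error text recommends can be run on the affected node (for example over minikube ssh); this is a troubleshooting sketch, not output from this run:

    # Troubleshooting sketch (run on the affected node): the same checks
    # kubeadm's error message recommends.
    systemctl status kubelet --no-pager
    sudo journalctl -xeu kubelet --no-pager | tail -n 50
    curl -sSL http://localhost:10248/healthz ; echo
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause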
	W0806 08:40:46.709862   79927 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0806 08:40:46.709928   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0806 08:40:47.169443   79927 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0806 08:40:47.187784   79927 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0806 08:40:47.199239   79927 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0806 08:40:47.199267   79927 kubeadm.go:157] found existing configuration files:
	
	I0806 08:40:47.199325   79927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0806 08:40:47.209294   79927 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0806 08:40:47.209361   79927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0806 08:40:47.219483   79927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0806 08:40:47.229674   79927 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0806 08:40:47.229732   79927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0806 08:40:47.240729   79927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0806 08:40:47.250964   79927 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0806 08:40:47.251018   79927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0806 08:40:47.261322   79927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0806 08:40:47.271411   79927 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0806 08:40:47.271502   79927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
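The grep/rm pairs above are minikube checking each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and deleting any that lack it before retrying kubeadm init. Condensed into a loop, the same logic is roughly (a sketch, not minikube's code):

    # Sketch: remove kubeconfigs that do not point at the expected endpoint,
    # mirroring the per-file grep/rm sequence in the log above.
    endpoint="https://control-plane.minikube.internal:8443"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      if ! sudo grep -q "$endpoint" "/etc/kubernetes/$f" 2>/dev/null; then
        sudo rm -f "/etc/kubernetes/$f"
      fi
    done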
	I0806 08:40:47.281986   79927 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0806 08:40:47.514409   79927 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0806 08:42:43.760446   79927 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0806 08:42:43.760586   79927 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0806 08:42:43.762233   79927 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0806 08:42:43.762301   79927 kubeadm.go:310] [preflight] Running pre-flight checks
	I0806 08:42:43.762391   79927 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0806 08:42:43.762504   79927 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0806 08:42:43.762602   79927 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0806 08:42:43.762653   79927 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0806 08:42:43.764435   79927 out.go:204]   - Generating certificates and keys ...
	I0806 08:42:43.764523   79927 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0806 08:42:43.764587   79927 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0806 08:42:43.764684   79927 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0806 08:42:43.764761   79927 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0806 08:42:43.764833   79927 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0806 08:42:43.764905   79927 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0806 08:42:43.765002   79927 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0806 08:42:43.765100   79927 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0806 08:42:43.765202   79927 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0806 08:42:43.765338   79927 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0806 08:42:43.765400   79927 kubeadm.go:310] [certs] Using the existing "sa" key
	I0806 08:42:43.765516   79927 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0806 08:42:43.765601   79927 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0806 08:42:43.765680   79927 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0806 08:42:43.765774   79927 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0806 08:42:43.765854   79927 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0806 08:42:43.766002   79927 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0806 08:42:43.766133   79927 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0806 08:42:43.766194   79927 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0806 08:42:43.766273   79927 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0806 08:42:43.767861   79927 out.go:204]   - Booting up control plane ...
	I0806 08:42:43.767957   79927 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0806 08:42:43.768051   79927 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0806 08:42:43.768136   79927 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0806 08:42:43.768232   79927 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0806 08:42:43.768487   79927 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0806 08:42:43.768559   79927 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0806 08:42:43.768635   79927 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0806 08:42:43.768862   79927 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0806 08:42:43.768945   79927 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0806 08:42:43.769124   79927 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0806 08:42:43.769195   79927 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0806 08:42:43.769379   79927 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0806 08:42:43.769463   79927 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0806 08:42:43.769721   79927 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0806 08:42:43.769823   79927 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0806 08:42:43.770089   79927 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0806 08:42:43.770102   79927 kubeadm.go:310] 
	I0806 08:42:43.770161   79927 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0806 08:42:43.770223   79927 kubeadm.go:310] 		timed out waiting for the condition
	I0806 08:42:43.770234   79927 kubeadm.go:310] 
	I0806 08:42:43.770274   79927 kubeadm.go:310] 	This error is likely caused by:
	I0806 08:42:43.770303   79927 kubeadm.go:310] 		- The kubelet is not running
	I0806 08:42:43.770389   79927 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0806 08:42:43.770399   79927 kubeadm.go:310] 
	I0806 08:42:43.770487   79927 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0806 08:42:43.770519   79927 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0806 08:42:43.770547   79927 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0806 08:42:43.770554   79927 kubeadm.go:310] 
	I0806 08:42:43.770657   79927 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0806 08:42:43.770756   79927 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0806 08:42:43.770769   79927 kubeadm.go:310] 
	I0806 08:42:43.770932   79927 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0806 08:42:43.771079   79927 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0806 08:42:43.771178   79927 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0806 08:42:43.771266   79927 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0806 08:42:43.771315   79927 kubeadm.go:310] 
	I0806 08:42:43.771336   79927 kubeadm.go:394] duration metric: took 7m57.719144545s to StartCluster
	I0806 08:42:43.771381   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:42:43.771451   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:42:43.817981   79927 cri.go:89] found id: ""
	I0806 08:42:43.818010   79927 logs.go:276] 0 containers: []
	W0806 08:42:43.818021   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:42:43.818028   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:42:43.818087   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:42:43.855481   79927 cri.go:89] found id: ""
	I0806 08:42:43.855506   79927 logs.go:276] 0 containers: []
	W0806 08:42:43.855517   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:42:43.855524   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:42:43.855584   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:42:43.893110   79927 cri.go:89] found id: ""
	I0806 08:42:43.893141   79927 logs.go:276] 0 containers: []
	W0806 08:42:43.893152   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:42:43.893172   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:42:43.893255   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:42:43.929785   79927 cri.go:89] found id: ""
	I0806 08:42:43.929844   79927 logs.go:276] 0 containers: []
	W0806 08:42:43.929853   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:42:43.929858   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:42:43.929921   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:42:43.970501   79927 cri.go:89] found id: ""
	I0806 08:42:43.970533   79927 logs.go:276] 0 containers: []
	W0806 08:42:43.970541   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:42:43.970546   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:42:43.970590   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:42:44.007344   79927 cri.go:89] found id: ""
	I0806 08:42:44.007373   79927 logs.go:276] 0 containers: []
	W0806 08:42:44.007382   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:42:44.007388   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:42:44.007434   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:42:44.043724   79927 cri.go:89] found id: ""
	I0806 08:42:44.043747   79927 logs.go:276] 0 containers: []
	W0806 08:42:44.043755   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:42:44.043761   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:42:44.043810   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:42:44.084642   79927 cri.go:89] found id: ""
	I0806 08:42:44.084682   79927 logs.go:276] 0 containers: []
	W0806 08:42:44.084694   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:42:44.084709   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:42:44.084724   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:42:44.157081   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:42:44.157125   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:42:44.175703   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:42:44.175746   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:42:44.275665   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:42:44.275699   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:42:44.275717   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:42:44.385031   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:42:44.385076   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0806 08:42:44.425573   79927 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0806 08:42:44.425616   79927 out.go:239] * 
	W0806 08:42:44.425683   79927 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0806 08:42:44.425713   79927 out.go:239] * 
	W0806 08:42:44.426633   79927 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0806 08:42:44.429961   79927 out.go:177] 
	W0806 08:42:44.431352   79927 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0806 08:42:44.431398   79927 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0806 08:42:44.431414   79927 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0806 08:42:44.432963   79927 out.go:177] 
	
	
	==> CRI-O <==
	Aug 06 08:42:46 old-k8s-version-409903 crio[649]: time="2024-08-06 08:42:46.363249613Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722933766363230293,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=97a7bf94-25ce-4e91-aaea-d19c36222d0a name=/runtime.v1.ImageService/ImageFsInfo
	Aug 06 08:42:46 old-k8s-version-409903 crio[649]: time="2024-08-06 08:42:46.364177233Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b657416c-953c-4bbf-abef-5dbf473fcb35 name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 08:42:46 old-k8s-version-409903 crio[649]: time="2024-08-06 08:42:46.364234311Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b657416c-953c-4bbf-abef-5dbf473fcb35 name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 08:42:46 old-k8s-version-409903 crio[649]: time="2024-08-06 08:42:46.364267918Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=b657416c-953c-4bbf-abef-5dbf473fcb35 name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 08:42:46 old-k8s-version-409903 crio[649]: time="2024-08-06 08:42:46.399243517Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=db4e095f-7181-4495-8005-665d2512d365 name=/runtime.v1.RuntimeService/Version
	Aug 06 08:42:46 old-k8s-version-409903 crio[649]: time="2024-08-06 08:42:46.399317431Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=db4e095f-7181-4495-8005-665d2512d365 name=/runtime.v1.RuntimeService/Version
	Aug 06 08:42:46 old-k8s-version-409903 crio[649]: time="2024-08-06 08:42:46.400932933Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=369a427e-acae-4995-aa9d-8f772bc16531 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 06 08:42:46 old-k8s-version-409903 crio[649]: time="2024-08-06 08:42:46.401415489Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722933766401382672,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=369a427e-acae-4995-aa9d-8f772bc16531 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 06 08:42:46 old-k8s-version-409903 crio[649]: time="2024-08-06 08:42:46.402039686Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f70810e6-a8d2-4262-ba00-644bc78f6d1d name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 08:42:46 old-k8s-version-409903 crio[649]: time="2024-08-06 08:42:46.402119406Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f70810e6-a8d2-4262-ba00-644bc78f6d1d name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 08:42:46 old-k8s-version-409903 crio[649]: time="2024-08-06 08:42:46.402165660Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=f70810e6-a8d2-4262-ba00-644bc78f6d1d name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 08:42:46 old-k8s-version-409903 crio[649]: time="2024-08-06 08:42:46.437355451Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5b19b36c-4912-4c52-b687-778a461fd5bd name=/runtime.v1.RuntimeService/Version
	Aug 06 08:42:46 old-k8s-version-409903 crio[649]: time="2024-08-06 08:42:46.437432998Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5b19b36c-4912-4c52-b687-778a461fd5bd name=/runtime.v1.RuntimeService/Version
	Aug 06 08:42:46 old-k8s-version-409903 crio[649]: time="2024-08-06 08:42:46.438717748Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9f8202de-070a-4f32-a73b-ef21d216d1af name=/runtime.v1.ImageService/ImageFsInfo
	Aug 06 08:42:46 old-k8s-version-409903 crio[649]: time="2024-08-06 08:42:46.439093008Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722933766439050298,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9f8202de-070a-4f32-a73b-ef21d216d1af name=/runtime.v1.ImageService/ImageFsInfo
	Aug 06 08:42:46 old-k8s-version-409903 crio[649]: time="2024-08-06 08:42:46.439626320Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=34c49ae8-f92b-4ffd-af89-e11b02670332 name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 08:42:46 old-k8s-version-409903 crio[649]: time="2024-08-06 08:42:46.439763051Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=34c49ae8-f92b-4ffd-af89-e11b02670332 name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 08:42:46 old-k8s-version-409903 crio[649]: time="2024-08-06 08:42:46.439808204Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=34c49ae8-f92b-4ffd-af89-e11b02670332 name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 08:42:46 old-k8s-version-409903 crio[649]: time="2024-08-06 08:42:46.474840830Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b8f52ac1-dcee-4978-b927-5004fa5c9888 name=/runtime.v1.RuntimeService/Version
	Aug 06 08:42:46 old-k8s-version-409903 crio[649]: time="2024-08-06 08:42:46.474968751Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b8f52ac1-dcee-4978-b927-5004fa5c9888 name=/runtime.v1.RuntimeService/Version
	Aug 06 08:42:46 old-k8s-version-409903 crio[649]: time="2024-08-06 08:42:46.476478395Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f2dcb0c1-f718-414d-97b9-383b0b436d6e name=/runtime.v1.ImageService/ImageFsInfo
	Aug 06 08:42:46 old-k8s-version-409903 crio[649]: time="2024-08-06 08:42:46.477007335Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722933766476977797,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f2dcb0c1-f718-414d-97b9-383b0b436d6e name=/runtime.v1.ImageService/ImageFsInfo
	Aug 06 08:42:46 old-k8s-version-409903 crio[649]: time="2024-08-06 08:42:46.477528605Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3d368a60-fc8b-4ad1-aa98-946308cbc9f7 name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 08:42:46 old-k8s-version-409903 crio[649]: time="2024-08-06 08:42:46.477593613Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3d368a60-fc8b-4ad1-aa98-946308cbc9f7 name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 08:42:46 old-k8s-version-409903 crio[649]: time="2024-08-06 08:42:46.477629168Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=3d368a60-fc8b-4ad1-aa98-946308cbc9f7 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Aug 6 08:34] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.055941] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041778] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.077438] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.582101] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.479002] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000016] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.372783] systemd-fstab-generator[571]: Ignoring "noauto" option for root device
	[  +0.066842] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.064276] systemd-fstab-generator[583]: Ignoring "noauto" option for root device
	[  +0.187929] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.204236] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.277743] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +6.753482] systemd-fstab-generator[840]: Ignoring "noauto" option for root device
	[  +0.064437] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.801648] systemd-fstab-generator[963]: Ignoring "noauto" option for root device
	[ +10.357148] kauditd_printk_skb: 46 callbacks suppressed
	[Aug 6 08:38] systemd-fstab-generator[5042]: Ignoring "noauto" option for root device
	[Aug 6 08:40] systemd-fstab-generator[5323]: Ignoring "noauto" option for root device
	[  +0.070161] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 08:42:46 up 8 min,  0 users,  load average: 0.09, 0.11, 0.06
	Linux old-k8s-version-409903 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Aug 06 08:42:43 old-k8s-version-409903 kubelet[5503]: internal/singleflight.(*Group).doCall(0x70c5750, 0xc000ba6410, 0xc000b72900, 0x23, 0xc000273080)
	Aug 06 08:42:43 old-k8s-version-409903 kubelet[5503]:         /usr/local/go/src/internal/singleflight/singleflight.go:95 +0x2e
	Aug 06 08:42:43 old-k8s-version-409903 kubelet[5503]: created by internal/singleflight.(*Group).DoChan
	Aug 06 08:42:43 old-k8s-version-409903 kubelet[5503]:         /usr/local/go/src/internal/singleflight/singleflight.go:88 +0x2cc
	Aug 06 08:42:43 old-k8s-version-409903 kubelet[5503]: goroutine 153 [runnable]:
	Aug 06 08:42:43 old-k8s-version-409903 kubelet[5503]: net._C2func_getaddrinfo(0xc000b32980, 0x0, 0xc000b9db60, 0xc00012bfc8, 0x0, 0x0, 0x0)
	Aug 06 08:42:43 old-k8s-version-409903 kubelet[5503]:         _cgo_gotypes.go:94 +0x55
	Aug 06 08:42:43 old-k8s-version-409903 kubelet[5503]: net.cgoLookupIPCNAME.func1(0xc000b32980, 0x20, 0x20, 0xc000b9db60, 0xc00012bfc8, 0x0, 0xc000ba16a0, 0x57a492)
	Aug 06 08:42:43 old-k8s-version-409903 kubelet[5503]:         /usr/local/go/src/net/cgo_unix.go:161 +0xc5
	Aug 06 08:42:43 old-k8s-version-409903 kubelet[5503]: net.cgoLookupIPCNAME(0x48ab5d6, 0x3, 0xc000b728d0, 0x1f, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	Aug 06 08:42:43 old-k8s-version-409903 kubelet[5503]:         /usr/local/go/src/net/cgo_unix.go:161 +0x16b
	Aug 06 08:42:43 old-k8s-version-409903 kubelet[5503]: net.cgoIPLookup(0xc0002ea000, 0x48ab5d6, 0x3, 0xc000b728d0, 0x1f)
	Aug 06 08:42:43 old-k8s-version-409903 kubelet[5503]:         /usr/local/go/src/net/cgo_unix.go:218 +0x67
	Aug 06 08:42:43 old-k8s-version-409903 kubelet[5503]: created by net.cgoLookupIP
	Aug 06 08:42:43 old-k8s-version-409903 kubelet[5503]:         /usr/local/go/src/net/cgo_unix.go:228 +0xc7
	Aug 06 08:42:43 old-k8s-version-409903 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Aug 06 08:42:43 old-k8s-version-409903 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Aug 06 08:42:44 old-k8s-version-409903 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20.
	Aug 06 08:42:44 old-k8s-version-409903 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Aug 06 08:42:44 old-k8s-version-409903 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Aug 06 08:42:44 old-k8s-version-409903 kubelet[5553]: I0806 08:42:44.199432    5553 server.go:416] Version: v1.20.0
	Aug 06 08:42:44 old-k8s-version-409903 kubelet[5553]: I0806 08:42:44.199855    5553 server.go:837] Client rotation is on, will bootstrap in background
	Aug 06 08:42:44 old-k8s-version-409903 kubelet[5553]: I0806 08:42:44.201621    5553 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Aug 06 08:42:44 old-k8s-version-409903 kubelet[5553]: W0806 08:42:44.202532    5553 manager.go:159] Cannot detect current cgroup on cgroup v2
	Aug 06 08:42:44 old-k8s-version-409903 kubelet[5553]: I0806 08:42:44.202941    5553 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-409903 -n old-k8s-version-409903
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-409903 -n old-k8s-version-409903: exit status 2 (238.545938ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-409903" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (709.44s)
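The failure above largely diagnoses itself: the kubelet never answers on http://localhost:10248/healthz, crictl finds no kube-* containers, and minikube's own suggestion is to read 'journalctl -xeu kubelet' and to retry with --extra-config=kubelet.cgroup-driver=systemd. A minimal manual follow-up is sketched here for reference only; the profile name and flags are copied from the log, but the exact invocations below are an assumption, not commands the test ran:

	# Inspect the kubelet and the container runtime on the node (commands taken from the kubeadm hints in the log)
	out/minikube-linux-amd64 -p old-k8s-version-409903 ssh "sudo journalctl -xeu kubelet | tail -n 100"
	out/minikube-linux-amd64 -p old-k8s-version-409903 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a"
	# Retry the start with the kubelet cgroup driver pinned to systemd, per the suggestion in the log
	out/minikube-linux-amd64 start -p old-k8s-version-409903 --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd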

x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.42s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0806 08:38:45.266495   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/custom-flannel-405163/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
start_stop_delete_test.go:274: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-324808 -n embed-certs-324808
start_stop_delete_test.go:274: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-08-06 08:47:35.718571611 +0000 UTC m=+6178.943382050
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
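For context, the test is polling the selector shown above (k8s-app=kubernetes-dashboard in the kubernetes-dashboard namespace). A hedged manual equivalent, assuming the kubeconfig context matches the profile name embed-certs-324808 (an assumption, not taken from this run):

	kubectl --context embed-certs-324808 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard -o wide
	kubectl --context embed-certs-324808 -n kubernetes-dashboard describe pods -l k8s-app=kubernetes-dashboard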
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-324808 -n embed-certs-324808
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-324808 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-324808 logs -n 25: (2.167761175s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p flannel-405163 sudo cat                             | flannel-405163               | jenkins | v1.33.1 | 06 Aug 24 08:25 UTC | 06 Aug 24 08:25 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p flannel-405163 sudo                                 | flannel-405163               | jenkins | v1.33.1 | 06 Aug 24 08:25 UTC | 06 Aug 24 08:25 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p flannel-405163 sudo                                 | flannel-405163               | jenkins | v1.33.1 | 06 Aug 24 08:25 UTC | 06 Aug 24 08:25 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p flannel-405163 sudo                                 | flannel-405163               | jenkins | v1.33.1 | 06 Aug 24 08:25 UTC | 06 Aug 24 08:25 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p flannel-405163 sudo find                            | flannel-405163               | jenkins | v1.33.1 | 06 Aug 24 08:25 UTC | 06 Aug 24 08:25 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p flannel-405163 sudo crio                            | flannel-405163               | jenkins | v1.33.1 | 06 Aug 24 08:25 UTC | 06 Aug 24 08:25 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p flannel-405163                                      | flannel-405163               | jenkins | v1.33.1 | 06 Aug 24 08:25 UTC | 06 Aug 24 08:25 UTC |
	| delete  | -p                                                     | disable-driver-mounts-607762 | jenkins | v1.33.1 | 06 Aug 24 08:25 UTC | 06 Aug 24 08:25 UTC |
	|         | disable-driver-mounts-607762                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-990751 | jenkins | v1.33.1 | 06 Aug 24 08:25 UTC | 06 Aug 24 08:27 UTC |
	|         | default-k8s-diff-port-990751                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-703340             | no-preload-703340            | jenkins | v1.33.1 | 06 Aug 24 08:26 UTC | 06 Aug 24 08:26 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-703340                                   | no-preload-703340            | jenkins | v1.33.1 | 06 Aug 24 08:26 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-324808            | embed-certs-324808           | jenkins | v1.33.1 | 06 Aug 24 08:26 UTC | 06 Aug 24 08:26 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-324808                                  | embed-certs-324808           | jenkins | v1.33.1 | 06 Aug 24 08:26 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-990751  | default-k8s-diff-port-990751 | jenkins | v1.33.1 | 06 Aug 24 08:27 UTC | 06 Aug 24 08:27 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-990751 | jenkins | v1.33.1 | 06 Aug 24 08:27 UTC |                     |
	|         | default-k8s-diff-port-990751                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-703340                  | no-preload-703340            | jenkins | v1.33.1 | 06 Aug 24 08:28 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-703340                                   | no-preload-703340            | jenkins | v1.33.1 | 06 Aug 24 08:28 UTC | 06 Aug 24 08:40 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0                      |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-409903        | old-k8s-version-409903       | jenkins | v1.33.1 | 06 Aug 24 08:28 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-324808                 | embed-certs-324808           | jenkins | v1.33.1 | 06 Aug 24 08:29 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-324808                                  | embed-certs-324808           | jenkins | v1.33.1 | 06 Aug 24 08:29 UTC | 06 Aug 24 08:38 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-990751       | default-k8s-diff-port-990751 | jenkins | v1.33.1 | 06 Aug 24 08:30 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-990751 | jenkins | v1.33.1 | 06 Aug 24 08:30 UTC | 06 Aug 24 08:38 UTC |
	|         | default-k8s-diff-port-990751                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-409903                              | old-k8s-version-409903       | jenkins | v1.33.1 | 06 Aug 24 08:30 UTC | 06 Aug 24 08:30 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-409903             | old-k8s-version-409903       | jenkins | v1.33.1 | 06 Aug 24 08:30 UTC | 06 Aug 24 08:30 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-409903                              | old-k8s-version-409903       | jenkins | v1.33.1 | 06 Aug 24 08:30 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/06 08:30:58
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0806 08:30:58.757517   79927 out.go:291] Setting OutFile to fd 1 ...
	I0806 08:30:58.757772   79927 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 08:30:58.757780   79927 out.go:304] Setting ErrFile to fd 2...
	I0806 08:30:58.757784   79927 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 08:30:58.757992   79927 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19370-13057/.minikube/bin
	I0806 08:30:58.758527   79927 out.go:298] Setting JSON to false
	I0806 08:30:58.759405   79927 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":8005,"bootTime":1722925054,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0806 08:30:58.759463   79927 start.go:139] virtualization: kvm guest
	I0806 08:30:58.762200   79927 out.go:177] * [old-k8s-version-409903] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0806 08:30:58.763461   79927 out.go:177]   - MINIKUBE_LOCATION=19370
	I0806 08:30:58.763514   79927 notify.go:220] Checking for updates...
	I0806 08:30:58.766017   79927 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0806 08:30:58.767180   79927 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19370-13057/kubeconfig
	I0806 08:30:58.768270   79927 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19370-13057/.minikube
	I0806 08:30:58.769303   79927 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0806 08:30:58.770385   79927 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0806 08:30:58.771931   79927 config.go:182] Loaded profile config "old-k8s-version-409903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0806 08:30:58.772319   79927 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:30:58.772407   79927 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:30:58.787845   79927 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39563
	I0806 08:30:58.788270   79927 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:30:58.788863   79927 main.go:141] libmachine: Using API Version  1
	I0806 08:30:58.788879   79927 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:30:58.789169   79927 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:30:58.789314   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .DriverName
	I0806 08:30:58.791190   79927 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0806 08:30:58.792487   79927 driver.go:392] Setting default libvirt URI to qemu:///system
	I0806 08:30:58.792776   79927 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:30:58.792811   79927 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:30:58.807347   79927 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33093
	I0806 08:30:58.807788   79927 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:30:58.808260   79927 main.go:141] libmachine: Using API Version  1
	I0806 08:30:58.808276   79927 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:30:58.808633   79927 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:30:58.808849   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .DriverName
	I0806 08:30:58.845305   79927 out.go:177] * Using the kvm2 driver based on existing profile
	I0806 08:30:58.846560   79927 start.go:297] selected driver: kvm2
	I0806 08:30:58.846580   79927 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-409903 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-409903 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.158 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 08:30:58.846684   79927 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0806 08:30:58.847335   79927 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 08:30:58.847395   79927 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19370-13057/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0806 08:30:58.862013   79927 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0806 08:30:58.862398   79927 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0806 08:30:58.862431   79927 cni.go:84] Creating CNI manager for ""
	I0806 08:30:58.862441   79927 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0806 08:30:58.862493   79927 start.go:340] cluster config:
	{Name:old-k8s-version-409903 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-409903 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.158 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 08:30:58.862609   79927 iso.go:125] acquiring lock: {Name:mke58d71de28babe305c5eb0c31c274e9713bb3f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 08:30:58.864484   79927 out.go:177] * Starting "old-k8s-version-409903" primary control-plane node in "old-k8s-version-409903" cluster
	I0806 08:30:58.865763   79927 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0806 08:30:58.865796   79927 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19370-13057/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0806 08:30:58.865803   79927 cache.go:56] Caching tarball of preloaded images
	I0806 08:30:58.865912   79927 preload.go:172] Found /home/jenkins/minikube-integration/19370-13057/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0806 08:30:58.865928   79927 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0806 08:30:58.866013   79927 profile.go:143] Saving config to /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/old-k8s-version-409903/config.json ...
	I0806 08:30:58.866228   79927 start.go:360] acquireMachinesLock for old-k8s-version-409903: {Name:mkc602ac9c8cc3360dcc0fcb8104303837aaab61 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0806 08:31:01.700664   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:31:04.772608   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:31:10.852634   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:31:13.924605   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:31:20.004661   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:31:23.076602   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:31:29.156621   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:31:32.228616   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:31:38.308684   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:31:41.380622   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:31:47.460632   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:31:50.532638   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:31:56.612610   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:31:59.684696   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:32:05.764642   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:32:08.836560   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:32:14.916653   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:32:17.988720   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:32:24.068597   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:32:27.140686   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:32:33.220630   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:32:36.292609   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:32:42.372654   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:32:45.444619   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:32:51.524637   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:32:54.596677   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:33:00.676664   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:33:03.748647   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:33:09.828643   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:33:12.900674   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:33:18.980642   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:33:22.052616   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:33:28.132639   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:33:31.204678   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:33:34.209491   79219 start.go:364] duration metric: took 4m19.788239383s to acquireMachinesLock for "embed-certs-324808"
	I0806 08:33:34.209554   79219 start.go:96] Skipping create...Using existing machine configuration
	I0806 08:33:34.209563   79219 fix.go:54] fixHost starting: 
	I0806 08:33:34.209914   79219 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:33:34.209945   79219 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:33:34.225180   79219 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44037
	I0806 08:33:34.225607   79219 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:33:34.226014   79219 main.go:141] libmachine: Using API Version  1
	I0806 08:33:34.226033   79219 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:33:34.226323   79219 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:33:34.226522   79219 main.go:141] libmachine: (embed-certs-324808) Calling .DriverName
	I0806 08:33:34.226684   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetState
	I0806 08:33:34.228172   79219 fix.go:112] recreateIfNeeded on embed-certs-324808: state=Stopped err=<nil>
	I0806 08:33:34.228200   79219 main.go:141] libmachine: (embed-certs-324808) Calling .DriverName
	W0806 08:33:34.228355   79219 fix.go:138] unexpected machine state, will restart: <nil>
	I0806 08:33:34.231261   79219 out.go:177] * Restarting existing kvm2 VM for "embed-certs-324808" ...
	I0806 08:33:34.232543   79219 main.go:141] libmachine: (embed-certs-324808) Calling .Start
	I0806 08:33:34.232728   79219 main.go:141] libmachine: (embed-certs-324808) Ensuring networks are active...
	I0806 08:33:34.233475   79219 main.go:141] libmachine: (embed-certs-324808) Ensuring network default is active
	I0806 08:33:34.233819   79219 main.go:141] libmachine: (embed-certs-324808) Ensuring network mk-embed-certs-324808 is active
	I0806 08:33:34.234291   79219 main.go:141] libmachine: (embed-certs-324808) Getting domain xml...
	I0806 08:33:34.235129   79219 main.go:141] libmachine: (embed-certs-324808) Creating domain...
	I0806 08:33:34.206591   78922 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0806 08:33:34.206636   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetMachineName
	I0806 08:33:34.206950   78922 buildroot.go:166] provisioning hostname "no-preload-703340"
	I0806 08:33:34.206975   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetMachineName
	I0806 08:33:34.207187   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHHostname
	I0806 08:33:34.209379   78922 machine.go:97] duration metric: took 4m37.416826471s to provisionDockerMachine
	I0806 08:33:34.209416   78922 fix.go:56] duration metric: took 4m37.443014145s for fixHost
	I0806 08:33:34.209424   78922 start.go:83] releasing machines lock for "no-preload-703340", held for 4m37.443037097s
	W0806 08:33:34.209442   78922 start.go:714] error starting host: provision: host is not running
	W0806 08:33:34.209538   78922 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0806 08:33:34.209545   78922 start.go:729] Will try again in 5 seconds ...
	I0806 08:33:35.434516   79219 main.go:141] libmachine: (embed-certs-324808) Waiting to get IP...
	I0806 08:33:35.435546   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:35.435973   79219 main.go:141] libmachine: (embed-certs-324808) DBG | unable to find current IP address of domain embed-certs-324808 in network mk-embed-certs-324808
	I0806 08:33:35.436045   79219 main.go:141] libmachine: (embed-certs-324808) DBG | I0806 08:33:35.435966   80472 retry.go:31] will retry after 190.247547ms: waiting for machine to come up
	I0806 08:33:35.627362   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:35.627768   79219 main.go:141] libmachine: (embed-certs-324808) DBG | unable to find current IP address of domain embed-certs-324808 in network mk-embed-certs-324808
	I0806 08:33:35.627803   79219 main.go:141] libmachine: (embed-certs-324808) DBG | I0806 08:33:35.627742   80472 retry.go:31] will retry after 306.927129ms: waiting for machine to come up
	I0806 08:33:35.936242   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:35.936714   79219 main.go:141] libmachine: (embed-certs-324808) DBG | unable to find current IP address of domain embed-certs-324808 in network mk-embed-certs-324808
	I0806 08:33:35.936738   79219 main.go:141] libmachine: (embed-certs-324808) DBG | I0806 08:33:35.936661   80472 retry.go:31] will retry after 363.399025ms: waiting for machine to come up
	I0806 08:33:36.301454   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:36.301934   79219 main.go:141] libmachine: (embed-certs-324808) DBG | unable to find current IP address of domain embed-certs-324808 in network mk-embed-certs-324808
	I0806 08:33:36.301967   79219 main.go:141] libmachine: (embed-certs-324808) DBG | I0806 08:33:36.301879   80472 retry.go:31] will retry after 495.07719ms: waiting for machine to come up
	I0806 08:33:36.798489   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:36.798930   79219 main.go:141] libmachine: (embed-certs-324808) DBG | unable to find current IP address of domain embed-certs-324808 in network mk-embed-certs-324808
	I0806 08:33:36.798958   79219 main.go:141] libmachine: (embed-certs-324808) DBG | I0806 08:33:36.798870   80472 retry.go:31] will retry after 755.286107ms: waiting for machine to come up
	I0806 08:33:37.555799   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:37.556155   79219 main.go:141] libmachine: (embed-certs-324808) DBG | unable to find current IP address of domain embed-certs-324808 in network mk-embed-certs-324808
	I0806 08:33:37.556187   79219 main.go:141] libmachine: (embed-certs-324808) DBG | I0806 08:33:37.556094   80472 retry.go:31] will retry after 600.530399ms: waiting for machine to come up
	I0806 08:33:38.157817   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:38.158303   79219 main.go:141] libmachine: (embed-certs-324808) DBG | unable to find current IP address of domain embed-certs-324808 in network mk-embed-certs-324808
	I0806 08:33:38.158334   79219 main.go:141] libmachine: (embed-certs-324808) DBG | I0806 08:33:38.158241   80472 retry.go:31] will retry after 943.262354ms: waiting for machine to come up
	I0806 08:33:39.103226   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:39.103615   79219 main.go:141] libmachine: (embed-certs-324808) DBG | unable to find current IP address of domain embed-certs-324808 in network mk-embed-certs-324808
	I0806 08:33:39.103645   79219 main.go:141] libmachine: (embed-certs-324808) DBG | I0806 08:33:39.103563   80472 retry.go:31] will retry after 1.100862399s: waiting for machine to come up
	I0806 08:33:39.211204   78922 start.go:360] acquireMachinesLock for no-preload-703340: {Name:mkc602ac9c8cc3360dcc0fcb8104303837aaab61 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0806 08:33:40.205796   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:40.206130   79219 main.go:141] libmachine: (embed-certs-324808) DBG | unable to find current IP address of domain embed-certs-324808 in network mk-embed-certs-324808
	I0806 08:33:40.206160   79219 main.go:141] libmachine: (embed-certs-324808) DBG | I0806 08:33:40.206077   80472 retry.go:31] will retry after 1.405550913s: waiting for machine to come up
	I0806 08:33:41.613629   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:41.614060   79219 main.go:141] libmachine: (embed-certs-324808) DBG | unable to find current IP address of domain embed-certs-324808 in network mk-embed-certs-324808
	I0806 08:33:41.614092   79219 main.go:141] libmachine: (embed-certs-324808) DBG | I0806 08:33:41.614001   80472 retry.go:31] will retry after 2.190867783s: waiting for machine to come up
	I0806 08:33:43.807417   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:43.808017   79219 main.go:141] libmachine: (embed-certs-324808) DBG | unable to find current IP address of domain embed-certs-324808 in network mk-embed-certs-324808
	I0806 08:33:43.808049   79219 main.go:141] libmachine: (embed-certs-324808) DBG | I0806 08:33:43.807958   80472 retry.go:31] will retry after 2.233558066s: waiting for machine to come up
	I0806 08:33:46.043427   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:46.043803   79219 main.go:141] libmachine: (embed-certs-324808) DBG | unable to find current IP address of domain embed-certs-324808 in network mk-embed-certs-324808
	I0806 08:33:46.043830   79219 main.go:141] libmachine: (embed-certs-324808) DBG | I0806 08:33:46.043754   80472 retry.go:31] will retry after 3.171710196s: waiting for machine to come up
	I0806 08:33:49.218967   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:49.219319   79219 main.go:141] libmachine: (embed-certs-324808) DBG | unable to find current IP address of domain embed-certs-324808 in network mk-embed-certs-324808
	I0806 08:33:49.219349   79219 main.go:141] libmachine: (embed-certs-324808) DBG | I0806 08:33:49.219266   80472 retry.go:31] will retry after 4.141473633s: waiting for machine to come up
	I0806 08:33:53.361887   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:53.362300   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has current primary IP address 192.168.39.190 and MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:53.362315   79219 main.go:141] libmachine: (embed-certs-324808) Found IP for machine: 192.168.39.190
	I0806 08:33:53.362324   79219 main.go:141] libmachine: (embed-certs-324808) Reserving static IP address...
	I0806 08:33:53.362762   79219 main.go:141] libmachine: (embed-certs-324808) Reserved static IP address: 192.168.39.190
	I0806 08:33:53.362812   79219 main.go:141] libmachine: (embed-certs-324808) DBG | found host DHCP lease matching {name: "embed-certs-324808", mac: "52:54:00:b9:df:8c", ip: "192.168.39.190"} in network mk-embed-certs-324808: {Iface:virbr2 ExpiryTime:2024-08-06 09:33:45 +0000 UTC Type:0 Mac:52:54:00:b9:df:8c Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-324808 Clientid:01:52:54:00:b9:df:8c}
	I0806 08:33:53.362826   79219 main.go:141] libmachine: (embed-certs-324808) Waiting for SSH to be available...
	I0806 08:33:53.362858   79219 main.go:141] libmachine: (embed-certs-324808) DBG | skip adding static IP to network mk-embed-certs-324808 - found existing host DHCP lease matching {name: "embed-certs-324808", mac: "52:54:00:b9:df:8c", ip: "192.168.39.190"}
	I0806 08:33:53.362871   79219 main.go:141] libmachine: (embed-certs-324808) DBG | Getting to WaitForSSH function...
	I0806 08:33:53.364761   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:53.365170   79219 main.go:141] libmachine: (embed-certs-324808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:df:8c", ip: ""} in network mk-embed-certs-324808: {Iface:virbr2 ExpiryTime:2024-08-06 09:33:45 +0000 UTC Type:0 Mac:52:54:00:b9:df:8c Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-324808 Clientid:01:52:54:00:b9:df:8c}
	I0806 08:33:53.365204   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined IP address 192.168.39.190 and MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:53.365301   79219 main.go:141] libmachine: (embed-certs-324808) DBG | Using SSH client type: external
	I0806 08:33:53.365333   79219 main.go:141] libmachine: (embed-certs-324808) DBG | Using SSH private key: /home/jenkins/minikube-integration/19370-13057/.minikube/machines/embed-certs-324808/id_rsa (-rw-------)
	I0806 08:33:53.365366   79219 main.go:141] libmachine: (embed-certs-324808) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.190 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19370-13057/.minikube/machines/embed-certs-324808/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0806 08:33:53.365378   79219 main.go:141] libmachine: (embed-certs-324808) DBG | About to run SSH command:
	I0806 08:33:53.365386   79219 main.go:141] libmachine: (embed-certs-324808) DBG | exit 0
	I0806 08:33:53.492385   79219 main.go:141] libmachine: (embed-certs-324808) DBG | SSH cmd err, output: <nil>: 
	I0806 08:33:53.492739   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetConfigRaw
	I0806 08:33:53.493342   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetIP
	I0806 08:33:53.495371   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:53.495663   79219 main.go:141] libmachine: (embed-certs-324808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:df:8c", ip: ""} in network mk-embed-certs-324808: {Iface:virbr2 ExpiryTime:2024-08-06 09:33:45 +0000 UTC Type:0 Mac:52:54:00:b9:df:8c Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-324808 Clientid:01:52:54:00:b9:df:8c}
	I0806 08:33:53.495693   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined IP address 192.168.39.190 and MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:53.495982   79219 profile.go:143] Saving config to /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/embed-certs-324808/config.json ...
	I0806 08:33:53.496215   79219 machine.go:94] provisionDockerMachine start ...
	I0806 08:33:53.496238   79219 main.go:141] libmachine: (embed-certs-324808) Calling .DriverName
	I0806 08:33:53.496460   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHHostname
	I0806 08:33:53.498625   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:53.498924   79219 main.go:141] libmachine: (embed-certs-324808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:df:8c", ip: ""} in network mk-embed-certs-324808: {Iface:virbr2 ExpiryTime:2024-08-06 09:33:45 +0000 UTC Type:0 Mac:52:54:00:b9:df:8c Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-324808 Clientid:01:52:54:00:b9:df:8c}
	I0806 08:33:53.498945   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined IP address 192.168.39.190 and MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:53.499096   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHPort
	I0806 08:33:53.499265   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHKeyPath
	I0806 08:33:53.499394   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHKeyPath
	I0806 08:33:53.499593   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHUsername
	I0806 08:33:53.499815   79219 main.go:141] libmachine: Using SSH client type: native
	I0806 08:33:53.500050   79219 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.190 22 <nil> <nil>}
	I0806 08:33:53.500063   79219 main.go:141] libmachine: About to run SSH command:
	hostname
	I0806 08:33:53.612698   79219 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0806 08:33:53.612725   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetMachineName
	I0806 08:33:53.612979   79219 buildroot.go:166] provisioning hostname "embed-certs-324808"
	I0806 08:33:53.613002   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetMachineName
	I0806 08:33:53.613164   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHHostname
	I0806 08:33:53.615731   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:53.616123   79219 main.go:141] libmachine: (embed-certs-324808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:df:8c", ip: ""} in network mk-embed-certs-324808: {Iface:virbr2 ExpiryTime:2024-08-06 09:33:45 +0000 UTC Type:0 Mac:52:54:00:b9:df:8c Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-324808 Clientid:01:52:54:00:b9:df:8c}
	I0806 08:33:53.616157   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined IP address 192.168.39.190 and MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:53.616316   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHPort
	I0806 08:33:53.616530   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHKeyPath
	I0806 08:33:53.616676   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHKeyPath
	I0806 08:33:53.616810   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHUsername
	I0806 08:33:53.616961   79219 main.go:141] libmachine: Using SSH client type: native
	I0806 08:33:53.617124   79219 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.190 22 <nil> <nil>}
	I0806 08:33:53.617138   79219 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-324808 && echo "embed-certs-324808" | sudo tee /etc/hostname
	I0806 08:33:53.742677   79219 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-324808
	
	I0806 08:33:53.742722   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHHostname
	I0806 08:33:53.745670   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:53.745984   79219 main.go:141] libmachine: (embed-certs-324808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:df:8c", ip: ""} in network mk-embed-certs-324808: {Iface:virbr2 ExpiryTime:2024-08-06 09:33:45 +0000 UTC Type:0 Mac:52:54:00:b9:df:8c Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-324808 Clientid:01:52:54:00:b9:df:8c}
	I0806 08:33:53.746015   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined IP address 192.168.39.190 and MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:53.746195   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHPort
	I0806 08:33:53.746380   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHKeyPath
	I0806 08:33:53.746545   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHKeyPath
	I0806 08:33:53.746676   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHUsername
	I0806 08:33:53.746867   79219 main.go:141] libmachine: Using SSH client type: native
	I0806 08:33:53.747033   79219 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.190 22 <nil> <nil>}
	I0806 08:33:53.747048   79219 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-324808' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-324808/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-324808' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0806 08:33:53.869596   79219 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0806 08:33:53.869629   79219 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19370-13057/.minikube CaCertPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19370-13057/.minikube}
	I0806 08:33:53.869652   79219 buildroot.go:174] setting up certificates
	I0806 08:33:53.869663   79219 provision.go:84] configureAuth start
	I0806 08:33:53.869680   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetMachineName
	I0806 08:33:53.869953   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetIP
	I0806 08:33:53.872701   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:53.873071   79219 main.go:141] libmachine: (embed-certs-324808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:df:8c", ip: ""} in network mk-embed-certs-324808: {Iface:virbr2 ExpiryTime:2024-08-06 09:33:45 +0000 UTC Type:0 Mac:52:54:00:b9:df:8c Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-324808 Clientid:01:52:54:00:b9:df:8c}
	I0806 08:33:53.873105   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined IP address 192.168.39.190 and MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:53.873246   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHHostname
	I0806 08:33:53.875521   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:53.875928   79219 main.go:141] libmachine: (embed-certs-324808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:df:8c", ip: ""} in network mk-embed-certs-324808: {Iface:virbr2 ExpiryTime:2024-08-06 09:33:45 +0000 UTC Type:0 Mac:52:54:00:b9:df:8c Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-324808 Clientid:01:52:54:00:b9:df:8c}
	I0806 08:33:53.875959   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined IP address 192.168.39.190 and MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:53.876009   79219 provision.go:143] copyHostCerts
	I0806 08:33:53.876077   79219 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-13057/.minikube/cert.pem, removing ...
	I0806 08:33:53.876089   79219 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-13057/.minikube/cert.pem
	I0806 08:33:53.876170   79219 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19370-13057/.minikube/cert.pem (1123 bytes)
	I0806 08:33:53.876287   79219 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-13057/.minikube/key.pem, removing ...
	I0806 08:33:53.876297   79219 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-13057/.minikube/key.pem
	I0806 08:33:53.876337   79219 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19370-13057/.minikube/key.pem (1675 bytes)
	I0806 08:33:53.876450   79219 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-13057/.minikube/ca.pem, removing ...
	I0806 08:33:53.876460   79219 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-13057/.minikube/ca.pem
	I0806 08:33:53.876499   79219 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19370-13057/.minikube/ca.pem (1078 bytes)
	I0806 08:33:53.876578   79219 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19370-13057/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca-key.pem org=jenkins.embed-certs-324808 san=[127.0.0.1 192.168.39.190 embed-certs-324808 localhost minikube]
	I0806 08:33:53.961275   79219 provision.go:177] copyRemoteCerts
	I0806 08:33:53.961344   79219 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0806 08:33:53.961375   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHHostname
	I0806 08:33:53.964004   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:53.964356   79219 main.go:141] libmachine: (embed-certs-324808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:df:8c", ip: ""} in network mk-embed-certs-324808: {Iface:virbr2 ExpiryTime:2024-08-06 09:33:45 +0000 UTC Type:0 Mac:52:54:00:b9:df:8c Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-324808 Clientid:01:52:54:00:b9:df:8c}
	I0806 08:33:53.964404   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined IP address 192.168.39.190 and MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:53.964596   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHPort
	I0806 08:33:53.964829   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHKeyPath
	I0806 08:33:53.964983   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHUsername
	I0806 08:33:53.965115   79219 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/embed-certs-324808/id_rsa Username:docker}
	I0806 08:33:54.050335   79219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0806 08:33:54.074977   79219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0806 08:33:54.100201   79219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0806 08:33:54.123356   79219 provision.go:87] duration metric: took 253.6775ms to configureAuth
	I0806 08:33:54.123386   79219 buildroot.go:189] setting minikube options for container-runtime
	I0806 08:33:54.123556   79219 config.go:182] Loaded profile config "embed-certs-324808": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0806 08:33:54.123634   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHHostname
	I0806 08:33:54.126279   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:54.126721   79219 main.go:141] libmachine: (embed-certs-324808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:df:8c", ip: ""} in network mk-embed-certs-324808: {Iface:virbr2 ExpiryTime:2024-08-06 09:33:45 +0000 UTC Type:0 Mac:52:54:00:b9:df:8c Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-324808 Clientid:01:52:54:00:b9:df:8c}
	I0806 08:33:54.126746   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined IP address 192.168.39.190 and MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:54.126941   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHPort
	I0806 08:33:54.127143   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHKeyPath
	I0806 08:33:54.127309   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHKeyPath
	I0806 08:33:54.127444   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHUsername
	I0806 08:33:54.127611   79219 main.go:141] libmachine: Using SSH client type: native
	I0806 08:33:54.127785   79219 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.190 22 <nil> <nil>}
	I0806 08:33:54.127800   79219 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0806 08:33:54.653395   79614 start.go:364] duration metric: took 3m35.719137455s to acquireMachinesLock for "default-k8s-diff-port-990751"
	I0806 08:33:54.653444   79614 start.go:96] Skipping create...Using existing machine configuration
	I0806 08:33:54.653452   79614 fix.go:54] fixHost starting: 
	I0806 08:33:54.653812   79614 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:33:54.653844   79614 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:33:54.673686   79614 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35867
	I0806 08:33:54.674123   79614 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:33:54.674581   79614 main.go:141] libmachine: Using API Version  1
	I0806 08:33:54.674604   79614 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:33:54.674950   79614 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:33:54.675108   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .DriverName
	I0806 08:33:54.675246   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetState
	I0806 08:33:54.676908   79614 fix.go:112] recreateIfNeeded on default-k8s-diff-port-990751: state=Stopped err=<nil>
	I0806 08:33:54.676939   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .DriverName
	W0806 08:33:54.677097   79614 fix.go:138] unexpected machine state, will restart: <nil>
	I0806 08:33:54.678912   79614 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-990751" ...
	I0806 08:33:54.403758   79219 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0806 08:33:54.403785   79219 machine.go:97] duration metric: took 907.554493ms to provisionDockerMachine
	I0806 08:33:54.403799   79219 start.go:293] postStartSetup for "embed-certs-324808" (driver="kvm2")
	I0806 08:33:54.403814   79219 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0806 08:33:54.403854   79219 main.go:141] libmachine: (embed-certs-324808) Calling .DriverName
	I0806 08:33:54.404139   79219 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0806 08:33:54.404168   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHHostname
	I0806 08:33:54.406768   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:54.407114   79219 main.go:141] libmachine: (embed-certs-324808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:df:8c", ip: ""} in network mk-embed-certs-324808: {Iface:virbr2 ExpiryTime:2024-08-06 09:33:45 +0000 UTC Type:0 Mac:52:54:00:b9:df:8c Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-324808 Clientid:01:52:54:00:b9:df:8c}
	I0806 08:33:54.407142   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined IP address 192.168.39.190 and MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:54.407328   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHPort
	I0806 08:33:54.407510   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHKeyPath
	I0806 08:33:54.407647   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHUsername
	I0806 08:33:54.407765   79219 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/embed-certs-324808/id_rsa Username:docker}
	I0806 08:33:54.495482   79219 ssh_runner.go:195] Run: cat /etc/os-release
	I0806 08:33:54.499691   79219 info.go:137] Remote host: Buildroot 2023.02.9
	I0806 08:33:54.499725   79219 filesync.go:126] Scanning /home/jenkins/minikube-integration/19370-13057/.minikube/addons for local assets ...
	I0806 08:33:54.499915   79219 filesync.go:126] Scanning /home/jenkins/minikube-integration/19370-13057/.minikube/files for local assets ...
	I0806 08:33:54.500104   79219 filesync.go:149] local asset: /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem -> 202462.pem in /etc/ssl/certs
	I0806 08:33:54.500243   79219 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0806 08:33:54.509690   79219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem --> /etc/ssl/certs/202462.pem (1708 bytes)
	I0806 08:33:54.534233   79219 start.go:296] duration metric: took 130.420444ms for postStartSetup
	I0806 08:33:54.534282   79219 fix.go:56] duration metric: took 20.324719602s for fixHost
	I0806 08:33:54.534307   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHHostname
	I0806 08:33:54.536871   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:54.537202   79219 main.go:141] libmachine: (embed-certs-324808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:df:8c", ip: ""} in network mk-embed-certs-324808: {Iface:virbr2 ExpiryTime:2024-08-06 09:33:45 +0000 UTC Type:0 Mac:52:54:00:b9:df:8c Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-324808 Clientid:01:52:54:00:b9:df:8c}
	I0806 08:33:54.537234   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined IP address 192.168.39.190 and MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:54.537393   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHPort
	I0806 08:33:54.537562   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHKeyPath
	I0806 08:33:54.537746   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHKeyPath
	I0806 08:33:54.537911   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHUsername
	I0806 08:33:54.538105   79219 main.go:141] libmachine: Using SSH client type: native
	I0806 08:33:54.538285   79219 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.190 22 <nil> <nil>}
	I0806 08:33:54.538295   79219 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0806 08:33:54.653264   79219 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722933234.630593565
	
	I0806 08:33:54.653288   79219 fix.go:216] guest clock: 1722933234.630593565
	I0806 08:33:54.653298   79219 fix.go:229] Guest: 2024-08-06 08:33:54.630593565 +0000 UTC Remote: 2024-08-06 08:33:54.534287484 +0000 UTC m=+280.251105524 (delta=96.306081ms)
	I0806 08:33:54.653327   79219 fix.go:200] guest clock delta is within tolerance: 96.306081ms
	I0806 08:33:54.653337   79219 start.go:83] releasing machines lock for "embed-certs-324808", held for 20.443802529s
	I0806 08:33:54.653361   79219 main.go:141] libmachine: (embed-certs-324808) Calling .DriverName
	I0806 08:33:54.653640   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetIP
	I0806 08:33:54.656332   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:54.656741   79219 main.go:141] libmachine: (embed-certs-324808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:df:8c", ip: ""} in network mk-embed-certs-324808: {Iface:virbr2 ExpiryTime:2024-08-06 09:33:45 +0000 UTC Type:0 Mac:52:54:00:b9:df:8c Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-324808 Clientid:01:52:54:00:b9:df:8c}
	I0806 08:33:54.656766   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined IP address 192.168.39.190 and MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:54.656933   79219 main.go:141] libmachine: (embed-certs-324808) Calling .DriverName
	I0806 08:33:54.657443   79219 main.go:141] libmachine: (embed-certs-324808) Calling .DriverName
	I0806 08:33:54.657623   79219 main.go:141] libmachine: (embed-certs-324808) Calling .DriverName
	I0806 08:33:54.657705   79219 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0806 08:33:54.657748   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHHostname
	I0806 08:33:54.657857   79219 ssh_runner.go:195] Run: cat /version.json
	I0806 08:33:54.657877   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHHostname
	I0806 08:33:54.660302   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:54.660482   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:54.660694   79219 main.go:141] libmachine: (embed-certs-324808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:df:8c", ip: ""} in network mk-embed-certs-324808: {Iface:virbr2 ExpiryTime:2024-08-06 09:33:45 +0000 UTC Type:0 Mac:52:54:00:b9:df:8c Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-324808 Clientid:01:52:54:00:b9:df:8c}
	I0806 08:33:54.660729   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined IP address 192.168.39.190 and MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:54.660823   79219 main.go:141] libmachine: (embed-certs-324808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:df:8c", ip: ""} in network mk-embed-certs-324808: {Iface:virbr2 ExpiryTime:2024-08-06 09:33:45 +0000 UTC Type:0 Mac:52:54:00:b9:df:8c Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-324808 Clientid:01:52:54:00:b9:df:8c}
	I0806 08:33:54.660831   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHPort
	I0806 08:33:54.660844   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined IP address 192.168.39.190 and MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:54.661011   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHPort
	I0806 08:33:54.661094   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHKeyPath
	I0806 08:33:54.661165   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHKeyPath
	I0806 08:33:54.661223   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHUsername
	I0806 08:33:54.661280   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHUsername
	I0806 08:33:54.661338   79219 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/embed-certs-324808/id_rsa Username:docker}
	I0806 08:33:54.661386   79219 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/embed-certs-324808/id_rsa Username:docker}
	I0806 08:33:54.770935   79219 ssh_runner.go:195] Run: systemctl --version
	I0806 08:33:54.777166   79219 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0806 08:33:54.920252   79219 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0806 08:33:54.927740   79219 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0806 08:33:54.927829   79219 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0806 08:33:54.949691   79219 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0806 08:33:54.949720   79219 start.go:495] detecting cgroup driver to use...
	I0806 08:33:54.949794   79219 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0806 08:33:54.967575   79219 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0806 08:33:54.982019   79219 docker.go:217] disabling cri-docker service (if available) ...
	I0806 08:33:54.982094   79219 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0806 08:33:55.001629   79219 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0806 08:33:55.018332   79219 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0806 08:33:55.139207   79219 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0806 08:33:55.302225   79219 docker.go:233] disabling docker service ...
	I0806 08:33:55.302300   79219 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0806 08:33:55.321470   79219 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0806 08:33:55.334925   79219 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0806 08:33:55.456388   79219 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0806 08:33:55.579725   79219 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0806 08:33:55.594362   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0806 08:33:55.613084   79219 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0806 08:33:55.613165   79219 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:33:55.623414   79219 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0806 08:33:55.623487   79219 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:33:55.633837   79219 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:33:55.645893   79219 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:33:55.658623   79219 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0806 08:33:55.670016   79219 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:33:55.681227   79219 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:33:55.700047   79219 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:33:55.711409   79219 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0806 08:33:55.721076   79219 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0806 08:33:55.721138   79219 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0806 08:33:55.734470   79219 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0806 08:33:55.750919   79219 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 08:33:55.880306   79219 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0806 08:33:56.031323   79219 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0806 08:33:56.031401   79219 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0806 08:33:56.036456   79219 start.go:563] Will wait 60s for crictl version
	I0806 08:33:56.036522   79219 ssh_runner.go:195] Run: which crictl
	I0806 08:33:56.040542   79219 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0806 08:33:56.081166   79219 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0806 08:33:56.081258   79219 ssh_runner.go:195] Run: crio --version
	I0806 08:33:56.109963   79219 ssh_runner.go:195] Run: crio --version
	I0806 08:33:56.141118   79219 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
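	Note: the lines above show CRI-O being reconfigured (pause image, cgroupfs driver, default sysctls) and then minikube waiting up to 60s for /var/run/crio/crio.sock before querying crictl. For reference, a minimal Go sketch of that socket wait; the poll interval and error handling are illustrative assumptions, not minikube's actual start.go code:

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// waitForSocket polls for a unix socket path until it exists or the timeout
	// elapses, roughly mirroring the "Will wait 60s for socket path" step above.
	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if _, err := os.Stat(path); err == nil {
				return nil // socket file is present
			}
			time.Sleep(500 * time.Millisecond) // assumed poll interval
		}
		return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
	}

	func main() {
		if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("crio socket is ready")
	}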
	I0806 08:33:54.680267   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .Start
	I0806 08:33:54.680448   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Ensuring networks are active...
	I0806 08:33:54.681411   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Ensuring network default is active
	I0806 08:33:54.681778   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Ensuring network mk-default-k8s-diff-port-990751 is active
	I0806 08:33:54.682205   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Getting domain xml...
	I0806 08:33:54.683028   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Creating domain...
	I0806 08:33:55.964313   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Waiting to get IP...
	I0806 08:33:55.965483   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:33:55.965965   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | unable to find current IP address of domain default-k8s-diff-port-990751 in network mk-default-k8s-diff-port-990751
	I0806 08:33:55.966053   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | I0806 08:33:55.965953   80608 retry.go:31] will retry after 294.177981ms: waiting for machine to come up
	I0806 08:33:56.261404   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:33:56.262032   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | unable to find current IP address of domain default-k8s-diff-port-990751 in network mk-default-k8s-diff-port-990751
	I0806 08:33:56.262059   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | I0806 08:33:56.261969   80608 retry.go:31] will retry after 375.14393ms: waiting for machine to come up
	I0806 08:33:56.638714   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:33:56.639150   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | unable to find current IP address of domain default-k8s-diff-port-990751 in network mk-default-k8s-diff-port-990751
	I0806 08:33:56.639178   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | I0806 08:33:56.639118   80608 retry.go:31] will retry after 439.582122ms: waiting for machine to come up
	I0806 08:33:57.080785   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:33:57.081335   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | unable to find current IP address of domain default-k8s-diff-port-990751 in network mk-default-k8s-diff-port-990751
	I0806 08:33:57.081373   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | I0806 08:33:57.081299   80608 retry.go:31] will retry after 523.875054ms: waiting for machine to come up
	I0806 08:33:57.606731   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:33:57.607220   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | unable to find current IP address of domain default-k8s-diff-port-990751 in network mk-default-k8s-diff-port-990751
	I0806 08:33:57.607250   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | I0806 08:33:57.607178   80608 retry.go:31] will retry after 727.363483ms: waiting for machine to come up
	I0806 08:33:58.336352   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:33:58.336852   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | unable to find current IP address of domain default-k8s-diff-port-990751 in network mk-default-k8s-diff-port-990751
	I0806 08:33:58.336885   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | I0806 08:33:58.336782   80608 retry.go:31] will retry after 662.304027ms: waiting for machine to come up
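	Note: while embed-certs-324808 is being prepared, default-k8s-diff-port-990751 is still waiting for a DHCP lease, and retry.go backs off with growing, jittered delays ("will retry after 294ms ... 727ms"). A self-contained sketch of that kind of backoff loop; the delay bounds and attempt count are made up for illustration, not minikube's values:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retryWithBackoff keeps calling fn until it succeeds or attempts run out,
	// sleeping a randomised, growing delay between tries, as in the retry.go
	// lines above.
	func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = fn(); err == nil {
				return nil
			}
			delay := base*time.Duration(i+1) + time.Duration(rand.Int63n(int64(base)))
			fmt.Printf("will retry after %s: %v\n", delay, err)
			time.Sleep(delay)
		}
		return err
	}

	func main() {
		tries := 0
		err := retryWithBackoff(5, 300*time.Millisecond, func() error {
			tries++
			if tries < 3 {
				return errors.New("waiting for machine to come up")
			}
			return nil
		})
		fmt.Println("result:", err)
	}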
	I0806 08:33:56.142523   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetIP
	I0806 08:33:56.145641   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:56.145997   79219 main.go:141] libmachine: (embed-certs-324808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:df:8c", ip: ""} in network mk-embed-certs-324808: {Iface:virbr2 ExpiryTime:2024-08-06 09:33:45 +0000 UTC Type:0 Mac:52:54:00:b9:df:8c Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-324808 Clientid:01:52:54:00:b9:df:8c}
	I0806 08:33:56.146023   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined IP address 192.168.39.190 and MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:56.146239   79219 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0806 08:33:56.150763   79219 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0806 08:33:56.165617   79219 kubeadm.go:883] updating cluster {Name:embed-certs-324808 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.3 ClusterName:embed-certs-324808 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.190 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0806 08:33:56.165775   79219 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0806 08:33:56.165837   79219 ssh_runner.go:195] Run: sudo crictl images --output json
	I0806 08:33:56.205302   79219 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0806 08:33:56.205393   79219 ssh_runner.go:195] Run: which lz4
	I0806 08:33:56.209791   79219 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0806 08:33:56.214338   79219 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0806 08:33:56.214369   79219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0806 08:33:57.735973   79219 crio.go:462] duration metric: took 1.526221921s to copy over tarball
	I0806 08:33:57.736049   79219 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
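	Note: because the guest has no cached container images yet ("assuming images are not preloaded"), the preloaded image tarball is copied over SSH and unpacked into /var with tar -I lz4. A small sketch of that unpack step using os/exec; the paths and tar flags come from the log line above, everything else is illustrative (sudo is omitted, so this assumes a privileged user):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	// extractPreload unpacks a preloaded image tarball into dest, matching the
	// `tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf`
	// invocation in the log above.
	func extractPreload(tarball, dest string) error {
		cmd := exec.Command("tar",
			"--xattrs", "--xattrs-include", "security.capability",
			"-I", "lz4", "-C", dest, "-xf", tarball)
		cmd.Stdout = os.Stdout
		cmd.Stderr = os.Stderr
		return cmd.Run()
	}

	func main() {
		if err := extractPreload("/preloaded.tar.lz4", "/var"); err != nil {
			fmt.Fprintln(os.Stderr, "extract failed:", err)
			os.Exit(1)
		}
	}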
	I0806 08:33:59.000478   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:33:59.000905   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | unable to find current IP address of domain default-k8s-diff-port-990751 in network mk-default-k8s-diff-port-990751
	I0806 08:33:59.000971   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | I0806 08:33:59.000829   80608 retry.go:31] will retry after 1.110230284s: waiting for machine to come up
	I0806 08:34:00.112658   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:00.113097   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | unable to find current IP address of domain default-k8s-diff-port-990751 in network mk-default-k8s-diff-port-990751
	I0806 08:34:00.113124   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | I0806 08:34:00.113053   80608 retry.go:31] will retry after 1.081213279s: waiting for machine to come up
	I0806 08:34:01.195646   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:01.196062   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | unable to find current IP address of domain default-k8s-diff-port-990751 in network mk-default-k8s-diff-port-990751
	I0806 08:34:01.196097   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | I0806 08:34:01.196014   80608 retry.go:31] will retry after 1.334553019s: waiting for machine to come up
	I0806 08:34:02.532009   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:02.532492   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | unable to find current IP address of domain default-k8s-diff-port-990751 in network mk-default-k8s-diff-port-990751
	I0806 08:34:02.532520   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | I0806 08:34:02.532449   80608 retry.go:31] will retry after 1.42743583s: waiting for machine to come up
	I0806 08:33:59.991340   79219 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.255258522s)
	I0806 08:33:59.991368   79219 crio.go:469] duration metric: took 2.255367027s to extract the tarball
	I0806 08:33:59.991378   79219 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0806 08:34:00.030261   79219 ssh_runner.go:195] Run: sudo crictl images --output json
	I0806 08:34:00.075217   79219 crio.go:514] all images are preloaded for cri-o runtime.
	I0806 08:34:00.075243   79219 cache_images.go:84] Images are preloaded, skipping loading
	I0806 08:34:00.075253   79219 kubeadm.go:934] updating node { 192.168.39.190 8443 v1.30.3 crio true true} ...
	I0806 08:34:00.075396   79219 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-324808 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.190
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:embed-certs-324808 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0806 08:34:00.075477   79219 ssh_runner.go:195] Run: crio config
	I0806 08:34:00.124134   79219 cni.go:84] Creating CNI manager for ""
	I0806 08:34:00.124160   79219 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0806 08:34:00.124174   79219 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0806 08:34:00.124195   79219 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.190 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-324808 NodeName:embed-certs-324808 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.190"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.190 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0806 08:34:00.124348   79219 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.190
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-324808"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.190
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.190"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0806 08:34:00.124446   79219 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0806 08:34:00.134782   79219 binaries.go:44] Found k8s binaries, skipping transfer
	I0806 08:34:00.134874   79219 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0806 08:34:00.144892   79219 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0806 08:34:00.164614   79219 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0806 08:34:00.183911   79219 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0806 08:34:00.203821   79219 ssh_runner.go:195] Run: grep 192.168.39.190	control-plane.minikube.internal$ /etc/hosts
	I0806 08:34:00.207980   79219 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.190	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0806 08:34:00.220566   79219 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 08:34:00.339428   79219 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0806 08:34:00.357441   79219 certs.go:68] Setting up /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/embed-certs-324808 for IP: 192.168.39.190
	I0806 08:34:00.357470   79219 certs.go:194] generating shared ca certs ...
	I0806 08:34:00.357486   79219 certs.go:226] acquiring lock for ca certs: {Name:mkf5c5aed91f4108054d9f2c742cad66c86afbed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 08:34:00.357648   79219 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19370-13057/.minikube/ca.key
	I0806 08:34:00.357694   79219 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.key
	I0806 08:34:00.357702   79219 certs.go:256] generating profile certs ...
	I0806 08:34:00.357788   79219 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/embed-certs-324808/client.key
	I0806 08:34:00.357870   79219 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/embed-certs-324808/apiserver.key.078f53b0
	I0806 08:34:00.357907   79219 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/embed-certs-324808/proxy-client.key
	I0806 08:34:00.358033   79219 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/20246.pem (1338 bytes)
	W0806 08:34:00.358070   79219 certs.go:480] ignoring /home/jenkins/minikube-integration/19370-13057/.minikube/certs/20246_empty.pem, impossibly tiny 0 bytes
	I0806 08:34:00.358085   79219 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca-key.pem (1675 bytes)
	I0806 08:34:00.358122   79219 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem (1078 bytes)
	I0806 08:34:00.358151   79219 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/cert.pem (1123 bytes)
	I0806 08:34:00.358174   79219 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/key.pem (1675 bytes)
	I0806 08:34:00.358231   79219 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem (1708 bytes)
	I0806 08:34:00.359196   79219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0806 08:34:00.397544   79219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0806 08:34:00.436824   79219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0806 08:34:00.478633   79219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0806 08:34:00.525641   79219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/embed-certs-324808/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0806 08:34:00.571003   79219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/embed-certs-324808/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0806 08:34:00.596423   79219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/embed-certs-324808/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0806 08:34:00.621129   79219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/embed-certs-324808/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0806 08:34:00.646297   79219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem --> /usr/share/ca-certificates/202462.pem (1708 bytes)
	I0806 08:34:00.672396   79219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0806 08:34:00.698468   79219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/certs/20246.pem --> /usr/share/ca-certificates/20246.pem (1338 bytes)
	I0806 08:34:00.724115   79219 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0806 08:34:00.742200   79219 ssh_runner.go:195] Run: openssl version
	I0806 08:34:00.748418   79219 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202462.pem && ln -fs /usr/share/ca-certificates/202462.pem /etc/ssl/certs/202462.pem"
	I0806 08:34:00.760234   79219 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202462.pem
	I0806 08:34:00.764947   79219 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  6 07:17 /usr/share/ca-certificates/202462.pem
	I0806 08:34:00.765013   79219 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202462.pem
	I0806 08:34:00.771065   79219 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202462.pem /etc/ssl/certs/3ec20f2e.0"
	I0806 08:34:00.782995   79219 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0806 08:34:00.795092   79219 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0806 08:34:00.800011   79219 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  6 07:07 /usr/share/ca-certificates/minikubeCA.pem
	I0806 08:34:00.800079   79219 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0806 08:34:00.806038   79219 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0806 08:34:00.817660   79219 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20246.pem && ln -fs /usr/share/ca-certificates/20246.pem /etc/ssl/certs/20246.pem"
	I0806 08:34:00.829045   79219 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20246.pem
	I0806 08:34:00.833640   79219 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  6 07:17 /usr/share/ca-certificates/20246.pem
	I0806 08:34:00.833704   79219 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20246.pem
	I0806 08:34:00.839618   79219 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20246.pem /etc/ssl/certs/51391683.0"
	I0806 08:34:00.850938   79219 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0806 08:34:00.855577   79219 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0806 08:34:00.861960   79219 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0806 08:34:00.868626   79219 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0806 08:34:00.875291   79219 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0806 08:34:00.881553   79219 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0806 08:34:00.887650   79219 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
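	Note: the block of `openssl x509 ... -checkend 86400` calls above confirms that each control-plane certificate remains valid for at least 24 hours before the cluster restart proceeds. The same check can be expressed in Go with crypto/x509; the certificate path below is just an example input:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM-encoded certificate at path expires
	// within d, equivalent in spirit to `openssl x509 -checkend`.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM data in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("expires within 24h:", soon)
	}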
	I0806 08:34:00.894134   79219 kubeadm.go:392] StartCluster: {Name:embed-certs-324808 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.3 ClusterName:embed-certs-324808 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.190 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 08:34:00.894223   79219 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0806 08:34:00.894272   79219 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0806 08:34:00.938980   79219 cri.go:89] found id: ""
	I0806 08:34:00.939046   79219 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0806 08:34:00.951339   79219 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0806 08:34:00.951369   79219 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0806 08:34:00.951430   79219 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0806 08:34:00.962789   79219 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0806 08:34:00.964165   79219 kubeconfig.go:125] found "embed-certs-324808" server: "https://192.168.39.190:8443"
	I0806 08:34:00.967221   79219 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0806 08:34:00.977500   79219 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.190
	I0806 08:34:00.977535   79219 kubeadm.go:1160] stopping kube-system containers ...
	I0806 08:34:00.977547   79219 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0806 08:34:00.977603   79219 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0806 08:34:01.017218   79219 cri.go:89] found id: ""
	I0806 08:34:01.017315   79219 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0806 08:34:01.035305   79219 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0806 08:34:01.046210   79219 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0806 08:34:01.046233   79219 kubeadm.go:157] found existing configuration files:
	
	I0806 08:34:01.046290   79219 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0806 08:34:01.056904   79219 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0806 08:34:01.056974   79219 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0806 08:34:01.066640   79219 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0806 08:34:01.076042   79219 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0806 08:34:01.076114   79219 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0806 08:34:01.085735   79219 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0806 08:34:01.094602   79219 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0806 08:34:01.094661   79219 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0806 08:34:01.104160   79219 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0806 08:34:01.113176   79219 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0806 08:34:01.113249   79219 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0806 08:34:01.122679   79219 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0806 08:34:01.133032   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0806 08:34:01.259256   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0806 08:34:02.252011   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0806 08:34:02.481629   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0806 08:34:02.570215   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0806 08:34:02.710597   79219 api_server.go:52] waiting for apiserver process to appear ...
	I0806 08:34:02.710700   79219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:03.210803   79219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:03.711111   79219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:03.738971   79219 api_server.go:72] duration metric: took 1.028375323s to wait for apiserver process to appear ...
	I0806 08:34:03.739003   79219 api_server.go:88] waiting for apiserver healthz status ...
	I0806 08:34:03.739025   79219 api_server.go:253] Checking apiserver healthz at https://192.168.39.190:8443/healthz ...
	I0806 08:34:03.962232   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:03.962678   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | unable to find current IP address of domain default-k8s-diff-port-990751 in network mk-default-k8s-diff-port-990751
	I0806 08:34:03.962709   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | I0806 08:34:03.962638   80608 retry.go:31] will retry after 2.113116057s: waiting for machine to come up
	I0806 08:34:06.078010   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:06.078537   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | unable to find current IP address of domain default-k8s-diff-port-990751 in network mk-default-k8s-diff-port-990751
	I0806 08:34:06.078570   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | I0806 08:34:06.078503   80608 retry.go:31] will retry after 2.580076274s: waiting for machine to come up
	I0806 08:34:08.662313   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:08.662651   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | unable to find current IP address of domain default-k8s-diff-port-990751 in network mk-default-k8s-diff-port-990751
	I0806 08:34:08.662673   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | I0806 08:34:08.662609   80608 retry.go:31] will retry after 4.365982677s: waiting for machine to come up
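	Note: the healthz responses that follow (403 while the anonymous probe is still forbidden, then 500 with individual poststarthooks not yet finished) come from repeatedly polling https://192.168.39.190:8443/healthz until it returns 200. A rough sketch of such a poll; skipping TLS verification here is an assumption tied to this being an unauthenticated readiness probe, not minikube's actual api_server.go behaviour:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// The probe is anonymous, so certificate verification is skipped;
			// do not do this for authenticated API traffic.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		for i := 0; i < 10; i++ {
			resp, err := client.Get("https://192.168.39.190:8443/healthz")
			if err != nil {
				fmt.Println("healthz not reachable yet:", err)
			} else {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
				if resp.StatusCode == http.StatusOK {
					return
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
	}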
	I0806 08:34:06.791233   79219 api_server.go:279] https://192.168.39.190:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0806 08:34:06.791272   79219 api_server.go:103] status: https://192.168.39.190:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0806 08:34:06.791311   79219 api_server.go:253] Checking apiserver healthz at https://192.168.39.190:8443/healthz ...
	I0806 08:34:06.804038   79219 api_server.go:279] https://192.168.39.190:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0806 08:34:06.804074   79219 api_server.go:103] status: https://192.168.39.190:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0806 08:34:07.239192   79219 api_server.go:253] Checking apiserver healthz at https://192.168.39.190:8443/healthz ...
	I0806 08:34:07.244973   79219 api_server.go:279] https://192.168.39.190:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0806 08:34:07.245005   79219 api_server.go:103] status: https://192.168.39.190:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0806 08:34:07.739556   79219 api_server.go:253] Checking apiserver healthz at https://192.168.39.190:8443/healthz ...
	I0806 08:34:07.754354   79219 api_server.go:279] https://192.168.39.190:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0806 08:34:07.754382   79219 api_server.go:103] status: https://192.168.39.190:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0806 08:34:08.240114   79219 api_server.go:253] Checking apiserver healthz at https://192.168.39.190:8443/healthz ...
	I0806 08:34:08.244799   79219 api_server.go:279] https://192.168.39.190:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0806 08:34:08.244829   79219 api_server.go:103] status: https://192.168.39.190:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0806 08:34:08.739407   79219 api_server.go:253] Checking apiserver healthz at https://192.168.39.190:8443/healthz ...
	I0806 08:34:08.743770   79219 api_server.go:279] https://192.168.39.190:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0806 08:34:08.743804   79219 api_server.go:103] status: https://192.168.39.190:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0806 08:34:09.239855   79219 api_server.go:253] Checking apiserver healthz at https://192.168.39.190:8443/healthz ...
	I0806 08:34:09.244078   79219 api_server.go:279] https://192.168.39.190:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0806 08:34:09.244135   79219 api_server.go:103] status: https://192.168.39.190:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0806 08:34:09.739859   79219 api_server.go:253] Checking apiserver healthz at https://192.168.39.190:8443/healthz ...
	I0806 08:34:09.744236   79219 api_server.go:279] https://192.168.39.190:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0806 08:34:09.744272   79219 api_server.go:103] status: https://192.168.39.190:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0806 08:34:10.240036   79219 api_server.go:253] Checking apiserver healthz at https://192.168.39.190:8443/healthz ...
	I0806 08:34:10.244217   79219 api_server.go:279] https://192.168.39.190:8443/healthz returned 200:
	ok
	I0806 08:34:10.250333   79219 api_server.go:141] control plane version: v1.30.3
	I0806 08:34:10.250361   79219 api_server.go:131] duration metric: took 6.511350709s to wait for apiserver health ...
	I0806 08:34:10.250370   79219 cni.go:84] Creating CNI manager for ""
	I0806 08:34:10.250376   79219 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0806 08:34:10.252472   79219 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0806 08:34:10.253779   79219 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0806 08:34:10.264772   79219 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0806 08:34:10.282904   79219 system_pods.go:43] waiting for kube-system pods to appear ...
	I0806 08:34:10.292343   79219 system_pods.go:59] 8 kube-system pods found
	I0806 08:34:10.292397   79219 system_pods.go:61] "coredns-7db6d8ff4d-4xxwm" [eeb47dea-32e9-458f-9ad9-2af6d4e68cc0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0806 08:34:10.292408   79219 system_pods.go:61] "etcd-embed-certs-324808" [1e0a82cd-6704-4da3-8992-15c68a7aaf70] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0806 08:34:10.292419   79219 system_pods.go:61] "kube-apiserver-embed-certs-324808" [a33c54d9-1fc5-4243-8fe9-d535c40908c3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0806 08:34:10.292426   79219 system_pods.go:61] "kube-controller-manager-embed-certs-324808" [b1c519b8-9c96-4c87-90f3-cc277cfe7763] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0806 08:34:10.292432   79219 system_pods.go:61] "kube-proxy-8lbzv" [0c45e6c0-9b7f-4c48-bff5-38d4a5949bec] Running
	I0806 08:34:10.292438   79219 system_pods.go:61] "kube-scheduler-embed-certs-324808" [78c2827f-341a-4441-bf7e-eff947fc3ea8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0806 08:34:10.292445   79219 system_pods.go:61] "metrics-server-569cc877fc-8xb9k" [7a5fb326-11a7-48fa-bcde-a22026a1dccb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0806 08:34:10.292452   79219 system_pods.go:61] "storage-provisioner" [15c7d89a-e994-4cca-931c-f646157a15e8] Running
	I0806 08:34:10.292465   79219 system_pods.go:74] duration metric: took 9.536183ms to wait for pod list to return data ...
	I0806 08:34:10.292474   79219 node_conditions.go:102] verifying NodePressure condition ...
	I0806 08:34:10.295584   79219 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0806 08:34:10.295616   79219 node_conditions.go:123] node cpu capacity is 2
	I0806 08:34:10.295632   79219 node_conditions.go:105] duration metric: took 3.152403ms to run NodePressure ...
	I0806 08:34:10.295654   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0806 08:34:10.566724   79219 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0806 08:34:10.571777   79219 kubeadm.go:739] kubelet initialised
	I0806 08:34:10.571801   79219 kubeadm.go:740] duration metric: took 5.042307ms waiting for restarted kubelet to initialise ...
	I0806 08:34:10.571812   79219 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0806 08:34:10.576738   79219 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-4xxwm" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:10.581943   79219 pod_ready.go:97] node "embed-certs-324808" hosting pod "coredns-7db6d8ff4d-4xxwm" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-324808" has status "Ready":"False"
	I0806 08:34:10.581976   79219 pod_ready.go:81] duration metric: took 5.207743ms for pod "coredns-7db6d8ff4d-4xxwm" in "kube-system" namespace to be "Ready" ...
	E0806 08:34:10.581989   79219 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-324808" hosting pod "coredns-7db6d8ff4d-4xxwm" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-324808" has status "Ready":"False"
	I0806 08:34:10.581997   79219 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-324808" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:10.586983   79219 pod_ready.go:97] node "embed-certs-324808" hosting pod "etcd-embed-certs-324808" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-324808" has status "Ready":"False"
	I0806 08:34:10.587007   79219 pod_ready.go:81] duration metric: took 5.000306ms for pod "etcd-embed-certs-324808" in "kube-system" namespace to be "Ready" ...
	E0806 08:34:10.587016   79219 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-324808" hosting pod "etcd-embed-certs-324808" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-324808" has status "Ready":"False"
	I0806 08:34:10.587022   79219 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-324808" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:10.592042   79219 pod_ready.go:97] node "embed-certs-324808" hosting pod "kube-apiserver-embed-certs-324808" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-324808" has status "Ready":"False"
	I0806 08:34:10.592066   79219 pod_ready.go:81] duration metric: took 5.035031ms for pod "kube-apiserver-embed-certs-324808" in "kube-system" namespace to be "Ready" ...
	E0806 08:34:10.592074   79219 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-324808" hosting pod "kube-apiserver-embed-certs-324808" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-324808" has status "Ready":"False"
	I0806 08:34:10.592080   79219 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-324808" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:10.687156   79219 pod_ready.go:97] node "embed-certs-324808" hosting pod "kube-controller-manager-embed-certs-324808" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-324808" has status "Ready":"False"
	I0806 08:34:10.687193   79219 pod_ready.go:81] duration metric: took 95.100916ms for pod "kube-controller-manager-embed-certs-324808" in "kube-system" namespace to be "Ready" ...
	E0806 08:34:10.687207   79219 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-324808" hosting pod "kube-controller-manager-embed-certs-324808" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-324808" has status "Ready":"False"
	I0806 08:34:10.687217   79219 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-8lbzv" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:11.086180   79219 pod_ready.go:97] node "embed-certs-324808" hosting pod "kube-proxy-8lbzv" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-324808" has status "Ready":"False"
	I0806 08:34:11.086213   79219 pod_ready.go:81] duration metric: took 398.984678ms for pod "kube-proxy-8lbzv" in "kube-system" namespace to be "Ready" ...
	E0806 08:34:11.086224   79219 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-324808" hosting pod "kube-proxy-8lbzv" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-324808" has status "Ready":"False"
	I0806 08:34:11.086230   79219 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-324808" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:11.486971   79219 pod_ready.go:97] node "embed-certs-324808" hosting pod "kube-scheduler-embed-certs-324808" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-324808" has status "Ready":"False"
	I0806 08:34:11.486995   79219 pod_ready.go:81] duration metric: took 400.759449ms for pod "kube-scheduler-embed-certs-324808" in "kube-system" namespace to be "Ready" ...
	E0806 08:34:11.487005   79219 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-324808" hosting pod "kube-scheduler-embed-certs-324808" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-324808" has status "Ready":"False"
	I0806 08:34:11.487011   79219 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:11.887030   79219 pod_ready.go:97] node "embed-certs-324808" hosting pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-324808" has status "Ready":"False"
	I0806 08:34:11.887052   79219 pod_ready.go:81] duration metric: took 400.033763ms for pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace to be "Ready" ...
	E0806 08:34:11.887061   79219 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-324808" hosting pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-324808" has status "Ready":"False"
	I0806 08:34:11.887068   79219 pod_ready.go:38] duration metric: took 1.315245982s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0806 08:34:11.887084   79219 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0806 08:34:11.899633   79219 ops.go:34] apiserver oom_adj: -16
	I0806 08:34:11.899649   79219 kubeadm.go:597] duration metric: took 10.948272769s to restartPrimaryControlPlane
	I0806 08:34:11.899657   79219 kubeadm.go:394] duration metric: took 11.005530169s to StartCluster
	I0806 08:34:11.899676   79219 settings.go:142] acquiring lock: {Name:mkb0bfbcae3b4205ef43487a9de84cfd8afd2e5e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 08:34:11.899745   79219 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19370-13057/kubeconfig
	I0806 08:34:11.901341   79219 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-13057/kubeconfig: {Name:mk5c066914acf8e36556b29792f75b98076ca34a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 08:34:11.901603   79219 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.190 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0806 08:34:11.901676   79219 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0806 08:34:11.901753   79219 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-324808"
	I0806 08:34:11.901785   79219 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-324808"
	W0806 08:34:11.901796   79219 addons.go:243] addon storage-provisioner should already be in state true
	I0806 08:34:11.901788   79219 addons.go:69] Setting default-storageclass=true in profile "embed-certs-324808"
	I0806 08:34:11.901834   79219 host.go:66] Checking if "embed-certs-324808" exists ...
	I0806 08:34:11.901851   79219 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-324808"
	I0806 08:34:11.901854   79219 config.go:182] Loaded profile config "embed-certs-324808": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0806 08:34:11.901842   79219 addons.go:69] Setting metrics-server=true in profile "embed-certs-324808"
	I0806 08:34:11.901921   79219 addons.go:234] Setting addon metrics-server=true in "embed-certs-324808"
	W0806 08:34:11.901940   79219 addons.go:243] addon metrics-server should already be in state true
	I0806 08:34:11.901998   79219 host.go:66] Checking if "embed-certs-324808" exists ...
	I0806 08:34:11.902216   79219 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:34:11.902252   79219 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:34:11.902280   79219 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:34:11.902308   79219 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:34:11.902383   79219 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:34:11.902411   79219 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:34:11.904343   79219 out.go:177] * Verifying Kubernetes components...
	I0806 08:34:11.905934   79219 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 08:34:11.917271   79219 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32971
	I0806 08:34:11.917375   79219 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38975
	I0806 08:34:11.917833   79219 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:34:11.917874   79219 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:34:11.917981   79219 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33569
	I0806 08:34:11.918394   79219 main.go:141] libmachine: Using API Version  1
	I0806 08:34:11.918416   79219 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:34:11.918453   79219 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:34:11.918530   79219 main.go:141] libmachine: Using API Version  1
	I0806 08:34:11.918552   79219 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:34:11.918761   79219 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:34:11.918819   79219 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:34:11.918913   79219 main.go:141] libmachine: Using API Version  1
	I0806 08:34:11.918939   79219 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:34:11.918965   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetState
	I0806 08:34:11.919280   79219 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:34:11.919322   79219 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:34:11.919367   79219 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:34:11.919873   79219 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:34:11.919901   79219 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:34:11.921815   79219 addons.go:234] Setting addon default-storageclass=true in "embed-certs-324808"
	W0806 08:34:11.921834   79219 addons.go:243] addon default-storageclass should already be in state true
	I0806 08:34:11.921856   79219 host.go:66] Checking if "embed-certs-324808" exists ...
	I0806 08:34:11.922132   79219 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:34:11.922168   79219 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:34:11.936191   79219 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40029
	I0806 08:34:11.936678   79219 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:34:11.937209   79219 main.go:141] libmachine: Using API Version  1
	I0806 08:34:11.937234   79219 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:34:11.940428   79219 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35087
	I0806 08:34:11.940508   79219 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38421
	I0806 08:34:11.940845   79219 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:34:11.941345   79219 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:34:11.941346   79219 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:34:11.941707   79219 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:34:11.941752   79219 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:34:11.941831   79219 main.go:141] libmachine: Using API Version  1
	I0806 08:34:11.941872   79219 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:34:11.941904   79219 main.go:141] libmachine: Using API Version  1
	I0806 08:34:11.941917   79219 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:34:11.942273   79219 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:34:11.942359   79219 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:34:11.942514   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetState
	I0806 08:34:11.942532   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetState
	I0806 08:34:11.944317   79219 main.go:141] libmachine: (embed-certs-324808) Calling .DriverName
	I0806 08:34:11.944503   79219 main.go:141] libmachine: (embed-certs-324808) Calling .DriverName
	I0806 08:34:11.946511   79219 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0806 08:34:11.946525   79219 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0806 08:34:11.947929   79219 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0806 08:34:11.947948   79219 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0806 08:34:11.947976   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHHostname
	I0806 08:34:11.948044   79219 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0806 08:34:11.948066   79219 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0806 08:34:11.948085   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHHostname
	I0806 08:34:11.951758   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:34:11.951988   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:34:11.952181   79219 main.go:141] libmachine: (embed-certs-324808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:df:8c", ip: ""} in network mk-embed-certs-324808: {Iface:virbr2 ExpiryTime:2024-08-06 09:33:45 +0000 UTC Type:0 Mac:52:54:00:b9:df:8c Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-324808 Clientid:01:52:54:00:b9:df:8c}
	I0806 08:34:11.952205   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined IP address 192.168.39.190 and MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:34:11.952530   79219 main.go:141] libmachine: (embed-certs-324808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:df:8c", ip: ""} in network mk-embed-certs-324808: {Iface:virbr2 ExpiryTime:2024-08-06 09:33:45 +0000 UTC Type:0 Mac:52:54:00:b9:df:8c Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-324808 Clientid:01:52:54:00:b9:df:8c}
	I0806 08:34:11.952565   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHPort
	I0806 08:34:11.952566   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined IP address 192.168.39.190 and MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:34:11.952693   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHPort
	I0806 08:34:11.952819   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHKeyPath
	I0806 08:34:11.952858   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHKeyPath
	I0806 08:34:11.952970   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHUsername
	I0806 08:34:11.952974   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHUsername
	I0806 08:34:11.953143   79219 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/embed-certs-324808/id_rsa Username:docker}
	I0806 08:34:11.953207   79219 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/embed-certs-324808/id_rsa Username:docker}
	I0806 08:34:11.959949   79219 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35275
	I0806 08:34:11.960353   79219 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:34:11.960900   79219 main.go:141] libmachine: Using API Version  1
	I0806 08:34:11.960929   79219 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:34:11.961241   79219 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:34:11.961451   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetState
	I0806 08:34:11.963031   79219 main.go:141] libmachine: (embed-certs-324808) Calling .DriverName
	I0806 08:34:11.963333   79219 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0806 08:34:11.963349   79219 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0806 08:34:11.963366   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHHostname
	I0806 08:34:11.966079   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:34:11.966448   79219 main.go:141] libmachine: (embed-certs-324808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:df:8c", ip: ""} in network mk-embed-certs-324808: {Iface:virbr2 ExpiryTime:2024-08-06 09:33:45 +0000 UTC Type:0 Mac:52:54:00:b9:df:8c Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-324808 Clientid:01:52:54:00:b9:df:8c}
	I0806 08:34:11.966481   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined IP address 192.168.39.190 and MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:34:11.966559   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHPort
	I0806 08:34:11.966737   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHKeyPath
	I0806 08:34:11.966881   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHUsername
	I0806 08:34:11.967013   79219 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/embed-certs-324808/id_rsa Username:docker}
	I0806 08:34:12.081433   79219 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0806 08:34:12.100701   79219 node_ready.go:35] waiting up to 6m0s for node "embed-certs-324808" to be "Ready" ...
	I0806 08:34:12.196131   79219 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0806 08:34:12.196151   79219 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0806 08:34:12.201603   79219 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0806 08:34:12.217590   79219 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0806 08:34:12.217626   79219 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0806 08:34:12.239354   79219 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0806 08:34:12.239377   79219 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0806 08:34:12.254418   79219 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0806 08:34:12.266126   79219 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0806 08:34:13.441725   79219 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.187269206s)
	I0806 08:34:13.441780   79219 main.go:141] libmachine: Making call to close driver server
	I0806 08:34:13.441791   79219 main.go:141] libmachine: (embed-certs-324808) Calling .Close
	I0806 08:34:13.441856   79219 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.240222324s)
	I0806 08:34:13.441895   79219 main.go:141] libmachine: Making call to close driver server
	I0806 08:34:13.441910   79219 main.go:141] libmachine: (embed-certs-324808) Calling .Close
	I0806 08:34:13.442096   79219 main.go:141] libmachine: (embed-certs-324808) DBG | Closing plugin on server side
	I0806 08:34:13.442116   79219 main.go:141] libmachine: Successfully made call to close driver server
	I0806 08:34:13.442128   79219 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 08:34:13.442136   79219 main.go:141] libmachine: Making call to close driver server
	I0806 08:34:13.442147   79219 main.go:141] libmachine: (embed-certs-324808) Calling .Close
	I0806 08:34:13.442211   79219 main.go:141] libmachine: Successfully made call to close driver server
	I0806 08:34:13.442225   79219 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 08:34:13.442235   79219 main.go:141] libmachine: Making call to close driver server
	I0806 08:34:13.442245   79219 main.go:141] libmachine: (embed-certs-324808) Calling .Close
	I0806 08:34:13.442414   79219 main.go:141] libmachine: Successfully made call to close driver server
	I0806 08:34:13.442424   79219 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 08:34:13.443716   79219 main.go:141] libmachine: (embed-certs-324808) DBG | Closing plugin on server side
	I0806 08:34:13.443727   79219 main.go:141] libmachine: Successfully made call to close driver server
	I0806 08:34:13.443748   79219 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 08:34:13.448919   79219 main.go:141] libmachine: Making call to close driver server
	I0806 08:34:13.448945   79219 main.go:141] libmachine: (embed-certs-324808) Calling .Close
	I0806 08:34:13.449225   79219 main.go:141] libmachine: Successfully made call to close driver server
	I0806 08:34:13.449242   79219 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 08:34:13.521481   79219 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.255292404s)
	I0806 08:34:13.521542   79219 main.go:141] libmachine: Making call to close driver server
	I0806 08:34:13.521561   79219 main.go:141] libmachine: (embed-certs-324808) Calling .Close
	I0806 08:34:13.521875   79219 main.go:141] libmachine: Successfully made call to close driver server
	I0806 08:34:13.521898   79219 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 08:34:13.521899   79219 main.go:141] libmachine: (embed-certs-324808) DBG | Closing plugin on server side
	I0806 08:34:13.521916   79219 main.go:141] libmachine: Making call to close driver server
	I0806 08:34:13.521927   79219 main.go:141] libmachine: (embed-certs-324808) Calling .Close
	I0806 08:34:13.522147   79219 main.go:141] libmachine: Successfully made call to close driver server
	I0806 08:34:13.522165   79219 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 08:34:13.522175   79219 addons.go:475] Verifying addon metrics-server=true in "embed-certs-324808"
	I0806 08:34:13.522185   79219 main.go:141] libmachine: (embed-certs-324808) DBG | Closing plugin on server side
	I0806 08:34:13.524209   79219 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
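	(Note on the repeated 500 responses earlier in this log: minikube polls the apiserver's /healthz endpoint roughly every 500 ms until every post-start hook reports ok; here the [-]poststarthook/apiservice-discovery-controller entry is the one still failing until the 200 at 08:34:10. The following is a minimal standalone sketch of such a poll, not minikube's actual code; the endpoint, timeout, and TLS handling are assumptions for illustration only.)

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		// Illustration only: skip TLS verification; minikube itself trusts the cluster CA.
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(1 * time.Minute)
		for time.Now().Before(deadline) {
			resp, err := client.Get("https://192.168.39.190:8443/healthz")
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					// A healthy apiserver returns plain "ok".
					fmt.Println("apiserver healthy:", string(body))
					return
				}
				// A 500 lists each post-start hook; [-] marks the ones still failing,
				// as seen with poststarthook/apiservice-discovery-controller above.
				fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
			}
			time.Sleep(500 * time.Millisecond) // the log shows checks ~500ms apart
		}
		fmt.Println("timed out waiting for apiserver health")
	}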
	I0806 08:34:13.032813   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:13.033366   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Found IP for machine: 192.168.50.235
	I0806 08:34:13.033394   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Reserving static IP address...
	I0806 08:34:13.033414   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has current primary IP address 192.168.50.235 and MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:13.033956   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-990751", mac: "52:54:00:fb:51:bd", ip: "192.168.50.235"} in network mk-default-k8s-diff-port-990751: {Iface:virbr3 ExpiryTime:2024-08-06 09:34:06 +0000 UTC Type:0 Mac:52:54:00:fb:51:bd Iaid: IPaddr:192.168.50.235 Prefix:24 Hostname:default-k8s-diff-port-990751 Clientid:01:52:54:00:fb:51:bd}
	I0806 08:34:13.033992   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Reserved static IP address: 192.168.50.235
	I0806 08:34:13.034012   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | skip adding static IP to network mk-default-k8s-diff-port-990751 - found existing host DHCP lease matching {name: "default-k8s-diff-port-990751", mac: "52:54:00:fb:51:bd", ip: "192.168.50.235"}
	I0806 08:34:13.034031   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | Getting to WaitForSSH function...
	I0806 08:34:13.034046   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Waiting for SSH to be available...
	I0806 08:34:13.036483   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:13.036843   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:51:bd", ip: ""} in network mk-default-k8s-diff-port-990751: {Iface:virbr3 ExpiryTime:2024-08-06 09:34:06 +0000 UTC Type:0 Mac:52:54:00:fb:51:bd Iaid: IPaddr:192.168.50.235 Prefix:24 Hostname:default-k8s-diff-port-990751 Clientid:01:52:54:00:fb:51:bd}
	I0806 08:34:13.036874   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined IP address 192.168.50.235 and MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:13.037013   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | Using SSH client type: external
	I0806 08:34:13.037042   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | Using SSH private key: /home/jenkins/minikube-integration/19370-13057/.minikube/machines/default-k8s-diff-port-990751/id_rsa (-rw-------)
	I0806 08:34:13.037066   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.235 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19370-13057/.minikube/machines/default-k8s-diff-port-990751/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0806 08:34:13.037078   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | About to run SSH command:
	I0806 08:34:13.037119   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | exit 0
	I0806 08:34:13.164932   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | SSH cmd err, output: <nil>: 
	I0806 08:34:13.165301   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetConfigRaw
	I0806 08:34:13.165892   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetIP
	I0806 08:34:13.169003   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:13.169408   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:51:bd", ip: ""} in network mk-default-k8s-diff-port-990751: {Iface:virbr3 ExpiryTime:2024-08-06 09:34:06 +0000 UTC Type:0 Mac:52:54:00:fb:51:bd Iaid: IPaddr:192.168.50.235 Prefix:24 Hostname:default-k8s-diff-port-990751 Clientid:01:52:54:00:fb:51:bd}
	I0806 08:34:13.169449   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined IP address 192.168.50.235 and MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:13.169783   79614 profile.go:143] Saving config to /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/default-k8s-diff-port-990751/config.json ...
	I0806 08:34:13.170431   79614 machine.go:94] provisionDockerMachine start ...
	I0806 08:34:13.170452   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .DriverName
	I0806 08:34:13.170703   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHHostname
	I0806 08:34:13.173536   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:13.173994   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:51:bd", ip: ""} in network mk-default-k8s-diff-port-990751: {Iface:virbr3 ExpiryTime:2024-08-06 09:34:06 +0000 UTC Type:0 Mac:52:54:00:fb:51:bd Iaid: IPaddr:192.168.50.235 Prefix:24 Hostname:default-k8s-diff-port-990751 Clientid:01:52:54:00:fb:51:bd}
	I0806 08:34:13.174022   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined IP address 192.168.50.235 and MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:13.174241   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHPort
	I0806 08:34:13.174456   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHKeyPath
	I0806 08:34:13.174632   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHKeyPath
	I0806 08:34:13.174789   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHUsername
	I0806 08:34:13.175056   79614 main.go:141] libmachine: Using SSH client type: native
	I0806 08:34:13.175329   79614 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.235 22 <nil> <nil>}
	I0806 08:34:13.175344   79614 main.go:141] libmachine: About to run SSH command:
	hostname
	I0806 08:34:13.281288   79614 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0806 08:34:13.281322   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetMachineName
	I0806 08:34:13.281573   79614 buildroot.go:166] provisioning hostname "default-k8s-diff-port-990751"
	I0806 08:34:13.281606   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetMachineName
	I0806 08:34:13.281808   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHHostname
	I0806 08:34:13.284463   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:13.284825   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:51:bd", ip: ""} in network mk-default-k8s-diff-port-990751: {Iface:virbr3 ExpiryTime:2024-08-06 09:34:06 +0000 UTC Type:0 Mac:52:54:00:fb:51:bd Iaid: IPaddr:192.168.50.235 Prefix:24 Hostname:default-k8s-diff-port-990751 Clientid:01:52:54:00:fb:51:bd}
	I0806 08:34:13.284860   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined IP address 192.168.50.235 and MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:13.284988   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHPort
	I0806 08:34:13.285178   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHKeyPath
	I0806 08:34:13.285350   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHKeyPath
	I0806 08:34:13.285468   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHUsername
	I0806 08:34:13.285614   79614 main.go:141] libmachine: Using SSH client type: native
	I0806 08:34:13.285807   79614 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.235 22 <nil> <nil>}
	I0806 08:34:13.285821   79614 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-990751 && echo "default-k8s-diff-port-990751" | sudo tee /etc/hostname
	I0806 08:34:13.413970   79614 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-990751
	
	I0806 08:34:13.414005   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHHostname
	I0806 08:34:13.417279   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:13.417708   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:51:bd", ip: ""} in network mk-default-k8s-diff-port-990751: {Iface:virbr3 ExpiryTime:2024-08-06 09:34:06 +0000 UTC Type:0 Mac:52:54:00:fb:51:bd Iaid: IPaddr:192.168.50.235 Prefix:24 Hostname:default-k8s-diff-port-990751 Clientid:01:52:54:00:fb:51:bd}
	I0806 08:34:13.417757   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined IP address 192.168.50.235 and MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:13.417947   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHPort
	I0806 08:34:13.418153   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHKeyPath
	I0806 08:34:13.418337   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHKeyPath
	I0806 08:34:13.418495   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHUsername
	I0806 08:34:13.418683   79614 main.go:141] libmachine: Using SSH client type: native
	I0806 08:34:13.418935   79614 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.235 22 <nil> <nil>}
	I0806 08:34:13.418970   79614 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-990751' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-990751/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-990751' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0806 08:34:13.534550   79614 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0806 08:34:13.534578   79614 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19370-13057/.minikube CaCertPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19370-13057/.minikube}
	I0806 08:34:13.534630   79614 buildroot.go:174] setting up certificates
	I0806 08:34:13.534647   79614 provision.go:84] configureAuth start
	I0806 08:34:13.534664   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetMachineName
	I0806 08:34:13.534962   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetIP
	I0806 08:34:13.537582   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:13.538144   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:51:bd", ip: ""} in network mk-default-k8s-diff-port-990751: {Iface:virbr3 ExpiryTime:2024-08-06 09:34:06 +0000 UTC Type:0 Mac:52:54:00:fb:51:bd Iaid: IPaddr:192.168.50.235 Prefix:24 Hostname:default-k8s-diff-port-990751 Clientid:01:52:54:00:fb:51:bd}
	I0806 08:34:13.538173   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined IP address 192.168.50.235 and MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:13.538329   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHHostname
	I0806 08:34:13.541132   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:13.541422   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:51:bd", ip: ""} in network mk-default-k8s-diff-port-990751: {Iface:virbr3 ExpiryTime:2024-08-06 09:34:06 +0000 UTC Type:0 Mac:52:54:00:fb:51:bd Iaid: IPaddr:192.168.50.235 Prefix:24 Hostname:default-k8s-diff-port-990751 Clientid:01:52:54:00:fb:51:bd}
	I0806 08:34:13.541447   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined IP address 192.168.50.235 and MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:13.541638   79614 provision.go:143] copyHostCerts
	I0806 08:34:13.541693   79614 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-13057/.minikube/ca.pem, removing ...
	I0806 08:34:13.541703   79614 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-13057/.minikube/ca.pem
	I0806 08:34:13.541761   79614 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19370-13057/.minikube/ca.pem (1078 bytes)
	I0806 08:34:13.541862   79614 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-13057/.minikube/cert.pem, removing ...
	I0806 08:34:13.541870   79614 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-13057/.minikube/cert.pem
	I0806 08:34:13.541891   79614 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19370-13057/.minikube/cert.pem (1123 bytes)
	I0806 08:34:13.541943   79614 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-13057/.minikube/key.pem, removing ...
	I0806 08:34:13.541950   79614 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-13057/.minikube/key.pem
	I0806 08:34:13.541966   79614 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19370-13057/.minikube/key.pem (1675 bytes)
	I0806 08:34:13.542013   79614 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19370-13057/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-990751 san=[127.0.0.1 192.168.50.235 default-k8s-diff-port-990751 localhost minikube]
	I0806 08:34:13.632290   79614 provision.go:177] copyRemoteCerts
	I0806 08:34:13.632345   79614 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0806 08:34:13.632389   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHHostname
	I0806 08:34:13.635332   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:13.635619   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:51:bd", ip: ""} in network mk-default-k8s-diff-port-990751: {Iface:virbr3 ExpiryTime:2024-08-06 09:34:06 +0000 UTC Type:0 Mac:52:54:00:fb:51:bd Iaid: IPaddr:192.168.50.235 Prefix:24 Hostname:default-k8s-diff-port-990751 Clientid:01:52:54:00:fb:51:bd}
	I0806 08:34:13.635644   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined IP address 192.168.50.235 and MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:13.635868   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHPort
	I0806 08:34:13.636069   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHKeyPath
	I0806 08:34:13.636208   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHUsername
	I0806 08:34:13.636381   79614 sshutil.go:53] new ssh client: &{IP:192.168.50.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/default-k8s-diff-port-990751/id_rsa Username:docker}
	I0806 08:34:13.719415   79614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0806 08:34:13.745981   79614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0806 08:34:13.775906   79614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0806 08:34:13.806948   79614 provision.go:87] duration metric: took 272.283009ms to configureAuth
	I0806 08:34:13.806986   79614 buildroot.go:189] setting minikube options for container-runtime
	I0806 08:34:13.807234   79614 config.go:182] Loaded profile config "default-k8s-diff-port-990751": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0806 08:34:13.807328   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHHostname
	I0806 08:34:13.810500   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:13.810895   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:51:bd", ip: ""} in network mk-default-k8s-diff-port-990751: {Iface:virbr3 ExpiryTime:2024-08-06 09:34:06 +0000 UTC Type:0 Mac:52:54:00:fb:51:bd Iaid: IPaddr:192.168.50.235 Prefix:24 Hostname:default-k8s-diff-port-990751 Clientid:01:52:54:00:fb:51:bd}
	I0806 08:34:13.810928   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined IP address 192.168.50.235 and MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:13.811074   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHPort
	I0806 08:34:13.811302   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHKeyPath
	I0806 08:34:13.811497   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHKeyPath
	I0806 08:34:13.811627   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHUsername
	I0806 08:34:13.811896   79614 main.go:141] libmachine: Using SSH client type: native
	I0806 08:34:13.812124   79614 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.235 22 <nil> <nil>}
	I0806 08:34:13.812148   79614 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0806 08:34:13.525425   79219 addons.go:510] duration metric: took 1.623751418s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0806 08:34:14.104863   79219 node_ready.go:53] node "embed-certs-324808" has status "Ready":"False"
	I0806 08:34:14.353523   79927 start.go:364] duration metric: took 3m15.487259208s to acquireMachinesLock for "old-k8s-version-409903"
	I0806 08:34:14.353614   79927 start.go:96] Skipping create...Using existing machine configuration
	I0806 08:34:14.353629   79927 fix.go:54] fixHost starting: 
	I0806 08:34:14.354059   79927 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:34:14.354096   79927 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:34:14.373696   79927 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33985
	I0806 08:34:14.374148   79927 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:34:14.374646   79927 main.go:141] libmachine: Using API Version  1
	I0806 08:34:14.374672   79927 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:34:14.374995   79927 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:34:14.375277   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .DriverName
	I0806 08:34:14.375472   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetState
	I0806 08:34:14.377209   79927 fix.go:112] recreateIfNeeded on old-k8s-version-409903: state=Stopped err=<nil>
	I0806 08:34:14.377251   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .DriverName
	W0806 08:34:14.377387   79927 fix.go:138] unexpected machine state, will restart: <nil>
	I0806 08:34:14.379029   79927 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-409903" ...
	I0806 08:34:14.113926   79614 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0806 08:34:14.113953   79614 machine.go:97] duration metric: took 943.50747ms to provisionDockerMachine
	I0806 08:34:14.113967   79614 start.go:293] postStartSetup for "default-k8s-diff-port-990751" (driver="kvm2")
	I0806 08:34:14.113982   79614 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0806 08:34:14.114004   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .DriverName
	I0806 08:34:14.114313   79614 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0806 08:34:14.114339   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHHostname
	I0806 08:34:14.116618   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:14.117028   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:51:bd", ip: ""} in network mk-default-k8s-diff-port-990751: {Iface:virbr3 ExpiryTime:2024-08-06 09:34:06 +0000 UTC Type:0 Mac:52:54:00:fb:51:bd Iaid: IPaddr:192.168.50.235 Prefix:24 Hostname:default-k8s-diff-port-990751 Clientid:01:52:54:00:fb:51:bd}
	I0806 08:34:14.117067   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined IP address 192.168.50.235 and MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:14.117162   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHPort
	I0806 08:34:14.117371   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHKeyPath
	I0806 08:34:14.117545   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHUsername
	I0806 08:34:14.117738   79614 sshutil.go:53] new ssh client: &{IP:192.168.50.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/default-k8s-diff-port-990751/id_rsa Username:docker}
	I0806 08:34:14.199514   79614 ssh_runner.go:195] Run: cat /etc/os-release
	I0806 08:34:14.204063   79614 info.go:137] Remote host: Buildroot 2023.02.9
	I0806 08:34:14.204084   79614 filesync.go:126] Scanning /home/jenkins/minikube-integration/19370-13057/.minikube/addons for local assets ...
	I0806 08:34:14.204139   79614 filesync.go:126] Scanning /home/jenkins/minikube-integration/19370-13057/.minikube/files for local assets ...
	I0806 08:34:14.204219   79614 filesync.go:149] local asset: /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem -> 202462.pem in /etc/ssl/certs
	I0806 08:34:14.204308   79614 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0806 08:34:14.213847   79614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem --> /etc/ssl/certs/202462.pem (1708 bytes)
	I0806 08:34:14.241524   79614 start.go:296] duration metric: took 127.543182ms for postStartSetup
	I0806 08:34:14.241570   79614 fix.go:56] duration metric: took 19.588116399s for fixHost
	I0806 08:34:14.241590   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHHostname
	I0806 08:34:14.244333   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:14.244702   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:51:bd", ip: ""} in network mk-default-k8s-diff-port-990751: {Iface:virbr3 ExpiryTime:2024-08-06 09:34:06 +0000 UTC Type:0 Mac:52:54:00:fb:51:bd Iaid: IPaddr:192.168.50.235 Prefix:24 Hostname:default-k8s-diff-port-990751 Clientid:01:52:54:00:fb:51:bd}
	I0806 08:34:14.244733   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined IP address 192.168.50.235 and MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:14.244936   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHPort
	I0806 08:34:14.245178   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHKeyPath
	I0806 08:34:14.245338   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHKeyPath
	I0806 08:34:14.245497   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHUsername
	I0806 08:34:14.245637   79614 main.go:141] libmachine: Using SSH client type: native
	I0806 08:34:14.245783   79614 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.235 22 <nil> <nil>}
	I0806 08:34:14.245793   79614 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0806 08:34:14.353301   79614 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722933254.326161993
	
	I0806 08:34:14.353325   79614 fix.go:216] guest clock: 1722933254.326161993
	I0806 08:34:14.353339   79614 fix.go:229] Guest: 2024-08-06 08:34:14.326161993 +0000 UTC Remote: 2024-08-06 08:34:14.2415744 +0000 UTC m=+235.446489438 (delta=84.587593ms)
	I0806 08:34:14.353389   79614 fix.go:200] guest clock delta is within tolerance: 84.587593ms
	I0806 08:34:14.353398   79614 start.go:83] releasing machines lock for "default-k8s-diff-port-990751", held for 19.699973637s
	I0806 08:34:14.353432   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .DriverName
	I0806 08:34:14.353751   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetIP
	I0806 08:34:14.356590   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:14.357031   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:51:bd", ip: ""} in network mk-default-k8s-diff-port-990751: {Iface:virbr3 ExpiryTime:2024-08-06 09:34:06 +0000 UTC Type:0 Mac:52:54:00:fb:51:bd Iaid: IPaddr:192.168.50.235 Prefix:24 Hostname:default-k8s-diff-port-990751 Clientid:01:52:54:00:fb:51:bd}
	I0806 08:34:14.357065   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined IP address 192.168.50.235 and MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:14.357349   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .DriverName
	I0806 08:34:14.357896   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .DriverName
	I0806 08:34:14.358125   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .DriverName
	I0806 08:34:14.358222   79614 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0806 08:34:14.358265   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHHostname
	I0806 08:34:14.358387   79614 ssh_runner.go:195] Run: cat /version.json
	I0806 08:34:14.358415   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHHostname
	I0806 08:34:14.361131   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:14.361444   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:14.361590   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:51:bd", ip: ""} in network mk-default-k8s-diff-port-990751: {Iface:virbr3 ExpiryTime:2024-08-06 09:34:06 +0000 UTC Type:0 Mac:52:54:00:fb:51:bd Iaid: IPaddr:192.168.50.235 Prefix:24 Hostname:default-k8s-diff-port-990751 Clientid:01:52:54:00:fb:51:bd}
	I0806 08:34:14.361629   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined IP address 192.168.50.235 and MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:14.361801   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHPort
	I0806 08:34:14.361905   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:51:bd", ip: ""} in network mk-default-k8s-diff-port-990751: {Iface:virbr3 ExpiryTime:2024-08-06 09:34:06 +0000 UTC Type:0 Mac:52:54:00:fb:51:bd Iaid: IPaddr:192.168.50.235 Prefix:24 Hostname:default-k8s-diff-port-990751 Clientid:01:52:54:00:fb:51:bd}
	I0806 08:34:14.361940   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined IP address 192.168.50.235 and MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:14.362025   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHKeyPath
	I0806 08:34:14.362228   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHUsername
	I0806 08:34:14.362242   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHPort
	I0806 08:34:14.362420   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHKeyPath
	I0806 08:34:14.362424   79614 sshutil.go:53] new ssh client: &{IP:192.168.50.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/default-k8s-diff-port-990751/id_rsa Username:docker}
	I0806 08:34:14.362600   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHUsername
	I0806 08:34:14.362717   79614 sshutil.go:53] new ssh client: &{IP:192.168.50.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/default-k8s-diff-port-990751/id_rsa Username:docker}
	I0806 08:34:14.469232   79614 ssh_runner.go:195] Run: systemctl --version
	I0806 08:34:14.476929   79614 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0806 08:34:14.629487   79614 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0806 08:34:14.637276   79614 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0806 08:34:14.637349   79614 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0806 08:34:14.654612   79614 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0806 08:34:14.654634   79614 start.go:495] detecting cgroup driver to use...
	I0806 08:34:14.654699   79614 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0806 08:34:14.672255   79614 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0806 08:34:14.689135   79614 docker.go:217] disabling cri-docker service (if available) ...
	I0806 08:34:14.689203   79614 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0806 08:34:14.705745   79614 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0806 08:34:14.721267   79614 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0806 08:34:14.841773   79614 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0806 08:34:14.982757   79614 docker.go:233] disabling docker service ...
	I0806 08:34:14.982837   79614 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0806 08:34:14.997637   79614 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0806 08:34:15.011404   79614 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0806 08:34:15.167108   79614 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0806 08:34:15.286749   79614 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0806 08:34:15.301818   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0806 08:34:15.320848   79614 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0806 08:34:15.320920   79614 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:34:15.331331   79614 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0806 08:34:15.331414   79614 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:34:15.342763   79614 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:34:15.353534   79614 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:34:15.365593   79614 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0806 08:34:15.379804   79614 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:34:15.390200   79614 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:34:15.410545   79614 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:34:15.423050   79614 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0806 08:34:15.432810   79614 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0806 08:34:15.432878   79614 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0806 08:34:15.447298   79614 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0806 08:34:15.457992   79614 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 08:34:15.606288   79614 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0806 08:34:15.792322   79614 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0806 08:34:15.792411   79614 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0806 08:34:15.798556   79614 start.go:563] Will wait 60s for crictl version
	I0806 08:34:15.798615   79614 ssh_runner.go:195] Run: which crictl
	I0806 08:34:15.802732   79614 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0806 08:34:15.856880   79614 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0806 08:34:15.856963   79614 ssh_runner.go:195] Run: crio --version
	I0806 08:34:15.890496   79614 ssh_runner.go:195] Run: crio --version
	I0806 08:34:15.924486   79614 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0806 08:34:14.380306   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .Start
	I0806 08:34:14.380486   79927 main.go:141] libmachine: (old-k8s-version-409903) Ensuring networks are active...
	I0806 08:34:14.381155   79927 main.go:141] libmachine: (old-k8s-version-409903) Ensuring network default is active
	I0806 08:34:14.381514   79927 main.go:141] libmachine: (old-k8s-version-409903) Ensuring network mk-old-k8s-version-409903 is active
	I0806 08:34:14.381820   79927 main.go:141] libmachine: (old-k8s-version-409903) Getting domain xml...
	I0806 08:34:14.382679   79927 main.go:141] libmachine: (old-k8s-version-409903) Creating domain...
	I0806 08:34:15.725416   79927 main.go:141] libmachine: (old-k8s-version-409903) Waiting to get IP...
	I0806 08:34:15.726492   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:15.727078   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | unable to find current IP address of domain old-k8s-version-409903 in network mk-old-k8s-version-409903
	I0806 08:34:15.727276   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | I0806 08:34:15.727179   80849 retry.go:31] will retry after 303.235731ms: waiting for machine to come up
	I0806 08:34:16.031996   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:16.032592   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | unable to find current IP address of domain old-k8s-version-409903 in network mk-old-k8s-version-409903
	I0806 08:34:16.032621   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | I0806 08:34:16.032542   80849 retry.go:31] will retry after 355.466914ms: waiting for machine to come up
	I0806 08:34:16.390212   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:16.390736   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | unable to find current IP address of domain old-k8s-version-409903 in network mk-old-k8s-version-409903
	I0806 08:34:16.390768   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | I0806 08:34:16.390674   80849 retry.go:31] will retry after 432.942387ms: waiting for machine to come up
	I0806 08:34:16.825739   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:16.826366   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | unable to find current IP address of domain old-k8s-version-409903 in network mk-old-k8s-version-409903
	I0806 08:34:16.826390   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | I0806 08:34:16.826317   80849 retry.go:31] will retry after 481.17687ms: waiting for machine to come up
	I0806 08:34:17.309005   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:17.309768   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | unable to find current IP address of domain old-k8s-version-409903 in network mk-old-k8s-version-409903
	I0806 08:34:17.309799   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | I0806 08:34:17.309745   80849 retry.go:31] will retry after 631.184181ms: waiting for machine to come up
	I0806 08:34:17.942220   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:17.942742   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | unable to find current IP address of domain old-k8s-version-409903 in network mk-old-k8s-version-409903
	I0806 08:34:17.942766   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | I0806 08:34:17.942701   80849 retry.go:31] will retry after 602.253362ms: waiting for machine to come up
	I0806 08:34:18.546333   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:18.546918   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | unable to find current IP address of domain old-k8s-version-409903 in network mk-old-k8s-version-409903
	I0806 08:34:18.546944   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | I0806 08:34:18.546855   80849 retry.go:31] will retry after 1.061939923s: waiting for machine to come up
	I0806 08:34:15.925709   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetIP
	I0806 08:34:15.929420   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:15.929901   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:51:bd", ip: ""} in network mk-default-k8s-diff-port-990751: {Iface:virbr3 ExpiryTime:2024-08-06 09:34:06 +0000 UTC Type:0 Mac:52:54:00:fb:51:bd Iaid: IPaddr:192.168.50.235 Prefix:24 Hostname:default-k8s-diff-port-990751 Clientid:01:52:54:00:fb:51:bd}
	I0806 08:34:15.929931   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined IP address 192.168.50.235 and MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:15.930159   79614 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0806 08:34:15.934916   79614 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0806 08:34:15.949815   79614 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-990751 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.30.3 ClusterName:default-k8s-diff-port-990751 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.235 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirat
ion:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0806 08:34:15.949971   79614 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0806 08:34:15.950054   79614 ssh_runner.go:195] Run: sudo crictl images --output json
	I0806 08:34:16.004551   79614 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0806 08:34:16.004631   79614 ssh_runner.go:195] Run: which lz4
	I0806 08:34:16.011388   79614 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0806 08:34:16.017475   79614 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0806 08:34:16.017511   79614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0806 08:34:17.564750   79614 crio.go:462] duration metric: took 1.553401521s to copy over tarball
	I0806 08:34:17.564840   79614 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0806 08:34:16.106922   79219 node_ready.go:53] node "embed-certs-324808" has status "Ready":"False"
	I0806 08:34:17.104735   79219 node_ready.go:49] node "embed-certs-324808" has status "Ready":"True"
	I0806 08:34:17.104778   79219 node_ready.go:38] duration metric: took 5.004030652s for node "embed-certs-324808" to be "Ready" ...
	I0806 08:34:17.104790   79219 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0806 08:34:17.113668   79219 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-4xxwm" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:17.120777   79219 pod_ready.go:92] pod "coredns-7db6d8ff4d-4xxwm" in "kube-system" namespace has status "Ready":"True"
	I0806 08:34:17.120801   79219 pod_ready.go:81] duration metric: took 7.097875ms for pod "coredns-7db6d8ff4d-4xxwm" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:17.120811   79219 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-324808" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:17.126581   79219 pod_ready.go:92] pod "etcd-embed-certs-324808" in "kube-system" namespace has status "Ready":"True"
	I0806 08:34:17.126606   79219 pod_ready.go:81] duration metric: took 5.788426ms for pod "etcd-embed-certs-324808" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:17.126619   79219 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-324808" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:18.136739   79219 pod_ready.go:92] pod "kube-apiserver-embed-certs-324808" in "kube-system" namespace has status "Ready":"True"
	I0806 08:34:18.136767   79219 pod_ready.go:81] duration metric: took 1.010139212s for pod "kube-apiserver-embed-certs-324808" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:18.136781   79219 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-324808" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:19.610165   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:19.610567   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | unable to find current IP address of domain old-k8s-version-409903 in network mk-old-k8s-version-409903
	I0806 08:34:19.610598   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | I0806 08:34:19.610542   80849 retry.go:31] will retry after 1.339464672s: waiting for machine to come up
	I0806 08:34:20.951505   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:20.952049   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | unable to find current IP address of domain old-k8s-version-409903 in network mk-old-k8s-version-409903
	I0806 08:34:20.952080   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | I0806 08:34:20.951975   80849 retry.go:31] will retry after 1.622574331s: waiting for machine to come up
	I0806 08:34:22.576150   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:22.576559   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | unable to find current IP address of domain old-k8s-version-409903 in network mk-old-k8s-version-409903
	I0806 08:34:22.576582   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | I0806 08:34:22.576508   80849 retry.go:31] will retry after 1.408827574s: waiting for machine to come up
	I0806 08:34:19.920581   79614 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.355705846s)
	I0806 08:34:19.920609   79614 crio.go:469] duration metric: took 2.355825462s to extract the tarball
	I0806 08:34:19.920619   79614 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0806 08:34:19.959118   79614 ssh_runner.go:195] Run: sudo crictl images --output json
	I0806 08:34:20.002413   79614 crio.go:514] all images are preloaded for cri-o runtime.
	I0806 08:34:20.002449   79614 cache_images.go:84] Images are preloaded, skipping loading
	I0806 08:34:20.002461   79614 kubeadm.go:934] updating node { 192.168.50.235 8444 v1.30.3 crio true true} ...
	I0806 08:34:20.002588   79614 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-990751 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.235
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-990751 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0806 08:34:20.002655   79614 ssh_runner.go:195] Run: crio config
	I0806 08:34:20.058943   79614 cni.go:84] Creating CNI manager for ""
	I0806 08:34:20.058970   79614 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0806 08:34:20.058981   79614 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0806 08:34:20.059010   79614 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.235 APIServerPort:8444 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-990751 NodeName:default-k8s-diff-port-990751 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.235"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.235 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0806 08:34:20.059247   79614 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.235
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-990751"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.235
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.235"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0806 08:34:20.059337   79614 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0806 08:34:20.072766   79614 binaries.go:44] Found k8s binaries, skipping transfer
	I0806 08:34:20.072876   79614 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0806 08:34:20.085730   79614 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0806 08:34:20.105230   79614 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0806 08:34:20.124800   79614 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
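The kubeadm configuration dumped above is generated from the cluster parameters and then copied to /var/tmp/minikube/kubeadm.yaml.new before the init phases run. As an illustrative sketch only (this is not minikube's actual generator; the params struct and its field names are invented for the example, and only a ClusterConfiguration fragment is rendered), such a document could be produced with text/template:

package main

import (
	"os"
	"text/template"
)

// clusterCfg mirrors a fragment of the ClusterConfiguration shown in the log.
const clusterCfg = `apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
controlPlaneEndpoint: control-plane.minikube.internal:{{.APIServerPort}}
kubernetesVersion: {{.KubernetesVersion}}
networking:
  dnsDomain: cluster.local
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceCIDR}}
`

// params is a hypothetical parameter bag for this sketch.
type params struct {
	APIServerPort     int
	KubernetesVersion string
	PodSubnet         string
	ServiceCIDR       string
}

func main() {
	t := template.Must(template.New("cfg").Parse(clusterCfg))
	// Values taken from the configuration visible in the log above.
	_ = t.Execute(os.Stdout, params{
		APIServerPort:     8444,
		KubernetesVersion: "v1.30.3",
		PodSubnet:         "10.244.0.0/16",
		ServiceCIDR:       "10.96.0.0/12",
	})
}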
	I0806 08:34:20.146547   79614 ssh_runner.go:195] Run: grep 192.168.50.235	control-plane.minikube.internal$ /etc/hosts
	I0806 08:34:20.150692   79614 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.235	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0806 08:34:20.164006   79614 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 08:34:20.286820   79614 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0806 08:34:20.304141   79614 certs.go:68] Setting up /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/default-k8s-diff-port-990751 for IP: 192.168.50.235
	I0806 08:34:20.304166   79614 certs.go:194] generating shared ca certs ...
	I0806 08:34:20.304187   79614 certs.go:226] acquiring lock for ca certs: {Name:mkf5c5aed91f4108054d9f2c742cad66c86afbed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 08:34:20.304386   79614 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19370-13057/.minikube/ca.key
	I0806 08:34:20.304448   79614 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.key
	I0806 08:34:20.304464   79614 certs.go:256] generating profile certs ...
	I0806 08:34:20.304567   79614 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/default-k8s-diff-port-990751/client.key
	I0806 08:34:20.304648   79614 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/default-k8s-diff-port-990751/apiserver.key.aa7eccb7
	I0806 08:34:20.304715   79614 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/default-k8s-diff-port-990751/proxy-client.key
	I0806 08:34:20.304872   79614 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/20246.pem (1338 bytes)
	W0806 08:34:20.304919   79614 certs.go:480] ignoring /home/jenkins/minikube-integration/19370-13057/.minikube/certs/20246_empty.pem, impossibly tiny 0 bytes
	I0806 08:34:20.304929   79614 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca-key.pem (1675 bytes)
	I0806 08:34:20.304960   79614 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem (1078 bytes)
	I0806 08:34:20.304989   79614 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/cert.pem (1123 bytes)
	I0806 08:34:20.305019   79614 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/key.pem (1675 bytes)
	I0806 08:34:20.305089   79614 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem (1708 bytes)
	I0806 08:34:20.305821   79614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0806 08:34:20.341201   79614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0806 08:34:20.375406   79614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0806 08:34:20.407409   79614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0806 08:34:20.449555   79614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/default-k8s-diff-port-990751/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0806 08:34:20.479479   79614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/default-k8s-diff-port-990751/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0806 08:34:20.518719   79614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/default-k8s-diff-port-990751/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0806 08:34:20.544212   79614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/default-k8s-diff-port-990751/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0806 08:34:20.569413   79614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0806 08:34:20.594573   79614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/certs/20246.pem --> /usr/share/ca-certificates/20246.pem (1338 bytes)
	I0806 08:34:20.620183   79614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem --> /usr/share/ca-certificates/202462.pem (1708 bytes)
	I0806 08:34:20.648311   79614 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0806 08:34:20.670845   79614 ssh_runner.go:195] Run: openssl version
	I0806 08:34:20.677437   79614 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202462.pem && ln -fs /usr/share/ca-certificates/202462.pem /etc/ssl/certs/202462.pem"
	I0806 08:34:20.688990   79614 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202462.pem
	I0806 08:34:20.694512   79614 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  6 07:17 /usr/share/ca-certificates/202462.pem
	I0806 08:34:20.694580   79614 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202462.pem
	I0806 08:34:20.701362   79614 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202462.pem /etc/ssl/certs/3ec20f2e.0"
	I0806 08:34:20.712966   79614 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0806 08:34:20.724455   79614 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0806 08:34:20.729111   79614 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  6 07:07 /usr/share/ca-certificates/minikubeCA.pem
	I0806 08:34:20.729165   79614 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0806 08:34:20.734973   79614 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0806 08:34:20.746387   79614 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20246.pem && ln -fs /usr/share/ca-certificates/20246.pem /etc/ssl/certs/20246.pem"
	I0806 08:34:20.757763   79614 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20246.pem
	I0806 08:34:20.762396   79614 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  6 07:17 /usr/share/ca-certificates/20246.pem
	I0806 08:34:20.762454   79614 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20246.pem
	I0806 08:34:20.768261   79614 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20246.pem /etc/ssl/certs/51391683.0"
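The certificate setup above follows the standard OpenSSL trust-store convention: each CA file copied into /usr/share/ca-certificates is hashed with `openssl x509 -hash`, and a symlink named `<hash>.0` is created in /etc/ssl/certs so TLS clients can find it. A minimal sketch of that pattern in Go (not minikube's certs.go; it assumes openssl is on PATH and would normally need root to write into /etc/ssl/certs):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCert hashes a CA certificate with `openssl x509 -hash` and links it
// into certsDir as <hash>.0, replacing any stale link first.
func linkCert(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link)
	return os.Symlink(certPath, link)
}

func main() {
	// Paths taken from the log above; adjust as needed.
	if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}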
	I0806 08:34:20.779553   79614 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0806 08:34:20.785854   79614 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0806 08:34:20.794054   79614 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0806 08:34:20.802652   79614 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0806 08:34:20.809929   79614 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0806 08:34:20.816886   79614 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0806 08:34:20.823667   79614 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0806 08:34:20.830355   79614 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-990751 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.30.3 ClusterName:default-k8s-diff-port-990751 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.235 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 08:34:20.830456   79614 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0806 08:34:20.830527   79614 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0806 08:34:20.878368   79614 cri.go:89] found id: ""
	I0806 08:34:20.878455   79614 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0806 08:34:20.889175   79614 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0806 08:34:20.889196   79614 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0806 08:34:20.889250   79614 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0806 08:34:20.899842   79614 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0806 08:34:20.901037   79614 kubeconfig.go:125] found "default-k8s-diff-port-990751" server: "https://192.168.50.235:8444"
	I0806 08:34:20.902781   79614 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0806 08:34:20.914810   79614 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.235
	I0806 08:34:20.914847   79614 kubeadm.go:1160] stopping kube-system containers ...
	I0806 08:34:20.914859   79614 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0806 08:34:20.914905   79614 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0806 08:34:20.956719   79614 cri.go:89] found id: ""
	I0806 08:34:20.956772   79614 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0806 08:34:20.977367   79614 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0806 08:34:20.989491   79614 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0806 08:34:20.989513   79614 kubeadm.go:157] found existing configuration files:
	
	I0806 08:34:20.989565   79614 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0806 08:34:21.000759   79614 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0806 08:34:21.000820   79614 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0806 08:34:21.012404   79614 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0806 08:34:21.023449   79614 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0806 08:34:21.023536   79614 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0806 08:34:21.035354   79614 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0806 08:34:21.045285   79614 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0806 08:34:21.045347   79614 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0806 08:34:21.057182   79614 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0806 08:34:21.067431   79614 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0806 08:34:21.067497   79614 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
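(Annotation: the grep/rm pairs above check each existing kubeconfig for the expected endpoint "https://control-plane.minikube.internal:8444" and delete any file that does not reference it. The sketch below is a local illustration of that pattern, not minikube's ssh_runner-based implementation.)

    // Sketch of the stale-kubeconfig pass: remove any config that does not
    // reference the expected endpoint before kubeadm regenerates it.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        endpoint := "https://control-plane.minikube.internal:8444"
        files := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, f := range files {
            // grep exits non-zero when the endpoint (or the file itself) is missing.
            if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
                fmt.Printf("%s does not reference %s, removing\n", f, endpoint)
                _ = exec.Command("sudo", "rm", "-f", f).Run()
            }
        }
    }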
	I0806 08:34:21.077955   79614 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0806 08:34:21.088824   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0806 08:34:21.209627   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0806 08:34:21.772624   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0806 08:34:22.022615   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0806 08:34:22.107829   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
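(Annotation: the five commands above run individual kubeadm init phases in order - certs, kubeconfig, kubelet-start, control-plane, etcd - against the generated /var/tmp/minikube/kubeadm.yaml. The sketch below mirrors that sequence for illustration; the PATH prefix and config path are taken from the log, everything else is an assumption.)

    // Sketch: run the kubeadm init phases in the order shown in the log.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        phases := [][]string{
            {"certs", "all"},
            {"kubeconfig", "all"},
            {"kubelet-start"},
            {"control-plane", "all"},
            {"etcd", "local"},
        }
        for _, p := range phases {
            args := []string{"env", "PATH=/var/lib/minikube/binaries/v1.30.3:" + os.Getenv("PATH"),
                "kubeadm", "init", "phase"}
            args = append(args, p...)
            args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
            if err := exec.Command("sudo", args...).Run(); err != nil {
                fmt.Println("phase failed:", p, err)
                return
            }
        }
    }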
	I0806 08:34:22.235446   79614 api_server.go:52] waiting for apiserver process to appear ...
	I0806 08:34:22.235552   79614 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:22.735843   79614 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:23.236385   79614 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:23.314994   79614 api_server.go:72] duration metric: took 1.079547638s to wait for apiserver process to appear ...
	I0806 08:34:23.315025   79614 api_server.go:88] waiting for apiserver healthz status ...
	I0806 08:34:23.315071   79614 api_server.go:253] Checking apiserver healthz at https://192.168.50.235:8444/healthz ...
	I0806 08:34:23.315581   79614 api_server.go:269] stopped: https://192.168.50.235:8444/healthz: Get "https://192.168.50.235:8444/healthz": dial tcp 192.168.50.235:8444: connect: connection refused
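(Annotation: from here the log polls https://192.168.50.235:8444/healthz roughly every 500ms, tolerating connection-refused, 403 and 500 responses until the endpoint returns 200. A minimal sketch of that wait loop, using only the standard library and skipping TLS verification purely for brevity - this is not minikube's actual api_server client:)

    // Minimal healthz polling sketch.
    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
            Timeout:   2 * time.Second,
        }
        url := "https://192.168.50.235:8444/healthz"
        for {
            resp, err := client.Get(url)
            if err == nil {
                code := resp.StatusCode
                resp.Body.Close()
                if code == http.StatusOK {
                    fmt.Println("apiserver healthy")
                    return
                }
                fmt.Println("healthz returned", code, "- retrying")
            } else {
                fmt.Println("healthz unreachable:", err)
            }
            time.Sleep(500 * time.Millisecond)
        }
    }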
	I0806 08:34:23.815428   79614 api_server.go:253] Checking apiserver healthz at https://192.168.50.235:8444/healthz ...
	I0806 08:34:20.145118   79219 pod_ready.go:102] pod "kube-controller-manager-embed-certs-324808" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:23.051556   79219 pod_ready.go:102] pod "kube-controller-manager-embed-certs-324808" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:23.077556   79219 pod_ready.go:92] pod "kube-controller-manager-embed-certs-324808" in "kube-system" namespace has status "Ready":"True"
	I0806 08:34:23.077581   79219 pod_ready.go:81] duration metric: took 4.94079164s for pod "kube-controller-manager-embed-certs-324808" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:23.077594   79219 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8lbzv" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:23.091745   79219 pod_ready.go:92] pod "kube-proxy-8lbzv" in "kube-system" namespace has status "Ready":"True"
	I0806 08:34:23.091773   79219 pod_ready.go:81] duration metric: took 14.170363ms for pod "kube-proxy-8lbzv" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:23.091789   79219 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-324808" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:23.105567   79219 pod_ready.go:92] pod "kube-scheduler-embed-certs-324808" in "kube-system" namespace has status "Ready":"True"
	I0806 08:34:23.105597   79219 pod_ready.go:81] duration metric: took 13.798762ms for pod "kube-scheduler-embed-certs-324808" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:23.105612   79219 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:23.987256   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:23.987658   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | unable to find current IP address of domain old-k8s-version-409903 in network mk-old-k8s-version-409903
	I0806 08:34:23.987683   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | I0806 08:34:23.987617   80849 retry.go:31] will retry after 2.000071561s: waiting for machine to come up
	I0806 08:34:25.989501   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:25.989880   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | unable to find current IP address of domain old-k8s-version-409903 in network mk-old-k8s-version-409903
	I0806 08:34:25.989906   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | I0806 08:34:25.989838   80849 retry.go:31] will retry after 2.508269501s: waiting for machine to come up
	I0806 08:34:28.501449   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:28.501842   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | unable to find current IP address of domain old-k8s-version-409903 in network mk-old-k8s-version-409903
	I0806 08:34:28.501874   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | I0806 08:34:28.501802   80849 retry.go:31] will retry after 2.7583106s: waiting for machine to come up
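(Annotation: the libmachine lines above wait for the old-k8s-version VM to obtain a DHCP lease, retrying with a growing, slightly jittered delay ("will retry after 2.0s ... 2.5s ... 2.75s"). The sketch below illustrates that retry-with-backoff pattern; the exact backoff policy is an assumption, not minikube's retry.go implementation.)

    // Illustrative retry-with-backoff wait for a machine IP.
    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    func waitForIP(lookup func() (string, error), attempts int) (string, error) {
        delay := time.Second
        for i := 0; i < attempts; i++ {
            if ip, err := lookup(); err == nil {
                return ip, nil
            }
            jittered := delay + time.Duration(rand.Int63n(int64(delay/4)))
            fmt.Printf("will retry after %s: waiting for machine to come up\n", jittered)
            time.Sleep(jittered)
            delay += delay / 2 // grow the delay between attempts
        }
        return "", errors.New("machine never reported an IP address")
    }

    func main() {
        _, err := waitForIP(func() (string, error) { return "", errors.New("no lease yet") }, 3)
        fmt.Println(err)
    }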
	I0806 08:34:26.696273   79614 api_server.go:279] https://192.168.50.235:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0806 08:34:26.696325   79614 api_server.go:103] status: https://192.168.50.235:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0806 08:34:26.696342   79614 api_server.go:253] Checking apiserver healthz at https://192.168.50.235:8444/healthz ...
	I0806 08:34:26.704047   79614 api_server.go:279] https://192.168.50.235:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0806 08:34:26.704078   79614 api_server.go:103] status: https://192.168.50.235:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0806 08:34:26.815341   79614 api_server.go:253] Checking apiserver healthz at https://192.168.50.235:8444/healthz ...
	I0806 08:34:26.819686   79614 api_server.go:279] https://192.168.50.235:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0806 08:34:26.819718   79614 api_server.go:103] status: https://192.168.50.235:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
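(Annotation: the verbose /healthz body above marks each check with "[+]" or "[-]"; in this run the rbac and priority-class bootstrap hooks are still failing, which is why the endpoint keeps returning 500. A caller that wants to know which hooks are failing can simply scan for the "[-]" prefix - a small illustration, not part of minikube:)

    // Extract the failing checks from a verbose healthz body.
    package main

    import (
        "bufio"
        "fmt"
        "strings"
    )

    func failingChecks(body string) []string {
        var failed []string
        sc := bufio.NewScanner(strings.NewReader(body))
        for sc.Scan() {
            line := strings.TrimSpace(sc.Text())
            if strings.HasPrefix(line, "[-]") {
                failed = append(failed, strings.TrimPrefix(line, "[-]"))
            }
        }
        return failed
    }

    func main() {
        body := "[+]ping ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n"
        fmt.Println(failingChecks(body)) // [poststarthook/rbac/bootstrap-roles failed: reason withheld]
    }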
	I0806 08:34:27.315242   79614 api_server.go:253] Checking apiserver healthz at https://192.168.50.235:8444/healthz ...
	I0806 08:34:27.320753   79614 api_server.go:279] https://192.168.50.235:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0806 08:34:27.320789   79614 api_server.go:103] status: https://192.168.50.235:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0806 08:34:27.815164   79614 api_server.go:253] Checking apiserver healthz at https://192.168.50.235:8444/healthz ...
	I0806 08:34:27.821943   79614 api_server.go:279] https://192.168.50.235:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0806 08:34:27.821967   79614 api_server.go:103] status: https://192.168.50.235:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0806 08:34:28.315169   79614 api_server.go:253] Checking apiserver healthz at https://192.168.50.235:8444/healthz ...
	I0806 08:34:28.321921   79614 api_server.go:279] https://192.168.50.235:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0806 08:34:28.321945   79614 api_server.go:103] status: https://192.168.50.235:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0806 08:34:28.815717   79614 api_server.go:253] Checking apiserver healthz at https://192.168.50.235:8444/healthz ...
	I0806 08:34:28.819946   79614 api_server.go:279] https://192.168.50.235:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0806 08:34:28.819973   79614 api_server.go:103] status: https://192.168.50.235:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0806 08:34:25.112333   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:27.112488   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:29.315643   79614 api_server.go:253] Checking apiserver healthz at https://192.168.50.235:8444/healthz ...
	I0806 08:34:29.319784   79614 api_server.go:279] https://192.168.50.235:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0806 08:34:29.319813   79614 api_server.go:103] status: https://192.168.50.235:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0806 08:34:29.815945   79614 api_server.go:253] Checking apiserver healthz at https://192.168.50.235:8444/healthz ...
	I0806 08:34:29.821441   79614 api_server.go:279] https://192.168.50.235:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0806 08:34:29.821476   79614 api_server.go:103] status: https://192.168.50.235:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0806 08:34:30.315997   79614 api_server.go:253] Checking apiserver healthz at https://192.168.50.235:8444/healthz ...
	I0806 08:34:30.321179   79614 api_server.go:279] https://192.168.50.235:8444/healthz returned 200:
	ok
	I0806 08:34:30.327083   79614 api_server.go:141] control plane version: v1.30.3
	I0806 08:34:30.327113   79614 api_server.go:131] duration metric: took 7.012080775s to wait for apiserver health ...
	I0806 08:34:30.327122   79614 cni.go:84] Creating CNI manager for ""
	I0806 08:34:30.327128   79614 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0806 08:34:30.328860   79614 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0806 08:34:30.330232   79614 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0806 08:34:30.341069   79614 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
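(Annotation: the line above only records that a 496-byte conflist was copied to /etc/cni/net.d/1-k8s.conflist for the bridge CNI; the file's contents are not shown in the log. The sketch below writes a typical bridge-plus-portmap CNI configuration of that general shape - treat the field values and subnet as illustrative assumptions, not minikube's actual file.)

    // Write an illustrative bridge CNI conflist.
    package main

    import (
        "fmt"
        "os"
    )

    const bridgeConflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }`

    func main() {
        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
            fmt.Println("writing CNI config failed:", err)
        }
    }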
	I0806 08:34:30.359470   79614 system_pods.go:43] waiting for kube-system pods to appear ...
	I0806 08:34:30.369165   79614 system_pods.go:59] 8 kube-system pods found
	I0806 08:34:30.369209   79614 system_pods.go:61] "coredns-7db6d8ff4d-gx8tt" [48f83ba8-a238-4baf-94af-88370fefcb02] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0806 08:34:30.369227   79614 system_pods.go:61] "etcd-default-k8s-diff-port-990751" [bf342ba4-1e11-40bf-80fd-02b402043ecf] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0806 08:34:30.369246   79614 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-990751" [11ee42c4-c9e9-41cf-ace2-deac0f30153a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0806 08:34:30.369256   79614 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-990751" [f17fca30-0afc-4ed8-a8cc-cb64e69723f3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0806 08:34:30.369265   79614 system_pods.go:61] "kube-proxy-lpf88" [866c10f0-2c44-4c03-b460-ff2194bd1ec6] Running
	I0806 08:34:30.369272   79614 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-990751" [2e591ea7-07bd-464f-bf0e-bc24bfcca016] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0806 08:34:30.369282   79614 system_pods.go:61] "metrics-server-569cc877fc-hnxd4" [f369ac70-49b7-448a-9852-89232fb66da9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0806 08:34:30.369290   79614 system_pods.go:61] "storage-provisioner" [0faff3fc-d34b-418e-972b-95a2e36b577a] Running
	I0806 08:34:30.369299   79614 system_pods.go:74] duration metric: took 9.80217ms to wait for pod list to return data ...
	I0806 08:34:30.369311   79614 node_conditions.go:102] verifying NodePressure condition ...
	I0806 08:34:30.372222   79614 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0806 08:34:30.372250   79614 node_conditions.go:123] node cpu capacity is 2
	I0806 08:34:30.372261   79614 node_conditions.go:105] duration metric: took 2.944552ms to run NodePressure ...
	I0806 08:34:30.372279   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0806 08:34:30.639677   79614 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0806 08:34:30.643923   79614 kubeadm.go:739] kubelet initialised
	I0806 08:34:30.643952   79614 kubeadm.go:740] duration metric: took 4.248408ms waiting for restarted kubelet to initialise ...
	I0806 08:34:30.643962   79614 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0806 08:34:30.648766   79614 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-gx8tt" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:30.653672   79614 pod_ready.go:97] node "default-k8s-diff-port-990751" hosting pod "coredns-7db6d8ff4d-gx8tt" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-990751" has status "Ready":"False"
	I0806 08:34:30.653703   79614 pod_ready.go:81] duration metric: took 4.90961ms for pod "coredns-7db6d8ff4d-gx8tt" in "kube-system" namespace to be "Ready" ...
	E0806 08:34:30.653714   79614 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-990751" hosting pod "coredns-7db6d8ff4d-gx8tt" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-990751" has status "Ready":"False"
	I0806 08:34:30.653723   79614 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-990751" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:30.658080   79614 pod_ready.go:97] node "default-k8s-diff-port-990751" hosting pod "etcd-default-k8s-diff-port-990751" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-990751" has status "Ready":"False"
	I0806 08:34:30.658106   79614 pod_ready.go:81] duration metric: took 4.370083ms for pod "etcd-default-k8s-diff-port-990751" in "kube-system" namespace to be "Ready" ...
	E0806 08:34:30.658119   79614 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-990751" hosting pod "etcd-default-k8s-diff-port-990751" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-990751" has status "Ready":"False"
	I0806 08:34:30.658125   79614 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-990751" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:30.662246   79614 pod_ready.go:97] node "default-k8s-diff-port-990751" hosting pod "kube-apiserver-default-k8s-diff-port-990751" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-990751" has status "Ready":"False"
	I0806 08:34:30.662271   79614 pod_ready.go:81] duration metric: took 4.139553ms for pod "kube-apiserver-default-k8s-diff-port-990751" in "kube-system" namespace to be "Ready" ...
	E0806 08:34:30.662280   79614 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-990751" hosting pod "kube-apiserver-default-k8s-diff-port-990751" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-990751" has status "Ready":"False"
	I0806 08:34:30.662286   79614 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-990751" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:30.763246   79614 pod_ready.go:97] node "default-k8s-diff-port-990751" hosting pod "kube-controller-manager-default-k8s-diff-port-990751" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-990751" has status "Ready":"False"
	I0806 08:34:30.763286   79614 pod_ready.go:81] duration metric: took 100.991212ms for pod "kube-controller-manager-default-k8s-diff-port-990751" in "kube-system" namespace to be "Ready" ...
	E0806 08:34:30.763302   79614 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-990751" hosting pod "kube-controller-manager-default-k8s-diff-port-990751" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-990751" has status "Ready":"False"
	I0806 08:34:30.763311   79614 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-lpf88" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:31.162972   79614 pod_ready.go:97] node "default-k8s-diff-port-990751" hosting pod "kube-proxy-lpf88" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-990751" has status "Ready":"False"
	I0806 08:34:31.162998   79614 pod_ready.go:81] duration metric: took 399.678401ms for pod "kube-proxy-lpf88" in "kube-system" namespace to be "Ready" ...
	E0806 08:34:31.163006   79614 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-990751" hosting pod "kube-proxy-lpf88" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-990751" has status "Ready":"False"
	I0806 08:34:31.163012   79614 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-990751" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:31.562538   79614 pod_ready.go:97] node "default-k8s-diff-port-990751" hosting pod "kube-scheduler-default-k8s-diff-port-990751" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-990751" has status "Ready":"False"
	I0806 08:34:31.562564   79614 pod_ready.go:81] duration metric: took 399.544055ms for pod "kube-scheduler-default-k8s-diff-port-990751" in "kube-system" namespace to be "Ready" ...
	E0806 08:34:31.562575   79614 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-990751" hosting pod "kube-scheduler-default-k8s-diff-port-990751" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-990751" has status "Ready":"False"
	I0806 08:34:31.562582   79614 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:31.963410   79614 pod_ready.go:97] node "default-k8s-diff-port-990751" hosting pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-990751" has status "Ready":"False"
	I0806 08:34:31.963438   79614 pod_ready.go:81] duration metric: took 400.849931ms for pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace to be "Ready" ...
	E0806 08:34:31.963449   79614 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-990751" hosting pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-990751" has status "Ready":"False"
	I0806 08:34:31.963456   79614 pod_ready.go:38] duration metric: took 1.319483278s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
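(Annotation: the pod_ready lines above poll each system-critical pod until its Ready condition is True, skipping pods whose node is itself not Ready. A minimal sketch of that readiness check written against client-go - an assumption about the mechanism, minikube's pod_ready helper is not reproduced here; the kubeconfig path and pod name are taken from the log.)

    // Check a pod's Ready condition with client-go.
    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func podReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19370-13057/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pod, err := cs.CoreV1().Pods("kube-system").Get(context.Background(), "coredns-7db6d8ff4d-gx8tt", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Println("ready:", podReady(pod))
    }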
	I0806 08:34:31.963472   79614 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0806 08:34:31.975982   79614 ops.go:34] apiserver oom_adj: -16
	I0806 08:34:31.976006   79614 kubeadm.go:597] duration metric: took 11.086801084s to restartPrimaryControlPlane
	I0806 08:34:31.976016   79614 kubeadm.go:394] duration metric: took 11.145668347s to StartCluster
	I0806 08:34:31.976035   79614 settings.go:142] acquiring lock: {Name:mkb0bfbcae3b4205ef43487a9de84cfd8afd2e5e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 08:34:31.976131   79614 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19370-13057/kubeconfig
	I0806 08:34:31.977626   79614 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-13057/kubeconfig: {Name:mk5c066914acf8e36556b29792f75b98076ca34a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 08:34:31.977871   79614 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.235 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0806 08:34:31.977927   79614 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0806 08:34:31.978004   79614 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-990751"
	I0806 08:34:31.978058   79614 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-990751"
	W0806 08:34:31.978070   79614 addons.go:243] addon storage-provisioner should already be in state true
	I0806 08:34:31.978060   79614 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-990751"
	I0806 08:34:31.978092   79614 config.go:182] Loaded profile config "default-k8s-diff-port-990751": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0806 08:34:31.978076   79614 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-990751"
	I0806 08:34:31.978124   79614 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-990751"
	I0806 08:34:31.978154   79614 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-990751"
	I0806 08:34:31.978111   79614 host.go:66] Checking if "default-k8s-diff-port-990751" exists ...
	W0806 08:34:31.978172   79614 addons.go:243] addon metrics-server should already be in state true
	I0806 08:34:31.978250   79614 host.go:66] Checking if "default-k8s-diff-port-990751" exists ...
	I0806 08:34:31.978595   79614 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:34:31.978651   79614 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:34:31.978602   79614 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:34:31.978691   79614 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:34:31.978650   79614 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:34:31.978806   79614 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:34:31.979749   79614 out.go:177] * Verifying Kubernetes components...
	I0806 08:34:31.981485   79614 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 08:34:31.994362   79614 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36289
	I0806 08:34:31.995043   79614 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:34:31.995594   79614 main.go:141] libmachine: Using API Version  1
	I0806 08:34:31.995610   79614 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:34:31.995940   79614 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:34:31.996152   79614 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36601
	I0806 08:34:31.996485   79614 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:34:31.996569   79614 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:34:31.996613   79614 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:34:31.996785   79614 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42273
	I0806 08:34:31.997043   79614 main.go:141] libmachine: Using API Version  1
	I0806 08:34:31.997062   79614 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:34:31.997335   79614 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:34:31.997424   79614 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:34:31.997601   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetState
	I0806 08:34:31.997759   79614 main.go:141] libmachine: Using API Version  1
	I0806 08:34:31.997775   79614 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:34:31.998047   79614 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:34:31.998511   79614 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:34:31.998546   79614 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:34:32.001381   79614 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-990751"
	W0806 08:34:32.001397   79614 addons.go:243] addon default-storageclass should already be in state true
	I0806 08:34:32.001427   79614 host.go:66] Checking if "default-k8s-diff-port-990751" exists ...
	I0806 08:34:32.001690   79614 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:34:32.001721   79614 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:34:32.015095   79614 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41633
	I0806 08:34:32.015597   79614 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:34:32.016197   79614 main.go:141] libmachine: Using API Version  1
	I0806 08:34:32.016220   79614 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:34:32.016245   79614 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33731
	I0806 08:34:32.016951   79614 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:34:32.016953   79614 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:34:32.017491   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetState
	I0806 08:34:32.017572   79614 main.go:141] libmachine: Using API Version  1
	I0806 08:34:32.017601   79614 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:34:32.017951   79614 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:34:32.018222   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetState
	I0806 08:34:32.020339   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .DriverName
	I0806 08:34:32.020347   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .DriverName
	I0806 08:34:32.021417   79614 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40949
	I0806 08:34:32.021814   79614 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:34:32.022232   79614 main.go:141] libmachine: Using API Version  1
	I0806 08:34:32.022254   79614 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:34:32.022609   79614 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:34:32.022690   79614 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0806 08:34:32.022690   79614 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0806 08:34:32.023365   79614 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:34:32.023410   79614 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:34:32.024304   79614 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0806 08:34:32.024326   79614 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0806 08:34:32.024343   79614 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0806 08:34:32.024358   79614 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0806 08:34:32.024387   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHHostname
	I0806 08:34:32.024346   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHHostname
	I0806 08:34:32.029046   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:32.029292   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:32.029490   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:51:bd", ip: ""} in network mk-default-k8s-diff-port-990751: {Iface:virbr3 ExpiryTime:2024-08-06 09:34:06 +0000 UTC Type:0 Mac:52:54:00:fb:51:bd Iaid: IPaddr:192.168.50.235 Prefix:24 Hostname:default-k8s-diff-port-990751 Clientid:01:52:54:00:fb:51:bd}
	I0806 08:34:32.029516   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined IP address 192.168.50.235 and MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:32.029672   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:51:bd", ip: ""} in network mk-default-k8s-diff-port-990751: {Iface:virbr3 ExpiryTime:2024-08-06 09:34:06 +0000 UTC Type:0 Mac:52:54:00:fb:51:bd Iaid: IPaddr:192.168.50.235 Prefix:24 Hostname:default-k8s-diff-port-990751 Clientid:01:52:54:00:fb:51:bd}
	I0806 08:34:32.029676   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHPort
	I0806 08:34:32.029694   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined IP address 192.168.50.235 and MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:32.029840   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHPort
	I0806 08:34:32.029866   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHKeyPath
	I0806 08:34:32.030000   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHKeyPath
	I0806 08:34:32.030122   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHUsername
	I0806 08:34:32.030124   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHUsername
	I0806 08:34:32.030265   79614 sshutil.go:53] new ssh client: &{IP:192.168.50.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/default-k8s-diff-port-990751/id_rsa Username:docker}
	I0806 08:34:32.030265   79614 sshutil.go:53] new ssh client: &{IP:192.168.50.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/default-k8s-diff-port-990751/id_rsa Username:docker}
	I0806 08:34:32.043523   79614 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40021
	I0806 08:34:32.043915   79614 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:34:32.044560   79614 main.go:141] libmachine: Using API Version  1
	I0806 08:34:32.044583   79614 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:34:32.044969   79614 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:34:32.045214   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetState
	I0806 08:34:32.046980   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .DriverName
	I0806 08:34:32.047261   79614 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0806 08:34:32.047277   79614 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0806 08:34:32.047296   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHHostname
	I0806 08:34:32.050654   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:32.050991   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:51:bd", ip: ""} in network mk-default-k8s-diff-port-990751: {Iface:virbr3 ExpiryTime:2024-08-06 09:34:06 +0000 UTC Type:0 Mac:52:54:00:fb:51:bd Iaid: IPaddr:192.168.50.235 Prefix:24 Hostname:default-k8s-diff-port-990751 Clientid:01:52:54:00:fb:51:bd}
	I0806 08:34:32.051020   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined IP address 192.168.50.235 and MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:32.051152   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHPort
	I0806 08:34:32.051329   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHKeyPath
	I0806 08:34:32.051495   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHUsername
	I0806 08:34:32.051613   79614 sshutil.go:53] new ssh client: &{IP:192.168.50.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/default-k8s-diff-port-990751/id_rsa Username:docker}
	I0806 08:34:32.192056   79614 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0806 08:34:32.231428   79614 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-990751" to be "Ready" ...
	I0806 08:34:32.278049   79614 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0806 08:34:32.378290   79614 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0806 08:34:32.378317   79614 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0806 08:34:32.399728   79614 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0806 08:34:32.419412   79614 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0806 08:34:32.419441   79614 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0806 08:34:32.470691   79614 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0806 08:34:32.470726   79614 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0806 08:34:32.514191   79614 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0806 08:34:32.590737   79614 main.go:141] libmachine: Making call to close driver server
	I0806 08:34:32.590772   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .Close
	I0806 08:34:32.591110   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | Closing plugin on server side
	I0806 08:34:32.591177   79614 main.go:141] libmachine: Successfully made call to close driver server
	I0806 08:34:32.591190   79614 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 08:34:32.591202   79614 main.go:141] libmachine: Making call to close driver server
	I0806 08:34:32.591212   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .Close
	I0806 08:34:32.591465   79614 main.go:141] libmachine: Successfully made call to close driver server
	I0806 08:34:32.591492   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | Closing plugin on server side
	I0806 08:34:32.591505   79614 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 08:34:32.597276   79614 main.go:141] libmachine: Making call to close driver server
	I0806 08:34:32.597297   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .Close
	I0806 08:34:32.597547   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | Closing plugin on server side
	I0806 08:34:32.597564   79614 main.go:141] libmachine: Successfully made call to close driver server
	I0806 08:34:32.597579   79614 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 08:34:33.401000   79614 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.001229332s)
	I0806 08:34:33.401045   79614 main.go:141] libmachine: Making call to close driver server
	I0806 08:34:33.401062   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .Close
	I0806 08:34:33.401346   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | Closing plugin on server side
	I0806 08:34:33.401402   79614 main.go:141] libmachine: Successfully made call to close driver server
	I0806 08:34:33.401426   79614 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 08:34:33.401443   79614 main.go:141] libmachine: Making call to close driver server
	I0806 08:34:33.401452   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .Close
	I0806 08:34:33.401692   79614 main.go:141] libmachine: Successfully made call to close driver server
	I0806 08:34:33.401720   79614 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 08:34:33.455633   79614 main.go:141] libmachine: Making call to close driver server
	I0806 08:34:33.455663   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .Close
	I0806 08:34:33.455947   79614 main.go:141] libmachine: Successfully made call to close driver server
	I0806 08:34:33.455964   79614 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 08:34:33.455964   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | Closing plugin on server side
	I0806 08:34:33.455972   79614 main.go:141] libmachine: Making call to close driver server
	I0806 08:34:33.455980   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .Close
	I0806 08:34:33.456288   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | Closing plugin on server side
	I0806 08:34:33.456286   79614 main.go:141] libmachine: Successfully made call to close driver server
	I0806 08:34:33.456328   79614 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 08:34:33.456347   79614 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-990751"
	I0806 08:34:33.459411   79614 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0806 08:34:31.262960   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:31.263282   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | unable to find current IP address of domain old-k8s-version-409903 in network mk-old-k8s-version-409903
	I0806 08:34:31.263304   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | I0806 08:34:31.263238   80849 retry.go:31] will retry after 4.76596973s: waiting for machine to come up
	I0806 08:34:33.460780   79614 addons.go:510] duration metric: took 1.482853001s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0806 08:34:29.611714   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:31.612385   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:34.111936   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:37.353538   78922 start.go:364] duration metric: took 58.142260904s to acquireMachinesLock for "no-preload-703340"
	I0806 08:34:37.353587   78922 start.go:96] Skipping create...Using existing machine configuration
	I0806 08:34:37.353598   78922 fix.go:54] fixHost starting: 
	I0806 08:34:37.354016   78922 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:34:37.354050   78922 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:34:37.373838   78922 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37209
	I0806 08:34:37.374283   78922 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:34:37.374874   78922 main.go:141] libmachine: Using API Version  1
	I0806 08:34:37.374910   78922 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:34:37.375250   78922 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:34:37.375448   78922 main.go:141] libmachine: (no-preload-703340) Calling .DriverName
	I0806 08:34:37.375584   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetState
	I0806 08:34:37.377251   78922 fix.go:112] recreateIfNeeded on no-preload-703340: state=Stopped err=<nil>
	I0806 08:34:37.377275   78922 main.go:141] libmachine: (no-preload-703340) Calling .DriverName
	W0806 08:34:37.377421   78922 fix.go:138] unexpected machine state, will restart: <nil>
	I0806 08:34:37.379343   78922 out.go:177] * Restarting existing kvm2 VM for "no-preload-703340" ...
	I0806 08:34:36.030512   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:36.030950   79927 main.go:141] libmachine: (old-k8s-version-409903) Found IP for machine: 192.168.61.158
	I0806 08:34:36.030972   79927 main.go:141] libmachine: (old-k8s-version-409903) Reserving static IP address...
	I0806 08:34:36.030989   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has current primary IP address 192.168.61.158 and MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:36.031349   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | found host DHCP lease matching {name: "old-k8s-version-409903", mac: "52:54:00:12:30:88", ip: "192.168.61.158"} in network mk-old-k8s-version-409903: {Iface:virbr1 ExpiryTime:2024-08-06 09:34:26 +0000 UTC Type:0 Mac:52:54:00:12:30:88 Iaid: IPaddr:192.168.61.158 Prefix:24 Hostname:old-k8s-version-409903 Clientid:01:52:54:00:12:30:88}
	I0806 08:34:36.031376   79927 main.go:141] libmachine: (old-k8s-version-409903) Reserved static IP address: 192.168.61.158
	I0806 08:34:36.031398   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | skip adding static IP to network mk-old-k8s-version-409903 - found existing host DHCP lease matching {name: "old-k8s-version-409903", mac: "52:54:00:12:30:88", ip: "192.168.61.158"}
	I0806 08:34:36.031416   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | Getting to WaitForSSH function...
	I0806 08:34:36.031428   79927 main.go:141] libmachine: (old-k8s-version-409903) Waiting for SSH to be available...
	I0806 08:34:36.033684   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:36.034094   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:30:88", ip: ""} in network mk-old-k8s-version-409903: {Iface:virbr1 ExpiryTime:2024-08-06 09:34:26 +0000 UTC Type:0 Mac:52:54:00:12:30:88 Iaid: IPaddr:192.168.61.158 Prefix:24 Hostname:old-k8s-version-409903 Clientid:01:52:54:00:12:30:88}
	I0806 08:34:36.034124   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined IP address 192.168.61.158 and MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:36.034327   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | Using SSH client type: external
	I0806 08:34:36.034366   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | Using SSH private key: /home/jenkins/minikube-integration/19370-13057/.minikube/machines/old-k8s-version-409903/id_rsa (-rw-------)
	I0806 08:34:36.034395   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.158 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19370-13057/.minikube/machines/old-k8s-version-409903/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0806 08:34:36.034407   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | About to run SSH command:
	I0806 08:34:36.034415   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | exit 0
	I0806 08:34:36.164605   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | SSH cmd err, output: <nil>: 
	I0806 08:34:36.164952   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetConfigRaw
	I0806 08:34:36.165509   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetIP
	I0806 08:34:36.168135   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:36.168471   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:30:88", ip: ""} in network mk-old-k8s-version-409903: {Iface:virbr1 ExpiryTime:2024-08-06 09:34:26 +0000 UTC Type:0 Mac:52:54:00:12:30:88 Iaid: IPaddr:192.168.61.158 Prefix:24 Hostname:old-k8s-version-409903 Clientid:01:52:54:00:12:30:88}
	I0806 08:34:36.168499   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined IP address 192.168.61.158 and MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:36.168756   79927 profile.go:143] Saving config to /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/old-k8s-version-409903/config.json ...
	I0806 08:34:36.168956   79927 machine.go:94] provisionDockerMachine start ...
	I0806 08:34:36.168974   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .DriverName
	I0806 08:34:36.169186   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHHostname
	I0806 08:34:36.171367   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:36.171702   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:30:88", ip: ""} in network mk-old-k8s-version-409903: {Iface:virbr1 ExpiryTime:2024-08-06 09:34:26 +0000 UTC Type:0 Mac:52:54:00:12:30:88 Iaid: IPaddr:192.168.61.158 Prefix:24 Hostname:old-k8s-version-409903 Clientid:01:52:54:00:12:30:88}
	I0806 08:34:36.171723   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined IP address 192.168.61.158 and MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:36.171902   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHPort
	I0806 08:34:36.172080   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHKeyPath
	I0806 08:34:36.172240   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHKeyPath
	I0806 08:34:36.172400   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHUsername
	I0806 08:34:36.172569   79927 main.go:141] libmachine: Using SSH client type: native
	I0806 08:34:36.172798   79927 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.158 22 <nil> <nil>}
	I0806 08:34:36.172812   79927 main.go:141] libmachine: About to run SSH command:
	hostname
	I0806 08:34:36.288738   79927 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0806 08:34:36.288770   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetMachineName
	I0806 08:34:36.289073   79927 buildroot.go:166] provisioning hostname "old-k8s-version-409903"
	I0806 08:34:36.289088   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetMachineName
	I0806 08:34:36.289272   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHHostname
	I0806 08:34:36.291958   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:36.292326   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:30:88", ip: ""} in network mk-old-k8s-version-409903: {Iface:virbr1 ExpiryTime:2024-08-06 09:34:26 +0000 UTC Type:0 Mac:52:54:00:12:30:88 Iaid: IPaddr:192.168.61.158 Prefix:24 Hostname:old-k8s-version-409903 Clientid:01:52:54:00:12:30:88}
	I0806 08:34:36.292354   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined IP address 192.168.61.158 and MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:36.292462   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHPort
	I0806 08:34:36.292662   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHKeyPath
	I0806 08:34:36.292807   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHKeyPath
	I0806 08:34:36.292953   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHUsername
	I0806 08:34:36.293122   79927 main.go:141] libmachine: Using SSH client type: native
	I0806 08:34:36.293287   79927 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.158 22 <nil> <nil>}
	I0806 08:34:36.293300   79927 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-409903 && echo "old-k8s-version-409903" | sudo tee /etc/hostname
	I0806 08:34:36.427591   79927 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-409903
	
	I0806 08:34:36.427620   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHHostname
	I0806 08:34:36.430395   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:36.430724   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:30:88", ip: ""} in network mk-old-k8s-version-409903: {Iface:virbr1 ExpiryTime:2024-08-06 09:34:26 +0000 UTC Type:0 Mac:52:54:00:12:30:88 Iaid: IPaddr:192.168.61.158 Prefix:24 Hostname:old-k8s-version-409903 Clientid:01:52:54:00:12:30:88}
	I0806 08:34:36.430758   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined IP address 192.168.61.158 and MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:36.430900   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHPort
	I0806 08:34:36.431133   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHKeyPath
	I0806 08:34:36.431311   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHKeyPath
	I0806 08:34:36.431476   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHUsername
	I0806 08:34:36.431662   79927 main.go:141] libmachine: Using SSH client type: native
	I0806 08:34:36.431876   79927 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.158 22 <nil> <nil>}
	I0806 08:34:36.431895   79927 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-409903' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-409903/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-409903' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0806 08:34:36.554542   79927 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0806 08:34:36.554574   79927 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19370-13057/.minikube CaCertPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19370-13057/.minikube}
	I0806 08:34:36.554613   79927 buildroot.go:174] setting up certificates
	I0806 08:34:36.554629   79927 provision.go:84] configureAuth start
	I0806 08:34:36.554646   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetMachineName
	I0806 08:34:36.554932   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetIP
	I0806 08:34:36.557899   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:36.558351   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:30:88", ip: ""} in network mk-old-k8s-version-409903: {Iface:virbr1 ExpiryTime:2024-08-06 09:34:26 +0000 UTC Type:0 Mac:52:54:00:12:30:88 Iaid: IPaddr:192.168.61.158 Prefix:24 Hostname:old-k8s-version-409903 Clientid:01:52:54:00:12:30:88}
	I0806 08:34:36.558373   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined IP address 192.168.61.158 and MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:36.558530   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHHostname
	I0806 08:34:36.560878   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:36.561153   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:30:88", ip: ""} in network mk-old-k8s-version-409903: {Iface:virbr1 ExpiryTime:2024-08-06 09:34:26 +0000 UTC Type:0 Mac:52:54:00:12:30:88 Iaid: IPaddr:192.168.61.158 Prefix:24 Hostname:old-k8s-version-409903 Clientid:01:52:54:00:12:30:88}
	I0806 08:34:36.561182   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined IP address 192.168.61.158 and MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:36.561339   79927 provision.go:143] copyHostCerts
	I0806 08:34:36.561400   79927 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-13057/.minikube/ca.pem, removing ...
	I0806 08:34:36.561410   79927 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-13057/.minikube/ca.pem
	I0806 08:34:36.561462   79927 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19370-13057/.minikube/ca.pem (1078 bytes)
	I0806 08:34:36.561563   79927 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-13057/.minikube/cert.pem, removing ...
	I0806 08:34:36.561572   79927 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-13057/.minikube/cert.pem
	I0806 08:34:36.561594   79927 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19370-13057/.minikube/cert.pem (1123 bytes)
	I0806 08:34:36.561647   79927 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-13057/.minikube/key.pem, removing ...
	I0806 08:34:36.561653   79927 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-13057/.minikube/key.pem
	I0806 08:34:36.561670   79927 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19370-13057/.minikube/key.pem (1675 bytes)
	I0806 08:34:36.561752   79927 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19370-13057/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-409903 san=[127.0.0.1 192.168.61.158 localhost minikube old-k8s-version-409903]
	I0806 08:34:36.646180   79927 provision.go:177] copyRemoteCerts
	I0806 08:34:36.646233   79927 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0806 08:34:36.646257   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHHostname
	I0806 08:34:36.649020   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:36.649380   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:30:88", ip: ""} in network mk-old-k8s-version-409903: {Iface:virbr1 ExpiryTime:2024-08-06 09:34:26 +0000 UTC Type:0 Mac:52:54:00:12:30:88 Iaid: IPaddr:192.168.61.158 Prefix:24 Hostname:old-k8s-version-409903 Clientid:01:52:54:00:12:30:88}
	I0806 08:34:36.649405   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined IP address 192.168.61.158 and MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:36.649564   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHPort
	I0806 08:34:36.649791   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHKeyPath
	I0806 08:34:36.649962   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHUsername
	I0806 08:34:36.650092   79927 sshutil.go:53] new ssh client: &{IP:192.168.61.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/old-k8s-version-409903/id_rsa Username:docker}
	I0806 08:34:36.738988   79927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0806 08:34:36.765104   79927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0806 08:34:36.791946   79927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0806 08:34:36.817128   79927 provision.go:87] duration metric: took 262.482885ms to configureAuth
	I0806 08:34:36.817157   79927 buildroot.go:189] setting minikube options for container-runtime
	I0806 08:34:36.817352   79927 config.go:182] Loaded profile config "old-k8s-version-409903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0806 08:34:36.817435   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHHostname
	I0806 08:34:36.820142   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:36.820480   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:30:88", ip: ""} in network mk-old-k8s-version-409903: {Iface:virbr1 ExpiryTime:2024-08-06 09:34:26 +0000 UTC Type:0 Mac:52:54:00:12:30:88 Iaid: IPaddr:192.168.61.158 Prefix:24 Hostname:old-k8s-version-409903 Clientid:01:52:54:00:12:30:88}
	I0806 08:34:36.820520   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined IP address 192.168.61.158 and MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:36.820659   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHPort
	I0806 08:34:36.820892   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHKeyPath
	I0806 08:34:36.821095   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHKeyPath
	I0806 08:34:36.821289   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHUsername
	I0806 08:34:36.821459   79927 main.go:141] libmachine: Using SSH client type: native
	I0806 08:34:36.821703   79927 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.158 22 <nil> <nil>}
	I0806 08:34:36.821722   79927 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0806 08:34:37.099949   79927 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0806 08:34:37.099979   79927 machine.go:97] duration metric: took 931.010538ms to provisionDockerMachine
	I0806 08:34:37.099993   79927 start.go:293] postStartSetup for "old-k8s-version-409903" (driver="kvm2")
	I0806 08:34:37.100007   79927 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0806 08:34:37.100028   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .DriverName
	I0806 08:34:37.100379   79927 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0806 08:34:37.100414   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHHostname
	I0806 08:34:37.103297   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:37.103691   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:30:88", ip: ""} in network mk-old-k8s-version-409903: {Iface:virbr1 ExpiryTime:2024-08-06 09:34:26 +0000 UTC Type:0 Mac:52:54:00:12:30:88 Iaid: IPaddr:192.168.61.158 Prefix:24 Hostname:old-k8s-version-409903 Clientid:01:52:54:00:12:30:88}
	I0806 08:34:37.103717   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined IP address 192.168.61.158 and MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:37.103887   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHPort
	I0806 08:34:37.104106   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHKeyPath
	I0806 08:34:37.104263   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHUsername
	I0806 08:34:37.104402   79927 sshutil.go:53] new ssh client: &{IP:192.168.61.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/old-k8s-version-409903/id_rsa Username:docker}
	I0806 08:34:37.192035   79927 ssh_runner.go:195] Run: cat /etc/os-release
	I0806 08:34:37.196693   79927 info.go:137] Remote host: Buildroot 2023.02.9
	I0806 08:34:37.196712   79927 filesync.go:126] Scanning /home/jenkins/minikube-integration/19370-13057/.minikube/addons for local assets ...
	I0806 08:34:37.196776   79927 filesync.go:126] Scanning /home/jenkins/minikube-integration/19370-13057/.minikube/files for local assets ...
	I0806 08:34:37.196861   79927 filesync.go:149] local asset: /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem -> 202462.pem in /etc/ssl/certs
	I0806 08:34:37.196957   79927 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0806 08:34:37.206906   79927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem --> /etc/ssl/certs/202462.pem (1708 bytes)
	I0806 08:34:37.232270   79927 start.go:296] duration metric: took 132.262362ms for postStartSetup
	I0806 08:34:37.232311   79927 fix.go:56] duration metric: took 22.878683419s for fixHost
	I0806 08:34:37.232332   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHHostname
	I0806 08:34:37.235278   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:37.235721   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:30:88", ip: ""} in network mk-old-k8s-version-409903: {Iface:virbr1 ExpiryTime:2024-08-06 09:34:26 +0000 UTC Type:0 Mac:52:54:00:12:30:88 Iaid: IPaddr:192.168.61.158 Prefix:24 Hostname:old-k8s-version-409903 Clientid:01:52:54:00:12:30:88}
	I0806 08:34:37.235751   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined IP address 192.168.61.158 and MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:37.235946   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHPort
	I0806 08:34:37.236168   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHKeyPath
	I0806 08:34:37.236337   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHKeyPath
	I0806 08:34:37.236491   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHUsername
	I0806 08:34:37.236640   79927 main.go:141] libmachine: Using SSH client type: native
	I0806 08:34:37.236793   79927 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.158 22 <nil> <nil>}
	I0806 08:34:37.236803   79927 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0806 08:34:37.353370   79927 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722933277.327882532
	
	I0806 08:34:37.353397   79927 fix.go:216] guest clock: 1722933277.327882532
	I0806 08:34:37.353408   79927 fix.go:229] Guest: 2024-08-06 08:34:37.327882532 +0000 UTC Remote: 2024-08-06 08:34:37.232315481 +0000 UTC m=+218.506631326 (delta=95.567051ms)
	I0806 08:34:37.353439   79927 fix.go:200] guest clock delta is within tolerance: 95.567051ms
	I0806 08:34:37.353451   79927 start.go:83] releasing machines lock for "old-k8s-version-409903", held for 22.999876259s
	I0806 08:34:37.353490   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .DriverName
	I0806 08:34:37.353775   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetIP
	I0806 08:34:37.356788   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:37.357197   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:30:88", ip: ""} in network mk-old-k8s-version-409903: {Iface:virbr1 ExpiryTime:2024-08-06 09:34:26 +0000 UTC Type:0 Mac:52:54:00:12:30:88 Iaid: IPaddr:192.168.61.158 Prefix:24 Hostname:old-k8s-version-409903 Clientid:01:52:54:00:12:30:88}
	I0806 08:34:37.357224   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined IP address 192.168.61.158 and MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:37.357434   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .DriverName
	I0806 08:34:37.358035   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .DriverName
	I0806 08:34:37.358238   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .DriverName
	I0806 08:34:37.358320   79927 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0806 08:34:37.358360   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHHostname
	I0806 08:34:37.358478   79927 ssh_runner.go:195] Run: cat /version.json
	I0806 08:34:37.358505   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHHostname
	I0806 08:34:37.361125   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:37.361429   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:37.361464   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:30:88", ip: ""} in network mk-old-k8s-version-409903: {Iface:virbr1 ExpiryTime:2024-08-06 09:34:26 +0000 UTC Type:0 Mac:52:54:00:12:30:88 Iaid: IPaddr:192.168.61.158 Prefix:24 Hostname:old-k8s-version-409903 Clientid:01:52:54:00:12:30:88}
	I0806 08:34:37.361490   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined IP address 192.168.61.158 and MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:37.361681   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHPort
	I0806 08:34:37.361846   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHKeyPath
	I0806 08:34:37.361889   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:30:88", ip: ""} in network mk-old-k8s-version-409903: {Iface:virbr1 ExpiryTime:2024-08-06 09:34:26 +0000 UTC Type:0 Mac:52:54:00:12:30:88 Iaid: IPaddr:192.168.61.158 Prefix:24 Hostname:old-k8s-version-409903 Clientid:01:52:54:00:12:30:88}
	I0806 08:34:37.361913   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined IP address 192.168.61.158 and MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:37.362018   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHUsername
	I0806 08:34:37.362078   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHPort
	I0806 08:34:37.362188   79927 sshutil.go:53] new ssh client: &{IP:192.168.61.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/old-k8s-version-409903/id_rsa Username:docker}
	I0806 08:34:37.362319   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHKeyPath
	I0806 08:34:37.362452   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHUsername
	I0806 08:34:37.362596   79927 sshutil.go:53] new ssh client: &{IP:192.168.61.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/old-k8s-version-409903/id_rsa Username:docker}
	I0806 08:34:37.468465   79927 ssh_runner.go:195] Run: systemctl --version
	I0806 08:34:37.475436   79927 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0806 08:34:37.627725   79927 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0806 08:34:37.635346   79927 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0806 08:34:37.635432   79927 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0806 08:34:37.652546   79927 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0806 08:34:37.652571   79927 start.go:495] detecting cgroup driver to use...
	I0806 08:34:37.652642   79927 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0806 08:34:37.671386   79927 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0806 08:34:37.686408   79927 docker.go:217] disabling cri-docker service (if available) ...
	I0806 08:34:37.686473   79927 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0806 08:34:37.701731   79927 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0806 08:34:37.717528   79927 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0806 08:34:37.845160   79927 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0806 08:34:37.999278   79927 docker.go:233] disabling docker service ...
	I0806 08:34:37.999361   79927 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0806 08:34:38.017101   79927 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0806 08:34:38.033189   79927 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0806 08:34:38.213620   79927 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0806 08:34:38.374790   79927 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0806 08:34:38.393394   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0806 08:34:38.419807   79927 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0806 08:34:38.419878   79927 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:34:38.434870   79927 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0806 08:34:38.434946   79927 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:34:38.449029   79927 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:34:38.460440   79927 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:34:38.473654   79927 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0806 08:34:38.486032   79927 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0806 08:34:38.495861   79927 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0806 08:34:38.495927   79927 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0806 08:34:38.511021   79927 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0806 08:34:38.521985   79927 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 08:34:38.651919   79927 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0806 08:34:34.235598   79614 node_ready.go:53] node "default-k8s-diff-port-990751" has status "Ready":"False"
	I0806 08:34:36.734741   79614 node_ready.go:53] node "default-k8s-diff-port-990751" has status "Ready":"False"
	I0806 08:34:37.235469   79614 node_ready.go:49] node "default-k8s-diff-port-990751" has status "Ready":"True"
	I0806 08:34:37.235496   79614 node_ready.go:38] duration metric: took 5.004029432s for node "default-k8s-diff-port-990751" to be "Ready" ...
	I0806 08:34:37.235506   79614 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0806 08:34:37.242299   79614 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-gx8tt" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:37.247917   79614 pod_ready.go:92] pod "coredns-7db6d8ff4d-gx8tt" in "kube-system" namespace has status "Ready":"True"
	I0806 08:34:37.247936   79614 pod_ready.go:81] duration metric: took 5.613775ms for pod "coredns-7db6d8ff4d-gx8tt" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:37.247944   79614 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-990751" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:38.822367   79927 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0806 08:34:38.822444   79927 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0806 08:34:38.828148   79927 start.go:563] Will wait 60s for crictl version
	I0806 08:34:38.828211   79927 ssh_runner.go:195] Run: which crictl
	I0806 08:34:38.832084   79927 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0806 08:34:38.872571   79927 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0806 08:34:38.872658   79927 ssh_runner.go:195] Run: crio --version
	I0806 08:34:38.903507   79927 ssh_runner.go:195] Run: crio --version
	I0806 08:34:38.934955   79927 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0806 08:34:36.612552   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:38.614644   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:37.380808   78922 main.go:141] libmachine: (no-preload-703340) Calling .Start
	I0806 08:34:37.381005   78922 main.go:141] libmachine: (no-preload-703340) Ensuring networks are active...
	I0806 08:34:37.381749   78922 main.go:141] libmachine: (no-preload-703340) Ensuring network default is active
	I0806 08:34:37.382052   78922 main.go:141] libmachine: (no-preload-703340) Ensuring network mk-no-preload-703340 is active
	I0806 08:34:37.382556   78922 main.go:141] libmachine: (no-preload-703340) Getting domain xml...
	I0806 08:34:37.383386   78922 main.go:141] libmachine: (no-preload-703340) Creating domain...
	I0806 08:34:38.779397   78922 main.go:141] libmachine: (no-preload-703340) Waiting to get IP...
	I0806 08:34:38.780262   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:38.780737   78922 main.go:141] libmachine: (no-preload-703340) DBG | unable to find current IP address of domain no-preload-703340 in network mk-no-preload-703340
	I0806 08:34:38.780809   78922 main.go:141] libmachine: (no-preload-703340) DBG | I0806 08:34:38.780723   81046 retry.go:31] will retry after 194.059215ms: waiting for machine to come up
	I0806 08:34:38.976254   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:38.976734   78922 main.go:141] libmachine: (no-preload-703340) DBG | unable to find current IP address of domain no-preload-703340 in network mk-no-preload-703340
	I0806 08:34:38.976769   78922 main.go:141] libmachine: (no-preload-703340) DBG | I0806 08:34:38.976653   81046 retry.go:31] will retry after 288.140112ms: waiting for machine to come up
	I0806 08:34:39.266297   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:39.267031   78922 main.go:141] libmachine: (no-preload-703340) DBG | unable to find current IP address of domain no-preload-703340 in network mk-no-preload-703340
	I0806 08:34:39.267060   78922 main.go:141] libmachine: (no-preload-703340) DBG | I0806 08:34:39.266948   81046 retry.go:31] will retry after 354.804756ms: waiting for machine to come up
	I0806 08:34:39.623548   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:39.624083   78922 main.go:141] libmachine: (no-preload-703340) DBG | unable to find current IP address of domain no-preload-703340 in network mk-no-preload-703340
	I0806 08:34:39.624111   78922 main.go:141] libmachine: (no-preload-703340) DBG | I0806 08:34:39.623994   81046 retry.go:31] will retry after 418.851823ms: waiting for machine to come up
	I0806 08:34:40.044734   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:40.045142   78922 main.go:141] libmachine: (no-preload-703340) DBG | unable to find current IP address of domain no-preload-703340 in network mk-no-preload-703340
	I0806 08:34:40.045165   78922 main.go:141] libmachine: (no-preload-703340) DBG | I0806 08:34:40.045103   81046 retry.go:31] will retry after 525.29917ms: waiting for machine to come up
	I0806 08:34:40.571928   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:40.572395   78922 main.go:141] libmachine: (no-preload-703340) DBG | unable to find current IP address of domain no-preload-703340 in network mk-no-preload-703340
	I0806 08:34:40.572431   78922 main.go:141] libmachine: (no-preload-703340) DBG | I0806 08:34:40.572332   81046 retry.go:31] will retry after 609.965809ms: waiting for machine to come up
	I0806 08:34:41.184209   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:41.184793   78922 main.go:141] libmachine: (no-preload-703340) DBG | unable to find current IP address of domain no-preload-703340 in network mk-no-preload-703340
	I0806 08:34:41.184820   78922 main.go:141] libmachine: (no-preload-703340) DBG | I0806 08:34:41.184739   81046 retry.go:31] will retry after 1.120596s: waiting for machine to come up
	I0806 08:34:38.936206   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetIP
	I0806 08:34:38.939581   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:38.940049   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:30:88", ip: ""} in network mk-old-k8s-version-409903: {Iface:virbr1 ExpiryTime:2024-08-06 09:34:26 +0000 UTC Type:0 Mac:52:54:00:12:30:88 Iaid: IPaddr:192.168.61.158 Prefix:24 Hostname:old-k8s-version-409903 Clientid:01:52:54:00:12:30:88}
	I0806 08:34:38.940076   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined IP address 192.168.61.158 and MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:38.940336   79927 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0806 08:34:38.944801   79927 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0806 08:34:38.959454   79927 kubeadm.go:883] updating cluster {Name:old-k8s-version-409903 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-409903 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.158 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0806 08:34:38.959581   79927 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0806 08:34:38.959647   79927 ssh_runner.go:195] Run: sudo crictl images --output json
	I0806 08:34:39.014798   79927 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0806 08:34:39.014908   79927 ssh_runner.go:195] Run: which lz4
	I0806 08:34:39.019642   79927 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0806 08:34:39.025125   79927 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0806 08:34:39.025165   79927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0806 08:34:40.772902   79927 crio.go:462] duration metric: took 1.753273519s to copy over tarball
	I0806 08:34:40.772994   79927 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0806 08:34:39.256627   79614 pod_ready.go:102] pod "etcd-default-k8s-diff-port-990751" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:41.257158   79614 pod_ready.go:102] pod "etcd-default-k8s-diff-port-990751" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:42.754699   79614 pod_ready.go:92] pod "etcd-default-k8s-diff-port-990751" in "kube-system" namespace has status "Ready":"True"
	I0806 08:34:42.754729   79614 pod_ready.go:81] duration metric: took 5.506777292s for pod "etcd-default-k8s-diff-port-990751" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:42.754742   79614 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-990751" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:42.761260   79614 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-990751" in "kube-system" namespace has status "Ready":"True"
	I0806 08:34:42.761282   79614 pod_ready.go:81] duration metric: took 6.531767ms for pod "kube-apiserver-default-k8s-diff-port-990751" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:42.761292   79614 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-990751" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:42.769860   79614 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-990751" in "kube-system" namespace has status "Ready":"True"
	I0806 08:34:42.769884   79614 pod_ready.go:81] duration metric: took 8.585216ms for pod "kube-controller-manager-default-k8s-diff-port-990751" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:42.769898   79614 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-lpf88" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:42.775420   79614 pod_ready.go:92] pod "kube-proxy-lpf88" in "kube-system" namespace has status "Ready":"True"
	I0806 08:34:42.775445   79614 pod_ready.go:81] duration metric: took 5.540587ms for pod "kube-proxy-lpf88" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:42.775453   79614 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-990751" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:42.781317   79614 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-990751" in "kube-system" namespace has status "Ready":"True"
	I0806 08:34:42.781341   79614 pod_ready.go:81] duration metric: took 5.88019ms for pod "kube-scheduler-default-k8s-diff-port-990751" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:42.781353   79614 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:40.614686   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:43.114736   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:42.306869   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:42.307378   78922 main.go:141] libmachine: (no-preload-703340) DBG | unable to find current IP address of domain no-preload-703340 in network mk-no-preload-703340
	I0806 08:34:42.307408   78922 main.go:141] libmachine: (no-preload-703340) DBG | I0806 08:34:42.307341   81046 retry.go:31] will retry after 1.26489185s: waiting for machine to come up
	I0806 08:34:43.574072   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:43.574494   78922 main.go:141] libmachine: (no-preload-703340) DBG | unable to find current IP address of domain no-preload-703340 in network mk-no-preload-703340
	I0806 08:34:43.574517   78922 main.go:141] libmachine: (no-preload-703340) DBG | I0806 08:34:43.574440   81046 retry.go:31] will retry after 1.547737608s: waiting for machine to come up
	I0806 08:34:45.124236   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:45.124794   78922 main.go:141] libmachine: (no-preload-703340) DBG | unable to find current IP address of domain no-preload-703340 in network mk-no-preload-703340
	I0806 08:34:45.124822   78922 main.go:141] libmachine: (no-preload-703340) DBG | I0806 08:34:45.124745   81046 retry.go:31] will retry after 1.491615644s: waiting for machine to come up
	I0806 08:34:46.617663   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:46.618159   78922 main.go:141] libmachine: (no-preload-703340) DBG | unable to find current IP address of domain no-preload-703340 in network mk-no-preload-703340
	I0806 08:34:46.618183   78922 main.go:141] libmachine: (no-preload-703340) DBG | I0806 08:34:46.618117   81046 retry.go:31] will retry after 2.284217057s: waiting for machine to come up
	I0806 08:34:43.977375   79927 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.204351244s)
	I0806 08:34:43.977404   79927 crio.go:469] duration metric: took 3.204457213s to extract the tarball
	I0806 08:34:43.977412   79927 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0806 08:34:44.023819   79927 ssh_runner.go:195] Run: sudo crictl images --output json
	I0806 08:34:44.070991   79927 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0806 08:34:44.071016   79927 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0806 08:34:44.071089   79927 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0806 08:34:44.071129   79927 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0806 08:34:44.071138   79927 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0806 08:34:44.071090   79927 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0806 08:34:44.071184   79927 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0806 08:34:44.071192   79927 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0806 08:34:44.071210   79927 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0806 08:34:44.071255   79927 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0806 08:34:44.072641   79927 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0806 08:34:44.072728   79927 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0806 08:34:44.072743   79927 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0806 08:34:44.072642   79927 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0806 08:34:44.072749   79927 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0806 08:34:44.072811   79927 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0806 08:34:44.073085   79927 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0806 08:34:44.073298   79927 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0806 08:34:44.239872   79927 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0806 08:34:44.287412   79927 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0806 08:34:44.287458   79927 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0806 08:34:44.287504   79927 ssh_runner.go:195] Run: which crictl
	I0806 08:34:44.292050   79927 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0806 08:34:44.309229   79927 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0806 08:34:44.336611   79927 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0806 08:34:44.371389   79927 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0806 08:34:44.371447   79927 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0806 08:34:44.371500   79927 ssh_runner.go:195] Run: which crictl
	I0806 08:34:44.375821   79927 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0806 08:34:44.412139   79927 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0806 08:34:44.415159   79927 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0806 08:34:44.415339   79927 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0806 08:34:44.418296   79927 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0806 08:34:44.423718   79927 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0806 08:34:44.437866   79927 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0806 08:34:44.494716   79927 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0806 08:34:44.494765   79927 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0806 08:34:44.494819   79927 ssh_runner.go:195] Run: which crictl
	I0806 08:34:44.526849   79927 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0806 08:34:44.526898   79927 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0806 08:34:44.526961   79927 ssh_runner.go:195] Run: which crictl
	I0806 08:34:44.555530   79927 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0806 08:34:44.555587   79927 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0806 08:34:44.555620   79927 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0806 08:34:44.555634   79927 ssh_runner.go:195] Run: which crictl
	I0806 08:34:44.555655   79927 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0806 08:34:44.555692   79927 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0806 08:34:44.555707   79927 ssh_runner.go:195] Run: which crictl
	I0806 08:34:44.555714   79927 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0806 08:34:44.555743   79927 ssh_runner.go:195] Run: which crictl
	I0806 08:34:44.555748   79927 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0806 08:34:44.555792   79927 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0806 08:34:44.561072   79927 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0806 08:34:44.624216   79927 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0806 08:34:44.639877   79927 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0806 08:34:44.639895   79927 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0806 08:34:44.639979   79927 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0806 08:34:44.653684   79927 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0806 08:34:44.711774   79927 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0806 08:34:44.711804   79927 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0806 08:34:44.964733   79927 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0806 08:34:45.112727   79927 cache_images.go:92] duration metric: took 1.041682516s to LoadCachedImages
	W0806 08:34:45.112813   79927 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I0806 08:34:45.112837   79927 kubeadm.go:934] updating node { 192.168.61.158 8443 v1.20.0 crio true true} ...
	I0806 08:34:45.112964   79927 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-409903 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.158
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-409903 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0806 08:34:45.113047   79927 ssh_runner.go:195] Run: crio config
	I0806 08:34:45.165383   79927 cni.go:84] Creating CNI manager for ""
	I0806 08:34:45.165413   79927 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0806 08:34:45.165429   79927 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0806 08:34:45.165455   79927 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.158 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-409903 NodeName:old-k8s-version-409903 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.158"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.158 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0806 08:34:45.165687   79927 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.158
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-409903"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.158
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.158"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0806 08:34:45.165760   79927 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0806 08:34:45.177296   79927 binaries.go:44] Found k8s binaries, skipping transfer
	I0806 08:34:45.177371   79927 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0806 08:34:45.187299   79927 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0806 08:34:45.205986   79927 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0806 08:34:45.225072   79927 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0806 08:34:45.244222   79927 ssh_runner.go:195] Run: grep 192.168.61.158	control-plane.minikube.internal$ /etc/hosts
	I0806 08:34:45.248728   79927 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.158	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0806 08:34:45.263094   79927 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 08:34:45.399106   79927 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0806 08:34:45.417589   79927 certs.go:68] Setting up /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/old-k8s-version-409903 for IP: 192.168.61.158
	I0806 08:34:45.417611   79927 certs.go:194] generating shared ca certs ...
	I0806 08:34:45.417630   79927 certs.go:226] acquiring lock for ca certs: {Name:mkf5c5aed91f4108054d9f2c742cad66c86afbed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 08:34:45.417779   79927 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19370-13057/.minikube/ca.key
	I0806 08:34:45.417820   79927 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.key
	I0806 08:34:45.417829   79927 certs.go:256] generating profile certs ...
	I0806 08:34:45.417913   79927 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/old-k8s-version-409903/client.key
	I0806 08:34:45.417970   79927 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/old-k8s-version-409903/apiserver.key.1ffd6785
	I0806 08:34:45.418060   79927 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/old-k8s-version-409903/proxy-client.key
	I0806 08:34:45.418242   79927 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/20246.pem (1338 bytes)
	W0806 08:34:45.418273   79927 certs.go:480] ignoring /home/jenkins/minikube-integration/19370-13057/.minikube/certs/20246_empty.pem, impossibly tiny 0 bytes
	I0806 08:34:45.418282   79927 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca-key.pem (1675 bytes)
	I0806 08:34:45.418302   79927 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem (1078 bytes)
	I0806 08:34:45.418322   79927 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/cert.pem (1123 bytes)
	I0806 08:34:45.418350   79927 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/key.pem (1675 bytes)
	I0806 08:34:45.418389   79927 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem (1708 bytes)
	I0806 08:34:45.419018   79927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0806 08:34:45.464550   79927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0806 08:34:45.500764   79927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0806 08:34:45.549225   79927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0806 08:34:45.597480   79927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/old-k8s-version-409903/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0806 08:34:45.639834   79927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/old-k8s-version-409903/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0806 08:34:45.680789   79927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/old-k8s-version-409903/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0806 08:34:45.737986   79927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/old-k8s-version-409903/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0806 08:34:45.766471   79927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem --> /usr/share/ca-certificates/202462.pem (1708 bytes)
	I0806 08:34:45.797047   79927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0806 08:34:45.825789   79927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/certs/20246.pem --> /usr/share/ca-certificates/20246.pem (1338 bytes)
	I0806 08:34:45.860217   79927 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0806 08:34:45.883841   79927 ssh_runner.go:195] Run: openssl version
	I0806 08:34:45.891103   79927 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202462.pem && ln -fs /usr/share/ca-certificates/202462.pem /etc/ssl/certs/202462.pem"
	I0806 08:34:45.905260   79927 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202462.pem
	I0806 08:34:45.910759   79927 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  6 07:17 /usr/share/ca-certificates/202462.pem
	I0806 08:34:45.910837   79927 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202462.pem
	I0806 08:34:45.918440   79927 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202462.pem /etc/ssl/certs/3ec20f2e.0"
	I0806 08:34:45.932479   79927 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0806 08:34:45.945678   79927 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0806 08:34:45.951612   79927 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  6 07:07 /usr/share/ca-certificates/minikubeCA.pem
	I0806 08:34:45.951691   79927 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0806 08:34:45.957749   79927 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0806 08:34:45.969987   79927 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20246.pem && ln -fs /usr/share/ca-certificates/20246.pem /etc/ssl/certs/20246.pem"
	I0806 08:34:45.982878   79927 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20246.pem
	I0806 08:34:45.987953   79927 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  6 07:17 /usr/share/ca-certificates/20246.pem
	I0806 08:34:45.988032   79927 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20246.pem
	I0806 08:34:45.994219   79927 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20246.pem /etc/ssl/certs/51391683.0"
	I0806 08:34:46.006666   79927 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0806 08:34:46.011624   79927 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0806 08:34:46.018416   79927 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0806 08:34:46.025220   79927 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0806 08:34:46.031854   79927 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0806 08:34:46.038570   79927 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0806 08:34:46.045142   79927 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0806 08:34:46.052198   79927 kubeadm.go:392] StartCluster: {Name:old-k8s-version-409903 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-409903 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.158 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 08:34:46.052313   79927 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0806 08:34:46.052372   79927 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0806 08:34:46.098382   79927 cri.go:89] found id: ""
	I0806 08:34:46.098454   79927 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0806 08:34:46.110861   79927 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0806 08:34:46.110888   79927 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0806 08:34:46.110956   79927 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0806 08:34:46.123420   79927 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0806 08:34:46.124617   79927 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-409903" does not appear in /home/jenkins/minikube-integration/19370-13057/kubeconfig
	I0806 08:34:46.125562   79927 kubeconfig.go:62] /home/jenkins/minikube-integration/19370-13057/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-409903" cluster setting kubeconfig missing "old-k8s-version-409903" context setting]
	I0806 08:34:46.127017   79927 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-13057/kubeconfig: {Name:mk5c066914acf8e36556b29792f75b98076ca34a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 08:34:46.173767   79927 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0806 08:34:46.186472   79927 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.158
	I0806 08:34:46.186517   79927 kubeadm.go:1160] stopping kube-system containers ...
	I0806 08:34:46.186532   79927 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0806 08:34:46.186592   79927 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0806 08:34:46.227817   79927 cri.go:89] found id: ""
	I0806 08:34:46.227914   79927 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0806 08:34:46.248248   79927 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0806 08:34:46.259555   79927 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0806 08:34:46.259578   79927 kubeadm.go:157] found existing configuration files:
	
	I0806 08:34:46.259647   79927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0806 08:34:46.271246   79927 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0806 08:34:46.271311   79927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0806 08:34:46.283559   79927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0806 08:34:46.295584   79927 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0806 08:34:46.295660   79927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0806 08:34:46.307223   79927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0806 08:34:46.318046   79927 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0806 08:34:46.318136   79927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0806 08:34:46.328827   79927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0806 08:34:46.339577   79927 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0806 08:34:46.339665   79927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0806 08:34:46.352107   79927 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0806 08:34:46.369776   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0806 08:34:46.505082   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0806 08:34:47.036627   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0806 08:34:47.285779   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0806 08:34:47.397777   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0806 08:34:47.517913   79927 api_server.go:52] waiting for apiserver process to appear ...
	I0806 08:34:47.518028   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:48.018399   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:48.518708   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:44.788404   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:46.789746   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:45.115046   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:47.611343   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:48.905056   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:48.905522   78922 main.go:141] libmachine: (no-preload-703340) DBG | unable to find current IP address of domain no-preload-703340 in network mk-no-preload-703340
	I0806 08:34:48.905548   78922 main.go:141] libmachine: (no-preload-703340) DBG | I0806 08:34:48.905484   81046 retry.go:31] will retry after 2.647234658s: waiting for machine to come up
	I0806 08:34:51.556304   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:51.556689   78922 main.go:141] libmachine: (no-preload-703340) DBG | unable to find current IP address of domain no-preload-703340 in network mk-no-preload-703340
	I0806 08:34:51.556712   78922 main.go:141] libmachine: (no-preload-703340) DBG | I0806 08:34:51.556663   81046 retry.go:31] will retry after 4.275379711s: waiting for machine to come up
	I0806 08:34:49.018572   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:49.518975   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:50.018355   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:50.518866   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:51.018156   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:51.518855   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:52.018270   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:52.518584   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:53.018606   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:53.518118   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:49.288691   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:51.787435   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:53.787824   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:49.612621   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:52.112274   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:55.834423   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:55.834961   78922 main.go:141] libmachine: (no-preload-703340) Found IP for machine: 192.168.72.126
	I0806 08:34:55.834980   78922 main.go:141] libmachine: (no-preload-703340) Reserving static IP address...
	I0806 08:34:55.834991   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has current primary IP address 192.168.72.126 and MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:55.835314   78922 main.go:141] libmachine: (no-preload-703340) DBG | found host DHCP lease matching {name: "no-preload-703340", mac: "52:54:00:a1:fa:d3", ip: "192.168.72.126"} in network mk-no-preload-703340: {Iface:virbr4 ExpiryTime:2024-08-06 09:34:49 +0000 UTC Type:0 Mac:52:54:00:a1:fa:d3 Iaid: IPaddr:192.168.72.126 Prefix:24 Hostname:no-preload-703340 Clientid:01:52:54:00:a1:fa:d3}
	I0806 08:34:55.835348   78922 main.go:141] libmachine: (no-preload-703340) Reserved static IP address: 192.168.72.126
	I0806 08:34:55.835363   78922 main.go:141] libmachine: (no-preload-703340) DBG | skip adding static IP to network mk-no-preload-703340 - found existing host DHCP lease matching {name: "no-preload-703340", mac: "52:54:00:a1:fa:d3", ip: "192.168.72.126"}
	I0806 08:34:55.835376   78922 main.go:141] libmachine: (no-preload-703340) DBG | Getting to WaitForSSH function...
	I0806 08:34:55.835388   78922 main.go:141] libmachine: (no-preload-703340) Waiting for SSH to be available...
	I0806 08:34:55.837645   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:55.838001   78922 main.go:141] libmachine: (no-preload-703340) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fa:d3", ip: ""} in network mk-no-preload-703340: {Iface:virbr4 ExpiryTime:2024-08-06 09:34:49 +0000 UTC Type:0 Mac:52:54:00:a1:fa:d3 Iaid: IPaddr:192.168.72.126 Prefix:24 Hostname:no-preload-703340 Clientid:01:52:54:00:a1:fa:d3}
	I0806 08:34:55.838030   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined IP address 192.168.72.126 and MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:55.838167   78922 main.go:141] libmachine: (no-preload-703340) DBG | Using SSH client type: external
	I0806 08:34:55.838193   78922 main.go:141] libmachine: (no-preload-703340) DBG | Using SSH private key: /home/jenkins/minikube-integration/19370-13057/.minikube/machines/no-preload-703340/id_rsa (-rw-------)
	I0806 08:34:55.838234   78922 main.go:141] libmachine: (no-preload-703340) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.126 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19370-13057/.minikube/machines/no-preload-703340/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0806 08:34:55.838256   78922 main.go:141] libmachine: (no-preload-703340) DBG | About to run SSH command:
	I0806 08:34:55.838272   78922 main.go:141] libmachine: (no-preload-703340) DBG | exit 0
	I0806 08:34:55.968557   78922 main.go:141] libmachine: (no-preload-703340) DBG | SSH cmd err, output: <nil>: 
	I0806 08:34:55.968893   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetConfigRaw
	I0806 08:34:55.969485   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetIP
	I0806 08:34:55.971998   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:55.972457   78922 main.go:141] libmachine: (no-preload-703340) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fa:d3", ip: ""} in network mk-no-preload-703340: {Iface:virbr4 ExpiryTime:2024-08-06 09:34:49 +0000 UTC Type:0 Mac:52:54:00:a1:fa:d3 Iaid: IPaddr:192.168.72.126 Prefix:24 Hostname:no-preload-703340 Clientid:01:52:54:00:a1:fa:d3}
	I0806 08:34:55.972487   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined IP address 192.168.72.126 and MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:55.972694   78922 profile.go:143] Saving config to /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/no-preload-703340/config.json ...
	I0806 08:34:55.972889   78922 machine.go:94] provisionDockerMachine start ...
	I0806 08:34:55.972908   78922 main.go:141] libmachine: (no-preload-703340) Calling .DriverName
	I0806 08:34:55.973115   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHHostname
	I0806 08:34:55.975361   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:55.975734   78922 main.go:141] libmachine: (no-preload-703340) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fa:d3", ip: ""} in network mk-no-preload-703340: {Iface:virbr4 ExpiryTime:2024-08-06 09:34:49 +0000 UTC Type:0 Mac:52:54:00:a1:fa:d3 Iaid: IPaddr:192.168.72.126 Prefix:24 Hostname:no-preload-703340 Clientid:01:52:54:00:a1:fa:d3}
	I0806 08:34:55.975753   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined IP address 192.168.72.126 and MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:55.975905   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHPort
	I0806 08:34:55.976051   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHKeyPath
	I0806 08:34:55.976265   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHKeyPath
	I0806 08:34:55.976424   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHUsername
	I0806 08:34:55.976635   78922 main.go:141] libmachine: Using SSH client type: native
	I0806 08:34:55.976891   78922 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.126 22 <nil> <nil>}
	I0806 08:34:55.976905   78922 main.go:141] libmachine: About to run SSH command:
	hostname
	I0806 08:34:56.084844   78922 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0806 08:34:56.084905   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetMachineName
	I0806 08:34:56.085226   78922 buildroot.go:166] provisioning hostname "no-preload-703340"
	I0806 08:34:56.085253   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetMachineName
	I0806 08:34:56.085477   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHHostname
	I0806 08:34:56.088358   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:56.088775   78922 main.go:141] libmachine: (no-preload-703340) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fa:d3", ip: ""} in network mk-no-preload-703340: {Iface:virbr4 ExpiryTime:2024-08-06 09:34:49 +0000 UTC Type:0 Mac:52:54:00:a1:fa:d3 Iaid: IPaddr:192.168.72.126 Prefix:24 Hostname:no-preload-703340 Clientid:01:52:54:00:a1:fa:d3}
	I0806 08:34:56.088806   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined IP address 192.168.72.126 and MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:56.088904   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHPort
	I0806 08:34:56.089070   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHKeyPath
	I0806 08:34:56.089262   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHKeyPath
	I0806 08:34:56.089417   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHUsername
	I0806 08:34:56.089600   78922 main.go:141] libmachine: Using SSH client type: native
	I0806 08:34:56.089810   78922 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.126 22 <nil> <nil>}
	I0806 08:34:56.089840   78922 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-703340 && echo "no-preload-703340" | sudo tee /etc/hostname
	I0806 08:34:56.216217   78922 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-703340
	
	I0806 08:34:56.216257   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHHostname
	I0806 08:34:56.218949   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:56.219274   78922 main.go:141] libmachine: (no-preload-703340) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fa:d3", ip: ""} in network mk-no-preload-703340: {Iface:virbr4 ExpiryTime:2024-08-06 09:34:49 +0000 UTC Type:0 Mac:52:54:00:a1:fa:d3 Iaid: IPaddr:192.168.72.126 Prefix:24 Hostname:no-preload-703340 Clientid:01:52:54:00:a1:fa:d3}
	I0806 08:34:56.219309   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined IP address 192.168.72.126 and MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:56.219488   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHPort
	I0806 08:34:56.219699   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHKeyPath
	I0806 08:34:56.219899   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHKeyPath
	I0806 08:34:56.220054   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHUsername
	I0806 08:34:56.220231   78922 main.go:141] libmachine: Using SSH client type: native
	I0806 08:34:56.220513   78922 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.126 22 <nil> <nil>}
	I0806 08:34:56.220544   78922 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-703340' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-703340/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-703340' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0806 08:34:56.337357   78922 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0806 08:34:56.337390   78922 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19370-13057/.minikube CaCertPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19370-13057/.minikube}
	I0806 08:34:56.337428   78922 buildroot.go:174] setting up certificates
	I0806 08:34:56.337439   78922 provision.go:84] configureAuth start
	I0806 08:34:56.337454   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetMachineName
	I0806 08:34:56.337736   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetIP
	I0806 08:34:56.340297   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:56.340655   78922 main.go:141] libmachine: (no-preload-703340) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fa:d3", ip: ""} in network mk-no-preload-703340: {Iface:virbr4 ExpiryTime:2024-08-06 09:34:49 +0000 UTC Type:0 Mac:52:54:00:a1:fa:d3 Iaid: IPaddr:192.168.72.126 Prefix:24 Hostname:no-preload-703340 Clientid:01:52:54:00:a1:fa:d3}
	I0806 08:34:56.340680   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined IP address 192.168.72.126 and MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:56.340898   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHHostname
	I0806 08:34:56.343013   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:56.343356   78922 main.go:141] libmachine: (no-preload-703340) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fa:d3", ip: ""} in network mk-no-preload-703340: {Iface:virbr4 ExpiryTime:2024-08-06 09:34:49 +0000 UTC Type:0 Mac:52:54:00:a1:fa:d3 Iaid: IPaddr:192.168.72.126 Prefix:24 Hostname:no-preload-703340 Clientid:01:52:54:00:a1:fa:d3}
	I0806 08:34:56.343395   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined IP address 192.168.72.126 and MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:56.343544   78922 provision.go:143] copyHostCerts
	I0806 08:34:56.343613   78922 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-13057/.minikube/ca.pem, removing ...
	I0806 08:34:56.343626   78922 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-13057/.minikube/ca.pem
	I0806 08:34:56.343692   78922 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19370-13057/.minikube/ca.pem (1078 bytes)
	I0806 08:34:56.343806   78922 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-13057/.minikube/cert.pem, removing ...
	I0806 08:34:56.343815   78922 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-13057/.minikube/cert.pem
	I0806 08:34:56.343838   78922 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19370-13057/.minikube/cert.pem (1123 bytes)
	I0806 08:34:56.343915   78922 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-13057/.minikube/key.pem, removing ...
	I0806 08:34:56.343929   78922 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-13057/.minikube/key.pem
	I0806 08:34:56.343948   78922 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19370-13057/.minikube/key.pem (1675 bytes)
	I0806 08:34:56.344044   78922 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19370-13057/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca-key.pem org=jenkins.no-preload-703340 san=[127.0.0.1 192.168.72.126 localhost minikube no-preload-703340]
	I0806 08:34:56.441531   78922 provision.go:177] copyRemoteCerts
	I0806 08:34:56.441588   78922 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0806 08:34:56.441610   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHHostname
	I0806 08:34:56.444202   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:56.444549   78922 main.go:141] libmachine: (no-preload-703340) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fa:d3", ip: ""} in network mk-no-preload-703340: {Iface:virbr4 ExpiryTime:2024-08-06 09:34:49 +0000 UTC Type:0 Mac:52:54:00:a1:fa:d3 Iaid: IPaddr:192.168.72.126 Prefix:24 Hostname:no-preload-703340 Clientid:01:52:54:00:a1:fa:d3}
	I0806 08:34:56.444574   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined IP address 192.168.72.126 and MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:56.444786   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHPort
	I0806 08:34:56.444977   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHKeyPath
	I0806 08:34:56.445125   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHUsername
	I0806 08:34:56.445327   78922 sshutil.go:53] new ssh client: &{IP:192.168.72.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/no-preload-703340/id_rsa Username:docker}
	I0806 08:34:56.530980   78922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0806 08:34:56.557880   78922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0806 08:34:56.584108   78922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0806 08:34:56.608415   78922 provision.go:87] duration metric: took 270.961494ms to configureAuth
	I0806 08:34:56.608449   78922 buildroot.go:189] setting minikube options for container-runtime
	I0806 08:34:56.608675   78922 config.go:182] Loaded profile config "no-preload-703340": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-rc.0
	I0806 08:34:56.608794   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHHostname
	I0806 08:34:56.611937   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:56.612383   78922 main.go:141] libmachine: (no-preload-703340) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fa:d3", ip: ""} in network mk-no-preload-703340: {Iface:virbr4 ExpiryTime:2024-08-06 09:34:49 +0000 UTC Type:0 Mac:52:54:00:a1:fa:d3 Iaid: IPaddr:192.168.72.126 Prefix:24 Hostname:no-preload-703340 Clientid:01:52:54:00:a1:fa:d3}
	I0806 08:34:56.612409   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined IP address 192.168.72.126 and MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:56.612585   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHPort
	I0806 08:34:56.612774   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHKeyPath
	I0806 08:34:56.612953   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHKeyPath
	I0806 08:34:56.613134   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHUsername
	I0806 08:34:56.613282   78922 main.go:141] libmachine: Using SSH client type: native
	I0806 08:34:56.613436   78922 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.126 22 <nil> <nil>}
	I0806 08:34:56.613450   78922 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0806 08:34:56.904446   78922 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0806 08:34:56.904474   78922 machine.go:97] duration metric: took 931.572346ms to provisionDockerMachine
	I0806 08:34:56.904484   78922 start.go:293] postStartSetup for "no-preload-703340" (driver="kvm2")
	I0806 08:34:56.904496   78922 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0806 08:34:56.904517   78922 main.go:141] libmachine: (no-preload-703340) Calling .DriverName
	I0806 08:34:56.905043   78922 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0806 08:34:56.905076   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHHostname
	I0806 08:34:56.907707   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:56.908101   78922 main.go:141] libmachine: (no-preload-703340) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fa:d3", ip: ""} in network mk-no-preload-703340: {Iface:virbr4 ExpiryTime:2024-08-06 09:34:49 +0000 UTC Type:0 Mac:52:54:00:a1:fa:d3 Iaid: IPaddr:192.168.72.126 Prefix:24 Hostname:no-preload-703340 Clientid:01:52:54:00:a1:fa:d3}
	I0806 08:34:56.908124   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined IP address 192.168.72.126 and MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:56.908356   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHPort
	I0806 08:34:56.908586   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHKeyPath
	I0806 08:34:56.908753   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHUsername
	I0806 08:34:56.908924   78922 sshutil.go:53] new ssh client: &{IP:192.168.72.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/no-preload-703340/id_rsa Username:docker}
	I0806 08:34:56.995555   78922 ssh_runner.go:195] Run: cat /etc/os-release
	I0806 08:34:56.999971   78922 info.go:137] Remote host: Buildroot 2023.02.9
	I0806 08:34:57.000004   78922 filesync.go:126] Scanning /home/jenkins/minikube-integration/19370-13057/.minikube/addons for local assets ...
	I0806 08:34:57.000079   78922 filesync.go:126] Scanning /home/jenkins/minikube-integration/19370-13057/.minikube/files for local assets ...
	I0806 08:34:57.000150   78922 filesync.go:149] local asset: /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem -> 202462.pem in /etc/ssl/certs
	I0806 08:34:57.000243   78922 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0806 08:34:57.010178   78922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem --> /etc/ssl/certs/202462.pem (1708 bytes)
	I0806 08:34:57.035704   78922 start.go:296] duration metric: took 131.206204ms for postStartSetup
	I0806 08:34:57.035751   78922 fix.go:56] duration metric: took 19.682153204s for fixHost
	I0806 08:34:57.035775   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHHostname
	I0806 08:34:57.038230   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:57.038565   78922 main.go:141] libmachine: (no-preload-703340) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fa:d3", ip: ""} in network mk-no-preload-703340: {Iface:virbr4 ExpiryTime:2024-08-06 09:34:49 +0000 UTC Type:0 Mac:52:54:00:a1:fa:d3 Iaid: IPaddr:192.168.72.126 Prefix:24 Hostname:no-preload-703340 Clientid:01:52:54:00:a1:fa:d3}
	I0806 08:34:57.038596   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined IP address 192.168.72.126 and MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:57.038809   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHPort
	I0806 08:34:57.039016   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHKeyPath
	I0806 08:34:57.039197   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHKeyPath
	I0806 08:34:57.039308   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHUsername
	I0806 08:34:57.039486   78922 main.go:141] libmachine: Using SSH client type: native
	I0806 08:34:57.039697   78922 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.126 22 <nil> <nil>}
	I0806 08:34:57.039712   78922 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0806 08:34:57.149272   78922 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722933297.121300330
	
	I0806 08:34:57.149291   78922 fix.go:216] guest clock: 1722933297.121300330
	I0806 08:34:57.149298   78922 fix.go:229] Guest: 2024-08-06 08:34:57.12130033 +0000 UTC Remote: 2024-08-06 08:34:57.035756139 +0000 UTC m=+360.415626547 (delta=85.544191ms)
	I0806 08:34:57.149319   78922 fix.go:200] guest clock delta is within tolerance: 85.544191ms
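
The clock check reads the guest time over SSH (date +%s.%N), compares it with the host-recorded remote timestamp, and only resynchronises when the difference exceeds a tolerance; here the 85.544191ms delta passes. A rough Go sketch of that comparison using the two timestamps from the log; the 2s tolerance is an assumption for illustration, not a value taken from fix.go.

package main

import (
	"fmt"
	"math"
	"time"
)

func main() {
	// Timestamps lifted from the log: guest clock vs. the host's view of remote time.
	guest := time.Unix(1722933297, 121300330)
	remote := time.Date(2024, 8, 6, 8, 34, 57, 35756139, time.UTC)
	tolerance := 2 * time.Second // assumed threshold, for illustration only

	delta := guest.Sub(remote)
	if math.Abs(float64(delta)) <= float64(tolerance) {
		fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds %v; would resync\n", delta, tolerance)
	}
}
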
	I0806 08:34:57.149326   78922 start.go:83] releasing machines lock for "no-preload-703340", held for 19.795759576s
	I0806 08:34:57.149348   78922 main.go:141] libmachine: (no-preload-703340) Calling .DriverName
	I0806 08:34:57.149624   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetIP
	I0806 08:34:57.152286   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:57.152699   78922 main.go:141] libmachine: (no-preload-703340) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fa:d3", ip: ""} in network mk-no-preload-703340: {Iface:virbr4 ExpiryTime:2024-08-06 09:34:49 +0000 UTC Type:0 Mac:52:54:00:a1:fa:d3 Iaid: IPaddr:192.168.72.126 Prefix:24 Hostname:no-preload-703340 Clientid:01:52:54:00:a1:fa:d3}
	I0806 08:34:57.152731   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined IP address 192.168.72.126 and MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:57.152905   78922 main.go:141] libmachine: (no-preload-703340) Calling .DriverName
	I0806 08:34:57.153488   78922 main.go:141] libmachine: (no-preload-703340) Calling .DriverName
	I0806 08:34:57.153680   78922 main.go:141] libmachine: (no-preload-703340) Calling .DriverName
	I0806 08:34:57.153764   78922 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0806 08:34:57.153803   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHHostname
	I0806 08:34:57.154068   78922 ssh_runner.go:195] Run: cat /version.json
	I0806 08:34:57.154090   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHHostname
	I0806 08:34:57.156678   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:57.157026   78922 main.go:141] libmachine: (no-preload-703340) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fa:d3", ip: ""} in network mk-no-preload-703340: {Iface:virbr4 ExpiryTime:2024-08-06 09:34:49 +0000 UTC Type:0 Mac:52:54:00:a1:fa:d3 Iaid: IPaddr:192.168.72.126 Prefix:24 Hostname:no-preload-703340 Clientid:01:52:54:00:a1:fa:d3}
	I0806 08:34:57.157053   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined IP address 192.168.72.126 and MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:57.157123   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:57.157327   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHPort
	I0806 08:34:57.157492   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHKeyPath
	I0806 08:34:57.157580   78922 main.go:141] libmachine: (no-preload-703340) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fa:d3", ip: ""} in network mk-no-preload-703340: {Iface:virbr4 ExpiryTime:2024-08-06 09:34:49 +0000 UTC Type:0 Mac:52:54:00:a1:fa:d3 Iaid: IPaddr:192.168.72.126 Prefix:24 Hostname:no-preload-703340 Clientid:01:52:54:00:a1:fa:d3}
	I0806 08:34:57.157621   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHUsername
	I0806 08:34:57.157602   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined IP address 192.168.72.126 and MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:57.157701   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHPort
	I0806 08:34:57.157919   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHKeyPath
	I0806 08:34:57.157933   78922 sshutil.go:53] new ssh client: &{IP:192.168.72.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/no-preload-703340/id_rsa Username:docker}
	I0806 08:34:57.158089   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHUsername
	I0806 08:34:57.158243   78922 sshutil.go:53] new ssh client: &{IP:192.168.72.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/no-preload-703340/id_rsa Username:docker}
	I0806 08:34:57.237574   78922 ssh_runner.go:195] Run: systemctl --version
	I0806 08:34:57.260498   78922 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0806 08:34:57.403619   78922 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0806 08:34:57.410198   78922 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0806 08:34:57.410267   78922 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0806 08:34:57.426321   78922 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0806 08:34:57.426345   78922 start.go:495] detecting cgroup driver to use...
	I0806 08:34:57.426414   78922 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0806 08:34:57.442340   78922 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0806 08:34:57.458008   78922 docker.go:217] disabling cri-docker service (if available) ...
	I0806 08:34:57.458061   78922 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0806 08:34:57.471516   78922 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0806 08:34:57.484996   78922 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0806 08:34:57.604605   78922 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0806 08:34:57.753590   78922 docker.go:233] disabling docker service ...
	I0806 08:34:57.753651   78922 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0806 08:34:57.768534   78922 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0806 08:34:57.782187   78922 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0806 08:34:57.936046   78922 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0806 08:34:58.076402   78922 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0806 08:34:58.091386   78922 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0806 08:34:58.112810   78922 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0806 08:34:58.112873   78922 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:34:58.123376   78922 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0806 08:34:58.123438   78922 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:34:58.133981   78922 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:34:58.144891   78922 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:34:58.156276   78922 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0806 08:34:58.166927   78922 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:34:58.176956   78922 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:34:58.193969   78922 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
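
The run of sed invocations above edits /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image to registry.k8s.io/pause:3.10, switch cgroup_manager to cgroupfs, and inject net.ipv4.ip_unprivileged_port_start=0 into default_sysctls. A hedged Go equivalent of the first two edits, operating on a local copy of the file; the path and replacement values come from the log, while the rewrite helper itself is illustrative rather than minikube's own code.

package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	// Illustrative only: rewrite pause_image and cgroup_manager the way the
	// sed one-liners in the log do, but against a local copy of the file.
	const path = "02-crio.conf" // stand-in for /etc/crio/crio.conf.d/02-crio.conf
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10"`))
	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(out, []byte(`cgroup_manager = "cgroupfs"`))
	if err := os.WriteFile(path, out, 0o644); err != nil {
		panic(err)
	}
	fmt.Println("rewrote", path)
}
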
	I0806 08:34:58.204122   78922 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0806 08:34:58.213745   78922 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0806 08:34:58.213859   78922 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0806 08:34:58.226556   78922 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
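
The sysctl probe fails with status 255 because br_netfilter is not loaded yet, so the flow falls back to modprobe and then switches on IPv4 forwarding. A small sketch of that check-then-load fallback with os/exec; the commands are the ones from the log, and the surrounding control flow is assumed.

package main

import (
	"log"
	"os/exec"
)

func run(name string, args ...string) error {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		log.Printf("%s %v failed: %v\n%s", name, args, err, out)
	}
	return err
}

func main() {
	// If the bridge netfilter sysctl is not present yet, load br_netfilter first.
	if err := run("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables"); err != nil {
		_ = run("sudo", "modprobe", "br_netfilter")
	}
	// Then make sure IPv4 forwarding is on, as the log does with echo 1 > ip_forward.
	_ = run("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward")
}
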
	I0806 08:34:58.236413   78922 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 08:34:58.359776   78922 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0806 08:34:58.505947   78922 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0806 08:34:58.506016   78922 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0806 08:34:58.511591   78922 start.go:563] Will wait 60s for crictl version
	I0806 08:34:58.511649   78922 ssh_runner.go:195] Run: which crictl
	I0806 08:34:58.516259   78922 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0806 08:34:58.568580   78922 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0806 08:34:58.568658   78922 ssh_runner.go:195] Run: crio --version
	I0806 08:34:58.607964   78922 ssh_runner.go:195] Run: crio --version
	I0806 08:34:58.642564   78922 out.go:177] * Preparing Kubernetes v1.31.0-rc.0 on CRI-O 1.29.1 ...
	I0806 08:34:54.018818   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:54.518987   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:55.018584   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:55.518848   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:56.018094   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:56.518052   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:57.018765   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:57.518128   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:58.018977   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:58.518233   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:55.789183   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:58.288863   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:54.612520   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:56.614634   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:59.115966   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:58.643802   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetIP
	I0806 08:34:58.646395   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:58.646714   78922 main.go:141] libmachine: (no-preload-703340) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fa:d3", ip: ""} in network mk-no-preload-703340: {Iface:virbr4 ExpiryTime:2024-08-06 09:34:49 +0000 UTC Type:0 Mac:52:54:00:a1:fa:d3 Iaid: IPaddr:192.168.72.126 Prefix:24 Hostname:no-preload-703340 Clientid:01:52:54:00:a1:fa:d3}
	I0806 08:34:58.646744   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined IP address 192.168.72.126 and MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:58.646948   78922 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0806 08:34:58.651136   78922 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
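
The bash one-liner above makes the host.minikube.internal mapping idempotent: drop any existing line for the name, append the fresh 192.168.72.1 entry, and copy the result back over /etc/hosts. A sketch of the same idea in Go, writing to a staging file first; the address and hostname are from the log, the helper itself is illustrative.

package main

import (
	"os"
	"strings"
)

func main() {
	const entry = "192.168.72.1\thost.minikube.internal"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Drop any stale line for host.minikube.internal, keep everything else.
		if strings.HasSuffix(line, "\thost.minikube.internal") {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, entry)
	// Write a staging copy; the real flow then sudo-cp's it over /etc/hosts.
	out := strings.Join(kept, "\n") + "\n"
	if err := os.WriteFile("/tmp/hosts.new", []byte(out), 0o644); err != nil {
		panic(err)
	}
}
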
	I0806 08:34:58.664347   78922 kubeadm.go:883] updating cluster {Name:no-preload-703340 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0-rc.0 ClusterName:no-preload-703340 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.126 Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m
0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0806 08:34:58.664479   78922 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime crio
	I0806 08:34:58.664511   78922 ssh_runner.go:195] Run: sudo crictl images --output json
	I0806 08:34:58.705043   78922 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-rc.0". assuming images are not preloaded.
	I0806 08:34:58.705080   78922 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0-rc.0 registry.k8s.io/kube-controller-manager:v1.31.0-rc.0 registry.k8s.io/kube-scheduler:v1.31.0-rc.0 registry.k8s.io/kube-proxy:v1.31.0-rc.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0806 08:34:58.705157   78922 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0806 08:34:58.705170   78922 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0806 08:34:58.705177   78922 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-rc.0
	I0806 08:34:58.705245   78922 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-rc.0
	I0806 08:34:58.705272   78922 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-rc.0
	I0806 08:34:58.705281   78922 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0806 08:34:58.705160   78922 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-rc.0
	I0806 08:34:58.705469   78922 image.go:134] retrieving image: registry.k8s.io/pause:3.10
	I0806 08:34:58.706962   78922 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-rc.0
	I0806 08:34:58.706993   78922 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0806 08:34:58.706986   78922 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-rc.0
	I0806 08:34:58.707018   78922 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-rc.0
	I0806 08:34:58.706968   78922 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0806 08:34:58.706989   78922 image.go:177] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0806 08:34:58.707066   78922 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0806 08:34:58.707315   78922 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-rc.0
	I0806 08:34:58.835957   78922 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0-rc.0
	I0806 08:34:58.839572   78922 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0806 08:34:58.846955   78922 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0806 08:34:58.865707   78922 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0-rc.0
	I0806 08:34:58.872729   78922 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0806 08:34:58.878484   78922 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0-rc.0
	I0806 08:34:58.898935   78922 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0-rc.0
	I0806 08:34:58.919621   78922 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0-rc.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0-rc.0" does not exist at hash "fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c" in container runtime
	I0806 08:34:58.919654   78922 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0-rc.0
	I0806 08:34:58.919691   78922 ssh_runner.go:195] Run: which crictl
	I0806 08:34:58.949493   78922 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0806 08:34:58.949544   78922 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0806 08:34:58.949597   78922 ssh_runner.go:195] Run: which crictl
	I0806 08:34:58.971429   78922 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0806 08:34:58.971476   78922 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0806 08:34:58.971522   78922 ssh_runner.go:195] Run: which crictl
	I0806 08:34:59.015321   78922 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0-rc.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0-rc.0" does not exist at hash "0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c" in container runtime
	I0806 08:34:59.015359   78922 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0-rc.0
	I0806 08:34:59.015397   78922 ssh_runner.go:195] Run: which crictl
	I0806 08:34:59.107718   78922 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0-rc.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0-rc.0" does not exist at hash "41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318" in container runtime
	I0806 08:34:59.107761   78922 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0-rc.0
	I0806 08:34:59.107797   78922 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0-rc.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0-rc.0" does not exist at hash "c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0" in container runtime
	I0806 08:34:59.107867   78922 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0-rc.0
	I0806 08:34:59.107904   78922 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0806 08:34:59.107987   78922 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0-rc.0
	I0806 08:34:59.107809   78922 ssh_runner.go:195] Run: which crictl
	I0806 08:34:59.108037   78922 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-rc.0
	I0806 08:34:59.107994   78922 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0806 08:34:59.108037   78922 ssh_runner.go:195] Run: which crictl
	I0806 08:34:59.120323   78922 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-rc.0
	I0806 08:34:59.218864   78922 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-rc.0
	I0806 08:34:59.218982   78922 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0-rc.0
	I0806 08:34:59.222696   78922 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0-rc.0
	I0806 08:34:59.222735   78922 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0806 08:34:59.222794   78922 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-rc.0
	I0806 08:34:59.222820   78922 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0806 08:34:59.222854   78922 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0806 08:34:59.222882   78922 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0-rc.0
	I0806 08:34:59.222900   78922 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-rc.0
	I0806 08:34:59.222933   78922 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0806 08:34:59.222964   78922 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0-rc.0
	I0806 08:34:59.227260   78922 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0-rc.0 (exists)
	I0806 08:34:59.227280   78922 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0-rc.0
	I0806 08:34:59.227319   78922 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-rc.0
	I0806 08:34:59.268220   78922 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0-rc.0 (exists)
	I0806 08:34:59.268240   78922 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-rc.0
	I0806 08:34:59.268291   78922 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0806 08:34:59.268325   78922 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0806 08:34:59.268332   78922 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0-rc.0
	I0806 08:34:59.268349   78922 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0-rc.0 (exists)
	I0806 08:34:59.613734   78922 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0806 08:35:01.623115   78922 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-rc.0: (2.395772878s)
	I0806 08:35:01.623140   78922 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0-rc.0: (2.354795538s)
	I0806 08:35:01.623152   78922 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-rc.0 from cache
	I0806 08:35:01.623161   78922 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0-rc.0 (exists)
	I0806 08:35:01.623168   78922 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0-rc.0
	I0806 08:35:01.623217   78922 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-rc.0
	I0806 08:35:01.623218   78922 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.009459709s)
	I0806 08:35:01.623310   78922 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0806 08:35:01.623340   78922 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0806 08:35:01.623366   78922 ssh_runner.go:195] Run: which crictl
	I0806 08:34:59.019117   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:59.518121   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:00.018102   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:00.518402   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:01.019125   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:01.518141   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:02.018063   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:02.518834   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:03.018074   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:03.518904   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:00.789181   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:02.790704   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:01.612449   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:03.655402   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:03.781865   78922 ssh_runner.go:235] Completed: which crictl: (2.158478864s)
	I0806 08:35:03.781942   78922 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0806 08:35:03.782003   78922 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-rc.0: (2.158736373s)
	I0806 08:35:03.782023   78922 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-rc.0 from cache
	I0806 08:35:03.782042   78922 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0-rc.0
	I0806 08:35:03.782112   78922 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-rc.0
	I0806 08:35:05.247976   78922 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.466007397s)
	I0806 08:35:05.248028   78922 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-rc.0: (1.465889339s)
	I0806 08:35:05.248049   78922 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0806 08:35:05.248054   78922 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-rc.0 from cache
	I0806 08:35:05.248065   78922 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0806 08:35:05.248117   78922 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0806 08:35:05.248166   78922 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0806 08:35:04.018047   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:04.518891   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:05.018447   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:05.518506   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:06.019133   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:06.518070   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:07.018362   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:07.518320   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:08.019034   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:08.518740   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:05.288137   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:07.288969   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:06.113473   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:08.612266   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:09.135551   78922 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.887407478s)
	I0806 08:35:09.135591   78922 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0806 08:35:09.135592   78922 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (3.887405091s)
	I0806 08:35:09.135605   78922 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0806 08:35:09.135618   78922 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0806 08:35:09.135645   78922 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0806 08:35:11.131984   78922 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.996319459s)
	I0806 08:35:11.132013   78922 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0806 08:35:11.132024   78922 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0-rc.0
	I0806 08:35:11.132070   78922 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-rc.0
	I0806 08:35:09.018349   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:09.518189   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:10.018824   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:10.518260   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:11.018914   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:11.518709   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:12.018349   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:12.518719   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:13.019022   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:13.518318   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:09.787576   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:11.789165   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:10.614185   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:13.112391   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:13.395512   78922 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-rc.0: (2.263417409s)
	I0806 08:35:13.395548   78922 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-rc.0 from cache
	I0806 08:35:13.395560   78922 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0806 08:35:13.395608   78922 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0806 08:35:14.150250   78922 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0806 08:35:14.150301   78922 cache_images.go:123] Successfully loaded all cached images
	I0806 08:35:14.150309   78922 cache_images.go:92] duration metric: took 15.445214072s to LoadCachedImages
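
Because no preload tarball exists for v1.31.0-rc.0 (crio.go:510 above), every required image is checked with podman image inspect, removed from the runtime when the cached copy differs, and streamed in with podman load -i from /var/lib/minikube/images; the whole pass takes about 15.4s. A compressed sketch of that inspect-or-load loop with os/exec; the image names and cache directory are copied from the log, while the loop structure is an assumption about cache_images.go rather than a transcription of it.

package main

import (
	"log"
	"os/exec"
	"path/filepath"
)

func main() {
	cacheDir := "/var/lib/minikube/images" // tarballs already copied to the guest
	images := map[string]string{           // image ref -> cached tarball name
		"registry.k8s.io/kube-apiserver:v1.31.0-rc.0": "kube-apiserver_v1.31.0-rc.0",
		"registry.k8s.io/etcd:3.5.15-0":               "etcd_3.5.15-0",
		"registry.k8s.io/coredns/coredns:v1.11.1":     "coredns_v1.11.1",
	}
	for image, tarball := range images {
		// If the runtime already has the image, skip the load.
		if err := exec.Command("sudo", "podman", "image", "inspect",
			"--format", "{{.Id}}", image).Run(); err == nil {
			continue
		}
		path := filepath.Join(cacheDir, tarball)
		out, err := exec.Command("sudo", "podman", "load", "-i", path).CombinedOutput()
		if err != nil {
			log.Fatalf("loading %s: %v\n%s", image, err, out)
		}
		log.Printf("loaded %s from %s", image, path)
	}
}
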
	I0806 08:35:14.150322   78922 kubeadm.go:934] updating node { 192.168.72.126 8443 v1.31.0-rc.0 crio true true} ...
	I0806 08:35:14.150540   78922 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-rc.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-703340 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.126
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-rc.0 ClusterName:no-preload-703340 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0806 08:35:14.150640   78922 ssh_runner.go:195] Run: crio config
	I0806 08:35:14.210763   78922 cni.go:84] Creating CNI manager for ""
	I0806 08:35:14.210794   78922 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0806 08:35:14.210806   78922 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0806 08:35:14.210842   78922 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.126 APIServerPort:8443 KubernetesVersion:v1.31.0-rc.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-703340 NodeName:no-preload-703340 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.126"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.126 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticP
odPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0806 08:35:14.210979   78922 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.126
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-703340"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.126
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.126"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-rc.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0806 08:35:14.211036   78922 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-rc.0
	I0806 08:35:14.222106   78922 binaries.go:44] Found k8s binaries, skipping transfer
	I0806 08:35:14.222189   78922 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0806 08:35:14.231944   78922 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I0806 08:35:14.250491   78922 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0806 08:35:14.268955   78922 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2166 bytes)
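
The generated manifest written to /var/tmp/minikube/kubeadm.yaml.new bundles an InitConfiguration, a ClusterConfiguration, a KubeletConfiguration (cgroupfs driver, CRI-O socket, disk eviction effectively disabled via the 0%/100% thresholds) and a KubeProxyConfiguration into one multi-document YAML. One way to spot-check the kubelet document is to unmarshal it back out; the sketch below uses gopkg.in/yaml.v3 against a local copy and is purely illustrative, not something minikube itself does.

package main

import (
	"fmt"
	"os"
	"strings"

	"gopkg.in/yaml.v3"
)

// Only the fields we want to verify from the KubeletConfiguration document.
type kubeletConfig struct {
	Kind                     string            `yaml:"kind"`
	CgroupDriver             string            `yaml:"cgroupDriver"`
	ContainerRuntimeEndpoint string            `yaml:"containerRuntimeEndpoint"`
	EvictionHard             map[string]string `yaml:"evictionHard"`
}

func main() {
	data, err := os.ReadFile("kubeadm.yaml.new") // local copy of the generated file
	if err != nil {
		panic(err)
	}
	// The file is multi-document YAML; scan for the KubeletConfiguration document.
	for _, doc := range strings.Split(string(data), "\n---\n") {
		var kc kubeletConfig
		if err := yaml.Unmarshal([]byte(doc), &kc); err != nil || kc.Kind != "KubeletConfiguration" {
			continue
		}
		fmt.Printf("cgroupDriver=%s endpoint=%s evictionHard=%v\n",
			kc.CgroupDriver, kc.ContainerRuntimeEndpoint, kc.EvictionHard)
	}
}
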
	I0806 08:35:14.289661   78922 ssh_runner.go:195] Run: grep 192.168.72.126	control-plane.minikube.internal$ /etc/hosts
	I0806 08:35:14.294005   78922 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.126	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0806 08:35:14.307700   78922 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 08:35:14.451462   78922 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0806 08:35:14.478155   78922 certs.go:68] Setting up /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/no-preload-703340 for IP: 192.168.72.126
	I0806 08:35:14.478177   78922 certs.go:194] generating shared ca certs ...
	I0806 08:35:14.478191   78922 certs.go:226] acquiring lock for ca certs: {Name:mkf5c5aed91f4108054d9f2c742cad66c86afbed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 08:35:14.478365   78922 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19370-13057/.minikube/ca.key
	I0806 08:35:14.478420   78922 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.key
	I0806 08:35:14.478434   78922 certs.go:256] generating profile certs ...
	I0806 08:35:14.478529   78922 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/no-preload-703340/client.key
	I0806 08:35:14.478615   78922 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/no-preload-703340/apiserver.key.0b170e14
	I0806 08:35:14.478665   78922 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/no-preload-703340/proxy-client.key
	I0806 08:35:14.478822   78922 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/20246.pem (1338 bytes)
	W0806 08:35:14.478871   78922 certs.go:480] ignoring /home/jenkins/minikube-integration/19370-13057/.minikube/certs/20246_empty.pem, impossibly tiny 0 bytes
	I0806 08:35:14.478885   78922 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca-key.pem (1675 bytes)
	I0806 08:35:14.478941   78922 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem (1078 bytes)
	I0806 08:35:14.478978   78922 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/cert.pem (1123 bytes)
	I0806 08:35:14.479014   78922 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/key.pem (1675 bytes)
	I0806 08:35:14.479078   78922 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem (1708 bytes)
	I0806 08:35:14.479773   78922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0806 08:35:14.527838   78922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0806 08:35:14.579788   78922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0806 08:35:14.614739   78922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0806 08:35:14.657559   78922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/no-preload-703340/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0806 08:35:14.699354   78922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/no-preload-703340/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0806 08:35:14.724875   78922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/no-preload-703340/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0806 08:35:14.751020   78922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/no-preload-703340/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0806 08:35:14.778300   78922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem --> /usr/share/ca-certificates/202462.pem (1708 bytes)
	I0806 08:35:14.805018   78922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0806 08:35:14.830791   78922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/certs/20246.pem --> /usr/share/ca-certificates/20246.pem (1338 bytes)
	I0806 08:35:14.856120   78922 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0806 08:35:14.874909   78922 ssh_runner.go:195] Run: openssl version
	I0806 08:35:14.880931   78922 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202462.pem && ln -fs /usr/share/ca-certificates/202462.pem /etc/ssl/certs/202462.pem"
	I0806 08:35:14.892741   78922 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202462.pem
	I0806 08:35:14.897430   78922 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  6 07:17 /usr/share/ca-certificates/202462.pem
	I0806 08:35:14.897489   78922 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202462.pem
	I0806 08:35:14.903324   78922 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202462.pem /etc/ssl/certs/3ec20f2e.0"
	I0806 08:35:14.914505   78922 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0806 08:35:14.925465   78922 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0806 08:35:14.930490   78922 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  6 07:07 /usr/share/ca-certificates/minikubeCA.pem
	I0806 08:35:14.930559   78922 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0806 08:35:14.936384   78922 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0806 08:35:14.947836   78922 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20246.pem && ln -fs /usr/share/ca-certificates/20246.pem /etc/ssl/certs/20246.pem"
	I0806 08:35:14.959376   78922 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20246.pem
	I0806 08:35:14.964118   78922 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  6 07:17 /usr/share/ca-certificates/20246.pem
	I0806 08:35:14.964168   78922 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20246.pem
	I0806 08:35:14.970266   78922 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20246.pem /etc/ssl/certs/51391683.0"
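
The ln -fs calls above publish each installed CA under its OpenSSL subject hash in /etc/ssl/certs (3ec20f2e.0, b5213941.0, 51391683.0), which is how OpenSSL-based clients locate it. A sketch of that hash-and-symlink step that shells out to openssl the same way the log does; the certificate path is one of the paths from the log, and the program would need root to write into /etc/ssl/certs.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem"
	// Ask openssl for the subject hash, exactly like the log's `-hash -noout` call.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	// ln -fs equivalent: replace any existing link, then point it at the cert.
	_ = os.Remove(link)
	if err := os.Symlink(cert, link); err != nil {
		panic(err)
	}
	fmt.Println("linked", link, "->", cert)
}
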
	I0806 08:35:14.982244   78922 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0806 08:35:14.987367   78922 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0806 08:35:14.993508   78922 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0806 08:35:14.999400   78922 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0806 08:35:15.005751   78922 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0806 08:35:15.011588   78922 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0806 08:35:15.017766   78922 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
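
The string of openssl x509 -checkend 86400 calls confirms that none of the reused control-plane certificates expires within the next 24 hours. The same check done natively with crypto/x509; the certificate path here is a placeholder.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Equivalent of `openssl x509 -noout -in <cert> -checkend 86400`.
	data, err := os.ReadFile("apiserver.crt") // placeholder path
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate expires within 24h; would regenerate")
	} else {
		fmt.Println("certificate valid for at least another 24h")
	}
}
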
	I0806 08:35:15.023853   78922 kubeadm.go:392] StartCluster: {Name:no-preload-703340 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:no-preload-703340 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.126 Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 08:35:15.024060   78922 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0806 08:35:15.024113   78922 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0806 08:35:15.072318   78922 cri.go:89] found id: ""
	I0806 08:35:15.072415   78922 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0806 08:35:15.083158   78922 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0806 08:35:15.083182   78922 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0806 08:35:15.083225   78922 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0806 08:35:15.093301   78922 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0806 08:35:15.094369   78922 kubeconfig.go:125] found "no-preload-703340" server: "https://192.168.72.126:8443"
	I0806 08:35:15.096545   78922 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0806 08:35:15.106190   78922 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.126
	I0806 08:35:15.106226   78922 kubeadm.go:1160] stopping kube-system containers ...
	I0806 08:35:15.106237   78922 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0806 08:35:15.106288   78922 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0806 08:35:15.145974   78922 cri.go:89] found id: ""
	I0806 08:35:15.146060   78922 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0806 08:35:15.163821   78922 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0806 08:35:15.174473   78922 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0806 08:35:15.174498   78922 kubeadm.go:157] found existing configuration files:
	
	I0806 08:35:15.174562   78922 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0806 08:35:15.185227   78922 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0806 08:35:15.185309   78922 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0806 08:35:15.195614   78922 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0806 08:35:15.205197   78922 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0806 08:35:15.205261   78922 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0806 08:35:15.215511   78922 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0806 08:35:15.225354   78922 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0806 08:35:15.225412   78922 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0806 08:35:15.234912   78922 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0806 08:35:15.244638   78922 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0806 08:35:15.244703   78922 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0806 08:35:15.255080   78922 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0806 08:35:15.265861   78922 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0806 08:35:15.399192   78922 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0806 08:35:16.182816   78922 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0806 08:35:16.397386   78922 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0806 08:35:16.463980   78922 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0806 08:35:16.550229   78922 api_server.go:52] waiting for apiserver process to appear ...
	I0806 08:35:16.550347   78922 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:14.018930   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:14.518251   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:15.018981   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:15.518764   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:16.018114   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:16.518157   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:17.018124   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:17.518604   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:18.018742   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:18.518755   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:14.290050   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:16.788472   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:18.788526   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:15.112664   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:17.113214   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:17.050447   78922 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:17.551220   78922 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:17.603538   78922 api_server.go:72] duration metric: took 1.053308623s to wait for apiserver process to appear ...
	I0806 08:35:17.603574   78922 api_server.go:88] waiting for apiserver healthz status ...
	I0806 08:35:17.603595   78922 api_server.go:253] Checking apiserver healthz at https://192.168.72.126:8443/healthz ...
	I0806 08:35:17.604064   78922 api_server.go:269] stopped: https://192.168.72.126:8443/healthz: Get "https://192.168.72.126:8443/healthz": dial tcp 192.168.72.126:8443: connect: connection refused
	I0806 08:35:18.104696   78922 api_server.go:253] Checking apiserver healthz at https://192.168.72.126:8443/healthz ...
	I0806 08:35:20.458841   78922 api_server.go:279] https://192.168.72.126:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0806 08:35:20.458876   78922 api_server.go:103] status: https://192.168.72.126:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0806 08:35:20.458893   78922 api_server.go:253] Checking apiserver healthz at https://192.168.72.126:8443/healthz ...
	I0806 08:35:20.495570   78922 api_server.go:279] https://192.168.72.126:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0806 08:35:20.495605   78922 api_server.go:103] status: https://192.168.72.126:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0806 08:35:20.603728   78922 api_server.go:253] Checking apiserver healthz at https://192.168.72.126:8443/healthz ...
	I0806 08:35:20.628351   78922 api_server.go:279] https://192.168.72.126:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0806 08:35:20.628409   78922 api_server.go:103] status: https://192.168.72.126:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0806 08:35:21.103717   78922 api_server.go:253] Checking apiserver healthz at https://192.168.72.126:8443/healthz ...
	I0806 08:35:21.109031   78922 api_server.go:279] https://192.168.72.126:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0806 08:35:21.109064   78922 api_server.go:103] status: https://192.168.72.126:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0806 08:35:21.604400   78922 api_server.go:253] Checking apiserver healthz at https://192.168.72.126:8443/healthz ...
	I0806 08:35:21.613667   78922 api_server.go:279] https://192.168.72.126:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0806 08:35:21.613706   78922 api_server.go:103] status: https://192.168.72.126:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0806 08:35:22.103921   78922 api_server.go:253] Checking apiserver healthz at https://192.168.72.126:8443/healthz ...
	I0806 08:35:22.111933   78922 api_server.go:279] https://192.168.72.126:8443/healthz returned 200:
	ok
	I0806 08:35:22.118406   78922 api_server.go:141] control plane version: v1.31.0-rc.0
	I0806 08:35:22.118430   78922 api_server.go:131] duration metric: took 4.514848902s to wait for apiserver health ...
	I0806 08:35:22.118438   78922 cni.go:84] Creating CNI manager for ""
	I0806 08:35:22.118444   78922 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0806 08:35:22.119689   78922 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
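The healthz probes above step through a typical apiserver startup: anonymous requests are first rejected with 403, then /healthz answers 500 while poststarthooks (rbac/bootstrap-roles and friends) finish, and finally 200 "ok". A minimal sketch of such a polling loop, assuming an insecure TLS client since the probe runs before client credentials are wired in; the URL, interval, and timeout are illustrative and this is not minikube's api_server.go:

```go
// pollHealthz keeps probing /healthz until it answers 200 "ok", treating 403
// and 500 responses as "not ready yet", mirroring the sequence in the log
// above. URL, interval, and timeout values are assumptions.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
	"time"
)

func pollHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The probe runs before any client certificates are configured,
		// so certificate verification is skipped here (assumption).
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && strings.TrimSpace(string(body)) == "ok" {
				return nil
			}
			// 403 (RBAC not bootstrapped) and 500 (poststarthooks pending)
			// both mean "try again".
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver not healthy within %v", timeout)
}

func main() {
	if err := pollHealthz("https://192.168.72.126:8443/healthz", 2*time.Minute); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```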
	I0806 08:35:19.018302   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:19.518693   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:20.018804   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:20.518878   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:21.018974   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:21.518747   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:22.018530   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:22.519082   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:23.019088   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:23.518932   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:20.788766   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:23.288382   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:19.614878   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:22.113841   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:22.121321   78922 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0806 08:35:22.132760   78922 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0806 08:35:22.151770   78922 system_pods.go:43] waiting for kube-system pods to appear ...
	I0806 08:35:22.161775   78922 system_pods.go:59] 8 kube-system pods found
	I0806 08:35:22.161806   78922 system_pods.go:61] "coredns-6f6b679f8f-ft9vg" [0f9ea981-764b-4777-bf78-8798570b20d8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0806 08:35:22.161818   78922 system_pods.go:61] "etcd-no-preload-703340" [6adb80b2-0d33-4d53-a868-f83226cc7c26] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0806 08:35:22.161826   78922 system_pods.go:61] "kube-apiserver-no-preload-703340" [4a415305-2f94-42c5-901d-49f0b0114cf6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0806 08:35:22.161832   78922 system_pods.go:61] "kube-controller-manager-no-preload-703340" [0351f8f4-5046-4068-ada3-103394ee9ae2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0806 08:35:22.161837   78922 system_pods.go:61] "kube-proxy-7t2sd" [27f3aa37-eca2-450b-a98d-b3246a7e48ac] Running
	I0806 08:35:22.161841   78922 system_pods.go:61] "kube-scheduler-no-preload-703340" [ad4f6239-f5e1-4a76-91ea-8da5478e8992] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0806 08:35:22.161845   78922 system_pods.go:61] "metrics-server-6867b74b74-qz4vx" [800676b3-4940-4ae2-ad37-7adbd67ea8fa] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0806 08:35:22.161849   78922 system_pods.go:61] "storage-provisioner" [8fa414bf-3923-4c20-acd7-a48fba6f4044] Running
	I0806 08:35:22.161855   78922 system_pods.go:74] duration metric: took 10.062633ms to wait for pod list to return data ...
	I0806 08:35:22.161865   78922 node_conditions.go:102] verifying NodePressure condition ...
	I0806 08:35:22.165604   78922 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0806 08:35:22.165626   78922 node_conditions.go:123] node cpu capacity is 2
	I0806 08:35:22.165637   78922 node_conditions.go:105] duration metric: took 3.767899ms to run NodePressure ...
	I0806 08:35:22.165656   78922 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0806 08:35:22.445769   78922 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0806 08:35:22.451056   78922 kubeadm.go:739] kubelet initialised
	I0806 08:35:22.451078   78922 kubeadm.go:740] duration metric: took 5.279835ms waiting for restarted kubelet to initialise ...
	I0806 08:35:22.451085   78922 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0806 08:35:22.458199   78922 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6f6b679f8f-ft9vg" in "kube-system" namespace to be "Ready" ...
	I0806 08:35:22.463913   78922 pod_ready.go:97] node "no-preload-703340" hosting pod "coredns-6f6b679f8f-ft9vg" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-703340" has status "Ready":"False"
	I0806 08:35:22.463936   78922 pod_ready.go:81] duration metric: took 5.712258ms for pod "coredns-6f6b679f8f-ft9vg" in "kube-system" namespace to be "Ready" ...
	E0806 08:35:22.463945   78922 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-703340" hosting pod "coredns-6f6b679f8f-ft9vg" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-703340" has status "Ready":"False"
	I0806 08:35:22.463952   78922 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-703340" in "kube-system" namespace to be "Ready" ...
	I0806 08:35:22.471896   78922 pod_ready.go:97] node "no-preload-703340" hosting pod "etcd-no-preload-703340" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-703340" has status "Ready":"False"
	I0806 08:35:22.471921   78922 pod_ready.go:81] duration metric: took 7.962236ms for pod "etcd-no-preload-703340" in "kube-system" namespace to be "Ready" ...
	E0806 08:35:22.471930   78922 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-703340" hosting pod "etcd-no-preload-703340" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-703340" has status "Ready":"False"
	I0806 08:35:22.471936   78922 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-703340" in "kube-system" namespace to be "Ready" ...
	I0806 08:35:22.478365   78922 pod_ready.go:97] node "no-preload-703340" hosting pod "kube-apiserver-no-preload-703340" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-703340" has status "Ready":"False"
	I0806 08:35:22.478394   78922 pod_ready.go:81] duration metric: took 6.451038ms for pod "kube-apiserver-no-preload-703340" in "kube-system" namespace to be "Ready" ...
	E0806 08:35:22.478405   78922 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-703340" hosting pod "kube-apiserver-no-preload-703340" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-703340" has status "Ready":"False"
	I0806 08:35:22.478414   78922 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-703340" in "kube-system" namespace to be "Ready" ...
	I0806 08:35:22.559022   78922 pod_ready.go:97] node "no-preload-703340" hosting pod "kube-controller-manager-no-preload-703340" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-703340" has status "Ready":"False"
	I0806 08:35:22.559058   78922 pod_ready.go:81] duration metric: took 80.632686ms for pod "kube-controller-manager-no-preload-703340" in "kube-system" namespace to be "Ready" ...
	E0806 08:35:22.559071   78922 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-703340" hosting pod "kube-controller-manager-no-preload-703340" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-703340" has status "Ready":"False"
	I0806 08:35:22.559080   78922 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-7t2sd" in "kube-system" namespace to be "Ready" ...
	I0806 08:35:22.955121   78922 pod_ready.go:92] pod "kube-proxy-7t2sd" in "kube-system" namespace has status "Ready":"True"
	I0806 08:35:22.955142   78922 pod_ready.go:81] duration metric: took 396.053517ms for pod "kube-proxy-7t2sd" in "kube-system" namespace to be "Ready" ...
	I0806 08:35:22.955152   78922 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-703340" in "kube-system" namespace to be "Ready" ...
	I0806 08:35:24.961266   78922 pod_ready.go:102] pod "kube-scheduler-no-preload-703340" in "kube-system" namespace has status "Ready":"False"
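The pod_ready entries above repeat one check on a short interval: fetch the pod and test whether its Ready condition is "True". A minimal client-go sketch of that pattern, with the kubeconfig source, timeout, and poll interval as assumptions (this is not minikube's pod_ready implementation):

```go
// waitPodReady polls a pod until its Ready condition is True, the same
// pattern the pod_ready log lines above show. Kubeconfig source, timeout,
// and interval are assumptions for illustration.
package main

import (
	"context"
	"fmt"
	"os"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, cond := range pod.Status.Conditions {
				if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
					return nil // pod reports Ready:"True"
				}
			}
		}
		time.Sleep(2 * time.Second) // the log polls on a similarly short interval
	}
	return fmt.Errorf("pod %s/%s not Ready within %v", ns, name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if err := waitPodReady(context.Background(), cs, "kube-system", "kube-scheduler-no-preload-703340", 4*time.Minute); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```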
	I0806 08:35:24.018764   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:24.519010   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:25.019090   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:25.518260   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:26.018784   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:26.518903   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:27.019011   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:27.518740   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:28.018299   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:28.518391   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:25.787771   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:27.788719   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:24.612790   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:26.612992   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:29.112518   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:26.962028   78922 pod_ready.go:102] pod "kube-scheduler-no-preload-703340" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:29.461941   78922 pod_ready.go:102] pod "kube-scheduler-no-preload-703340" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:29.018449   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:29.518446   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:30.018179   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:30.518270   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:31.018913   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:31.518140   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:32.018621   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:32.518573   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:33.018512   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:33.518604   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:29.790220   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:32.287668   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:31.112680   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:33.612484   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:31.961716   78922 pod_ready.go:102] pod "kube-scheduler-no-preload-703340" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:34.461368   78922 pod_ready.go:102] pod "kube-scheduler-no-preload-703340" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:34.961460   78922 pod_ready.go:92] pod "kube-scheduler-no-preload-703340" in "kube-system" namespace has status "Ready":"True"
	I0806 08:35:34.961483   78922 pod_ready.go:81] duration metric: took 12.006324097s for pod "kube-scheduler-no-preload-703340" in "kube-system" namespace to be "Ready" ...
	I0806 08:35:34.961495   78922 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace to be "Ready" ...
	I0806 08:35:34.018132   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:34.518476   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:35.018797   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:35.518507   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:36.018976   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:36.518062   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:37.018463   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:37.518648   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:38.018861   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:38.518236   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:34.288032   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:36.787530   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:38.789235   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:35.612530   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:38.114833   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:36.967646   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:38.969106   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:41.469056   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:39.018424   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:39.518065   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:40.018381   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:40.518979   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:41.018310   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:41.518277   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:42.018881   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:42.518157   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:43.018166   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:43.518740   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:41.287534   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:43.288666   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:40.115187   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:42.611895   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:43.967648   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:45.968277   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:44.018075   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:44.518545   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:45.018342   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:45.519066   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:46.018796   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:46.519031   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:47.018876   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:47.518649   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:35:47.518746   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:35:47.559141   79927 cri.go:89] found id: ""
	I0806 08:35:47.559175   79927 logs.go:276] 0 containers: []
	W0806 08:35:47.559188   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:35:47.559195   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:35:47.559253   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:35:47.596840   79927 cri.go:89] found id: ""
	I0806 08:35:47.596900   79927 logs.go:276] 0 containers: []
	W0806 08:35:47.596913   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:35:47.596921   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:35:47.596995   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:35:47.634932   79927 cri.go:89] found id: ""
	I0806 08:35:47.634961   79927 logs.go:276] 0 containers: []
	W0806 08:35:47.634969   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:35:47.634975   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:35:47.635031   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:35:47.672170   79927 cri.go:89] found id: ""
	I0806 08:35:47.672202   79927 logs.go:276] 0 containers: []
	W0806 08:35:47.672213   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:35:47.672220   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:35:47.672284   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:35:47.709643   79927 cri.go:89] found id: ""
	I0806 08:35:47.709672   79927 logs.go:276] 0 containers: []
	W0806 08:35:47.709683   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:35:47.709690   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:35:47.709752   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:35:47.750036   79927 cri.go:89] found id: ""
	I0806 08:35:47.750067   79927 logs.go:276] 0 containers: []
	W0806 08:35:47.750076   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:35:47.750082   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:35:47.750149   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:35:47.791837   79927 cri.go:89] found id: ""
	I0806 08:35:47.791862   79927 logs.go:276] 0 containers: []
	W0806 08:35:47.791872   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:35:47.791879   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:35:47.791938   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:35:47.831972   79927 cri.go:89] found id: ""
	I0806 08:35:47.831997   79927 logs.go:276] 0 containers: []
	W0806 08:35:47.832005   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:35:47.832014   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:35:47.832028   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:35:47.962228   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:35:47.962252   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:35:47.962267   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:35:48.031276   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:35:48.031314   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:35:48.075869   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:35:48.075912   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:35:48.131817   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:35:48.131859   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
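Each crictl run above comes back with `found id: ""`, i.e. no control-plane containers exist yet, which is why the log falls back to gathering kubelet, CRI-O, and dmesg output instead. A minimal sketch of that listing step, assuming local execution of crictl (minikube actually runs it over SSH) and the component names shown above:

```go
// listContainerIDs mirrors the repeated "crictl ps -a --quiet --name=<component>"
// calls in the log above and splits the output into container IDs.
// Running crictl locally (rather than over SSH) is an assumption here.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func listContainerIDs(component string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
	if err != nil {
		return nil, fmt.Errorf("crictl ps for %q: %w", component, err)
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		ids, err := listContainerIDs(c)
		if err != nil {
			fmt.Println(c, "error:", err)
			continue
		}
		fmt.Printf("%s: %d containers %v\n", c, len(ids), ids)
	}
}
```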
	I0806 08:35:45.788042   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:47.789501   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:44.612796   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:47.112009   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:48.468289   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:50.469101   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:50.647351   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:50.661916   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:35:50.662002   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:35:50.707294   79927 cri.go:89] found id: ""
	I0806 08:35:50.707323   79927 logs.go:276] 0 containers: []
	W0806 08:35:50.707335   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:35:50.707342   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:35:50.707396   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:35:50.744111   79927 cri.go:89] found id: ""
	I0806 08:35:50.744133   79927 logs.go:276] 0 containers: []
	W0806 08:35:50.744149   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:35:50.744160   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:35:50.744205   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:35:50.781482   79927 cri.go:89] found id: ""
	I0806 08:35:50.781513   79927 logs.go:276] 0 containers: []
	W0806 08:35:50.781524   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:35:50.781531   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:35:50.781591   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:35:50.828788   79927 cri.go:89] found id: ""
	I0806 08:35:50.828817   79927 logs.go:276] 0 containers: []
	W0806 08:35:50.828827   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:35:50.828834   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:35:50.828896   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:35:50.865168   79927 cri.go:89] found id: ""
	I0806 08:35:50.865199   79927 logs.go:276] 0 containers: []
	W0806 08:35:50.865211   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:35:50.865219   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:35:50.865273   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:35:50.902814   79927 cri.go:89] found id: ""
	I0806 08:35:50.902842   79927 logs.go:276] 0 containers: []
	W0806 08:35:50.902851   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:35:50.902856   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:35:50.902907   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:35:50.939861   79927 cri.go:89] found id: ""
	I0806 08:35:50.939891   79927 logs.go:276] 0 containers: []
	W0806 08:35:50.939899   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:35:50.939905   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:35:50.939961   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:35:50.984743   79927 cri.go:89] found id: ""
	I0806 08:35:50.984773   79927 logs.go:276] 0 containers: []
	W0806 08:35:50.984782   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:35:50.984793   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:35:50.984809   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:35:50.998476   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:35:50.998515   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:35:51.081100   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:35:51.081126   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:35:51.081139   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:35:51.157634   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:35:51.157679   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:35:51.210685   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:35:51.210716   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:35:50.288343   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:52.804477   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:49.612661   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:52.113950   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:54.114200   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:52.969251   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:55.468041   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:53.778620   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:53.793090   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:35:53.793154   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:35:53.837558   79927 cri.go:89] found id: ""
	I0806 08:35:53.837584   79927 logs.go:276] 0 containers: []
	W0806 08:35:53.837594   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:35:53.837599   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:35:53.837670   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:35:53.878855   79927 cri.go:89] found id: ""
	I0806 08:35:53.878884   79927 logs.go:276] 0 containers: []
	W0806 08:35:53.878893   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:35:53.878898   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:35:53.878945   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:35:53.918330   79927 cri.go:89] found id: ""
	I0806 08:35:53.918357   79927 logs.go:276] 0 containers: []
	W0806 08:35:53.918366   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:35:53.918371   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:35:53.918422   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:35:53.959156   79927 cri.go:89] found id: ""
	I0806 08:35:53.959186   79927 logs.go:276] 0 containers: []
	W0806 08:35:53.959196   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:35:53.959204   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:35:53.959309   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:35:53.996775   79927 cri.go:89] found id: ""
	I0806 08:35:53.996802   79927 logs.go:276] 0 containers: []
	W0806 08:35:53.996810   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:35:53.996816   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:35:53.996874   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:35:54.033047   79927 cri.go:89] found id: ""
	I0806 08:35:54.033072   79927 logs.go:276] 0 containers: []
	W0806 08:35:54.033082   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:35:54.033089   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:35:54.033152   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:35:54.069426   79927 cri.go:89] found id: ""
	I0806 08:35:54.069458   79927 logs.go:276] 0 containers: []
	W0806 08:35:54.069468   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:35:54.069475   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:35:54.069536   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:35:54.109114   79927 cri.go:89] found id: ""
	I0806 08:35:54.109143   79927 logs.go:276] 0 containers: []
	W0806 08:35:54.109153   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:35:54.109164   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:35:54.109179   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:35:54.161264   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:35:54.161298   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:35:54.175216   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:35:54.175247   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:35:54.254970   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:35:54.254999   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:35:54.255014   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:35:54.331561   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:35:54.331596   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:35:56.873110   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:56.889302   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:35:56.889378   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:35:56.931875   79927 cri.go:89] found id: ""
	I0806 08:35:56.931904   79927 logs.go:276] 0 containers: []
	W0806 08:35:56.931913   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:35:56.931919   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:35:56.931975   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:35:56.995549   79927 cri.go:89] found id: ""
	I0806 08:35:56.995587   79927 logs.go:276] 0 containers: []
	W0806 08:35:56.995601   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:35:56.995608   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:35:56.995675   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:35:57.042708   79927 cri.go:89] found id: ""
	I0806 08:35:57.042735   79927 logs.go:276] 0 containers: []
	W0806 08:35:57.042743   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:35:57.042749   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:35:57.042833   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:35:57.079160   79927 cri.go:89] found id: ""
	I0806 08:35:57.079187   79927 logs.go:276] 0 containers: []
	W0806 08:35:57.079197   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:35:57.079203   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:35:57.079260   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:35:57.116358   79927 cri.go:89] found id: ""
	I0806 08:35:57.116409   79927 logs.go:276] 0 containers: []
	W0806 08:35:57.116421   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:35:57.116429   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:35:57.116489   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:35:57.151051   79927 cri.go:89] found id: ""
	I0806 08:35:57.151079   79927 logs.go:276] 0 containers: []
	W0806 08:35:57.151087   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:35:57.151096   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:35:57.151148   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:35:57.189747   79927 cri.go:89] found id: ""
	I0806 08:35:57.189791   79927 logs.go:276] 0 containers: []
	W0806 08:35:57.189800   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:35:57.189806   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:35:57.189871   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:35:57.225251   79927 cri.go:89] found id: ""
	I0806 08:35:57.225277   79927 logs.go:276] 0 containers: []
	W0806 08:35:57.225287   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:35:57.225298   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:35:57.225314   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:35:57.238746   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:35:57.238773   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:35:57.312931   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:35:57.312959   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:35:57.312976   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:35:57.394104   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:35:57.394139   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:35:57.435993   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:35:57.436028   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:35:55.288712   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:57.789512   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:56.612875   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:58.613571   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:57.468429   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:59.969618   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:59.989556   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:36:00.004129   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:36:00.004205   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:36:00.042192   79927 cri.go:89] found id: ""
	I0806 08:36:00.042215   79927 logs.go:276] 0 containers: []
	W0806 08:36:00.042223   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:36:00.042228   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:36:00.042273   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:36:00.080921   79927 cri.go:89] found id: ""
	I0806 08:36:00.080956   79927 logs.go:276] 0 containers: []
	W0806 08:36:00.080966   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:36:00.080973   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:36:00.081033   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:36:00.123186   79927 cri.go:89] found id: ""
	I0806 08:36:00.123206   79927 logs.go:276] 0 containers: []
	W0806 08:36:00.123216   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:36:00.123223   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:36:00.123274   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:36:00.165252   79927 cri.go:89] found id: ""
	I0806 08:36:00.165281   79927 logs.go:276] 0 containers: []
	W0806 08:36:00.165289   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:36:00.165294   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:36:00.165340   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:36:00.201601   79927 cri.go:89] found id: ""
	I0806 08:36:00.201639   79927 logs.go:276] 0 containers: []
	W0806 08:36:00.201653   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:36:00.201662   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:36:00.201730   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:36:00.236933   79927 cri.go:89] found id: ""
	I0806 08:36:00.236964   79927 logs.go:276] 0 containers: []
	W0806 08:36:00.236975   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:36:00.236983   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:36:00.237043   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:36:00.273215   79927 cri.go:89] found id: ""
	I0806 08:36:00.273247   79927 logs.go:276] 0 containers: []
	W0806 08:36:00.273256   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:36:00.273261   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:36:00.273319   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:36:00.308075   79927 cri.go:89] found id: ""
	I0806 08:36:00.308105   79927 logs.go:276] 0 containers: []
	W0806 08:36:00.308116   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:36:00.308126   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:36:00.308141   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:36:00.360096   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:36:00.360146   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:36:00.374484   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:36:00.374521   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:36:00.454297   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:36:00.454318   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:36:00.454334   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:36:00.542665   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:36:00.542702   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:36:03.084297   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:36:03.098661   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:36:03.098737   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:36:03.134751   79927 cri.go:89] found id: ""
	I0806 08:36:03.134786   79927 logs.go:276] 0 containers: []
	W0806 08:36:03.134797   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:36:03.134805   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:36:03.134873   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:36:03.177412   79927 cri.go:89] found id: ""
	I0806 08:36:03.177442   79927 logs.go:276] 0 containers: []
	W0806 08:36:03.177453   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:36:03.177461   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:36:03.177516   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:36:03.213177   79927 cri.go:89] found id: ""
	I0806 08:36:03.213209   79927 logs.go:276] 0 containers: []
	W0806 08:36:03.213221   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:36:03.213229   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:36:03.213291   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:36:03.254569   79927 cri.go:89] found id: ""
	I0806 08:36:03.254594   79927 logs.go:276] 0 containers: []
	W0806 08:36:03.254603   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:36:03.254608   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:36:03.254656   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:36:03.292564   79927 cri.go:89] found id: ""
	I0806 08:36:03.292590   79927 logs.go:276] 0 containers: []
	W0806 08:36:03.292597   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:36:03.292610   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:36:03.292671   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:36:03.331572   79927 cri.go:89] found id: ""
	I0806 08:36:03.331596   79927 logs.go:276] 0 containers: []
	W0806 08:36:03.331605   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:36:03.331610   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:36:03.331658   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:36:03.366368   79927 cri.go:89] found id: ""
	I0806 08:36:03.366398   79927 logs.go:276] 0 containers: []
	W0806 08:36:03.366409   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:36:03.366417   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:36:03.366469   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:36:03.404345   79927 cri.go:89] found id: ""
	I0806 08:36:03.404387   79927 logs.go:276] 0 containers: []
	W0806 08:36:03.404399   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:36:03.404411   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:36:03.404425   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:36:03.454825   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:36:03.454864   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:36:03.469437   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:36:03.469466   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:36:03.553247   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:36:03.553274   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:36:03.553290   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:36:03.635461   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:36:03.635509   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:36:00.287469   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:02.787634   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:01.111573   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:03.613750   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:02.468084   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:04.468169   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:06.469036   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:06.175216   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:36:06.190083   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:36:06.190158   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:36:06.226428   79927 cri.go:89] found id: ""
	I0806 08:36:06.226459   79927 logs.go:276] 0 containers: []
	W0806 08:36:06.226468   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:36:06.226473   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:36:06.226529   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:36:06.263781   79927 cri.go:89] found id: ""
	I0806 08:36:06.263805   79927 logs.go:276] 0 containers: []
	W0806 08:36:06.263813   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:36:06.263819   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:36:06.263895   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:36:06.304335   79927 cri.go:89] found id: ""
	I0806 08:36:06.304386   79927 logs.go:276] 0 containers: []
	W0806 08:36:06.304397   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:36:06.304404   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:36:06.304466   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:36:06.341138   79927 cri.go:89] found id: ""
	I0806 08:36:06.341166   79927 logs.go:276] 0 containers: []
	W0806 08:36:06.341177   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:36:06.341184   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:36:06.341248   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:36:06.381547   79927 cri.go:89] found id: ""
	I0806 08:36:06.381577   79927 logs.go:276] 0 containers: []
	W0806 08:36:06.381587   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:36:06.381594   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:36:06.381662   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:36:06.417606   79927 cri.go:89] found id: ""
	I0806 08:36:06.417630   79927 logs.go:276] 0 containers: []
	W0806 08:36:06.417637   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:36:06.417643   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:36:06.417700   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:36:06.456258   79927 cri.go:89] found id: ""
	I0806 08:36:06.456287   79927 logs.go:276] 0 containers: []
	W0806 08:36:06.456298   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:36:06.456307   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:36:06.456404   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:36:06.494268   79927 cri.go:89] found id: ""
	I0806 08:36:06.494309   79927 logs.go:276] 0 containers: []
	W0806 08:36:06.494321   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:36:06.494332   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:36:06.494346   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:36:06.547670   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:36:06.547710   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:36:06.562020   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:36:06.562049   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:36:06.632691   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:36:06.632717   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:36:06.632729   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:36:06.721737   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:36:06.721780   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:36:04.787931   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:07.287463   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:06.111545   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:08.113637   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:08.968767   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:11.468847   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:09.265154   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:36:09.278664   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:36:09.278743   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:36:09.313464   79927 cri.go:89] found id: ""
	I0806 08:36:09.313494   79927 logs.go:276] 0 containers: []
	W0806 08:36:09.313506   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:36:09.313513   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:36:09.313561   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:36:09.352107   79927 cri.go:89] found id: ""
	I0806 08:36:09.352133   79927 logs.go:276] 0 containers: []
	W0806 08:36:09.352146   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:36:09.352153   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:36:09.352215   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:36:09.387547   79927 cri.go:89] found id: ""
	I0806 08:36:09.387581   79927 logs.go:276] 0 containers: []
	W0806 08:36:09.387594   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:36:09.387601   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:36:09.387684   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:36:09.422157   79927 cri.go:89] found id: ""
	I0806 08:36:09.422193   79927 logs.go:276] 0 containers: []
	W0806 08:36:09.422202   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:36:09.422207   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:36:09.422275   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:36:09.463065   79927 cri.go:89] found id: ""
	I0806 08:36:09.463106   79927 logs.go:276] 0 containers: []
	W0806 08:36:09.463122   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:36:09.463145   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:36:09.463209   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:36:09.499450   79927 cri.go:89] found id: ""
	I0806 08:36:09.499474   79927 logs.go:276] 0 containers: []
	W0806 08:36:09.499484   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:36:09.499491   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:36:09.499550   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:36:09.536411   79927 cri.go:89] found id: ""
	I0806 08:36:09.536442   79927 logs.go:276] 0 containers: []
	W0806 08:36:09.536453   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:36:09.536460   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:36:09.536533   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:36:09.572044   79927 cri.go:89] found id: ""
	I0806 08:36:09.572070   79927 logs.go:276] 0 containers: []
	W0806 08:36:09.572079   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:36:09.572087   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:36:09.572138   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:36:09.612385   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:36:09.612418   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:36:09.666790   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:36:09.666852   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:36:09.680707   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:36:09.680753   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:36:09.753446   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:36:09.753473   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:36:09.753491   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:36:12.336824   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:36:12.352545   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:36:12.352626   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:36:12.393158   79927 cri.go:89] found id: ""
	I0806 08:36:12.393188   79927 logs.go:276] 0 containers: []
	W0806 08:36:12.393199   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:36:12.393206   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:36:12.393255   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:36:12.428501   79927 cri.go:89] found id: ""
	I0806 08:36:12.428525   79927 logs.go:276] 0 containers: []
	W0806 08:36:12.428534   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:36:12.428539   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:36:12.428596   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:36:12.463033   79927 cri.go:89] found id: ""
	I0806 08:36:12.463062   79927 logs.go:276] 0 containers: []
	W0806 08:36:12.463072   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:36:12.463079   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:36:12.463136   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:36:12.500350   79927 cri.go:89] found id: ""
	I0806 08:36:12.500402   79927 logs.go:276] 0 containers: []
	W0806 08:36:12.500413   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:36:12.500421   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:36:12.500477   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:36:12.537323   79927 cri.go:89] found id: ""
	I0806 08:36:12.537350   79927 logs.go:276] 0 containers: []
	W0806 08:36:12.537360   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:36:12.537366   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:36:12.537414   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:36:12.575525   79927 cri.go:89] found id: ""
	I0806 08:36:12.575551   79927 logs.go:276] 0 containers: []
	W0806 08:36:12.575561   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:36:12.575568   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:36:12.575629   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:36:12.615375   79927 cri.go:89] found id: ""
	I0806 08:36:12.615436   79927 logs.go:276] 0 containers: []
	W0806 08:36:12.615451   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:36:12.615459   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:36:12.615531   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:36:12.655269   79927 cri.go:89] found id: ""
	I0806 08:36:12.655300   79927 logs.go:276] 0 containers: []
	W0806 08:36:12.655311   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:36:12.655324   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:36:12.655342   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:36:12.709097   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:36:12.709134   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:36:12.724330   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:36:12.724380   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:36:12.801075   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:36:12.801099   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:36:12.801111   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:36:12.885670   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:36:12.885704   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:36:09.288192   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:11.789794   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:10.613121   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:12.613812   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:13.969367   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:16.468276   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:15.427222   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:36:15.442092   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:36:15.442206   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:36:15.485634   79927 cri.go:89] found id: ""
	I0806 08:36:15.485657   79927 logs.go:276] 0 containers: []
	W0806 08:36:15.485665   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:36:15.485671   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:36:15.485715   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:36:15.520791   79927 cri.go:89] found id: ""
	I0806 08:36:15.520820   79927 logs.go:276] 0 containers: []
	W0806 08:36:15.520830   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:36:15.520837   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:36:15.520897   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:36:15.554041   79927 cri.go:89] found id: ""
	I0806 08:36:15.554065   79927 logs.go:276] 0 containers: []
	W0806 08:36:15.554073   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:36:15.554079   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:36:15.554143   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:36:15.587550   79927 cri.go:89] found id: ""
	I0806 08:36:15.587579   79927 logs.go:276] 0 containers: []
	W0806 08:36:15.587591   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:36:15.587598   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:36:15.587659   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:36:15.628657   79927 cri.go:89] found id: ""
	I0806 08:36:15.628681   79927 logs.go:276] 0 containers: []
	W0806 08:36:15.628689   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:36:15.628695   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:36:15.628750   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:36:15.665212   79927 cri.go:89] found id: ""
	I0806 08:36:15.665239   79927 logs.go:276] 0 containers: []
	W0806 08:36:15.665247   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:36:15.665252   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:36:15.665298   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:36:15.705675   79927 cri.go:89] found id: ""
	I0806 08:36:15.705701   79927 logs.go:276] 0 containers: []
	W0806 08:36:15.705709   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:36:15.705714   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:36:15.705762   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:36:15.739835   79927 cri.go:89] found id: ""
	I0806 08:36:15.739862   79927 logs.go:276] 0 containers: []
	W0806 08:36:15.739871   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:36:15.739879   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:36:15.739892   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:36:15.795719   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:36:15.795749   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:36:15.809940   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:36:15.809967   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:36:15.888710   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:36:15.888737   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:36:15.888753   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:36:15.972728   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:36:15.972772   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:36:18.513565   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:36:18.527480   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:36:18.527538   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:36:18.562783   79927 cri.go:89] found id: ""
	I0806 08:36:18.562812   79927 logs.go:276] 0 containers: []
	W0806 08:36:18.562824   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:36:18.562832   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:36:18.562896   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:36:18.600613   79927 cri.go:89] found id: ""
	I0806 08:36:18.600645   79927 logs.go:276] 0 containers: []
	W0806 08:36:18.600659   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:36:18.600668   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:36:18.600735   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:36:18.638074   79927 cri.go:89] found id: ""
	I0806 08:36:18.638104   79927 logs.go:276] 0 containers: []
	W0806 08:36:18.638114   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:36:18.638119   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:36:18.638181   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:36:18.679795   79927 cri.go:89] found id: ""
	I0806 08:36:18.679825   79927 logs.go:276] 0 containers: []
	W0806 08:36:18.679836   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:36:18.679844   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:36:18.679907   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:36:18.715535   79927 cri.go:89] found id: ""
	I0806 08:36:18.715568   79927 logs.go:276] 0 containers: []
	W0806 08:36:18.715577   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:36:18.715585   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:36:18.715650   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:36:18.752705   79927 cri.go:89] found id: ""
	I0806 08:36:18.752736   79927 logs.go:276] 0 containers: []
	W0806 08:36:18.752748   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:36:18.752759   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:36:18.752821   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:36:14.288048   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:16.288422   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:18.788928   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:15.112857   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:17.114585   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:18.468697   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:20.967985   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:18.791653   79927 cri.go:89] found id: ""
	I0806 08:36:18.791678   79927 logs.go:276] 0 containers: []
	W0806 08:36:18.791689   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:36:18.791696   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:36:18.791763   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:36:18.828328   79927 cri.go:89] found id: ""
	I0806 08:36:18.828357   79927 logs.go:276] 0 containers: []
	W0806 08:36:18.828378   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:36:18.828389   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:36:18.828408   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:36:18.888111   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:36:18.888148   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:36:18.902718   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:36:18.902750   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:36:18.986327   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:36:18.986352   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:36:18.986365   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:36:19.072858   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:36:19.072899   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:36:21.615497   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:36:21.631434   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:36:21.631492   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:36:21.671446   79927 cri.go:89] found id: ""
	I0806 08:36:21.671471   79927 logs.go:276] 0 containers: []
	W0806 08:36:21.671479   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:36:21.671484   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:36:21.671540   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:36:21.707598   79927 cri.go:89] found id: ""
	I0806 08:36:21.707624   79927 logs.go:276] 0 containers: []
	W0806 08:36:21.707634   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:36:21.707640   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:36:21.707686   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:36:21.753647   79927 cri.go:89] found id: ""
	I0806 08:36:21.753675   79927 logs.go:276] 0 containers: []
	W0806 08:36:21.753686   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:36:21.753692   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:36:21.753749   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:36:21.795284   79927 cri.go:89] found id: ""
	I0806 08:36:21.795311   79927 logs.go:276] 0 containers: []
	W0806 08:36:21.795321   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:36:21.795329   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:36:21.795388   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:36:21.832436   79927 cri.go:89] found id: ""
	I0806 08:36:21.832466   79927 logs.go:276] 0 containers: []
	W0806 08:36:21.832476   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:36:21.832483   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:36:21.832550   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:36:21.870295   79927 cri.go:89] found id: ""
	I0806 08:36:21.870327   79927 logs.go:276] 0 containers: []
	W0806 08:36:21.870338   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:36:21.870345   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:36:21.870412   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:36:21.908210   79927 cri.go:89] found id: ""
	I0806 08:36:21.908242   79927 logs.go:276] 0 containers: []
	W0806 08:36:21.908253   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:36:21.908261   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:36:21.908322   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:36:21.944055   79927 cri.go:89] found id: ""
	I0806 08:36:21.944084   79927 logs.go:276] 0 containers: []
	W0806 08:36:21.944095   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:36:21.944104   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:36:21.944125   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:36:22.000467   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:36:22.000506   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:36:22.014590   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:36:22.014629   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:36:22.083873   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:36:22.083907   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:36:22.083927   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:36:22.175993   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:36:22.176056   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:36:21.288435   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:23.788424   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:19.612021   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:21.612092   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:23.612557   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:22.968059   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:24.968111   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:24.720476   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:36:24.734469   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:36:24.734551   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:36:24.770663   79927 cri.go:89] found id: ""
	I0806 08:36:24.770692   79927 logs.go:276] 0 containers: []
	W0806 08:36:24.770705   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:36:24.770712   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:36:24.770768   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:36:24.807861   79927 cri.go:89] found id: ""
	I0806 08:36:24.807889   79927 logs.go:276] 0 containers: []
	W0806 08:36:24.807897   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:36:24.807908   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:36:24.807957   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:36:24.849827   79927 cri.go:89] found id: ""
	I0806 08:36:24.849857   79927 logs.go:276] 0 containers: []
	W0806 08:36:24.849869   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:36:24.849877   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:36:24.849937   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:36:24.885260   79927 cri.go:89] found id: ""
	I0806 08:36:24.885291   79927 logs.go:276] 0 containers: []
	W0806 08:36:24.885303   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:36:24.885315   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:36:24.885381   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:36:24.919918   79927 cri.go:89] found id: ""
	I0806 08:36:24.919949   79927 logs.go:276] 0 containers: []
	W0806 08:36:24.919957   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:36:24.919964   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:36:24.920026   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:36:24.956010   79927 cri.go:89] found id: ""
	I0806 08:36:24.956036   79927 logs.go:276] 0 containers: []
	W0806 08:36:24.956045   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:36:24.956050   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:36:24.956109   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:36:24.993126   79927 cri.go:89] found id: ""
	I0806 08:36:24.993154   79927 logs.go:276] 0 containers: []
	W0806 08:36:24.993165   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:36:24.993173   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:36:24.993240   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:36:25.026732   79927 cri.go:89] found id: ""
	I0806 08:36:25.026756   79927 logs.go:276] 0 containers: []
	W0806 08:36:25.026763   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:36:25.026771   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:36:25.026781   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:36:25.064076   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:36:25.064113   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:36:25.118820   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:36:25.118856   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:36:25.133640   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:36:25.133671   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:36:25.201686   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:36:25.201714   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:36:25.201729   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:36:27.801947   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:36:27.815472   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:36:27.815539   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:36:27.851073   79927 cri.go:89] found id: ""
	I0806 08:36:27.851103   79927 logs.go:276] 0 containers: []
	W0806 08:36:27.851119   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:36:27.851127   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:36:27.851184   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:36:27.886121   79927 cri.go:89] found id: ""
	I0806 08:36:27.886151   79927 logs.go:276] 0 containers: []
	W0806 08:36:27.886162   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:36:27.886169   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:36:27.886231   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:36:27.920950   79927 cri.go:89] found id: ""
	I0806 08:36:27.920979   79927 logs.go:276] 0 containers: []
	W0806 08:36:27.920990   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:36:27.920997   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:36:27.921055   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:36:27.956529   79927 cri.go:89] found id: ""
	I0806 08:36:27.956562   79927 logs.go:276] 0 containers: []
	W0806 08:36:27.956574   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:36:27.956581   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:36:27.956631   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:36:27.992272   79927 cri.go:89] found id: ""
	I0806 08:36:27.992298   79927 logs.go:276] 0 containers: []
	W0806 08:36:27.992306   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:36:27.992322   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:36:27.992405   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:36:28.027242   79927 cri.go:89] found id: ""
	I0806 08:36:28.027269   79927 logs.go:276] 0 containers: []
	W0806 08:36:28.027277   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:36:28.027283   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:36:28.027335   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:36:28.063043   79927 cri.go:89] found id: ""
	I0806 08:36:28.063068   79927 logs.go:276] 0 containers: []
	W0806 08:36:28.063076   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:36:28.063083   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:36:28.063148   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:36:28.099743   79927 cri.go:89] found id: ""
	I0806 08:36:28.099769   79927 logs.go:276] 0 containers: []
	W0806 08:36:28.099780   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:36:28.099791   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:36:28.099806   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:36:28.151354   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:36:28.151392   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:36:28.166704   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:36:28.166731   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:36:28.242621   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:36:28.242642   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:36:28.242656   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:36:28.324352   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:36:28.324405   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:36:25.789745   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:28.290656   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:26.112264   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:28.112769   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:27.466228   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:29.469998   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:30.869063   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:36:30.882309   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:36:30.882377   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:36:30.916264   79927 cri.go:89] found id: ""
	I0806 08:36:30.916296   79927 logs.go:276] 0 containers: []
	W0806 08:36:30.916306   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:36:30.916313   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:36:30.916390   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:36:30.951995   79927 cri.go:89] found id: ""
	I0806 08:36:30.952021   79927 logs.go:276] 0 containers: []
	W0806 08:36:30.952028   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:36:30.952034   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:36:30.952111   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:36:31.005229   79927 cri.go:89] found id: ""
	I0806 08:36:31.005270   79927 logs.go:276] 0 containers: []
	W0806 08:36:31.005281   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:36:31.005288   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:36:31.005350   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:36:31.040439   79927 cri.go:89] found id: ""
	I0806 08:36:31.040465   79927 logs.go:276] 0 containers: []
	W0806 08:36:31.040476   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:36:31.040482   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:36:31.040534   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:36:31.078880   79927 cri.go:89] found id: ""
	I0806 08:36:31.078910   79927 logs.go:276] 0 containers: []
	W0806 08:36:31.078920   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:36:31.078934   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:36:31.078996   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:36:31.115520   79927 cri.go:89] found id: ""
	I0806 08:36:31.115551   79927 logs.go:276] 0 containers: []
	W0806 08:36:31.115562   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:36:31.115569   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:36:31.115615   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:36:31.150450   79927 cri.go:89] found id: ""
	I0806 08:36:31.150483   79927 logs.go:276] 0 containers: []
	W0806 08:36:31.150497   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:36:31.150506   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:36:31.150565   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:36:31.187449   79927 cri.go:89] found id: ""
	I0806 08:36:31.187476   79927 logs.go:276] 0 containers: []
	W0806 08:36:31.187485   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:36:31.187493   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:36:31.187507   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:36:31.246232   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:36:31.246272   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:36:31.260691   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:36:31.260720   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:36:31.342356   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:36:31.342383   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:36:31.342401   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:36:31.426820   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:36:31.426865   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:36:30.787370   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:32.787711   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:30.611466   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:32.611550   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:31.968048   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:34.469062   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:33.974768   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:36:33.988377   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:36:33.988442   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:36:34.023284   79927 cri.go:89] found id: ""
	I0806 08:36:34.023316   79927 logs.go:276] 0 containers: []
	W0806 08:36:34.023328   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:36:34.023336   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:36:34.023397   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:36:34.058633   79927 cri.go:89] found id: ""
	I0806 08:36:34.058676   79927 logs.go:276] 0 containers: []
	W0806 08:36:34.058695   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:36:34.058704   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:36:34.058767   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:36:34.098343   79927 cri.go:89] found id: ""
	I0806 08:36:34.098379   79927 logs.go:276] 0 containers: []
	W0806 08:36:34.098390   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:36:34.098398   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:36:34.098471   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:36:34.134638   79927 cri.go:89] found id: ""
	I0806 08:36:34.134667   79927 logs.go:276] 0 containers: []
	W0806 08:36:34.134677   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:36:34.134683   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:36:34.134729   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:36:34.170118   79927 cri.go:89] found id: ""
	I0806 08:36:34.170144   79927 logs.go:276] 0 containers: []
	W0806 08:36:34.170153   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:36:34.170159   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:36:34.170207   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:36:34.204789   79927 cri.go:89] found id: ""
	I0806 08:36:34.204815   79927 logs.go:276] 0 containers: []
	W0806 08:36:34.204823   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:36:34.204829   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:36:34.204877   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:36:34.245170   79927 cri.go:89] found id: ""
	I0806 08:36:34.245194   79927 logs.go:276] 0 containers: []
	W0806 08:36:34.245203   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:36:34.245208   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:36:34.245264   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:36:34.280026   79927 cri.go:89] found id: ""
	I0806 08:36:34.280063   79927 logs.go:276] 0 containers: []
	W0806 08:36:34.280076   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:36:34.280086   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:36:34.280101   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:36:34.335363   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:36:34.335408   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:36:34.350201   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:36:34.350226   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:36:34.425205   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:36:34.425231   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:36:34.425244   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:36:34.506385   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:36:34.506419   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:36:37.047263   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:36:37.060179   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:36:37.060254   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:36:37.093168   79927 cri.go:89] found id: ""
	I0806 08:36:37.093201   79927 logs.go:276] 0 containers: []
	W0806 08:36:37.093209   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:36:37.093216   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:36:37.093264   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:36:37.131315   79927 cri.go:89] found id: ""
	I0806 08:36:37.131343   79927 logs.go:276] 0 containers: []
	W0806 08:36:37.131353   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:36:37.131360   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:36:37.131426   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:36:37.169799   79927 cri.go:89] found id: ""
	I0806 08:36:37.169829   79927 logs.go:276] 0 containers: []
	W0806 08:36:37.169839   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:36:37.169845   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:36:37.169898   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:36:37.205852   79927 cri.go:89] found id: ""
	I0806 08:36:37.205878   79927 logs.go:276] 0 containers: []
	W0806 08:36:37.205885   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:36:37.205891   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:36:37.205938   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:36:37.242469   79927 cri.go:89] found id: ""
	I0806 08:36:37.242493   79927 logs.go:276] 0 containers: []
	W0806 08:36:37.242501   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:36:37.242507   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:36:37.242554   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:36:37.278515   79927 cri.go:89] found id: ""
	I0806 08:36:37.278545   79927 logs.go:276] 0 containers: []
	W0806 08:36:37.278556   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:36:37.278563   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:36:37.278627   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:36:37.316157   79927 cri.go:89] found id: ""
	I0806 08:36:37.316181   79927 logs.go:276] 0 containers: []
	W0806 08:36:37.316189   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:36:37.316195   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:36:37.316249   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:36:37.353752   79927 cri.go:89] found id: ""
	I0806 08:36:37.353778   79927 logs.go:276] 0 containers: []
	W0806 08:36:37.353785   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:36:37.353796   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:36:37.353807   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:36:37.439435   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:36:37.439471   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:36:37.488866   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:36:37.488903   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:36:37.540031   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:36:37.540072   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:36:37.556109   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:36:37.556140   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:36:37.629023   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:36:35.288565   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:37.788336   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:35.113182   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:37.615523   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:36.967665   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:38.973884   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:41.467445   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:40.129579   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:36:40.143971   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:36:40.144050   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:36:40.180490   79927 cri.go:89] found id: ""
	I0806 08:36:40.180515   79927 logs.go:276] 0 containers: []
	W0806 08:36:40.180524   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:36:40.180530   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:36:40.180586   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:36:40.219812   79927 cri.go:89] found id: ""
	I0806 08:36:40.219860   79927 logs.go:276] 0 containers: []
	W0806 08:36:40.219869   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:36:40.219877   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:36:40.219939   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:36:40.256048   79927 cri.go:89] found id: ""
	I0806 08:36:40.256076   79927 logs.go:276] 0 containers: []
	W0806 08:36:40.256088   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:36:40.256106   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:36:40.256166   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:36:40.294693   79927 cri.go:89] found id: ""
	I0806 08:36:40.294720   79927 logs.go:276] 0 containers: []
	W0806 08:36:40.294729   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:36:40.294734   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:36:40.294792   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:36:40.331246   79927 cri.go:89] found id: ""
	I0806 08:36:40.331276   79927 logs.go:276] 0 containers: []
	W0806 08:36:40.331287   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:36:40.331295   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:36:40.331351   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:36:40.373039   79927 cri.go:89] found id: ""
	I0806 08:36:40.373068   79927 logs.go:276] 0 containers: []
	W0806 08:36:40.373095   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:36:40.373103   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:36:40.373163   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:36:40.408137   79927 cri.go:89] found id: ""
	I0806 08:36:40.408167   79927 logs.go:276] 0 containers: []
	W0806 08:36:40.408177   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:36:40.408184   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:36:40.408249   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:36:40.442996   79927 cri.go:89] found id: ""
	I0806 08:36:40.443035   79927 logs.go:276] 0 containers: []
	W0806 08:36:40.443043   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:36:40.443051   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:36:40.443063   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:36:40.499559   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:36:40.499599   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:36:40.514687   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:36:40.514716   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:36:40.586034   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:36:40.586059   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:36:40.586071   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:36:40.669756   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:36:40.669796   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:36:43.213555   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:36:43.229683   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:36:43.229751   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:36:43.267107   79927 cri.go:89] found id: ""
	I0806 08:36:43.267140   79927 logs.go:276] 0 containers: []
	W0806 08:36:43.267153   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:36:43.267161   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:36:43.267235   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:36:43.303405   79927 cri.go:89] found id: ""
	I0806 08:36:43.303435   79927 logs.go:276] 0 containers: []
	W0806 08:36:43.303443   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:36:43.303449   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:36:43.303509   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:36:43.338583   79927 cri.go:89] found id: ""
	I0806 08:36:43.338610   79927 logs.go:276] 0 containers: []
	W0806 08:36:43.338619   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:36:43.338628   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:36:43.338692   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:36:43.377413   79927 cri.go:89] found id: ""
	I0806 08:36:43.377441   79927 logs.go:276] 0 containers: []
	W0806 08:36:43.377449   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:36:43.377454   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:36:43.377501   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:36:43.413036   79927 cri.go:89] found id: ""
	I0806 08:36:43.413133   79927 logs.go:276] 0 containers: []
	W0806 08:36:43.413152   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:36:43.413160   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:36:43.413221   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:36:43.450216   79927 cri.go:89] found id: ""
	I0806 08:36:43.450245   79927 logs.go:276] 0 containers: []
	W0806 08:36:43.450253   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:36:43.450259   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:36:43.450305   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:36:43.485811   79927 cri.go:89] found id: ""
	I0806 08:36:43.485856   79927 logs.go:276] 0 containers: []
	W0806 08:36:43.485864   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:36:43.485870   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:36:43.485919   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:36:43.521586   79927 cri.go:89] found id: ""
	I0806 08:36:43.521617   79927 logs.go:276] 0 containers: []
	W0806 08:36:43.521628   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:36:43.521650   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:36:43.521673   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:36:43.561842   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:36:43.561889   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:36:43.617652   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:36:43.617683   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:36:43.631239   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:36:43.631267   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:36:43.707297   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:36:43.707317   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:36:43.707330   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:36:39.789019   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:42.287817   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:40.111420   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:42.112503   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:43.468602   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:45.974699   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:46.287309   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:36:46.301917   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:36:46.301991   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:36:46.335040   79927 cri.go:89] found id: ""
	I0806 08:36:46.335069   79927 logs.go:276] 0 containers: []
	W0806 08:36:46.335080   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:36:46.335088   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:36:46.335177   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:36:46.371902   79927 cri.go:89] found id: ""
	I0806 08:36:46.371930   79927 logs.go:276] 0 containers: []
	W0806 08:36:46.371939   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:36:46.371944   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:36:46.372000   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:36:46.411945   79927 cri.go:89] found id: ""
	I0806 08:36:46.412119   79927 logs.go:276] 0 containers: []
	W0806 08:36:46.412134   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:36:46.412145   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:36:46.412248   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:36:46.448578   79927 cri.go:89] found id: ""
	I0806 08:36:46.448612   79927 logs.go:276] 0 containers: []
	W0806 08:36:46.448624   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:36:46.448631   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:36:46.448691   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:36:46.492231   79927 cri.go:89] found id: ""
	I0806 08:36:46.492263   79927 logs.go:276] 0 containers: []
	W0806 08:36:46.492276   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:36:46.492283   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:36:46.492338   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:36:46.538502   79927 cri.go:89] found id: ""
	I0806 08:36:46.538528   79927 logs.go:276] 0 containers: []
	W0806 08:36:46.538547   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:36:46.538555   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:36:46.538610   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:36:46.577750   79927 cri.go:89] found id: ""
	I0806 08:36:46.577778   79927 logs.go:276] 0 containers: []
	W0806 08:36:46.577785   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:36:46.577790   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:36:46.577872   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:36:46.611717   79927 cri.go:89] found id: ""
	I0806 08:36:46.611742   79927 logs.go:276] 0 containers: []
	W0806 08:36:46.611750   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:36:46.611758   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:36:46.611770   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:36:46.629144   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:36:46.629178   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:36:46.714703   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:36:46.714723   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:36:46.714736   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:36:46.798978   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:36:46.799025   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:36:46.837373   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:36:46.837407   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:36:44.288387   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:46.288470   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:48.788762   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:44.611739   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:46.615083   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:49.113041   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:48.469024   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:50.967906   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:49.393231   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:36:49.407372   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:36:49.407436   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:36:49.444210   79927 cri.go:89] found id: ""
	I0806 08:36:49.444240   79927 logs.go:276] 0 containers: []
	W0806 08:36:49.444249   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:36:49.444254   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:36:49.444313   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:36:49.484165   79927 cri.go:89] found id: ""
	I0806 08:36:49.484190   79927 logs.go:276] 0 containers: []
	W0806 08:36:49.484200   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:36:49.484206   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:36:49.484278   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:36:49.519880   79927 cri.go:89] found id: ""
	I0806 08:36:49.519921   79927 logs.go:276] 0 containers: []
	W0806 08:36:49.519932   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:36:49.519939   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:36:49.519999   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:36:49.554954   79927 cri.go:89] found id: ""
	I0806 08:36:49.554979   79927 logs.go:276] 0 containers: []
	W0806 08:36:49.554988   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:36:49.554995   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:36:49.555045   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:36:49.593332   79927 cri.go:89] found id: ""
	I0806 08:36:49.593359   79927 logs.go:276] 0 containers: []
	W0806 08:36:49.593367   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:36:49.593373   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:36:49.593432   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:36:49.630736   79927 cri.go:89] found id: ""
	I0806 08:36:49.630766   79927 logs.go:276] 0 containers: []
	W0806 08:36:49.630774   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:36:49.630780   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:36:49.630839   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:36:49.667460   79927 cri.go:89] found id: ""
	I0806 08:36:49.667487   79927 logs.go:276] 0 containers: []
	W0806 08:36:49.667499   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:36:49.667505   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:36:49.667566   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:36:49.700161   79927 cri.go:89] found id: ""
	I0806 08:36:49.700190   79927 logs.go:276] 0 containers: []
	W0806 08:36:49.700199   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:36:49.700207   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:36:49.700218   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:36:49.786656   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:36:49.786704   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:36:49.825711   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:36:49.825745   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:36:49.879696   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:36:49.879731   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:36:49.893336   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:36:49.893369   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:36:49.969462   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:36:52.469939   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:36:52.483906   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:36:52.483974   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:36:52.523575   79927 cri.go:89] found id: ""
	I0806 08:36:52.523603   79927 logs.go:276] 0 containers: []
	W0806 08:36:52.523615   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:36:52.523622   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:36:52.523684   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:36:52.561590   79927 cri.go:89] found id: ""
	I0806 08:36:52.561620   79927 logs.go:276] 0 containers: []
	W0806 08:36:52.561630   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:36:52.561639   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:36:52.561698   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:36:52.598204   79927 cri.go:89] found id: ""
	I0806 08:36:52.598227   79927 logs.go:276] 0 containers: []
	W0806 08:36:52.598236   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:36:52.598241   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:36:52.598302   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:36:52.636993   79927 cri.go:89] found id: ""
	I0806 08:36:52.637019   79927 logs.go:276] 0 containers: []
	W0806 08:36:52.637027   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:36:52.637034   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:36:52.637085   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:36:52.675090   79927 cri.go:89] found id: ""
	I0806 08:36:52.675118   79927 logs.go:276] 0 containers: []
	W0806 08:36:52.675130   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:36:52.675138   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:36:52.675199   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:36:52.711045   79927 cri.go:89] found id: ""
	I0806 08:36:52.711072   79927 logs.go:276] 0 containers: []
	W0806 08:36:52.711081   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:36:52.711086   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:36:52.711151   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:36:52.749001   79927 cri.go:89] found id: ""
	I0806 08:36:52.749029   79927 logs.go:276] 0 containers: []
	W0806 08:36:52.749039   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:36:52.749046   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:36:52.749113   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:36:52.783692   79927 cri.go:89] found id: ""
	I0806 08:36:52.783720   79927 logs.go:276] 0 containers: []
	W0806 08:36:52.783737   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:36:52.783748   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:36:52.783762   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:36:52.863240   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:36:52.863281   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:36:52.902471   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:36:52.902496   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:36:52.953299   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:36:52.953341   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:36:52.968786   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:36:52.968818   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:36:53.045904   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:36:51.288280   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:53.289276   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:51.612945   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:54.112574   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:52.968486   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:55.468458   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:55.546407   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:36:55.560166   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:36:55.560243   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:36:55.592961   79927 cri.go:89] found id: ""
	I0806 08:36:55.592992   79927 logs.go:276] 0 containers: []
	W0806 08:36:55.593004   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:36:55.593012   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:36:55.593075   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:36:55.629546   79927 cri.go:89] found id: ""
	I0806 08:36:55.629568   79927 logs.go:276] 0 containers: []
	W0806 08:36:55.629578   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:36:55.629585   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:36:55.629639   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:36:55.668677   79927 cri.go:89] found id: ""
	I0806 08:36:55.668707   79927 logs.go:276] 0 containers: []
	W0806 08:36:55.668718   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:36:55.668726   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:36:55.668784   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:36:55.704141   79927 cri.go:89] found id: ""
	I0806 08:36:55.704173   79927 logs.go:276] 0 containers: []
	W0806 08:36:55.704183   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:36:55.704188   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:36:55.704257   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:36:55.740125   79927 cri.go:89] found id: ""
	I0806 08:36:55.740152   79927 logs.go:276] 0 containers: []
	W0806 08:36:55.740160   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:36:55.740166   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:36:55.740210   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:36:55.773709   79927 cri.go:89] found id: ""
	I0806 08:36:55.773739   79927 logs.go:276] 0 containers: []
	W0806 08:36:55.773750   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:36:55.773759   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:36:55.773827   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:36:55.813913   79927 cri.go:89] found id: ""
	I0806 08:36:55.813940   79927 logs.go:276] 0 containers: []
	W0806 08:36:55.813953   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:36:55.813959   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:36:55.814004   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:36:55.850807   79927 cri.go:89] found id: ""
	I0806 08:36:55.850841   79927 logs.go:276] 0 containers: []
	W0806 08:36:55.850853   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:36:55.850864   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:36:55.850878   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:36:55.908202   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:36:55.908247   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:36:55.922278   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:36:55.922311   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:36:55.996448   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:36:55.996476   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:36:55.996489   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:36:56.076612   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:36:56.076647   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:36:58.625610   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:36:58.640939   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:36:58.641008   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:36:58.675638   79927 cri.go:89] found id: ""
	I0806 08:36:58.675666   79927 logs.go:276] 0 containers: []
	W0806 08:36:58.675674   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:36:58.675680   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:36:58.675728   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:36:58.712020   79927 cri.go:89] found id: ""
	I0806 08:36:58.712055   79927 logs.go:276] 0 containers: []
	W0806 08:36:58.712065   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:36:58.712072   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:36:58.712126   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:36:58.749683   79927 cri.go:89] found id: ""
	I0806 08:36:58.749712   79927 logs.go:276] 0 containers: []
	W0806 08:36:58.749724   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:36:58.749731   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:36:58.749792   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:36:55.788883   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:58.288825   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:56.613051   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:59.113176   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:57.968553   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:59.968580   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:58.790005   79927 cri.go:89] found id: ""
	I0806 08:36:58.790026   79927 logs.go:276] 0 containers: []
	W0806 08:36:58.790033   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:36:58.790039   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:36:58.790089   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:36:58.828781   79927 cri.go:89] found id: ""
	I0806 08:36:58.828806   79927 logs.go:276] 0 containers: []
	W0806 08:36:58.828815   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:36:58.828821   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:36:58.828873   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:36:58.867039   79927 cri.go:89] found id: ""
	I0806 08:36:58.867067   79927 logs.go:276] 0 containers: []
	W0806 08:36:58.867075   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:36:58.867081   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:36:58.867125   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:36:58.901362   79927 cri.go:89] found id: ""
	I0806 08:36:58.901396   79927 logs.go:276] 0 containers: []
	W0806 08:36:58.901415   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:36:58.901423   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:36:58.901472   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:36:58.940351   79927 cri.go:89] found id: ""
	I0806 08:36:58.940388   79927 logs.go:276] 0 containers: []
	W0806 08:36:58.940399   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:36:58.940410   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:36:58.940425   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:36:58.995737   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:36:58.995776   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:36:59.009191   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:36:59.009222   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:36:59.085109   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:36:59.085133   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:36:59.085150   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:36:59.164778   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:36:59.164819   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:37:01.712282   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:37:01.725701   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:37:01.725759   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:37:01.762005   79927 cri.go:89] found id: ""
	I0806 08:37:01.762029   79927 logs.go:276] 0 containers: []
	W0806 08:37:01.762038   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:37:01.762044   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:37:01.762092   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:37:01.799277   79927 cri.go:89] found id: ""
	I0806 08:37:01.799304   79927 logs.go:276] 0 containers: []
	W0806 08:37:01.799316   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:37:01.799322   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:37:01.799391   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:37:01.845100   79927 cri.go:89] found id: ""
	I0806 08:37:01.845133   79927 logs.go:276] 0 containers: []
	W0806 08:37:01.845147   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:37:01.845154   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:37:01.845214   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:37:01.882019   79927 cri.go:89] found id: ""
	I0806 08:37:01.882050   79927 logs.go:276] 0 containers: []
	W0806 08:37:01.882061   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:37:01.882069   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:37:01.882160   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:37:01.919224   79927 cri.go:89] found id: ""
	I0806 08:37:01.919251   79927 logs.go:276] 0 containers: []
	W0806 08:37:01.919262   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:37:01.919269   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:37:01.919327   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:37:01.957300   79927 cri.go:89] found id: ""
	I0806 08:37:01.957327   79927 logs.go:276] 0 containers: []
	W0806 08:37:01.957335   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:37:01.957342   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:37:01.957389   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:37:02.000717   79927 cri.go:89] found id: ""
	I0806 08:37:02.000740   79927 logs.go:276] 0 containers: []
	W0806 08:37:02.000749   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:37:02.000754   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:37:02.000801   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:37:02.034710   79927 cri.go:89] found id: ""
	I0806 08:37:02.034739   79927 logs.go:276] 0 containers: []
	W0806 08:37:02.034753   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:37:02.034763   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:37:02.034776   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:37:02.090553   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:37:02.090593   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:37:02.104247   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:37:02.104276   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:37:02.181990   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:37:02.182015   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:37:02.182030   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:37:02.268127   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:37:02.268167   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:37:00.789157   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:03.288510   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:01.611956   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:04.111772   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:02.467718   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:04.968898   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:04.814144   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:37:04.829120   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:37:04.829194   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:37:04.873503   79927 cri.go:89] found id: ""
	I0806 08:37:04.873537   79927 logs.go:276] 0 containers: []
	W0806 08:37:04.873548   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:37:04.873557   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:37:04.873614   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:37:04.913878   79927 cri.go:89] found id: ""
	I0806 08:37:04.913910   79927 logs.go:276] 0 containers: []
	W0806 08:37:04.913918   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:37:04.913924   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:37:04.913984   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:37:04.950273   79927 cri.go:89] found id: ""
	I0806 08:37:04.950303   79927 logs.go:276] 0 containers: []
	W0806 08:37:04.950316   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:37:04.950323   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:37:04.950388   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:37:04.985108   79927 cri.go:89] found id: ""
	I0806 08:37:04.985135   79927 logs.go:276] 0 containers: []
	W0806 08:37:04.985145   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:37:04.985151   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:37:04.985215   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:37:05.021033   79927 cri.go:89] found id: ""
	I0806 08:37:05.021056   79927 logs.go:276] 0 containers: []
	W0806 08:37:05.021064   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:37:05.021069   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:37:05.021120   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:37:05.058416   79927 cri.go:89] found id: ""
	I0806 08:37:05.058440   79927 logs.go:276] 0 containers: []
	W0806 08:37:05.058448   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:37:05.058454   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:37:05.058510   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:37:05.093709   79927 cri.go:89] found id: ""
	I0806 08:37:05.093737   79927 logs.go:276] 0 containers: []
	W0806 08:37:05.093745   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:37:05.093751   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:37:05.093815   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:37:05.129391   79927 cri.go:89] found id: ""
	I0806 08:37:05.129416   79927 logs.go:276] 0 containers: []
	W0806 08:37:05.129424   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:37:05.129432   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:37:05.129444   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:37:05.172149   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:37:05.172176   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:37:05.224745   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:37:05.224781   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:37:05.238967   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:37:05.238997   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:37:05.314834   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:37:05.314868   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:37:05.314883   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:37:07.903732   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:37:07.918551   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:37:07.918619   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:37:07.966068   79927 cri.go:89] found id: ""
	I0806 08:37:07.966093   79927 logs.go:276] 0 containers: []
	W0806 08:37:07.966104   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:37:07.966112   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:37:07.966180   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:37:08.001557   79927 cri.go:89] found id: ""
	I0806 08:37:08.001586   79927 logs.go:276] 0 containers: []
	W0806 08:37:08.001595   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:37:08.001602   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:37:08.001663   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:37:08.034214   79927 cri.go:89] found id: ""
	I0806 08:37:08.034239   79927 logs.go:276] 0 containers: []
	W0806 08:37:08.034249   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:37:08.034256   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:37:08.034316   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:37:08.070645   79927 cri.go:89] found id: ""
	I0806 08:37:08.070673   79927 logs.go:276] 0 containers: []
	W0806 08:37:08.070683   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:37:08.070689   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:37:08.070761   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:37:08.110537   79927 cri.go:89] found id: ""
	I0806 08:37:08.110565   79927 logs.go:276] 0 containers: []
	W0806 08:37:08.110574   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:37:08.110581   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:37:08.110635   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:37:08.146179   79927 cri.go:89] found id: ""
	I0806 08:37:08.146203   79927 logs.go:276] 0 containers: []
	W0806 08:37:08.146211   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:37:08.146224   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:37:08.146272   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:37:08.182924   79927 cri.go:89] found id: ""
	I0806 08:37:08.182953   79927 logs.go:276] 0 containers: []
	W0806 08:37:08.182962   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:37:08.182968   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:37:08.183024   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:37:08.219547   79927 cri.go:89] found id: ""
	I0806 08:37:08.219570   79927 logs.go:276] 0 containers: []
	W0806 08:37:08.219576   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:37:08.219585   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:37:08.219599   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:37:08.298919   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:37:08.298956   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:37:08.338475   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:37:08.338512   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:37:08.390859   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:37:08.390898   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:37:08.404854   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:37:08.404891   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:37:08.473553   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:37:05.788534   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:08.288635   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:06.112204   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:08.612960   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:06.969444   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:09.468423   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:11.469280   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:10.973824   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:37:10.987720   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:37:10.987777   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:37:11.022637   79927 cri.go:89] found id: ""
	I0806 08:37:11.022667   79927 logs.go:276] 0 containers: []
	W0806 08:37:11.022677   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:37:11.022683   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:37:11.022739   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:37:11.062625   79927 cri.go:89] found id: ""
	I0806 08:37:11.062655   79927 logs.go:276] 0 containers: []
	W0806 08:37:11.062673   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:37:11.062681   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:37:11.062736   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:37:11.099906   79927 cri.go:89] found id: ""
	I0806 08:37:11.099937   79927 logs.go:276] 0 containers: []
	W0806 08:37:11.099945   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:37:11.099951   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:37:11.100019   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:37:11.139461   79927 cri.go:89] found id: ""
	I0806 08:37:11.139485   79927 logs.go:276] 0 containers: []
	W0806 08:37:11.139493   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:37:11.139498   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:37:11.139541   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:37:11.177916   79927 cri.go:89] found id: ""
	I0806 08:37:11.177940   79927 logs.go:276] 0 containers: []
	W0806 08:37:11.177948   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:37:11.177954   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:37:11.178007   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:37:11.216335   79927 cri.go:89] found id: ""
	I0806 08:37:11.216377   79927 logs.go:276] 0 containers: []
	W0806 08:37:11.216389   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:37:11.216398   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:37:11.216453   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:37:11.278457   79927 cri.go:89] found id: ""
	I0806 08:37:11.278479   79927 logs.go:276] 0 containers: []
	W0806 08:37:11.278487   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:37:11.278493   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:37:11.278541   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:37:11.316325   79927 cri.go:89] found id: ""
	I0806 08:37:11.316348   79927 logs.go:276] 0 containers: []
	W0806 08:37:11.316356   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:37:11.316381   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:37:11.316397   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:37:11.357435   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:37:11.357469   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:37:11.413151   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:37:11.413184   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:37:11.427149   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:37:11.427182   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:37:11.500691   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:37:11.500723   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:37:11.500738   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:37:10.787717   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:12.788263   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:11.113471   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:13.612216   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:13.968959   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:16.468099   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:14.077817   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:37:14.090888   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:37:14.090960   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:37:14.126454   79927 cri.go:89] found id: ""
	I0806 08:37:14.126476   79927 logs.go:276] 0 containers: []
	W0806 08:37:14.126484   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:37:14.126489   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:37:14.126535   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:37:14.163910   79927 cri.go:89] found id: ""
	I0806 08:37:14.163939   79927 logs.go:276] 0 containers: []
	W0806 08:37:14.163950   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:37:14.163957   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:37:14.164025   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:37:14.202918   79927 cri.go:89] found id: ""
	I0806 08:37:14.202942   79927 logs.go:276] 0 containers: []
	W0806 08:37:14.202950   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:37:14.202956   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:37:14.202999   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:37:14.240997   79927 cri.go:89] found id: ""
	I0806 08:37:14.241021   79927 logs.go:276] 0 containers: []
	W0806 08:37:14.241030   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:37:14.241036   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:37:14.241083   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:37:14.277424   79927 cri.go:89] found id: ""
	I0806 08:37:14.277448   79927 logs.go:276] 0 containers: []
	W0806 08:37:14.277465   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:37:14.277473   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:37:14.277545   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:37:14.317402   79927 cri.go:89] found id: ""
	I0806 08:37:14.317426   79927 logs.go:276] 0 containers: []
	W0806 08:37:14.317435   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:37:14.317440   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:37:14.317487   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:37:14.364310   79927 cri.go:89] found id: ""
	I0806 08:37:14.364341   79927 logs.go:276] 0 containers: []
	W0806 08:37:14.364351   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:37:14.364358   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:37:14.364445   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:37:14.398836   79927 cri.go:89] found id: ""
	I0806 08:37:14.398866   79927 logs.go:276] 0 containers: []
	W0806 08:37:14.398874   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:37:14.398891   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:37:14.398904   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:37:14.450550   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:37:14.450586   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:37:14.467190   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:37:14.467220   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:37:14.538786   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:37:14.538809   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:37:14.538824   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:37:14.615049   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:37:14.615080   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:37:17.155906   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:37:17.170401   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:37:17.170479   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:37:17.208276   79927 cri.go:89] found id: ""
	I0806 08:37:17.208311   79927 logs.go:276] 0 containers: []
	W0806 08:37:17.208322   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:37:17.208330   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:37:17.208403   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:37:17.258671   79927 cri.go:89] found id: ""
	I0806 08:37:17.258696   79927 logs.go:276] 0 containers: []
	W0806 08:37:17.258705   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:37:17.258711   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:37:17.258771   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:37:17.296999   79927 cri.go:89] found id: ""
	I0806 08:37:17.297027   79927 logs.go:276] 0 containers: []
	W0806 08:37:17.297035   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:37:17.297041   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:37:17.297098   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:37:17.335818   79927 cri.go:89] found id: ""
	I0806 08:37:17.335841   79927 logs.go:276] 0 containers: []
	W0806 08:37:17.335849   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:37:17.335855   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:37:17.335903   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:37:17.371650   79927 cri.go:89] found id: ""
	I0806 08:37:17.371674   79927 logs.go:276] 0 containers: []
	W0806 08:37:17.371682   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:37:17.371688   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:37:17.371737   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:37:17.408413   79927 cri.go:89] found id: ""
	I0806 08:37:17.408440   79927 logs.go:276] 0 containers: []
	W0806 08:37:17.408448   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:37:17.408454   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:37:17.408502   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:37:17.443877   79927 cri.go:89] found id: ""
	I0806 08:37:17.443909   79927 logs.go:276] 0 containers: []
	W0806 08:37:17.443920   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:37:17.443928   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:37:17.443988   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:37:17.484292   79927 cri.go:89] found id: ""
	I0806 08:37:17.484321   79927 logs.go:276] 0 containers: []
	W0806 08:37:17.484331   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:37:17.484341   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:37:17.484356   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:37:17.541365   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:37:17.541418   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:37:17.555961   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:37:17.555993   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:37:17.625904   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:37:17.625923   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:37:17.625941   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:37:17.708997   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:37:17.709045   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:37:14.788822   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:17.288567   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:16.112034   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:18.612595   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:18.468580   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:20.967605   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:20.250351   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:37:20.264560   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:37:20.264625   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:37:20.301216   79927 cri.go:89] found id: ""
	I0806 08:37:20.301244   79927 logs.go:276] 0 containers: []
	W0806 08:37:20.301252   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:37:20.301257   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:37:20.301317   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:37:20.345820   79927 cri.go:89] found id: ""
	I0806 08:37:20.345849   79927 logs.go:276] 0 containers: []
	W0806 08:37:20.345860   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:37:20.345867   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:37:20.345923   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:37:20.382333   79927 cri.go:89] found id: ""
	I0806 08:37:20.382357   79927 logs.go:276] 0 containers: []
	W0806 08:37:20.382365   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:37:20.382371   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:37:20.382426   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:37:20.420193   79927 cri.go:89] found id: ""
	I0806 08:37:20.420228   79927 logs.go:276] 0 containers: []
	W0806 08:37:20.420238   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:37:20.420249   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:37:20.420316   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:37:20.456200   79927 cri.go:89] found id: ""
	I0806 08:37:20.456227   79927 logs.go:276] 0 containers: []
	W0806 08:37:20.456235   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:37:20.456241   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:37:20.456298   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:37:20.493321   79927 cri.go:89] found id: ""
	I0806 08:37:20.493349   79927 logs.go:276] 0 containers: []
	W0806 08:37:20.493356   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:37:20.493362   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:37:20.493414   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:37:20.531007   79927 cri.go:89] found id: ""
	I0806 08:37:20.531034   79927 logs.go:276] 0 containers: []
	W0806 08:37:20.531044   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:37:20.531051   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:37:20.531118   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:37:20.572014   79927 cri.go:89] found id: ""
	I0806 08:37:20.572044   79927 logs.go:276] 0 containers: []
	W0806 08:37:20.572062   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:37:20.572071   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:37:20.572083   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:37:20.623368   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:37:20.623399   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:37:20.638941   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:37:20.638984   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:37:20.717287   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:37:20.717313   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:37:20.717324   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:37:20.800157   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:37:20.800192   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:37:23.342061   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:37:23.356954   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:37:23.357026   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:37:23.391415   79927 cri.go:89] found id: ""
	I0806 08:37:23.391445   79927 logs.go:276] 0 containers: []
	W0806 08:37:23.391456   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:37:23.391463   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:37:23.391526   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:37:23.426460   79927 cri.go:89] found id: ""
	I0806 08:37:23.426482   79927 logs.go:276] 0 containers: []
	W0806 08:37:23.426491   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:37:23.426496   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:37:23.426541   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:37:23.461078   79927 cri.go:89] found id: ""
	I0806 08:37:23.461111   79927 logs.go:276] 0 containers: []
	W0806 08:37:23.461122   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:37:23.461129   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:37:23.461183   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:37:23.496566   79927 cri.go:89] found id: ""
	I0806 08:37:23.496594   79927 logs.go:276] 0 containers: []
	W0806 08:37:23.496602   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:37:23.496607   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:37:23.496665   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:37:23.529097   79927 cri.go:89] found id: ""
	I0806 08:37:23.529123   79927 logs.go:276] 0 containers: []
	W0806 08:37:23.529134   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:37:23.529153   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:37:23.529243   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:37:23.563690   79927 cri.go:89] found id: ""
	I0806 08:37:23.563720   79927 logs.go:276] 0 containers: []
	W0806 08:37:23.563730   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:37:23.563737   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:37:23.563796   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:37:23.598787   79927 cri.go:89] found id: ""
	I0806 08:37:23.598861   79927 logs.go:276] 0 containers: []
	W0806 08:37:23.598875   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:37:23.598884   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:37:23.598942   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:37:23.655675   79927 cri.go:89] found id: ""
	I0806 08:37:23.655705   79927 logs.go:276] 0 containers: []
	W0806 08:37:23.655715   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:37:23.655723   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:37:23.655734   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:37:23.715853   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:37:23.715887   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:37:23.735805   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:37:23.735844   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 08:37:19.288768   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:21.788292   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:23.788808   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:20.612865   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:22.613432   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:22.968064   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:24.968171   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	W0806 08:37:23.815238   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:37:23.815261   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:37:23.815274   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:37:23.899436   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:37:23.899479   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:37:26.441837   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:37:26.456410   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:37:26.456472   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:37:26.497707   79927 cri.go:89] found id: ""
	I0806 08:37:26.497733   79927 logs.go:276] 0 containers: []
	W0806 08:37:26.497741   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:37:26.497746   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:37:26.497795   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:37:26.536101   79927 cri.go:89] found id: ""
	I0806 08:37:26.536126   79927 logs.go:276] 0 containers: []
	W0806 08:37:26.536133   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:37:26.536145   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:37:26.536190   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:37:26.573067   79927 cri.go:89] found id: ""
	I0806 08:37:26.573091   79927 logs.go:276] 0 containers: []
	W0806 08:37:26.573102   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:37:26.573110   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:37:26.573162   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:37:26.609820   79927 cri.go:89] found id: ""
	I0806 08:37:26.609846   79927 logs.go:276] 0 containers: []
	W0806 08:37:26.609854   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:37:26.609860   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:37:26.609914   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:37:26.645721   79927 cri.go:89] found id: ""
	I0806 08:37:26.645748   79927 logs.go:276] 0 containers: []
	W0806 08:37:26.645758   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:37:26.645764   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:37:26.645825   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:37:26.679941   79927 cri.go:89] found id: ""
	I0806 08:37:26.679988   79927 logs.go:276] 0 containers: []
	W0806 08:37:26.679999   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:37:26.680006   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:37:26.680068   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:37:26.717247   79927 cri.go:89] found id: ""
	I0806 08:37:26.717277   79927 logs.go:276] 0 containers: []
	W0806 08:37:26.717288   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:37:26.717295   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:37:26.717382   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:37:26.760157   79927 cri.go:89] found id: ""
	I0806 08:37:26.760184   79927 logs.go:276] 0 containers: []
	W0806 08:37:26.760194   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:37:26.760205   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:37:26.760221   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:37:26.816034   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:37:26.816071   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:37:26.830141   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:37:26.830170   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:37:26.900078   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:37:26.900103   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:37:26.900121   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:37:26.984460   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:37:26.984501   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:37:26.287692   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:28.288990   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:25.112666   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:27.611170   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:27.466925   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:29.469989   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:29.527184   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:37:29.541974   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:37:29.542052   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:37:29.576062   79927 cri.go:89] found id: ""
	I0806 08:37:29.576095   79927 logs.go:276] 0 containers: []
	W0806 08:37:29.576107   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:37:29.576114   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:37:29.576176   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:37:29.610841   79927 cri.go:89] found id: ""
	I0806 08:37:29.610865   79927 logs.go:276] 0 containers: []
	W0806 08:37:29.610881   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:37:29.610892   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:37:29.610948   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:37:29.650326   79927 cri.go:89] found id: ""
	I0806 08:37:29.650350   79927 logs.go:276] 0 containers: []
	W0806 08:37:29.650360   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:37:29.650367   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:37:29.650434   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:37:29.689382   79927 cri.go:89] found id: ""
	I0806 08:37:29.689418   79927 logs.go:276] 0 containers: []
	W0806 08:37:29.689429   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:37:29.689436   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:37:29.689495   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:37:29.725893   79927 cri.go:89] found id: ""
	I0806 08:37:29.725918   79927 logs.go:276] 0 containers: []
	W0806 08:37:29.725926   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:37:29.725932   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:37:29.725980   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:37:29.767354   79927 cri.go:89] found id: ""
	I0806 08:37:29.767379   79927 logs.go:276] 0 containers: []
	W0806 08:37:29.767386   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:37:29.767392   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:37:29.767441   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:37:29.811139   79927 cri.go:89] found id: ""
	I0806 08:37:29.811161   79927 logs.go:276] 0 containers: []
	W0806 08:37:29.811169   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:37:29.811174   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:37:29.811219   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:37:29.848252   79927 cri.go:89] found id: ""
	I0806 08:37:29.848285   79927 logs.go:276] 0 containers: []
	W0806 08:37:29.848294   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:37:29.848303   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:37:29.848315   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:37:29.899850   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:37:29.899884   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:37:29.913744   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:37:29.913774   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:37:29.988016   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:37:29.988039   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:37:29.988054   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:37:30.069701   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:37:30.069735   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:37:32.606958   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:37:32.622621   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:37:32.622684   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:37:32.658816   79927 cri.go:89] found id: ""
	I0806 08:37:32.658844   79927 logs.go:276] 0 containers: []
	W0806 08:37:32.658855   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:37:32.658862   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:37:32.658928   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:37:32.697734   79927 cri.go:89] found id: ""
	I0806 08:37:32.697769   79927 logs.go:276] 0 containers: []
	W0806 08:37:32.697780   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:37:32.697786   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:37:32.697868   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:37:32.735540   79927 cri.go:89] found id: ""
	I0806 08:37:32.735572   79927 logs.go:276] 0 containers: []
	W0806 08:37:32.735584   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:37:32.735591   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:37:32.735658   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:37:32.771319   79927 cri.go:89] found id: ""
	I0806 08:37:32.771343   79927 logs.go:276] 0 containers: []
	W0806 08:37:32.771357   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:37:32.771365   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:37:32.771421   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:37:32.808137   79927 cri.go:89] found id: ""
	I0806 08:37:32.808162   79927 logs.go:276] 0 containers: []
	W0806 08:37:32.808172   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:37:32.808179   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:37:32.808234   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:37:32.843307   79927 cri.go:89] found id: ""
	I0806 08:37:32.843333   79927 logs.go:276] 0 containers: []
	W0806 08:37:32.843343   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:37:32.843350   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:37:32.843404   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:37:32.883855   79927 cri.go:89] found id: ""
	I0806 08:37:32.883922   79927 logs.go:276] 0 containers: []
	W0806 08:37:32.883935   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:37:32.883944   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:37:32.884014   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:37:32.920965   79927 cri.go:89] found id: ""
	I0806 08:37:32.921008   79927 logs.go:276] 0 containers: []
	W0806 08:37:32.921019   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:37:32.921027   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:37:32.921039   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:37:32.974933   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:37:32.974966   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:37:32.989326   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:37:32.989352   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:37:33.058274   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:37:33.058302   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:37:33.058314   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:37:33.136632   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:37:33.136670   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:37:30.789771   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:33.289001   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:29.613189   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:32.111086   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:34.112398   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:31.973697   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:34.469358   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:35.680315   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:37:35.694438   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:37:35.694497   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:37:35.729541   79927 cri.go:89] found id: ""
	I0806 08:37:35.729569   79927 logs.go:276] 0 containers: []
	W0806 08:37:35.729580   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:37:35.729587   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:37:35.729650   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:37:35.765783   79927 cri.go:89] found id: ""
	I0806 08:37:35.765813   79927 logs.go:276] 0 containers: []
	W0806 08:37:35.765822   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:37:35.765827   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:37:35.765885   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:37:35.802975   79927 cri.go:89] found id: ""
	I0806 08:37:35.803002   79927 logs.go:276] 0 containers: []
	W0806 08:37:35.803010   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:37:35.803016   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:37:35.803062   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:37:35.838588   79927 cri.go:89] found id: ""
	I0806 08:37:35.838614   79927 logs.go:276] 0 containers: []
	W0806 08:37:35.838623   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:37:35.838629   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:37:35.838675   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:37:35.874413   79927 cri.go:89] found id: ""
	I0806 08:37:35.874442   79927 logs.go:276] 0 containers: []
	W0806 08:37:35.874452   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:37:35.874460   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:37:35.874528   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:37:35.913816   79927 cri.go:89] found id: ""
	I0806 08:37:35.913852   79927 logs.go:276] 0 containers: []
	W0806 08:37:35.913871   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:37:35.913885   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:37:35.913963   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:37:35.951719   79927 cri.go:89] found id: ""
	I0806 08:37:35.951750   79927 logs.go:276] 0 containers: []
	W0806 08:37:35.951761   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:37:35.951770   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:37:35.951828   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:37:35.987720   79927 cri.go:89] found id: ""
	I0806 08:37:35.987749   79927 logs.go:276] 0 containers: []
	W0806 08:37:35.987760   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:37:35.987769   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:37:35.987780   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:37:36.043300   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:37:36.043339   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:37:36.057138   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:37:36.057169   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:37:36.126382   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:37:36.126399   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:37:36.126415   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:37:36.205243   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:37:36.205284   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:37:38.747978   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:37:35.289346   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:37.788785   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:36.114392   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:38.612174   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:36.967654   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:38.969141   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:40.969752   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:38.762123   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:37:38.762190   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:37:38.805074   79927 cri.go:89] found id: ""
	I0806 08:37:38.805101   79927 logs.go:276] 0 containers: []
	W0806 08:37:38.805111   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:37:38.805117   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:37:38.805185   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:37:38.839305   79927 cri.go:89] found id: ""
	I0806 08:37:38.839330   79927 logs.go:276] 0 containers: []
	W0806 08:37:38.839338   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:37:38.839344   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:37:38.839406   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:37:38.874174   79927 cri.go:89] found id: ""
	I0806 08:37:38.874203   79927 logs.go:276] 0 containers: []
	W0806 08:37:38.874214   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:37:38.874221   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:37:38.874277   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:37:38.909607   79927 cri.go:89] found id: ""
	I0806 08:37:38.909630   79927 logs.go:276] 0 containers: []
	W0806 08:37:38.909639   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:37:38.909645   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:37:38.909702   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:37:38.945584   79927 cri.go:89] found id: ""
	I0806 08:37:38.945615   79927 logs.go:276] 0 containers: []
	W0806 08:37:38.945626   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:37:38.945634   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:37:38.945696   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:37:38.988993   79927 cri.go:89] found id: ""
	I0806 08:37:38.989022   79927 logs.go:276] 0 containers: []
	W0806 08:37:38.989033   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:37:38.989041   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:37:38.989100   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:37:39.028902   79927 cri.go:89] found id: ""
	I0806 08:37:39.028934   79927 logs.go:276] 0 containers: []
	W0806 08:37:39.028942   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:37:39.028948   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:37:39.029005   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:37:39.070163   79927 cri.go:89] found id: ""
	I0806 08:37:39.070192   79927 logs.go:276] 0 containers: []
	W0806 08:37:39.070202   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:37:39.070213   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:37:39.070229   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:37:39.124891   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:37:39.124927   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:37:39.139898   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:37:39.139926   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:37:39.216157   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:37:39.216181   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:37:39.216196   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:37:39.301089   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:37:39.301136   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:37:41.840165   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:37:41.855659   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:37:41.855792   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:37:41.892620   79927 cri.go:89] found id: ""
	I0806 08:37:41.892648   79927 logs.go:276] 0 containers: []
	W0806 08:37:41.892660   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:37:41.892668   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:37:41.892728   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:37:41.929064   79927 cri.go:89] found id: ""
	I0806 08:37:41.929097   79927 logs.go:276] 0 containers: []
	W0806 08:37:41.929107   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:37:41.929114   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:37:41.929165   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:37:41.968334   79927 cri.go:89] found id: ""
	I0806 08:37:41.968371   79927 logs.go:276] 0 containers: []
	W0806 08:37:41.968382   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:37:41.968389   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:37:41.968450   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:37:42.008566   79927 cri.go:89] found id: ""
	I0806 08:37:42.008587   79927 logs.go:276] 0 containers: []
	W0806 08:37:42.008595   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:37:42.008601   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:37:42.008649   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:37:42.044281   79927 cri.go:89] found id: ""
	I0806 08:37:42.044308   79927 logs.go:276] 0 containers: []
	W0806 08:37:42.044315   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:37:42.044321   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:37:42.044392   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:37:42.081987   79927 cri.go:89] found id: ""
	I0806 08:37:42.082012   79927 logs.go:276] 0 containers: []
	W0806 08:37:42.082020   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:37:42.082026   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:37:42.082072   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:37:42.120818   79927 cri.go:89] found id: ""
	I0806 08:37:42.120847   79927 logs.go:276] 0 containers: []
	W0806 08:37:42.120857   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:37:42.120865   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:37:42.120926   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:37:42.160685   79927 cri.go:89] found id: ""
	I0806 08:37:42.160719   79927 logs.go:276] 0 containers: []
	W0806 08:37:42.160731   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:37:42.160742   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:37:42.160759   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:37:42.236322   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:37:42.236344   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:37:42.236357   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:37:42.328488   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:37:42.328526   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:37:42.369520   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:37:42.369545   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:37:42.425449   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:37:42.425490   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:37:39.788990   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:41.789236   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:43.789318   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:40.614326   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:43.112061   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:43.469284   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:45.469725   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:44.941557   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:37:44.954984   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:37:44.955043   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:37:44.992186   79927 cri.go:89] found id: ""
	I0806 08:37:44.992216   79927 logs.go:276] 0 containers: []
	W0806 08:37:44.992228   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:37:44.992234   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:37:44.992285   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:37:45.027114   79927 cri.go:89] found id: ""
	I0806 08:37:45.027143   79927 logs.go:276] 0 containers: []
	W0806 08:37:45.027154   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:37:45.027161   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:37:45.027222   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:37:45.062629   79927 cri.go:89] found id: ""
	I0806 08:37:45.062660   79927 logs.go:276] 0 containers: []
	W0806 08:37:45.062672   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:37:45.062679   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:37:45.062741   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:37:45.097499   79927 cri.go:89] found id: ""
	I0806 08:37:45.097530   79927 logs.go:276] 0 containers: []
	W0806 08:37:45.097541   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:37:45.097548   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:37:45.097605   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:37:45.137130   79927 cri.go:89] found id: ""
	I0806 08:37:45.137162   79927 logs.go:276] 0 containers: []
	W0806 08:37:45.137184   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:37:45.137191   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:37:45.137264   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:37:45.170617   79927 cri.go:89] found id: ""
	I0806 08:37:45.170642   79927 logs.go:276] 0 containers: []
	W0806 08:37:45.170659   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:37:45.170667   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:37:45.170726   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:37:45.205729   79927 cri.go:89] found id: ""
	I0806 08:37:45.205758   79927 logs.go:276] 0 containers: []
	W0806 08:37:45.205768   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:37:45.205774   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:37:45.205827   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:37:45.242345   79927 cri.go:89] found id: ""
	I0806 08:37:45.242373   79927 logs.go:276] 0 containers: []
	W0806 08:37:45.242384   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:37:45.242397   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:37:45.242414   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:37:45.283778   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:37:45.283810   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:37:45.341421   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:37:45.341457   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:37:45.356581   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:37:45.356606   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:37:45.428512   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:37:45.428541   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:37:45.428559   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:37:48.011744   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:37:48.025452   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:37:48.025533   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:37:48.064315   79927 cri.go:89] found id: ""
	I0806 08:37:48.064339   79927 logs.go:276] 0 containers: []
	W0806 08:37:48.064348   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:37:48.064353   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:37:48.064430   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:37:48.103855   79927 cri.go:89] found id: ""
	I0806 08:37:48.103891   79927 logs.go:276] 0 containers: []
	W0806 08:37:48.103902   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:37:48.103909   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:37:48.103968   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:37:48.141402   79927 cri.go:89] found id: ""
	I0806 08:37:48.141427   79927 logs.go:276] 0 containers: []
	W0806 08:37:48.141435   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:37:48.141440   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:37:48.141491   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:37:48.177449   79927 cri.go:89] found id: ""
	I0806 08:37:48.177479   79927 logs.go:276] 0 containers: []
	W0806 08:37:48.177490   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:37:48.177497   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:37:48.177566   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:37:48.214613   79927 cri.go:89] found id: ""
	I0806 08:37:48.214637   79927 logs.go:276] 0 containers: []
	W0806 08:37:48.214644   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:37:48.214650   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:37:48.214695   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:37:48.254049   79927 cri.go:89] found id: ""
	I0806 08:37:48.254075   79927 logs.go:276] 0 containers: []
	W0806 08:37:48.254083   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:37:48.254089   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:37:48.254147   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:37:48.296206   79927 cri.go:89] found id: ""
	I0806 08:37:48.296230   79927 logs.go:276] 0 containers: []
	W0806 08:37:48.296240   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:37:48.296246   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:37:48.296306   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:37:48.337614   79927 cri.go:89] found id: ""
	I0806 08:37:48.337641   79927 logs.go:276] 0 containers: []
	W0806 08:37:48.337651   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:37:48.337662   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:37:48.337679   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:37:48.391932   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:37:48.391971   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:37:48.405422   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:37:48.405450   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:37:48.477374   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:37:48.477402   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:37:48.477419   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:37:48.559252   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:37:48.559288   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:37:46.287167   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:48.288652   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:45.113308   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:47.611780   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:47.968109   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:50.469370   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:51.104881   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:37:51.120113   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:37:51.120181   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:37:51.155703   79927 cri.go:89] found id: ""
	I0806 08:37:51.155733   79927 logs.go:276] 0 containers: []
	W0806 08:37:51.155741   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:37:51.155747   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:37:51.155792   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:37:51.191211   79927 cri.go:89] found id: ""
	I0806 08:37:51.191240   79927 logs.go:276] 0 containers: []
	W0806 08:37:51.191250   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:37:51.191257   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:37:51.191324   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:37:51.226740   79927 cri.go:89] found id: ""
	I0806 08:37:51.226771   79927 logs.go:276] 0 containers: []
	W0806 08:37:51.226783   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:37:51.226790   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:37:51.226852   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:37:51.269101   79927 cri.go:89] found id: ""
	I0806 08:37:51.269132   79927 logs.go:276] 0 containers: []
	W0806 08:37:51.269141   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:37:51.269146   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:37:51.269196   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:37:51.307176   79927 cri.go:89] found id: ""
	I0806 08:37:51.307206   79927 logs.go:276] 0 containers: []
	W0806 08:37:51.307216   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:37:51.307224   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:37:51.307290   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:37:51.342824   79927 cri.go:89] found id: ""
	I0806 08:37:51.342854   79927 logs.go:276] 0 containers: []
	W0806 08:37:51.342864   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:37:51.342871   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:37:51.342935   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:37:51.383309   79927 cri.go:89] found id: ""
	I0806 08:37:51.383337   79927 logs.go:276] 0 containers: []
	W0806 08:37:51.383348   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:37:51.383355   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:37:51.383420   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:37:51.424851   79927 cri.go:89] found id: ""
	I0806 08:37:51.424896   79927 logs.go:276] 0 containers: []
	W0806 08:37:51.424908   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:37:51.424918   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:37:51.424950   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:37:51.507286   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:37:51.507307   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:37:51.507318   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:37:51.590512   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:37:51.590556   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:37:51.640453   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:37:51.640482   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:37:51.694699   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:37:51.694740   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:37:50.788163   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:52.788396   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:49.616347   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:52.112323   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:54.112911   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:52.470121   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:54.473264   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:54.211106   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:37:54.225858   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:37:54.225936   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:37:54.267219   79927 cri.go:89] found id: ""
	I0806 08:37:54.267247   79927 logs.go:276] 0 containers: []
	W0806 08:37:54.267258   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:37:54.267266   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:37:54.267323   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:37:54.306734   79927 cri.go:89] found id: ""
	I0806 08:37:54.306763   79927 logs.go:276] 0 containers: []
	W0806 08:37:54.306774   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:37:54.306781   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:37:54.306839   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:37:54.342257   79927 cri.go:89] found id: ""
	I0806 08:37:54.342294   79927 logs.go:276] 0 containers: []
	W0806 08:37:54.342306   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:37:54.342314   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:37:54.342383   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:37:54.380661   79927 cri.go:89] found id: ""
	I0806 08:37:54.380685   79927 logs.go:276] 0 containers: []
	W0806 08:37:54.380693   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:37:54.380699   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:37:54.380743   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:37:54.416503   79927 cri.go:89] found id: ""
	I0806 08:37:54.416525   79927 logs.go:276] 0 containers: []
	W0806 08:37:54.416533   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:37:54.416539   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:37:54.416589   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:37:54.453470   79927 cri.go:89] found id: ""
	I0806 08:37:54.453508   79927 logs.go:276] 0 containers: []
	W0806 08:37:54.453520   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:37:54.453528   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:37:54.453591   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:37:54.497630   79927 cri.go:89] found id: ""
	I0806 08:37:54.497671   79927 logs.go:276] 0 containers: []
	W0806 08:37:54.497682   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:37:54.497691   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:37:54.497750   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:37:54.536516   79927 cri.go:89] found id: ""
	I0806 08:37:54.536546   79927 logs.go:276] 0 containers: []
	W0806 08:37:54.536557   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:37:54.536568   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:37:54.536587   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:37:54.592997   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:37:54.593032   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:37:54.607568   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:37:54.607598   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:37:54.679369   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:37:54.679402   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:37:54.679423   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:37:54.760001   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:37:54.760041   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:37:57.305405   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:37:57.319427   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:37:57.319493   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:37:57.357465   79927 cri.go:89] found id: ""
	I0806 08:37:57.357493   79927 logs.go:276] 0 containers: []
	W0806 08:37:57.357504   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:37:57.357513   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:37:57.357571   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:37:57.390131   79927 cri.go:89] found id: ""
	I0806 08:37:57.390155   79927 logs.go:276] 0 containers: []
	W0806 08:37:57.390165   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:37:57.390172   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:37:57.390238   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:37:57.428571   79927 cri.go:89] found id: ""
	I0806 08:37:57.428600   79927 logs.go:276] 0 containers: []
	W0806 08:37:57.428608   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:37:57.428613   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:37:57.428675   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:37:57.462688   79927 cri.go:89] found id: ""
	I0806 08:37:57.462717   79927 logs.go:276] 0 containers: []
	W0806 08:37:57.462727   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:37:57.462733   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:37:57.462791   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:37:57.525008   79927 cri.go:89] found id: ""
	I0806 08:37:57.525033   79927 logs.go:276] 0 containers: []
	W0806 08:37:57.525041   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:37:57.525046   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:37:57.525108   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:37:57.565992   79927 cri.go:89] found id: ""
	I0806 08:37:57.566024   79927 logs.go:276] 0 containers: []
	W0806 08:37:57.566035   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:37:57.566045   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:37:57.566123   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:37:57.610170   79927 cri.go:89] found id: ""
	I0806 08:37:57.610198   79927 logs.go:276] 0 containers: []
	W0806 08:37:57.610208   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:37:57.610215   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:37:57.610262   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:37:57.647054   79927 cri.go:89] found id: ""
	I0806 08:37:57.647085   79927 logs.go:276] 0 containers: []
	W0806 08:37:57.647096   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:37:57.647106   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:37:57.647121   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:37:57.698177   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:37:57.698214   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:37:57.712244   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:37:57.712271   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:37:57.781683   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:37:57.781707   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:37:57.781723   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:37:57.861097   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:37:57.861133   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:37:54.789102   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:57.287564   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:56.613662   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:59.112248   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:56.967534   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:59.469122   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:00.401186   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:38:00.414546   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:38:00.414602   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:38:00.454163   79927 cri.go:89] found id: ""
	I0806 08:38:00.454190   79927 logs.go:276] 0 containers: []
	W0806 08:38:00.454201   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:38:00.454208   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:38:00.454269   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:38:00.497427   79927 cri.go:89] found id: ""
	I0806 08:38:00.497455   79927 logs.go:276] 0 containers: []
	W0806 08:38:00.497466   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:38:00.497473   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:38:00.497538   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:38:00.537590   79927 cri.go:89] found id: ""
	I0806 08:38:00.537619   79927 logs.go:276] 0 containers: []
	W0806 08:38:00.537627   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:38:00.537633   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:38:00.537695   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:38:00.572337   79927 cri.go:89] found id: ""
	I0806 08:38:00.572383   79927 logs.go:276] 0 containers: []
	W0806 08:38:00.572396   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:38:00.572403   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:38:00.572460   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:38:00.607729   79927 cri.go:89] found id: ""
	I0806 08:38:00.607772   79927 logs.go:276] 0 containers: []
	W0806 08:38:00.607783   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:38:00.607791   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:38:00.607852   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:38:00.644777   79927 cri.go:89] found id: ""
	I0806 08:38:00.644843   79927 logs.go:276] 0 containers: []
	W0806 08:38:00.644856   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:38:00.644864   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:38:00.644935   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:38:00.684698   79927 cri.go:89] found id: ""
	I0806 08:38:00.684727   79927 logs.go:276] 0 containers: []
	W0806 08:38:00.684739   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:38:00.684747   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:38:00.684863   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:38:00.718472   79927 cri.go:89] found id: ""
	I0806 08:38:00.718506   79927 logs.go:276] 0 containers: []
	W0806 08:38:00.718518   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:38:00.718529   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:38:00.718546   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:38:00.770265   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:38:00.770309   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:38:00.787438   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:38:00.787469   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:38:00.854538   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:38:00.854558   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:38:00.854570   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:38:00.936810   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:38:00.936847   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:38:03.477109   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:38:03.494379   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:38:03.494439   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:38:03.536166   79927 cri.go:89] found id: ""
	I0806 08:38:03.536193   79927 logs.go:276] 0 containers: []
	W0806 08:38:03.536202   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:38:03.536208   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:38:03.536255   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:38:03.577497   79927 cri.go:89] found id: ""
	I0806 08:38:03.577522   79927 logs.go:276] 0 containers: []
	W0806 08:38:03.577531   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:38:03.577536   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:38:03.577585   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:38:03.614534   79927 cri.go:89] found id: ""
	I0806 08:38:03.614560   79927 logs.go:276] 0 containers: []
	W0806 08:38:03.614569   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:38:03.614576   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:38:03.614634   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:38:03.655824   79927 cri.go:89] found id: ""
	I0806 08:38:03.655860   79927 logs.go:276] 0 containers: []
	W0806 08:38:03.655871   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:38:03.655878   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:38:03.655952   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:38:03.699843   79927 cri.go:89] found id: ""
	I0806 08:38:03.699875   79927 logs.go:276] 0 containers: []
	W0806 08:38:03.699888   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:38:03.699896   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:38:03.699957   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:38:03.740131   79927 cri.go:89] found id: ""
	I0806 08:38:03.740159   79927 logs.go:276] 0 containers: []
	W0806 08:38:03.740168   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:38:03.740174   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:38:03.740223   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:37:59.787634   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:01.788942   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:03.790134   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:01.112582   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:03.113004   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:01.972073   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:04.468917   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:03.774103   79927 cri.go:89] found id: ""
	I0806 08:38:03.774149   79927 logs.go:276] 0 containers: []
	W0806 08:38:03.774163   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:38:03.774170   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:38:03.774230   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:38:03.814051   79927 cri.go:89] found id: ""
	I0806 08:38:03.814075   79927 logs.go:276] 0 containers: []
	W0806 08:38:03.814083   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:38:03.814091   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:38:03.814102   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:38:03.854869   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:38:03.854898   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:38:03.908284   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:38:03.908318   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:38:03.923265   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:38:03.923303   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:38:04.001785   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:38:04.001806   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:38:04.001819   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:38:06.583809   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:38:06.596764   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:38:06.596827   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:38:06.636611   79927 cri.go:89] found id: ""
	I0806 08:38:06.636639   79927 logs.go:276] 0 containers: []
	W0806 08:38:06.636650   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:38:06.636657   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:38:06.636707   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:38:06.675392   79927 cri.go:89] found id: ""
	I0806 08:38:06.675419   79927 logs.go:276] 0 containers: []
	W0806 08:38:06.675430   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:38:06.675438   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:38:06.675492   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:38:06.711270   79927 cri.go:89] found id: ""
	I0806 08:38:06.711302   79927 logs.go:276] 0 containers: []
	W0806 08:38:06.711313   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:38:06.711320   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:38:06.711388   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:38:06.747252   79927 cri.go:89] found id: ""
	I0806 08:38:06.747280   79927 logs.go:276] 0 containers: []
	W0806 08:38:06.747291   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:38:06.747297   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:38:06.747360   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:38:06.781999   79927 cri.go:89] found id: ""
	I0806 08:38:06.782028   79927 logs.go:276] 0 containers: []
	W0806 08:38:06.782038   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:38:06.782043   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:38:06.782095   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:38:06.818810   79927 cri.go:89] found id: ""
	I0806 08:38:06.818847   79927 logs.go:276] 0 containers: []
	W0806 08:38:06.818860   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:38:06.818869   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:38:06.818935   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:38:06.857937   79927 cri.go:89] found id: ""
	I0806 08:38:06.857972   79927 logs.go:276] 0 containers: []
	W0806 08:38:06.857983   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:38:06.857989   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:38:06.858049   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:38:06.897920   79927 cri.go:89] found id: ""
	I0806 08:38:06.897949   79927 logs.go:276] 0 containers: []
	W0806 08:38:06.897957   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:38:06.897965   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:38:06.897979   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:38:06.912199   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:38:06.912228   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:38:06.983129   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:38:06.983157   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:38:06.983173   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:38:07.067783   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:38:07.067853   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:38:07.107756   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:38:07.107787   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:38:06.286856   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:08.287070   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:05.113664   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:07.611129   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:06.973447   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:09.468163   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:11.469075   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:09.657958   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:38:09.670726   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:38:09.670807   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:38:09.709184   79927 cri.go:89] found id: ""
	I0806 08:38:09.709220   79927 logs.go:276] 0 containers: []
	W0806 08:38:09.709232   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:38:09.709239   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:38:09.709299   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:38:09.747032   79927 cri.go:89] found id: ""
	I0806 08:38:09.747061   79927 logs.go:276] 0 containers: []
	W0806 08:38:09.747073   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:38:09.747080   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:38:09.747141   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:38:09.785134   79927 cri.go:89] found id: ""
	I0806 08:38:09.785170   79927 logs.go:276] 0 containers: []
	W0806 08:38:09.785180   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:38:09.785188   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:38:09.785248   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:38:09.825136   79927 cri.go:89] found id: ""
	I0806 08:38:09.825168   79927 logs.go:276] 0 containers: []
	W0806 08:38:09.825179   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:38:09.825186   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:38:09.825249   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:38:09.865292   79927 cri.go:89] found id: ""
	I0806 08:38:09.865327   79927 logs.go:276] 0 containers: []
	W0806 08:38:09.865336   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:38:09.865345   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:38:09.865415   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:38:09.911145   79927 cri.go:89] found id: ""
	I0806 08:38:09.911172   79927 logs.go:276] 0 containers: []
	W0806 08:38:09.911183   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:38:09.911190   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:38:09.911247   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:38:09.961259   79927 cri.go:89] found id: ""
	I0806 08:38:09.961289   79927 logs.go:276] 0 containers: []
	W0806 08:38:09.961299   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:38:09.961307   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:38:09.961364   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:38:10.009572   79927 cri.go:89] found id: ""
	I0806 08:38:10.009594   79927 logs.go:276] 0 containers: []
	W0806 08:38:10.009602   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:38:10.009610   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:38:10.009623   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:38:10.063733   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:38:10.063770   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:38:10.080004   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:38:10.080044   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:38:10.158792   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:38:10.158823   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:38:10.158840   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:38:10.242126   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:38:10.242162   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:38:12.781497   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:38:12.796715   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:38:12.796804   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:38:12.831223   79927 cri.go:89] found id: ""
	I0806 08:38:12.831254   79927 logs.go:276] 0 containers: []
	W0806 08:38:12.831265   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:38:12.831273   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:38:12.831332   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:38:12.872202   79927 cri.go:89] found id: ""
	I0806 08:38:12.872224   79927 logs.go:276] 0 containers: []
	W0806 08:38:12.872231   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:38:12.872236   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:38:12.872284   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:38:12.913457   79927 cri.go:89] found id: ""
	I0806 08:38:12.913486   79927 logs.go:276] 0 containers: []
	W0806 08:38:12.913497   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:38:12.913505   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:38:12.913563   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:38:12.950382   79927 cri.go:89] found id: ""
	I0806 08:38:12.950413   79927 logs.go:276] 0 containers: []
	W0806 08:38:12.950424   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:38:12.950432   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:38:12.950497   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:38:12.988956   79927 cri.go:89] found id: ""
	I0806 08:38:12.988984   79927 logs.go:276] 0 containers: []
	W0806 08:38:12.988992   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:38:12.988998   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:38:12.989054   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:38:13.025308   79927 cri.go:89] found id: ""
	I0806 08:38:13.025334   79927 logs.go:276] 0 containers: []
	W0806 08:38:13.025342   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:38:13.025348   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:38:13.025401   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:38:13.060156   79927 cri.go:89] found id: ""
	I0806 08:38:13.060184   79927 logs.go:276] 0 containers: []
	W0806 08:38:13.060196   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:38:13.060204   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:38:13.060268   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:38:13.096302   79927 cri.go:89] found id: ""
	I0806 08:38:13.096331   79927 logs.go:276] 0 containers: []
	W0806 08:38:13.096344   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:38:13.096360   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:38:13.096387   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:38:13.149567   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:38:13.149603   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:38:13.164680   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:38:13.164715   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:38:13.236695   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:38:13.236718   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:38:13.236733   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:38:13.323043   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:38:13.323081   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:38:10.288582   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:12.788556   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:09.611677   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:12.111364   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:14.112265   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:13.469940   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:15.969381   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:15.867128   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:38:15.885504   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:38:15.885579   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:38:15.929060   79927 cri.go:89] found id: ""
	I0806 08:38:15.929087   79927 logs.go:276] 0 containers: []
	W0806 08:38:15.929098   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:38:15.929106   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:38:15.929157   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:38:15.996919   79927 cri.go:89] found id: ""
	I0806 08:38:15.996946   79927 logs.go:276] 0 containers: []
	W0806 08:38:15.996954   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:38:15.996959   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:38:15.997013   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:38:16.033299   79927 cri.go:89] found id: ""
	I0806 08:38:16.033323   79927 logs.go:276] 0 containers: []
	W0806 08:38:16.033331   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:38:16.033338   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:38:16.033399   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:38:16.070754   79927 cri.go:89] found id: ""
	I0806 08:38:16.070781   79927 logs.go:276] 0 containers: []
	W0806 08:38:16.070789   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:38:16.070795   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:38:16.070853   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:38:16.107906   79927 cri.go:89] found id: ""
	I0806 08:38:16.107931   79927 logs.go:276] 0 containers: []
	W0806 08:38:16.107943   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:38:16.107951   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:38:16.108011   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:38:16.145814   79927 cri.go:89] found id: ""
	I0806 08:38:16.145844   79927 logs.go:276] 0 containers: []
	W0806 08:38:16.145853   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:38:16.145861   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:38:16.145921   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:38:16.183752   79927 cri.go:89] found id: ""
	I0806 08:38:16.183782   79927 logs.go:276] 0 containers: []
	W0806 08:38:16.183793   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:38:16.183828   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:38:16.183890   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:38:16.220973   79927 cri.go:89] found id: ""
	I0806 08:38:16.221000   79927 logs.go:276] 0 containers: []
	W0806 08:38:16.221010   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:38:16.221020   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:38:16.221034   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:38:16.274514   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:38:16.274555   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:38:16.291387   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:38:16.291417   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:38:16.365737   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:38:16.365769   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:38:16.365785   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:38:16.446578   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:38:16.446617   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:38:14.789440   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:17.288354   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:16.112334   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:18.612150   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:18.468700   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:20.469520   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:18.989406   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:38:19.003113   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:38:19.003174   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:38:19.038794   79927 cri.go:89] found id: ""
	I0806 08:38:19.038821   79927 logs.go:276] 0 containers: []
	W0806 08:38:19.038832   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:38:19.038839   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:38:19.038899   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:38:19.072521   79927 cri.go:89] found id: ""
	I0806 08:38:19.072546   79927 logs.go:276] 0 containers: []
	W0806 08:38:19.072554   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:38:19.072561   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:38:19.072606   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:38:19.105949   79927 cri.go:89] found id: ""
	I0806 08:38:19.105982   79927 logs.go:276] 0 containers: []
	W0806 08:38:19.105994   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:38:19.106002   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:38:19.106053   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:38:19.141277   79927 cri.go:89] found id: ""
	I0806 08:38:19.141303   79927 logs.go:276] 0 containers: []
	W0806 08:38:19.141311   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:38:19.141317   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:38:19.141373   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:38:19.179144   79927 cri.go:89] found id: ""
	I0806 08:38:19.179173   79927 logs.go:276] 0 containers: []
	W0806 08:38:19.179184   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:38:19.179191   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:38:19.179252   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:38:19.218970   79927 cri.go:89] found id: ""
	I0806 08:38:19.218998   79927 logs.go:276] 0 containers: []
	W0806 08:38:19.219010   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:38:19.219022   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:38:19.219084   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:38:19.257623   79927 cri.go:89] found id: ""
	I0806 08:38:19.257652   79927 logs.go:276] 0 containers: []
	W0806 08:38:19.257663   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:38:19.257671   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:38:19.257740   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:38:19.294222   79927 cri.go:89] found id: ""
	I0806 08:38:19.294250   79927 logs.go:276] 0 containers: []
	W0806 08:38:19.294258   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:38:19.294267   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:38:19.294279   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:38:19.374327   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:38:19.374352   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:38:19.374368   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:38:19.458570   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:38:19.458604   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:38:19.504329   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:38:19.504382   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:38:19.554199   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:38:19.554237   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:38:22.069574   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:38:22.082919   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:38:22.082981   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:38:22.115952   79927 cri.go:89] found id: ""
	I0806 08:38:22.115974   79927 logs.go:276] 0 containers: []
	W0806 08:38:22.115981   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:38:22.115987   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:38:22.116031   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:38:22.158322   79927 cri.go:89] found id: ""
	I0806 08:38:22.158349   79927 logs.go:276] 0 containers: []
	W0806 08:38:22.158359   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:38:22.158366   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:38:22.158425   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:38:22.203375   79927 cri.go:89] found id: ""
	I0806 08:38:22.203403   79927 logs.go:276] 0 containers: []
	W0806 08:38:22.203412   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:38:22.203418   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:38:22.203473   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:38:22.240808   79927 cri.go:89] found id: ""
	I0806 08:38:22.240837   79927 logs.go:276] 0 containers: []
	W0806 08:38:22.240849   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:38:22.240857   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:38:22.240913   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:38:22.277885   79927 cri.go:89] found id: ""
	I0806 08:38:22.277914   79927 logs.go:276] 0 containers: []
	W0806 08:38:22.277925   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:38:22.277932   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:38:22.277993   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:38:22.314990   79927 cri.go:89] found id: ""
	I0806 08:38:22.315020   79927 logs.go:276] 0 containers: []
	W0806 08:38:22.315031   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:38:22.315039   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:38:22.315098   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:38:22.352159   79927 cri.go:89] found id: ""
	I0806 08:38:22.352189   79927 logs.go:276] 0 containers: []
	W0806 08:38:22.352198   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:38:22.352204   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:38:22.352267   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:38:22.387349   79927 cri.go:89] found id: ""
	I0806 08:38:22.387382   79927 logs.go:276] 0 containers: []
	W0806 08:38:22.387393   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:38:22.387404   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:38:22.387416   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:38:22.425834   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:38:22.425871   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:38:22.479680   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:38:22.479712   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:38:22.495104   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:38:22.495140   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:38:22.567013   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:38:22.567030   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:38:22.567043   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:38:19.288896   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:21.291455   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:23.789282   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:21.111745   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:23.111513   79219 pod_ready.go:81] duration metric: took 4m0.005888372s for pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace to be "Ready" ...
	E0806 08:38:23.111545   79219 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0806 08:38:23.111555   79219 pod_ready.go:38] duration metric: took 4m6.006750463s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0806 08:38:23.111579   79219 api_server.go:52] waiting for apiserver process to appear ...
	I0806 08:38:23.111610   79219 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:38:23.111669   79219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:38:23.182880   79219 cri.go:89] found id: "f7edb6bece3347d88a263663a1d6549165dafdcb67ad723494b937766cba6da6"
	I0806 08:38:23.182907   79219 cri.go:89] found id: ""
	I0806 08:38:23.182918   79219 logs.go:276] 1 containers: [f7edb6bece3347d88a263663a1d6549165dafdcb67ad723494b937766cba6da6]
	I0806 08:38:23.182974   79219 ssh_runner.go:195] Run: which crictl
	I0806 08:38:23.188039   79219 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:38:23.188103   79219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:38:23.235090   79219 cri.go:89] found id: "df2efdb62a4af10f89a1b37599d4258ac3b72660353795f8c2141f3ef70c1f65"
	I0806 08:38:23.235113   79219 cri.go:89] found id: ""
	I0806 08:38:23.235135   79219 logs.go:276] 1 containers: [df2efdb62a4af10f89a1b37599d4258ac3b72660353795f8c2141f3ef70c1f65]
	I0806 08:38:23.235195   79219 ssh_runner.go:195] Run: which crictl
	I0806 08:38:23.240260   79219 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:38:23.240335   79219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:38:23.280619   79219 cri.go:89] found id: "9e1c91bc3a920b6f7db4477b8219f13fa092c68ff53abe2390325a68dad253e5"
	I0806 08:38:23.280645   79219 cri.go:89] found id: ""
	I0806 08:38:23.280655   79219 logs.go:276] 1 containers: [9e1c91bc3a920b6f7db4477b8219f13fa092c68ff53abe2390325a68dad253e5]
	I0806 08:38:23.280715   79219 ssh_runner.go:195] Run: which crictl
	I0806 08:38:23.286155   79219 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:38:23.286234   79219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:38:23.329583   79219 cri.go:89] found id: "a25c5660a3a2b7226bf308b44307a3912efa7a9a73e79ec63f83133b80a44683"
	I0806 08:38:23.329609   79219 cri.go:89] found id: ""
	I0806 08:38:23.329618   79219 logs.go:276] 1 containers: [a25c5660a3a2b7226bf308b44307a3912efa7a9a73e79ec63f83133b80a44683]
	I0806 08:38:23.329678   79219 ssh_runner.go:195] Run: which crictl
	I0806 08:38:23.334342   79219 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:38:23.334407   79219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:38:23.372955   79219 cri.go:89] found id: "612fc6a8f2c1208097a6d6287c5fecb20026608a278af5d0d4efc9662e065eb9"
	I0806 08:38:23.372974   79219 cri.go:89] found id: ""
	I0806 08:38:23.372981   79219 logs.go:276] 1 containers: [612fc6a8f2c1208097a6d6287c5fecb20026608a278af5d0d4efc9662e065eb9]
	I0806 08:38:23.373037   79219 ssh_runner.go:195] Run: which crictl
	I0806 08:38:23.377637   79219 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:38:23.377692   79219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:38:23.416165   79219 cri.go:89] found id: "d8651a7ed6615e826b950eab003841c2ec872b48160f207e77dd32a684125a23"
	I0806 08:38:23.416188   79219 cri.go:89] found id: ""
	I0806 08:38:23.416197   79219 logs.go:276] 1 containers: [d8651a7ed6615e826b950eab003841c2ec872b48160f207e77dd32a684125a23]
	I0806 08:38:23.416248   79219 ssh_runner.go:195] Run: which crictl
	I0806 08:38:23.420970   79219 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:38:23.421020   79219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:38:23.459247   79219 cri.go:89] found id: ""
	I0806 08:38:23.459277   79219 logs.go:276] 0 containers: []
	W0806 08:38:23.459287   79219 logs.go:278] No container was found matching "kindnet"
	I0806 08:38:23.459295   79219 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0806 08:38:23.459354   79219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0806 08:38:23.503329   79219 cri.go:89] found id: "367eb6487187a37acafb1fa74364a2d9ec5d9241888ed2e871fb14f2af161975"
	I0806 08:38:23.503354   79219 cri.go:89] found id: "78abe2efa92d2d6c9171ed65bd4d941693be3207554f9318a7cefb91544eeb20"
	I0806 08:38:23.503360   79219 cri.go:89] found id: ""
	I0806 08:38:23.503369   79219 logs.go:276] 2 containers: [367eb6487187a37acafb1fa74364a2d9ec5d9241888ed2e871fb14f2af161975 78abe2efa92d2d6c9171ed65bd4d941693be3207554f9318a7cefb91544eeb20]
	I0806 08:38:23.503431   79219 ssh_runner.go:195] Run: which crictl
	I0806 08:38:23.508375   79219 ssh_runner.go:195] Run: which crictl
	I0806 08:38:23.512355   79219 logs.go:123] Gathering logs for kube-scheduler [a25c5660a3a2b7226bf308b44307a3912efa7a9a73e79ec63f83133b80a44683] ...
	I0806 08:38:23.512401   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a25c5660a3a2b7226bf308b44307a3912efa7a9a73e79ec63f83133b80a44683"
	I0806 08:38:23.548545   79219 logs.go:123] Gathering logs for kube-controller-manager [d8651a7ed6615e826b950eab003841c2ec872b48160f207e77dd32a684125a23] ...
	I0806 08:38:23.548574   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8651a7ed6615e826b950eab003841c2ec872b48160f207e77dd32a684125a23"
	I0806 08:38:23.607970   79219 logs.go:123] Gathering logs for storage-provisioner [78abe2efa92d2d6c9171ed65bd4d941693be3207554f9318a7cefb91544eeb20] ...
	I0806 08:38:23.608004   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 78abe2efa92d2d6c9171ed65bd4d941693be3207554f9318a7cefb91544eeb20"
	I0806 08:38:23.646553   79219 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:38:23.646582   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:38:24.180521   79219 logs.go:123] Gathering logs for container status ...
	I0806 08:38:24.180558   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:38:24.228240   79219 logs.go:123] Gathering logs for kubelet ...
	I0806 08:38:24.228273   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:38:24.285975   79219 logs.go:123] Gathering logs for dmesg ...
	I0806 08:38:24.286007   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:38:24.301117   79219 logs.go:123] Gathering logs for etcd [df2efdb62a4af10f89a1b37599d4258ac3b72660353795f8c2141f3ef70c1f65] ...
	I0806 08:38:24.301152   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 df2efdb62a4af10f89a1b37599d4258ac3b72660353795f8c2141f3ef70c1f65"
	I0806 08:38:22.968463   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:25.469138   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:25.146383   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:38:25.160013   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:38:25.160078   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:38:25.200981   79927 cri.go:89] found id: ""
	I0806 08:38:25.201006   79927 logs.go:276] 0 containers: []
	W0806 08:38:25.201018   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:38:25.201026   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:38:25.201091   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:38:25.239852   79927 cri.go:89] found id: ""
	I0806 08:38:25.239895   79927 logs.go:276] 0 containers: []
	W0806 08:38:25.239907   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:38:25.239914   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:38:25.239975   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:38:25.275521   79927 cri.go:89] found id: ""
	I0806 08:38:25.275549   79927 logs.go:276] 0 containers: []
	W0806 08:38:25.275561   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:38:25.275568   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:38:25.275630   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:38:25.311921   79927 cri.go:89] found id: ""
	I0806 08:38:25.311952   79927 logs.go:276] 0 containers: []
	W0806 08:38:25.311962   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:38:25.311969   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:38:25.312036   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:38:25.353047   79927 cri.go:89] found id: ""
	I0806 08:38:25.353081   79927 logs.go:276] 0 containers: []
	W0806 08:38:25.353093   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:38:25.353102   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:38:25.353177   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:38:25.389442   79927 cri.go:89] found id: ""
	I0806 08:38:25.389470   79927 logs.go:276] 0 containers: []
	W0806 08:38:25.389479   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:38:25.389485   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:38:25.389529   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:38:25.426435   79927 cri.go:89] found id: ""
	I0806 08:38:25.426464   79927 logs.go:276] 0 containers: []
	W0806 08:38:25.426474   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:38:25.426481   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:38:25.426541   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:38:25.460095   79927 cri.go:89] found id: ""
	I0806 08:38:25.460132   79927 logs.go:276] 0 containers: []
	W0806 08:38:25.460157   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:38:25.460169   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:38:25.460192   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:38:25.511363   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:38:25.511398   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:38:25.525953   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:38:25.525980   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:38:25.598690   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:38:25.598711   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:38:25.598723   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:38:25.674996   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:38:25.675034   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:38:28.216533   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:38:28.230080   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:38:28.230171   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:38:28.269239   79927 cri.go:89] found id: ""
	I0806 08:38:28.269274   79927 logs.go:276] 0 containers: []
	W0806 08:38:28.269288   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:38:28.269296   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:38:28.269361   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:38:28.311773   79927 cri.go:89] found id: ""
	I0806 08:38:28.311809   79927 logs.go:276] 0 containers: []
	W0806 08:38:28.311820   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:38:28.311828   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:38:28.311896   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:38:28.351492   79927 cri.go:89] found id: ""
	I0806 08:38:28.351522   79927 logs.go:276] 0 containers: []
	W0806 08:38:28.351530   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:38:28.351535   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:38:28.351581   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:38:28.389039   79927 cri.go:89] found id: ""
	I0806 08:38:28.389067   79927 logs.go:276] 0 containers: []
	W0806 08:38:28.389079   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:38:28.389086   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:38:28.389175   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:38:28.432249   79927 cri.go:89] found id: ""
	I0806 08:38:28.432277   79927 logs.go:276] 0 containers: []
	W0806 08:38:28.432287   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:38:28.432296   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:38:28.432354   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:38:28.478277   79927 cri.go:89] found id: ""
	I0806 08:38:28.478303   79927 logs.go:276] 0 containers: []
	W0806 08:38:28.478313   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:38:28.478321   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:38:28.478380   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:38:28.516167   79927 cri.go:89] found id: ""
	I0806 08:38:28.516199   79927 logs.go:276] 0 containers: []
	W0806 08:38:28.516211   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:38:28.516219   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:38:28.516286   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:38:28.555493   79927 cri.go:89] found id: ""
	I0806 08:38:28.555523   79927 logs.go:276] 0 containers: []
	W0806 08:38:28.555534   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:38:28.555545   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:38:28.555560   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:38:28.627101   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:38:28.627124   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:38:28.627147   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:38:28.716655   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:38:28.716709   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:38:28.755469   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:38:28.755510   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:38:26.286925   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:28.288654   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:24.355686   79219 logs.go:123] Gathering logs for coredns [9e1c91bc3a920b6f7db4477b8219f13fa092c68ff53abe2390325a68dad253e5] ...
	I0806 08:38:24.355729   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e1c91bc3a920b6f7db4477b8219f13fa092c68ff53abe2390325a68dad253e5"
	I0806 08:38:24.396350   79219 logs.go:123] Gathering logs for kube-proxy [612fc6a8f2c1208097a6d6287c5fecb20026608a278af5d0d4efc9662e065eb9] ...
	I0806 08:38:24.396402   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 612fc6a8f2c1208097a6d6287c5fecb20026608a278af5d0d4efc9662e065eb9"
	I0806 08:38:24.435066   79219 logs.go:123] Gathering logs for storage-provisioner [367eb6487187a37acafb1fa74364a2d9ec5d9241888ed2e871fb14f2af161975] ...
	I0806 08:38:24.435098   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 367eb6487187a37acafb1fa74364a2d9ec5d9241888ed2e871fb14f2af161975"
	I0806 08:38:24.474159   79219 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:38:24.474188   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 08:38:24.601081   79219 logs.go:123] Gathering logs for kube-apiserver [f7edb6bece3347d88a263663a1d6549165dafdcb67ad723494b937766cba6da6] ...
	I0806 08:38:24.601112   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f7edb6bece3347d88a263663a1d6549165dafdcb67ad723494b937766cba6da6"
	I0806 08:38:27.170209   79219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:38:27.187573   79219 api_server.go:72] duration metric: took 4m15.285938182s to wait for apiserver process to appear ...
	I0806 08:38:27.187597   79219 api_server.go:88] waiting for apiserver healthz status ...
	I0806 08:38:27.187626   79219 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:38:27.187673   79219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:38:27.224204   79219 cri.go:89] found id: "f7edb6bece3347d88a263663a1d6549165dafdcb67ad723494b937766cba6da6"
	I0806 08:38:27.224227   79219 cri.go:89] found id: ""
	I0806 08:38:27.224237   79219 logs.go:276] 1 containers: [f7edb6bece3347d88a263663a1d6549165dafdcb67ad723494b937766cba6da6]
	I0806 08:38:27.224284   79219 ssh_runner.go:195] Run: which crictl
	I0806 08:38:27.228883   79219 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:38:27.228957   79219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:38:27.275914   79219 cri.go:89] found id: "df2efdb62a4af10f89a1b37599d4258ac3b72660353795f8c2141f3ef70c1f65"
	I0806 08:38:27.275939   79219 cri.go:89] found id: ""
	I0806 08:38:27.275948   79219 logs.go:276] 1 containers: [df2efdb62a4af10f89a1b37599d4258ac3b72660353795f8c2141f3ef70c1f65]
	I0806 08:38:27.276009   79219 ssh_runner.go:195] Run: which crictl
	I0806 08:38:27.280979   79219 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:38:27.281037   79219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:38:27.326292   79219 cri.go:89] found id: "9e1c91bc3a920b6f7db4477b8219f13fa092c68ff53abe2390325a68dad253e5"
	I0806 08:38:27.326312   79219 cri.go:89] found id: ""
	I0806 08:38:27.326319   79219 logs.go:276] 1 containers: [9e1c91bc3a920b6f7db4477b8219f13fa092c68ff53abe2390325a68dad253e5]
	I0806 08:38:27.326365   79219 ssh_runner.go:195] Run: which crictl
	I0806 08:38:27.331249   79219 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:38:27.331326   79219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:38:27.374229   79219 cri.go:89] found id: "a25c5660a3a2b7226bf308b44307a3912efa7a9a73e79ec63f83133b80a44683"
	I0806 08:38:27.374250   79219 cri.go:89] found id: ""
	I0806 08:38:27.374257   79219 logs.go:276] 1 containers: [a25c5660a3a2b7226bf308b44307a3912efa7a9a73e79ec63f83133b80a44683]
	I0806 08:38:27.374300   79219 ssh_runner.go:195] Run: which crictl
	I0806 08:38:27.378875   79219 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:38:27.378957   79219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:38:27.416159   79219 cri.go:89] found id: "612fc6a8f2c1208097a6d6287c5fecb20026608a278af5d0d4efc9662e065eb9"
	I0806 08:38:27.416183   79219 cri.go:89] found id: ""
	I0806 08:38:27.416192   79219 logs.go:276] 1 containers: [612fc6a8f2c1208097a6d6287c5fecb20026608a278af5d0d4efc9662e065eb9]
	I0806 08:38:27.416243   79219 ssh_runner.go:195] Run: which crictl
	I0806 08:38:27.420463   79219 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:38:27.420531   79219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:38:27.461673   79219 cri.go:89] found id: "d8651a7ed6615e826b950eab003841c2ec872b48160f207e77dd32a684125a23"
	I0806 08:38:27.461700   79219 cri.go:89] found id: ""
	I0806 08:38:27.461710   79219 logs.go:276] 1 containers: [d8651a7ed6615e826b950eab003841c2ec872b48160f207e77dd32a684125a23]
	I0806 08:38:27.461762   79219 ssh_runner.go:195] Run: which crictl
	I0806 08:38:27.466404   79219 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:38:27.466474   79219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:38:27.507684   79219 cri.go:89] found id: ""
	I0806 08:38:27.507708   79219 logs.go:276] 0 containers: []
	W0806 08:38:27.507716   79219 logs.go:278] No container was found matching "kindnet"
	I0806 08:38:27.507722   79219 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0806 08:38:27.507787   79219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0806 08:38:27.547812   79219 cri.go:89] found id: "367eb6487187a37acafb1fa74364a2d9ec5d9241888ed2e871fb14f2af161975"
	I0806 08:38:27.547844   79219 cri.go:89] found id: "78abe2efa92d2d6c9171ed65bd4d941693be3207554f9318a7cefb91544eeb20"
	I0806 08:38:27.547850   79219 cri.go:89] found id: ""
	I0806 08:38:27.547859   79219 logs.go:276] 2 containers: [367eb6487187a37acafb1fa74364a2d9ec5d9241888ed2e871fb14f2af161975 78abe2efa92d2d6c9171ed65bd4d941693be3207554f9318a7cefb91544eeb20]
	I0806 08:38:27.547929   79219 ssh_runner.go:195] Run: which crictl
	I0806 08:38:27.552639   79219 ssh_runner.go:195] Run: which crictl
	I0806 08:38:27.556993   79219 logs.go:123] Gathering logs for kube-proxy [612fc6a8f2c1208097a6d6287c5fecb20026608a278af5d0d4efc9662e065eb9] ...
	I0806 08:38:27.557022   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 612fc6a8f2c1208097a6d6287c5fecb20026608a278af5d0d4efc9662e065eb9"
	I0806 08:38:27.599317   79219 logs.go:123] Gathering logs for storage-provisioner [78abe2efa92d2d6c9171ed65bd4d941693be3207554f9318a7cefb91544eeb20] ...
	I0806 08:38:27.599345   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 78abe2efa92d2d6c9171ed65bd4d941693be3207554f9318a7cefb91544eeb20"
	I0806 08:38:27.643812   79219 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:38:27.643845   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:38:28.119873   79219 logs.go:123] Gathering logs for container status ...
	I0806 08:38:28.119909   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:38:28.160698   79219 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:38:28.160744   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 08:38:28.275895   79219 logs.go:123] Gathering logs for coredns [9e1c91bc3a920b6f7db4477b8219f13fa092c68ff53abe2390325a68dad253e5] ...
	I0806 08:38:28.275927   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e1c91bc3a920b6f7db4477b8219f13fa092c68ff53abe2390325a68dad253e5"
	I0806 08:38:28.324525   79219 logs.go:123] Gathering logs for kube-scheduler [a25c5660a3a2b7226bf308b44307a3912efa7a9a73e79ec63f83133b80a44683] ...
	I0806 08:38:28.324559   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a25c5660a3a2b7226bf308b44307a3912efa7a9a73e79ec63f83133b80a44683"
	I0806 08:38:28.372860   79219 logs.go:123] Gathering logs for etcd [df2efdb62a4af10f89a1b37599d4258ac3b72660353795f8c2141f3ef70c1f65] ...
	I0806 08:38:28.372891   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 df2efdb62a4af10f89a1b37599d4258ac3b72660353795f8c2141f3ef70c1f65"
	I0806 08:38:28.438909   79219 logs.go:123] Gathering logs for kube-controller-manager [d8651a7ed6615e826b950eab003841c2ec872b48160f207e77dd32a684125a23] ...
	I0806 08:38:28.438943   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8651a7ed6615e826b950eab003841c2ec872b48160f207e77dd32a684125a23"
	I0806 08:38:28.514238   79219 logs.go:123] Gathering logs for storage-provisioner [367eb6487187a37acafb1fa74364a2d9ec5d9241888ed2e871fb14f2af161975] ...
	I0806 08:38:28.514279   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 367eb6487187a37acafb1fa74364a2d9ec5d9241888ed2e871fb14f2af161975"
	I0806 08:38:28.552420   79219 logs.go:123] Gathering logs for kubelet ...
	I0806 08:38:28.552451   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:38:28.605414   79219 logs.go:123] Gathering logs for dmesg ...
	I0806 08:38:28.605452   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:38:28.621364   79219 logs.go:123] Gathering logs for kube-apiserver [f7edb6bece3347d88a263663a1d6549165dafdcb67ad723494b937766cba6da6] ...
	I0806 08:38:28.621404   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f7edb6bece3347d88a263663a1d6549165dafdcb67ad723494b937766cba6da6"
	I0806 08:38:27.469521   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:29.967734   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:28.812203   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:38:28.812237   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:38:31.327103   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:38:31.340745   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:38:31.340826   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:38:31.382470   79927 cri.go:89] found id: ""
	I0806 08:38:31.382503   79927 logs.go:276] 0 containers: []
	W0806 08:38:31.382515   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:38:31.382523   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:38:31.382593   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:38:31.423319   79927 cri.go:89] found id: ""
	I0806 08:38:31.423343   79927 logs.go:276] 0 containers: []
	W0806 08:38:31.423353   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:38:31.423359   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:38:31.423423   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:38:31.463073   79927 cri.go:89] found id: ""
	I0806 08:38:31.463099   79927 logs.go:276] 0 containers: []
	W0806 08:38:31.463108   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:38:31.463115   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:38:31.463165   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:38:31.499169   79927 cri.go:89] found id: ""
	I0806 08:38:31.499209   79927 logs.go:276] 0 containers: []
	W0806 08:38:31.499220   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:38:31.499228   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:38:31.499284   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:38:31.535030   79927 cri.go:89] found id: ""
	I0806 08:38:31.535060   79927 logs.go:276] 0 containers: []
	W0806 08:38:31.535071   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:38:31.535079   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:38:31.535145   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:38:31.575872   79927 cri.go:89] found id: ""
	I0806 08:38:31.575894   79927 logs.go:276] 0 containers: []
	W0806 08:38:31.575902   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:38:31.575907   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:38:31.575952   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:38:31.614766   79927 cri.go:89] found id: ""
	I0806 08:38:31.614787   79927 logs.go:276] 0 containers: []
	W0806 08:38:31.614797   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:38:31.614804   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:38:31.614860   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:38:31.655929   79927 cri.go:89] found id: ""
	I0806 08:38:31.655954   79927 logs.go:276] 0 containers: []
	W0806 08:38:31.655964   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:38:31.655992   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:38:31.656009   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:38:31.728563   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:38:31.728611   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:38:31.745445   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:38:31.745479   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:38:31.827965   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:38:31.827998   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:38:31.828014   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:38:31.935051   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:38:31.935093   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:38:30.788021   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:32.788768   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:31.179830   79219 api_server.go:253] Checking apiserver healthz at https://192.168.39.190:8443/healthz ...
	I0806 08:38:31.184262   79219 api_server.go:279] https://192.168.39.190:8443/healthz returned 200:
	ok
	I0806 08:38:31.185245   79219 api_server.go:141] control plane version: v1.30.3
	I0806 08:38:31.185267   79219 api_server.go:131] duration metric: took 3.997664057s to wait for apiserver health ...
	I0806 08:38:31.185274   79219 system_pods.go:43] waiting for kube-system pods to appear ...
	I0806 08:38:31.185297   79219 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:38:31.185339   79219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:38:31.222960   79219 cri.go:89] found id: "f7edb6bece3347d88a263663a1d6549165dafdcb67ad723494b937766cba6da6"
	I0806 08:38:31.222986   79219 cri.go:89] found id: ""
	I0806 08:38:31.222993   79219 logs.go:276] 1 containers: [f7edb6bece3347d88a263663a1d6549165dafdcb67ad723494b937766cba6da6]
	I0806 08:38:31.223042   79219 ssh_runner.go:195] Run: which crictl
	I0806 08:38:31.227448   79219 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:38:31.227514   79219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:38:31.265326   79219 cri.go:89] found id: "df2efdb62a4af10f89a1b37599d4258ac3b72660353795f8c2141f3ef70c1f65"
	I0806 08:38:31.265349   79219 cri.go:89] found id: ""
	I0806 08:38:31.265356   79219 logs.go:276] 1 containers: [df2efdb62a4af10f89a1b37599d4258ac3b72660353795f8c2141f3ef70c1f65]
	I0806 08:38:31.265402   79219 ssh_runner.go:195] Run: which crictl
	I0806 08:38:31.269672   79219 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:38:31.269732   79219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:38:31.310202   79219 cri.go:89] found id: "9e1c91bc3a920b6f7db4477b8219f13fa092c68ff53abe2390325a68dad253e5"
	I0806 08:38:31.310228   79219 cri.go:89] found id: ""
	I0806 08:38:31.310237   79219 logs.go:276] 1 containers: [9e1c91bc3a920b6f7db4477b8219f13fa092c68ff53abe2390325a68dad253e5]
	I0806 08:38:31.310283   79219 ssh_runner.go:195] Run: which crictl
	I0806 08:38:31.314486   79219 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:38:31.314540   79219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:38:31.352612   79219 cri.go:89] found id: "a25c5660a3a2b7226bf308b44307a3912efa7a9a73e79ec63f83133b80a44683"
	I0806 08:38:31.352637   79219 cri.go:89] found id: ""
	I0806 08:38:31.352647   79219 logs.go:276] 1 containers: [a25c5660a3a2b7226bf308b44307a3912efa7a9a73e79ec63f83133b80a44683]
	I0806 08:38:31.352707   79219 ssh_runner.go:195] Run: which crictl
	I0806 08:38:31.358230   79219 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:38:31.358300   79219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:38:31.402435   79219 cri.go:89] found id: "612fc6a8f2c1208097a6d6287c5fecb20026608a278af5d0d4efc9662e065eb9"
	I0806 08:38:31.402462   79219 cri.go:89] found id: ""
	I0806 08:38:31.402472   79219 logs.go:276] 1 containers: [612fc6a8f2c1208097a6d6287c5fecb20026608a278af5d0d4efc9662e065eb9]
	I0806 08:38:31.402530   79219 ssh_runner.go:195] Run: which crictl
	I0806 08:38:31.407723   79219 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:38:31.407793   79219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:38:31.455484   79219 cri.go:89] found id: "d8651a7ed6615e826b950eab003841c2ec872b48160f207e77dd32a684125a23"
	I0806 08:38:31.455511   79219 cri.go:89] found id: ""
	I0806 08:38:31.455520   79219 logs.go:276] 1 containers: [d8651a7ed6615e826b950eab003841c2ec872b48160f207e77dd32a684125a23]
	I0806 08:38:31.455582   79219 ssh_runner.go:195] Run: which crictl
	I0806 08:38:31.461155   79219 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:38:31.461231   79219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:38:31.501317   79219 cri.go:89] found id: ""
	I0806 08:38:31.501342   79219 logs.go:276] 0 containers: []
	W0806 08:38:31.501352   79219 logs.go:278] No container was found matching "kindnet"
	I0806 08:38:31.501359   79219 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0806 08:38:31.501418   79219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0806 08:38:31.552712   79219 cri.go:89] found id: "367eb6487187a37acafb1fa74364a2d9ec5d9241888ed2e871fb14f2af161975"
	I0806 08:38:31.552742   79219 cri.go:89] found id: "78abe2efa92d2d6c9171ed65bd4d941693be3207554f9318a7cefb91544eeb20"
	I0806 08:38:31.552748   79219 cri.go:89] found id: ""
	I0806 08:38:31.552758   79219 logs.go:276] 2 containers: [367eb6487187a37acafb1fa74364a2d9ec5d9241888ed2e871fb14f2af161975 78abe2efa92d2d6c9171ed65bd4d941693be3207554f9318a7cefb91544eeb20]
	I0806 08:38:31.552820   79219 ssh_runner.go:195] Run: which crictl
	I0806 08:38:31.561505   79219 ssh_runner.go:195] Run: which crictl
	I0806 08:38:31.566613   79219 logs.go:123] Gathering logs for storage-provisioner [367eb6487187a37acafb1fa74364a2d9ec5d9241888ed2e871fb14f2af161975] ...
	I0806 08:38:31.566640   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 367eb6487187a37acafb1fa74364a2d9ec5d9241888ed2e871fb14f2af161975"
	I0806 08:38:31.613033   79219 logs.go:123] Gathering logs for storage-provisioner [78abe2efa92d2d6c9171ed65bd4d941693be3207554f9318a7cefb91544eeb20] ...
	I0806 08:38:31.613073   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 78abe2efa92d2d6c9171ed65bd4d941693be3207554f9318a7cefb91544eeb20"
	I0806 08:38:31.655353   79219 logs.go:123] Gathering logs for kube-scheduler [a25c5660a3a2b7226bf308b44307a3912efa7a9a73e79ec63f83133b80a44683] ...
	I0806 08:38:31.655389   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a25c5660a3a2b7226bf308b44307a3912efa7a9a73e79ec63f83133b80a44683"
	I0806 08:38:31.703934   79219 logs.go:123] Gathering logs for kube-controller-manager [d8651a7ed6615e826b950eab003841c2ec872b48160f207e77dd32a684125a23] ...
	I0806 08:38:31.703960   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8651a7ed6615e826b950eab003841c2ec872b48160f207e77dd32a684125a23"
	I0806 08:38:31.768880   79219 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:38:31.768916   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 08:38:31.896045   79219 logs.go:123] Gathering logs for kube-apiserver [f7edb6bece3347d88a263663a1d6549165dafdcb67ad723494b937766cba6da6] ...
	I0806 08:38:31.896079   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f7edb6bece3347d88a263663a1d6549165dafdcb67ad723494b937766cba6da6"
	I0806 08:38:31.966646   79219 logs.go:123] Gathering logs for etcd [df2efdb62a4af10f89a1b37599d4258ac3b72660353795f8c2141f3ef70c1f65] ...
	I0806 08:38:31.966682   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 df2efdb62a4af10f89a1b37599d4258ac3b72660353795f8c2141f3ef70c1f65"
	I0806 08:38:32.029471   79219 logs.go:123] Gathering logs for coredns [9e1c91bc3a920b6f7db4477b8219f13fa092c68ff53abe2390325a68dad253e5] ...
	I0806 08:38:32.029509   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e1c91bc3a920b6f7db4477b8219f13fa092c68ff53abe2390325a68dad253e5"
	I0806 08:38:32.071406   79219 logs.go:123] Gathering logs for kube-proxy [612fc6a8f2c1208097a6d6287c5fecb20026608a278af5d0d4efc9662e065eb9] ...
	I0806 08:38:32.071435   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 612fc6a8f2c1208097a6d6287c5fecb20026608a278af5d0d4efc9662e065eb9"
	I0806 08:38:32.122333   79219 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:38:32.122368   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:38:32.478494   79219 logs.go:123] Gathering logs for kubelet ...
	I0806 08:38:32.478540   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:38:32.530649   79219 logs.go:123] Gathering logs for dmesg ...
	I0806 08:38:32.530696   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:38:32.545866   79219 logs.go:123] Gathering logs for container status ...
	I0806 08:38:32.545895   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:38:35.101370   79219 system_pods.go:59] 8 kube-system pods found
	I0806 08:38:35.101397   79219 system_pods.go:61] "coredns-7db6d8ff4d-4xxwm" [eeb47dea-32e9-458f-9ad9-2af6d4e68cc0] Running
	I0806 08:38:35.101402   79219 system_pods.go:61] "etcd-embed-certs-324808" [1e0a82cd-6704-4da3-8992-15c68a7aaf70] Running
	I0806 08:38:35.101405   79219 system_pods.go:61] "kube-apiserver-embed-certs-324808" [a33c54d9-1fc5-4243-8fe9-d535c40908c3] Running
	I0806 08:38:35.101409   79219 system_pods.go:61] "kube-controller-manager-embed-certs-324808" [b1c519b8-9c96-4c87-90f3-cc277cfe7763] Running
	I0806 08:38:35.101412   79219 system_pods.go:61] "kube-proxy-8lbzv" [0c45e6c0-9b7f-4c48-bff5-38d4a5949bec] Running
	I0806 08:38:35.101415   79219 system_pods.go:61] "kube-scheduler-embed-certs-324808" [78c2827f-341a-4441-bf7e-eff947fc3ea8] Running
	I0806 08:38:35.101421   79219 system_pods.go:61] "metrics-server-569cc877fc-8xb9k" [7a5fb326-11a7-48fa-bcde-a22026a1dccb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0806 08:38:35.101424   79219 system_pods.go:61] "storage-provisioner" [15c7d89a-e994-4cca-931c-f646157a15e8] Running
	I0806 08:38:35.101431   79219 system_pods.go:74] duration metric: took 3.916151772s to wait for pod list to return data ...
	I0806 08:38:35.101438   79219 default_sa.go:34] waiting for default service account to be created ...
	I0806 08:38:35.103518   79219 default_sa.go:45] found service account: "default"
	I0806 08:38:35.103536   79219 default_sa.go:55] duration metric: took 2.092582ms for default service account to be created ...
	I0806 08:38:35.103546   79219 system_pods.go:116] waiting for k8s-apps to be running ...
	I0806 08:38:35.108383   79219 system_pods.go:86] 8 kube-system pods found
	I0806 08:38:35.108409   79219 system_pods.go:89] "coredns-7db6d8ff4d-4xxwm" [eeb47dea-32e9-458f-9ad9-2af6d4e68cc0] Running
	I0806 08:38:35.108417   79219 system_pods.go:89] "etcd-embed-certs-324808" [1e0a82cd-6704-4da3-8992-15c68a7aaf70] Running
	I0806 08:38:35.108423   79219 system_pods.go:89] "kube-apiserver-embed-certs-324808" [a33c54d9-1fc5-4243-8fe9-d535c40908c3] Running
	I0806 08:38:35.108427   79219 system_pods.go:89] "kube-controller-manager-embed-certs-324808" [b1c519b8-9c96-4c87-90f3-cc277cfe7763] Running
	I0806 08:38:35.108431   79219 system_pods.go:89] "kube-proxy-8lbzv" [0c45e6c0-9b7f-4c48-bff5-38d4a5949bec] Running
	I0806 08:38:35.108436   79219 system_pods.go:89] "kube-scheduler-embed-certs-324808" [78c2827f-341a-4441-bf7e-eff947fc3ea8] Running
	I0806 08:38:35.108442   79219 system_pods.go:89] "metrics-server-569cc877fc-8xb9k" [7a5fb326-11a7-48fa-bcde-a22026a1dccb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0806 08:38:35.108447   79219 system_pods.go:89] "storage-provisioner" [15c7d89a-e994-4cca-931c-f646157a15e8] Running
	I0806 08:38:35.108453   79219 system_pods.go:126] duration metric: took 4.901954ms to wait for k8s-apps to be running ...
	I0806 08:38:35.108463   79219 system_svc.go:44] waiting for kubelet service to be running ....
	I0806 08:38:35.108505   79219 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0806 08:38:35.126018   79219 system_svc.go:56] duration metric: took 17.549117ms WaitForService to wait for kubelet
	I0806 08:38:35.126045   79219 kubeadm.go:582] duration metric: took 4m23.224415251s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0806 08:38:35.126065   79219 node_conditions.go:102] verifying NodePressure condition ...
	I0806 08:38:35.129214   79219 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0806 08:38:35.129231   79219 node_conditions.go:123] node cpu capacity is 2
	I0806 08:38:35.129241   79219 node_conditions.go:105] duration metric: took 3.172893ms to run NodePressure ...
	I0806 08:38:35.129252   79219 start.go:241] waiting for startup goroutines ...
	I0806 08:38:35.129259   79219 start.go:246] waiting for cluster config update ...
	I0806 08:38:35.129273   79219 start.go:255] writing updated cluster config ...
	I0806 08:38:35.129540   79219 ssh_runner.go:195] Run: rm -f paused
	I0806 08:38:35.178651   79219 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0806 08:38:35.180921   79219 out.go:177] * Done! kubectl is now configured to use "embed-certs-324808" cluster and "default" namespace by default
	I0806 08:38:31.968170   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:34.468911   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:34.478890   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:38:34.493573   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:38:34.493650   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:38:34.529054   79927 cri.go:89] found id: ""
	I0806 08:38:34.529083   79927 logs.go:276] 0 containers: []
	W0806 08:38:34.529094   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:38:34.529101   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:38:34.529171   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:38:34.570214   79927 cri.go:89] found id: ""
	I0806 08:38:34.570245   79927 logs.go:276] 0 containers: []
	W0806 08:38:34.570256   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:38:34.570264   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:38:34.570325   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:38:34.607198   79927 cri.go:89] found id: ""
	I0806 08:38:34.607232   79927 logs.go:276] 0 containers: []
	W0806 08:38:34.607243   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:38:34.607250   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:38:34.607296   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:38:34.646238   79927 cri.go:89] found id: ""
	I0806 08:38:34.646266   79927 logs.go:276] 0 containers: []
	W0806 08:38:34.646278   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:38:34.646287   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:38:34.646354   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:38:34.682137   79927 cri.go:89] found id: ""
	I0806 08:38:34.682161   79927 logs.go:276] 0 containers: []
	W0806 08:38:34.682169   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:38:34.682175   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:38:34.682227   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:38:34.721211   79927 cri.go:89] found id: ""
	I0806 08:38:34.721238   79927 logs.go:276] 0 containers: []
	W0806 08:38:34.721246   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:38:34.721252   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:38:34.721306   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:38:34.759693   79927 cri.go:89] found id: ""
	I0806 08:38:34.759728   79927 logs.go:276] 0 containers: []
	W0806 08:38:34.759737   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:38:34.759743   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:38:34.759793   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:38:34.799224   79927 cri.go:89] found id: ""
	I0806 08:38:34.799246   79927 logs.go:276] 0 containers: []
	W0806 08:38:34.799254   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:38:34.799262   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:38:34.799273   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:38:34.852682   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:38:34.852727   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:38:34.866456   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:38:34.866494   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:38:34.933029   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:38:34.933055   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:38:34.933076   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:38:35.017215   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:38:35.017253   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:38:37.567662   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:38:37.582220   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:38:37.582284   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:38:37.622112   79927 cri.go:89] found id: ""
	I0806 08:38:37.622145   79927 logs.go:276] 0 containers: []
	W0806 08:38:37.622153   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:38:37.622164   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:38:37.622228   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:38:37.657767   79927 cri.go:89] found id: ""
	I0806 08:38:37.657791   79927 logs.go:276] 0 containers: []
	W0806 08:38:37.657798   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:38:37.657804   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:38:37.657849   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:38:37.691442   79927 cri.go:89] found id: ""
	I0806 08:38:37.691471   79927 logs.go:276] 0 containers: []
	W0806 08:38:37.691480   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:38:37.691485   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:38:37.691539   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:38:37.725810   79927 cri.go:89] found id: ""
	I0806 08:38:37.725837   79927 logs.go:276] 0 containers: []
	W0806 08:38:37.725859   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:38:37.725866   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:38:37.725922   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:38:37.764708   79927 cri.go:89] found id: ""
	I0806 08:38:37.764737   79927 logs.go:276] 0 containers: []
	W0806 08:38:37.764748   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:38:37.764755   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:38:37.764818   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:38:37.802915   79927 cri.go:89] found id: ""
	I0806 08:38:37.802939   79927 logs.go:276] 0 containers: []
	W0806 08:38:37.802947   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:38:37.802953   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:38:37.803008   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:38:37.838492   79927 cri.go:89] found id: ""
	I0806 08:38:37.838525   79927 logs.go:276] 0 containers: []
	W0806 08:38:37.838536   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:38:37.838544   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:38:37.838607   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:38:37.873937   79927 cri.go:89] found id: ""
	I0806 08:38:37.873969   79927 logs.go:276] 0 containers: []
	W0806 08:38:37.873979   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:38:37.873990   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:38:37.874005   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:38:37.912169   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:38:37.912193   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:38:37.968269   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:38:37.968305   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:38:37.983414   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:38:37.983440   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:38:38.055031   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:38:38.055049   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:38:38.055062   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:38:35.289081   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:37.789220   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:36.968687   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:39.468051   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:41.468488   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:40.635449   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:38:40.649962   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:38:40.650045   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:38:40.688975   79927 cri.go:89] found id: ""
	I0806 08:38:40.689006   79927 logs.go:276] 0 containers: []
	W0806 08:38:40.689017   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:38:40.689029   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:38:40.689093   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:38:40.726740   79927 cri.go:89] found id: ""
	I0806 08:38:40.726770   79927 logs.go:276] 0 containers: []
	W0806 08:38:40.726780   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:38:40.726788   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:38:40.726886   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:38:40.767402   79927 cri.go:89] found id: ""
	I0806 08:38:40.767432   79927 logs.go:276] 0 containers: []
	W0806 08:38:40.767445   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:38:40.767453   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:38:40.767510   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:38:40.809994   79927 cri.go:89] found id: ""
	I0806 08:38:40.810023   79927 logs.go:276] 0 containers: []
	W0806 08:38:40.810033   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:38:40.810042   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:38:40.810102   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:38:40.845624   79927 cri.go:89] found id: ""
	I0806 08:38:40.845653   79927 logs.go:276] 0 containers: []
	W0806 08:38:40.845662   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:38:40.845668   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:38:40.845715   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:38:40.883830   79927 cri.go:89] found id: ""
	I0806 08:38:40.883856   79927 logs.go:276] 0 containers: []
	W0806 08:38:40.883871   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:38:40.883879   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:38:40.883937   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:38:40.919930   79927 cri.go:89] found id: ""
	I0806 08:38:40.919958   79927 logs.go:276] 0 containers: []
	W0806 08:38:40.919966   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:38:40.919971   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:38:40.920018   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:38:40.954563   79927 cri.go:89] found id: ""
	I0806 08:38:40.954594   79927 logs.go:276] 0 containers: []
	W0806 08:38:40.954605   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:38:40.954616   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:38:40.954629   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:38:41.008780   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:38:41.008813   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:38:41.022292   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:38:41.022321   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:38:41.090241   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:38:41.090273   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:38:41.090294   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:38:41.171739   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:38:41.171779   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:38:43.712410   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:38:43.726676   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:38:43.726747   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:38:40.288192   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:42.289039   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:42.789083   79614 pod_ready.go:81] duration metric: took 4m0.00771712s for pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace to be "Ready" ...
	E0806 08:38:42.789107   79614 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0806 08:38:42.789117   79614 pod_ready.go:38] duration metric: took 4m5.553596655s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0806 08:38:42.789135   79614 api_server.go:52] waiting for apiserver process to appear ...
	I0806 08:38:42.789163   79614 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:38:42.789215   79614 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:38:42.837092   79614 cri.go:89] found id: "49843be03d16addb301ffd26e517770ff29b00f049efb524a2139b8b2153325c"
	I0806 08:38:42.837119   79614 cri.go:89] found id: ""
	I0806 08:38:42.837128   79614 logs.go:276] 1 containers: [49843be03d16addb301ffd26e517770ff29b00f049efb524a2139b8b2153325c]
	I0806 08:38:42.837188   79614 ssh_runner.go:195] Run: which crictl
	I0806 08:38:42.842270   79614 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:38:42.842334   79614 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:38:42.878018   79614 cri.go:89] found id: "b5fa6fe8f1db6b59f47ea637515efb2864e0408ce1df0e6851b3bda0e240f5d3"
	I0806 08:38:42.878044   79614 cri.go:89] found id: ""
	I0806 08:38:42.878053   79614 logs.go:276] 1 containers: [b5fa6fe8f1db6b59f47ea637515efb2864e0408ce1df0e6851b3bda0e240f5d3]
	I0806 08:38:42.878115   79614 ssh_runner.go:195] Run: which crictl
	I0806 08:38:42.882437   79614 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:38:42.882509   79614 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:38:42.925635   79614 cri.go:89] found id: "59f63eeae644f7c8944cdb7b1d05d9e49fe84eb28e9ccbad2d2dd79846d7eb52"
	I0806 08:38:42.925655   79614 cri.go:89] found id: ""
	I0806 08:38:42.925662   79614 logs.go:276] 1 containers: [59f63eeae644f7c8944cdb7b1d05d9e49fe84eb28e9ccbad2d2dd79846d7eb52]
	I0806 08:38:42.925704   79614 ssh_runner.go:195] Run: which crictl
	I0806 08:38:42.930525   79614 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:38:42.930583   79614 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:38:42.972716   79614 cri.go:89] found id: "e596abcad3dc13d27548701f515beebba051e04a9e20f8ce1d018731d880ff1c"
	I0806 08:38:42.972736   79614 cri.go:89] found id: ""
	I0806 08:38:42.972744   79614 logs.go:276] 1 containers: [e596abcad3dc13d27548701f515beebba051e04a9e20f8ce1d018731d880ff1c]
	I0806 08:38:42.972791   79614 ssh_runner.go:195] Run: which crictl
	I0806 08:38:42.977840   79614 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:38:42.977894   79614 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:38:43.017316   79614 cri.go:89] found id: "75990046864b4a4db9f8c02f44de4eefbb0cf45584880b053bce4ca0b677ce4e"
	I0806 08:38:43.017339   79614 cri.go:89] found id: ""
	I0806 08:38:43.017346   79614 logs.go:276] 1 containers: [75990046864b4a4db9f8c02f44de4eefbb0cf45584880b053bce4ca0b677ce4e]
	I0806 08:38:43.017396   79614 ssh_runner.go:195] Run: which crictl
	I0806 08:38:43.022341   79614 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:38:43.022416   79614 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:38:43.067113   79614 cri.go:89] found id: "05bbffa4c989e42bd020ebe6b943459ec1d7b31a6c5e5ba9a37056da15542658"
	I0806 08:38:43.067145   79614 cri.go:89] found id: ""
	I0806 08:38:43.067155   79614 logs.go:276] 1 containers: [05bbffa4c989e42bd020ebe6b943459ec1d7b31a6c5e5ba9a37056da15542658]
	I0806 08:38:43.067211   79614 ssh_runner.go:195] Run: which crictl
	I0806 08:38:43.071674   79614 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:38:43.071748   79614 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:38:43.114123   79614 cri.go:89] found id: ""
	I0806 08:38:43.114146   79614 logs.go:276] 0 containers: []
	W0806 08:38:43.114154   79614 logs.go:278] No container was found matching "kindnet"
	I0806 08:38:43.114159   79614 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0806 08:38:43.114211   79614 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0806 08:38:43.155927   79614 cri.go:89] found id: "e668e4a5fa59e279bc227b6110983cc4660b639d5b1e1291ae60d361f682d82e"
	I0806 08:38:43.155955   79614 cri.go:89] found id: "abd677ac30faba3fef9e9d562601ad0012d7f1b48852e08b8f2684efda6b0f2e"
	I0806 08:38:43.155961   79614 cri.go:89] found id: ""
	I0806 08:38:43.155969   79614 logs.go:276] 2 containers: [e668e4a5fa59e279bc227b6110983cc4660b639d5b1e1291ae60d361f682d82e abd677ac30faba3fef9e9d562601ad0012d7f1b48852e08b8f2684efda6b0f2e]
	I0806 08:38:43.156017   79614 ssh_runner.go:195] Run: which crictl
	I0806 08:38:43.165735   79614 ssh_runner.go:195] Run: which crictl
	I0806 08:38:43.171834   79614 logs.go:123] Gathering logs for kubelet ...
	I0806 08:38:43.171861   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:38:43.232588   79614 logs.go:123] Gathering logs for dmesg ...
	I0806 08:38:43.232628   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:38:43.248033   79614 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:38:43.248080   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 08:38:43.383729   79614 logs.go:123] Gathering logs for etcd [b5fa6fe8f1db6b59f47ea637515efb2864e0408ce1df0e6851b3bda0e240f5d3] ...
	I0806 08:38:43.383760   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b5fa6fe8f1db6b59f47ea637515efb2864e0408ce1df0e6851b3bda0e240f5d3"
	I0806 08:38:43.428082   79614 logs.go:123] Gathering logs for coredns [59f63eeae644f7c8944cdb7b1d05d9e49fe84eb28e9ccbad2d2dd79846d7eb52] ...
	I0806 08:38:43.428116   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 59f63eeae644f7c8944cdb7b1d05d9e49fe84eb28e9ccbad2d2dd79846d7eb52"
	I0806 08:38:43.467054   79614 logs.go:123] Gathering logs for storage-provisioner [e668e4a5fa59e279bc227b6110983cc4660b639d5b1e1291ae60d361f682d82e] ...
	I0806 08:38:43.467081   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e668e4a5fa59e279bc227b6110983cc4660b639d5b1e1291ae60d361f682d82e"
	I0806 08:38:43.502900   79614 logs.go:123] Gathering logs for storage-provisioner [abd677ac30faba3fef9e9d562601ad0012d7f1b48852e08b8f2684efda6b0f2e] ...
	I0806 08:38:43.502928   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 abd677ac30faba3fef9e9d562601ad0012d7f1b48852e08b8f2684efda6b0f2e"
	I0806 08:38:43.539338   79614 logs.go:123] Gathering logs for kube-apiserver [49843be03d16addb301ffd26e517770ff29b00f049efb524a2139b8b2153325c] ...
	I0806 08:38:43.539369   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 49843be03d16addb301ffd26e517770ff29b00f049efb524a2139b8b2153325c"
	I0806 08:38:43.585813   79614 logs.go:123] Gathering logs for kube-scheduler [e596abcad3dc13d27548701f515beebba051e04a9e20f8ce1d018731d880ff1c] ...
	I0806 08:38:43.585855   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e596abcad3dc13d27548701f515beebba051e04a9e20f8ce1d018731d880ff1c"
	I0806 08:38:43.627141   79614 logs.go:123] Gathering logs for kube-proxy [75990046864b4a4db9f8c02f44de4eefbb0cf45584880b053bce4ca0b677ce4e] ...
	I0806 08:38:43.627173   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 75990046864b4a4db9f8c02f44de4eefbb0cf45584880b053bce4ca0b677ce4e"
	I0806 08:38:43.663754   79614 logs.go:123] Gathering logs for kube-controller-manager [05bbffa4c989e42bd020ebe6b943459ec1d7b31a6c5e5ba9a37056da15542658] ...
	I0806 08:38:43.663783   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 05bbffa4c989e42bd020ebe6b943459ec1d7b31a6c5e5ba9a37056da15542658"
	I0806 08:38:43.717594   79614 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:38:43.717631   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:38:43.469134   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:45.967796   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:43.765343   79927 cri.go:89] found id: ""
	I0806 08:38:43.765379   79927 logs.go:276] 0 containers: []
	W0806 08:38:43.765390   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:38:43.765398   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:38:43.765457   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:38:43.803813   79927 cri.go:89] found id: ""
	I0806 08:38:43.803846   79927 logs.go:276] 0 containers: []
	W0806 08:38:43.803859   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:38:43.803868   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:38:43.803933   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:38:43.841307   79927 cri.go:89] found id: ""
	I0806 08:38:43.841339   79927 logs.go:276] 0 containers: []
	W0806 08:38:43.841349   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:38:43.841356   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:38:43.841418   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:38:43.876488   79927 cri.go:89] found id: ""
	I0806 08:38:43.876521   79927 logs.go:276] 0 containers: []
	W0806 08:38:43.876534   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:38:43.876544   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:38:43.876606   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:38:43.910972   79927 cri.go:89] found id: ""
	I0806 08:38:43.911000   79927 logs.go:276] 0 containers: []
	W0806 08:38:43.911010   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:38:43.911017   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:38:43.911077   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:38:43.945541   79927 cri.go:89] found id: ""
	I0806 08:38:43.945573   79927 logs.go:276] 0 containers: []
	W0806 08:38:43.945584   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:38:43.945592   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:38:43.945655   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:38:43.981366   79927 cri.go:89] found id: ""
	I0806 08:38:43.981399   79927 logs.go:276] 0 containers: []
	W0806 08:38:43.981408   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:38:43.981413   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:38:43.981463   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:38:44.015349   79927 cri.go:89] found id: ""
	I0806 08:38:44.015375   79927 logs.go:276] 0 containers: []
	W0806 08:38:44.015384   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:38:44.015394   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:38:44.015408   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:38:44.068089   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:38:44.068129   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:38:44.082496   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:38:44.082524   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:38:44.157765   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:38:44.157788   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:38:44.157805   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:38:44.242842   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:38:44.242885   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:38:46.789280   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:38:46.803490   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:38:46.803562   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:38:46.844164   79927 cri.go:89] found id: ""
	I0806 08:38:46.844192   79927 logs.go:276] 0 containers: []
	W0806 08:38:46.844200   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:38:46.844206   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:38:46.844247   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:38:46.880968   79927 cri.go:89] found id: ""
	I0806 08:38:46.881003   79927 logs.go:276] 0 containers: []
	W0806 08:38:46.881015   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:38:46.881022   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:38:46.881082   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:38:46.919291   79927 cri.go:89] found id: ""
	I0806 08:38:46.919311   79927 logs.go:276] 0 containers: []
	W0806 08:38:46.919319   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:38:46.919324   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:38:46.919369   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:38:46.960591   79927 cri.go:89] found id: ""
	I0806 08:38:46.960616   79927 logs.go:276] 0 containers: []
	W0806 08:38:46.960626   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:38:46.960632   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:38:46.960681   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:38:46.995913   79927 cri.go:89] found id: ""
	I0806 08:38:46.995943   79927 logs.go:276] 0 containers: []
	W0806 08:38:46.995954   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:38:46.995961   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:38:46.996045   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:38:47.035185   79927 cri.go:89] found id: ""
	I0806 08:38:47.035213   79927 logs.go:276] 0 containers: []
	W0806 08:38:47.035225   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:38:47.035236   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:38:47.035292   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:38:47.079605   79927 cri.go:89] found id: ""
	I0806 08:38:47.079634   79927 logs.go:276] 0 containers: []
	W0806 08:38:47.079646   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:38:47.079653   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:38:47.079701   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:38:47.118819   79927 cri.go:89] found id: ""
	I0806 08:38:47.118849   79927 logs.go:276] 0 containers: []
	W0806 08:38:47.118861   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:38:47.118873   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:38:47.118888   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:38:47.181357   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:38:47.181394   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:38:47.197676   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:38:47.197707   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:38:47.270304   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:38:47.270331   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:38:47.270348   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:38:47.354145   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:38:47.354176   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:38:44.291978   79614 logs.go:123] Gathering logs for container status ...
	I0806 08:38:44.292026   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:38:46.843357   79614 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:38:46.860793   79614 api_server.go:72] duration metric: took 4m14.882891742s to wait for apiserver process to appear ...
	I0806 08:38:46.860820   79614 api_server.go:88] waiting for apiserver healthz status ...
	I0806 08:38:46.860858   79614 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:38:46.860923   79614 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:38:46.909044   79614 cri.go:89] found id: "49843be03d16addb301ffd26e517770ff29b00f049efb524a2139b8b2153325c"
	I0806 08:38:46.909068   79614 cri.go:89] found id: ""
	I0806 08:38:46.909076   79614 logs.go:276] 1 containers: [49843be03d16addb301ffd26e517770ff29b00f049efb524a2139b8b2153325c]
	I0806 08:38:46.909134   79614 ssh_runner.go:195] Run: which crictl
	I0806 08:38:46.913443   79614 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:38:46.913522   79614 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:38:46.955246   79614 cri.go:89] found id: "b5fa6fe8f1db6b59f47ea637515efb2864e0408ce1df0e6851b3bda0e240f5d3"
	I0806 08:38:46.955273   79614 cri.go:89] found id: ""
	I0806 08:38:46.955281   79614 logs.go:276] 1 containers: [b5fa6fe8f1db6b59f47ea637515efb2864e0408ce1df0e6851b3bda0e240f5d3]
	I0806 08:38:46.955347   79614 ssh_runner.go:195] Run: which crictl
	I0806 08:38:46.959914   79614 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:38:46.959994   79614 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:38:47.001639   79614 cri.go:89] found id: "59f63eeae644f7c8944cdb7b1d05d9e49fe84eb28e9ccbad2d2dd79846d7eb52"
	I0806 08:38:47.001665   79614 cri.go:89] found id: ""
	I0806 08:38:47.001673   79614 logs.go:276] 1 containers: [59f63eeae644f7c8944cdb7b1d05d9e49fe84eb28e9ccbad2d2dd79846d7eb52]
	I0806 08:38:47.001717   79614 ssh_runner.go:195] Run: which crictl
	I0806 08:38:47.006260   79614 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:38:47.006344   79614 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:38:47.049226   79614 cri.go:89] found id: "e596abcad3dc13d27548701f515beebba051e04a9e20f8ce1d018731d880ff1c"
	I0806 08:38:47.049252   79614 cri.go:89] found id: ""
	I0806 08:38:47.049261   79614 logs.go:276] 1 containers: [e596abcad3dc13d27548701f515beebba051e04a9e20f8ce1d018731d880ff1c]
	I0806 08:38:47.049317   79614 ssh_runner.go:195] Run: which crictl
	I0806 08:38:47.053544   79614 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:38:47.053626   79614 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:38:47.095805   79614 cri.go:89] found id: "75990046864b4a4db9f8c02f44de4eefbb0cf45584880b053bce4ca0b677ce4e"
	I0806 08:38:47.095832   79614 cri.go:89] found id: ""
	I0806 08:38:47.095841   79614 logs.go:276] 1 containers: [75990046864b4a4db9f8c02f44de4eefbb0cf45584880b053bce4ca0b677ce4e]
	I0806 08:38:47.095895   79614 ssh_runner.go:195] Run: which crictl
	I0806 08:38:47.100272   79614 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:38:47.100329   79614 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:38:47.146108   79614 cri.go:89] found id: "05bbffa4c989e42bd020ebe6b943459ec1d7b31a6c5e5ba9a37056da15542658"
	I0806 08:38:47.146132   79614 cri.go:89] found id: ""
	I0806 08:38:47.146142   79614 logs.go:276] 1 containers: [05bbffa4c989e42bd020ebe6b943459ec1d7b31a6c5e5ba9a37056da15542658]
	I0806 08:38:47.146200   79614 ssh_runner.go:195] Run: which crictl
	I0806 08:38:47.150808   79614 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:38:47.150863   79614 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:38:47.194122   79614 cri.go:89] found id: ""
	I0806 08:38:47.194157   79614 logs.go:276] 0 containers: []
	W0806 08:38:47.194169   79614 logs.go:278] No container was found matching "kindnet"
	I0806 08:38:47.194177   79614 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0806 08:38:47.194241   79614 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0806 08:38:47.237872   79614 cri.go:89] found id: "e668e4a5fa59e279bc227b6110983cc4660b639d5b1e1291ae60d361f682d82e"
	I0806 08:38:47.237906   79614 cri.go:89] found id: "abd677ac30faba3fef9e9d562601ad0012d7f1b48852e08b8f2684efda6b0f2e"
	I0806 08:38:47.237915   79614 cri.go:89] found id: ""
	I0806 08:38:47.237925   79614 logs.go:276] 2 containers: [e668e4a5fa59e279bc227b6110983cc4660b639d5b1e1291ae60d361f682d82e abd677ac30faba3fef9e9d562601ad0012d7f1b48852e08b8f2684efda6b0f2e]
	I0806 08:38:47.237988   79614 ssh_runner.go:195] Run: which crictl
	I0806 08:38:47.242528   79614 ssh_runner.go:195] Run: which crictl
	I0806 08:38:47.247317   79614 logs.go:123] Gathering logs for kube-proxy [75990046864b4a4db9f8c02f44de4eefbb0cf45584880b053bce4ca0b677ce4e] ...
	I0806 08:38:47.247341   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 75990046864b4a4db9f8c02f44de4eefbb0cf45584880b053bce4ca0b677ce4e"
	I0806 08:38:47.289612   79614 logs.go:123] Gathering logs for kube-controller-manager [05bbffa4c989e42bd020ebe6b943459ec1d7b31a6c5e5ba9a37056da15542658] ...
	I0806 08:38:47.289644   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 05bbffa4c989e42bd020ebe6b943459ec1d7b31a6c5e5ba9a37056da15542658"
	I0806 08:38:47.345619   79614 logs.go:123] Gathering logs for storage-provisioner [e668e4a5fa59e279bc227b6110983cc4660b639d5b1e1291ae60d361f682d82e] ...
	I0806 08:38:47.345653   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e668e4a5fa59e279bc227b6110983cc4660b639d5b1e1291ae60d361f682d82e"
	I0806 08:38:47.401281   79614 logs.go:123] Gathering logs for storage-provisioner [abd677ac30faba3fef9e9d562601ad0012d7f1b48852e08b8f2684efda6b0f2e] ...
	I0806 08:38:47.401306   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 abd677ac30faba3fef9e9d562601ad0012d7f1b48852e08b8f2684efda6b0f2e"
	I0806 08:38:47.438335   79614 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:38:47.438364   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 08:38:47.537334   79614 logs.go:123] Gathering logs for etcd [b5fa6fe8f1db6b59f47ea637515efb2864e0408ce1df0e6851b3bda0e240f5d3] ...
	I0806 08:38:47.537360   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b5fa6fe8f1db6b59f47ea637515efb2864e0408ce1df0e6851b3bda0e240f5d3"
	I0806 08:38:47.592101   79614 logs.go:123] Gathering logs for coredns [59f63eeae644f7c8944cdb7b1d05d9e49fe84eb28e9ccbad2d2dd79846d7eb52] ...
	I0806 08:38:47.592132   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 59f63eeae644f7c8944cdb7b1d05d9e49fe84eb28e9ccbad2d2dd79846d7eb52"
	I0806 08:38:47.626714   79614 logs.go:123] Gathering logs for kube-scheduler [e596abcad3dc13d27548701f515beebba051e04a9e20f8ce1d018731d880ff1c] ...
	I0806 08:38:47.626738   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e596abcad3dc13d27548701f515beebba051e04a9e20f8ce1d018731d880ff1c"
	I0806 08:38:47.663232   79614 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:38:47.663262   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:38:48.133308   79614 logs.go:123] Gathering logs for kubelet ...
	I0806 08:38:48.133344   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:38:48.189669   79614 logs.go:123] Gathering logs for dmesg ...
	I0806 08:38:48.189709   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:38:48.208530   79614 logs.go:123] Gathering logs for kube-apiserver [49843be03d16addb301ffd26e517770ff29b00f049efb524a2139b8b2153325c] ...
	I0806 08:38:48.208557   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 49843be03d16addb301ffd26e517770ff29b00f049efb524a2139b8b2153325c"
	I0806 08:38:48.264499   79614 logs.go:123] Gathering logs for container status ...
	I0806 08:38:48.264535   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:38:49.901481   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:38:49.914862   79927 kubeadm.go:597] duration metric: took 4m3.80396481s to restartPrimaryControlPlane
	W0806 08:38:49.914926   79927 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0806 08:38:49.914952   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0806 08:38:50.394829   79927 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0806 08:38:50.409425   79927 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0806 08:38:50.419349   79927 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0806 08:38:50.429688   79927 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0806 08:38:50.429711   79927 kubeadm.go:157] found existing configuration files:
	
	I0806 08:38:50.429756   79927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0806 08:38:50.440055   79927 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0806 08:38:50.440126   79927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0806 08:38:50.450734   79927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0806 08:38:50.461860   79927 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0806 08:38:50.461943   79927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0806 08:38:50.473324   79927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0806 08:38:50.483381   79927 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0806 08:38:50.483440   79927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0806 08:38:50.493247   79927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0806 08:38:50.502800   79927 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0806 08:38:50.502861   79927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0806 08:38:50.512897   79927 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0806 08:38:50.585658   79927 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0806 08:38:50.585795   79927 kubeadm.go:310] [preflight] Running pre-flight checks
	I0806 08:38:50.740936   79927 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0806 08:38:50.741079   79927 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0806 08:38:50.741200   79927 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0806 08:38:50.938086   79927 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0806 08:38:48.468228   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:50.968702   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:50.940205   79927 out.go:204]   - Generating certificates and keys ...
	I0806 08:38:50.940315   79927 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0806 08:38:50.940428   79927 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0806 08:38:50.940543   79927 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0806 08:38:50.940621   79927 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0806 08:38:50.940723   79927 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0806 08:38:50.940800   79927 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0806 08:38:50.940881   79927 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0806 08:38:50.941150   79927 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0806 08:38:50.941954   79927 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0806 08:38:50.942765   79927 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0806 08:38:50.942910   79927 kubeadm.go:310] [certs] Using the existing "sa" key
	I0806 08:38:50.942993   79927 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0806 08:38:51.039558   79927 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0806 08:38:51.265243   79927 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0806 08:38:51.348501   79927 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0806 08:38:51.509558   79927 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0806 08:38:51.530725   79927 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0806 08:38:51.534108   79927 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0806 08:38:51.534297   79927 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0806 08:38:51.680149   79927 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0806 08:38:51.682195   79927 out.go:204]   - Booting up control plane ...
	I0806 08:38:51.682331   79927 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0806 08:38:51.693736   79927 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0806 08:38:51.695315   79927 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0806 08:38:51.697107   79927 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0806 08:38:51.700906   79927 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0806 08:38:50.822580   79614 api_server.go:253] Checking apiserver healthz at https://192.168.50.235:8444/healthz ...
	I0806 08:38:50.827121   79614 api_server.go:279] https://192.168.50.235:8444/healthz returned 200:
	ok
	I0806 08:38:50.828327   79614 api_server.go:141] control plane version: v1.30.3
	I0806 08:38:50.828355   79614 api_server.go:131] duration metric: took 3.967524972s to wait for apiserver health ...
	I0806 08:38:50.828381   79614 system_pods.go:43] waiting for kube-system pods to appear ...
	I0806 08:38:50.828406   79614 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:38:50.828460   79614 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:38:50.875518   79614 cri.go:89] found id: "49843be03d16addb301ffd26e517770ff29b00f049efb524a2139b8b2153325c"
	I0806 08:38:50.875545   79614 cri.go:89] found id: ""
	I0806 08:38:50.875555   79614 logs.go:276] 1 containers: [49843be03d16addb301ffd26e517770ff29b00f049efb524a2139b8b2153325c]
	I0806 08:38:50.875611   79614 ssh_runner.go:195] Run: which crictl
	I0806 08:38:50.881260   79614 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:38:50.881321   79614 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:38:50.923252   79614 cri.go:89] found id: "b5fa6fe8f1db6b59f47ea637515efb2864e0408ce1df0e6851b3bda0e240f5d3"
	I0806 08:38:50.923281   79614 cri.go:89] found id: ""
	I0806 08:38:50.923292   79614 logs.go:276] 1 containers: [b5fa6fe8f1db6b59f47ea637515efb2864e0408ce1df0e6851b3bda0e240f5d3]
	I0806 08:38:50.923350   79614 ssh_runner.go:195] Run: which crictl
	I0806 08:38:50.928115   79614 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:38:50.928176   79614 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:38:50.971427   79614 cri.go:89] found id: "59f63eeae644f7c8944cdb7b1d05d9e49fe84eb28e9ccbad2d2dd79846d7eb52"
	I0806 08:38:50.971453   79614 cri.go:89] found id: ""
	I0806 08:38:50.971463   79614 logs.go:276] 1 containers: [59f63eeae644f7c8944cdb7b1d05d9e49fe84eb28e9ccbad2d2dd79846d7eb52]
	I0806 08:38:50.971520   79614 ssh_runner.go:195] Run: which crictl
	I0806 08:38:50.975665   79614 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:38:50.975789   79614 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:38:51.014601   79614 cri.go:89] found id: "e596abcad3dc13d27548701f515beebba051e04a9e20f8ce1d018731d880ff1c"
	I0806 08:38:51.014633   79614 cri.go:89] found id: ""
	I0806 08:38:51.014644   79614 logs.go:276] 1 containers: [e596abcad3dc13d27548701f515beebba051e04a9e20f8ce1d018731d880ff1c]
	I0806 08:38:51.014722   79614 ssh_runner.go:195] Run: which crictl
	I0806 08:38:51.019197   79614 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:38:51.019266   79614 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:38:51.057063   79614 cri.go:89] found id: "75990046864b4a4db9f8c02f44de4eefbb0cf45584880b053bce4ca0b677ce4e"
	I0806 08:38:51.057089   79614 cri.go:89] found id: ""
	I0806 08:38:51.057098   79614 logs.go:276] 1 containers: [75990046864b4a4db9f8c02f44de4eefbb0cf45584880b053bce4ca0b677ce4e]
	I0806 08:38:51.057159   79614 ssh_runner.go:195] Run: which crictl
	I0806 08:38:51.061504   79614 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:38:51.061571   79614 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:38:51.106496   79614 cri.go:89] found id: "05bbffa4c989e42bd020ebe6b943459ec1d7b31a6c5e5ba9a37056da15542658"
	I0806 08:38:51.106523   79614 cri.go:89] found id: ""
	I0806 08:38:51.106532   79614 logs.go:276] 1 containers: [05bbffa4c989e42bd020ebe6b943459ec1d7b31a6c5e5ba9a37056da15542658]
	I0806 08:38:51.106590   79614 ssh_runner.go:195] Run: which crictl
	I0806 08:38:51.111040   79614 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:38:51.111109   79614 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:38:51.156042   79614 cri.go:89] found id: ""
	I0806 08:38:51.156072   79614 logs.go:276] 0 containers: []
	W0806 08:38:51.156083   79614 logs.go:278] No container was found matching "kindnet"
	I0806 08:38:51.156090   79614 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0806 08:38:51.156152   79614 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0806 08:38:51.208712   79614 cri.go:89] found id: "e668e4a5fa59e279bc227b6110983cc4660b639d5b1e1291ae60d361f682d82e"
	I0806 08:38:51.208733   79614 cri.go:89] found id: "abd677ac30faba3fef9e9d562601ad0012d7f1b48852e08b8f2684efda6b0f2e"
	I0806 08:38:51.208737   79614 cri.go:89] found id: ""
	I0806 08:38:51.208744   79614 logs.go:276] 2 containers: [e668e4a5fa59e279bc227b6110983cc4660b639d5b1e1291ae60d361f682d82e abd677ac30faba3fef9e9d562601ad0012d7f1b48852e08b8f2684efda6b0f2e]
	I0806 08:38:51.208796   79614 ssh_runner.go:195] Run: which crictl
	I0806 08:38:51.213508   79614 ssh_runner.go:195] Run: which crictl
	I0806 08:38:51.219082   79614 logs.go:123] Gathering logs for etcd [b5fa6fe8f1db6b59f47ea637515efb2864e0408ce1df0e6851b3bda0e240f5d3] ...
	I0806 08:38:51.219104   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b5fa6fe8f1db6b59f47ea637515efb2864e0408ce1df0e6851b3bda0e240f5d3"
	I0806 08:38:51.266352   79614 logs.go:123] Gathering logs for storage-provisioner [e668e4a5fa59e279bc227b6110983cc4660b639d5b1e1291ae60d361f682d82e] ...
	I0806 08:38:51.266380   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e668e4a5fa59e279bc227b6110983cc4660b639d5b1e1291ae60d361f682d82e"
	I0806 08:38:51.315128   79614 logs.go:123] Gathering logs for storage-provisioner [abd677ac30faba3fef9e9d562601ad0012d7f1b48852e08b8f2684efda6b0f2e] ...
	I0806 08:38:51.315168   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 abd677ac30faba3fef9e9d562601ad0012d7f1b48852e08b8f2684efda6b0f2e"
	I0806 08:38:51.357872   79614 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:38:51.357914   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:38:51.768288   79614 logs.go:123] Gathering logs for container status ...
	I0806 08:38:51.768328   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:38:51.825304   79614 logs.go:123] Gathering logs for kubelet ...
	I0806 08:38:51.825341   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:38:51.888532   79614 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:38:51.888567   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 08:38:51.998271   79614 logs.go:123] Gathering logs for coredns [59f63eeae644f7c8944cdb7b1d05d9e49fe84eb28e9ccbad2d2dd79846d7eb52] ...
	I0806 08:38:51.998307   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 59f63eeae644f7c8944cdb7b1d05d9e49fe84eb28e9ccbad2d2dd79846d7eb52"
	I0806 08:38:52.034549   79614 logs.go:123] Gathering logs for kube-scheduler [e596abcad3dc13d27548701f515beebba051e04a9e20f8ce1d018731d880ff1c] ...
	I0806 08:38:52.034574   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e596abcad3dc13d27548701f515beebba051e04a9e20f8ce1d018731d880ff1c"
	I0806 08:38:52.079289   79614 logs.go:123] Gathering logs for kube-proxy [75990046864b4a4db9f8c02f44de4eefbb0cf45584880b053bce4ca0b677ce4e] ...
	I0806 08:38:52.079317   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 75990046864b4a4db9f8c02f44de4eefbb0cf45584880b053bce4ca0b677ce4e"
	I0806 08:38:52.124134   79614 logs.go:123] Gathering logs for kube-controller-manager [05bbffa4c989e42bd020ebe6b943459ec1d7b31a6c5e5ba9a37056da15542658] ...
	I0806 08:38:52.124163   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 05bbffa4c989e42bd020ebe6b943459ec1d7b31a6c5e5ba9a37056da15542658"
	I0806 08:38:52.191877   79614 logs.go:123] Gathering logs for dmesg ...
	I0806 08:38:52.191914   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:38:52.210712   79614 logs.go:123] Gathering logs for kube-apiserver [49843be03d16addb301ffd26e517770ff29b00f049efb524a2139b8b2153325c] ...
	I0806 08:38:52.210740   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 49843be03d16addb301ffd26e517770ff29b00f049efb524a2139b8b2153325c"
	I0806 08:38:54.770330   79614 system_pods.go:59] 8 kube-system pods found
	I0806 08:38:54.770356   79614 system_pods.go:61] "coredns-7db6d8ff4d-gx8tt" [48f83ba8-a238-4baf-94af-88370fefcb02] Running
	I0806 08:38:54.770361   79614 system_pods.go:61] "etcd-default-k8s-diff-port-990751" [bf342ba4-1e11-40bf-80fd-02b402043ecf] Running
	I0806 08:38:54.770365   79614 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-990751" [11ee42c4-c9e9-41cf-ace2-deac0f30153a] Running
	I0806 08:38:54.770369   79614 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-990751" [f17fca30-0afc-4ed8-a8cc-cb64e69723f3] Running
	I0806 08:38:54.770373   79614 system_pods.go:61] "kube-proxy-lpf88" [866c10f0-2c44-4c03-b460-ff2194bd1ec6] Running
	I0806 08:38:54.770377   79614 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-990751" [2e591ea7-07bd-464f-bf0e-bc24bfcca016] Running
	I0806 08:38:54.770383   79614 system_pods.go:61] "metrics-server-569cc877fc-hnxd4" [f369ac70-49b7-448a-9852-89232fb66da9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0806 08:38:54.770388   79614 system_pods.go:61] "storage-provisioner" [0faff3fc-d34b-418e-972b-95a2e36b577a] Running
	I0806 08:38:54.770395   79614 system_pods.go:74] duration metric: took 3.942006529s to wait for pod list to return data ...
	I0806 08:38:54.770401   79614 default_sa.go:34] waiting for default service account to be created ...
	I0806 08:38:54.773935   79614 default_sa.go:45] found service account: "default"
	I0806 08:38:54.773956   79614 default_sa.go:55] duration metric: took 3.549307ms for default service account to be created ...
	I0806 08:38:54.773963   79614 system_pods.go:116] waiting for k8s-apps to be running ...
	I0806 08:38:54.779748   79614 system_pods.go:86] 8 kube-system pods found
	I0806 08:38:54.779773   79614 system_pods.go:89] "coredns-7db6d8ff4d-gx8tt" [48f83ba8-a238-4baf-94af-88370fefcb02] Running
	I0806 08:38:54.779780   79614 system_pods.go:89] "etcd-default-k8s-diff-port-990751" [bf342ba4-1e11-40bf-80fd-02b402043ecf] Running
	I0806 08:38:54.779787   79614 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-990751" [11ee42c4-c9e9-41cf-ace2-deac0f30153a] Running
	I0806 08:38:54.779794   79614 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-990751" [f17fca30-0afc-4ed8-a8cc-cb64e69723f3] Running
	I0806 08:38:54.779801   79614 system_pods.go:89] "kube-proxy-lpf88" [866c10f0-2c44-4c03-b460-ff2194bd1ec6] Running
	I0806 08:38:54.779807   79614 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-990751" [2e591ea7-07bd-464f-bf0e-bc24bfcca016] Running
	I0806 08:38:54.779818   79614 system_pods.go:89] "metrics-server-569cc877fc-hnxd4" [f369ac70-49b7-448a-9852-89232fb66da9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0806 08:38:54.779825   79614 system_pods.go:89] "storage-provisioner" [0faff3fc-d34b-418e-972b-95a2e36b577a] Running
	I0806 08:38:54.779834   79614 system_pods.go:126] duration metric: took 5.864748ms to wait for k8s-apps to be running ...
	I0806 08:38:54.779842   79614 system_svc.go:44] waiting for kubelet service to be running ....
	I0806 08:38:54.779898   79614 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0806 08:38:54.796325   79614 system_svc.go:56] duration metric: took 16.472895ms WaitForService to wait for kubelet
	I0806 08:38:54.796353   79614 kubeadm.go:582] duration metric: took 4m22.818456744s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0806 08:38:54.796397   79614 node_conditions.go:102] verifying NodePressure condition ...
	I0806 08:38:54.800536   79614 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0806 08:38:54.800564   79614 node_conditions.go:123] node cpu capacity is 2
	I0806 08:38:54.800577   79614 node_conditions.go:105] duration metric: took 4.173499ms to run NodePressure ...
	I0806 08:38:54.800591   79614 start.go:241] waiting for startup goroutines ...
	I0806 08:38:54.800600   79614 start.go:246] waiting for cluster config update ...
	I0806 08:38:54.800613   79614 start.go:255] writing updated cluster config ...
	I0806 08:38:54.800936   79614 ssh_runner.go:195] Run: rm -f paused
	I0806 08:38:54.850703   79614 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0806 08:38:54.852771   79614 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-990751" cluster and "default" namespace by default
	I0806 08:38:53.469241   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:55.968335   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:58.469143   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:39:00.470785   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:39:02.967983   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:39:05.468164   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:39:07.469643   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:39:09.971774   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:39:12.469590   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:39:14.968935   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:39:17.468202   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:39:19.968322   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:39:21.968455   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:39:24.467291   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:39:26.475005   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:39:28.968164   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:39:31.467876   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:39:31.701723   79927 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0806 08:39:31.702058   79927 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0806 08:39:31.702242   79927 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0806 08:39:33.468293   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:39:34.961921   78922 pod_ready.go:81] duration metric: took 4m0.000413299s for pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace to be "Ready" ...
	E0806 08:39:34.961944   78922 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace to be "Ready" (will not retry!)
	I0806 08:39:34.961967   78922 pod_ready.go:38] duration metric: took 4m12.510868918s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0806 08:39:34.962010   78922 kubeadm.go:597] duration metric: took 4m19.878821661s to restartPrimaryControlPlane
	W0806 08:39:34.962072   78922 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0806 08:39:34.962105   78922 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0806 08:39:36.702773   79927 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0806 08:39:36.703067   79927 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0806 08:39:46.703330   79927 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0806 08:39:46.703594   79927 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0806 08:40:01.383913   78922 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.421778806s)
	I0806 08:40:01.383994   78922 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0806 08:40:01.409726   78922 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0806 08:40:01.432284   78922 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0806 08:40:01.445162   78922 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0806 08:40:01.445187   78922 kubeadm.go:157] found existing configuration files:
	
	I0806 08:40:01.445242   78922 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0806 08:40:01.466287   78922 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0806 08:40:01.466344   78922 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0806 08:40:01.481355   78922 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0806 08:40:01.499933   78922 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0806 08:40:01.500003   78922 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0806 08:40:01.518094   78922 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0806 08:40:01.527345   78922 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0806 08:40:01.527408   78922 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0806 08:40:01.539328   78922 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0806 08:40:01.549752   78922 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0806 08:40:01.549863   78922 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0806 08:40:01.561309   78922 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0806 08:40:01.609660   78922 kubeadm.go:310] W0806 08:40:01.587147    2983 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0806 08:40:01.610476   78922 kubeadm.go:310] W0806 08:40:01.588016    2983 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0806 08:40:01.749335   78922 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0806 08:40:06.704889   79927 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0806 08:40:06.705136   79927 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0806 08:40:10.616781   78922 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0-rc.0
	I0806 08:40:10.616868   78922 kubeadm.go:310] [preflight] Running pre-flight checks
	I0806 08:40:10.616966   78922 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0806 08:40:10.617110   78922 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0806 08:40:10.617234   78922 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0806 08:40:10.617332   78922 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0806 08:40:10.618729   78922 out.go:204]   - Generating certificates and keys ...
	I0806 08:40:10.618805   78922 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0806 08:40:10.618886   78922 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0806 08:40:10.618961   78922 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0806 08:40:10.619025   78922 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0806 08:40:10.619127   78922 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0806 08:40:10.619200   78922 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0806 08:40:10.619289   78922 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0806 08:40:10.619373   78922 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0806 08:40:10.619485   78922 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0806 08:40:10.619596   78922 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0806 08:40:10.619644   78922 kubeadm.go:310] [certs] Using the existing "sa" key
	I0806 08:40:10.619721   78922 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0806 08:40:10.619787   78922 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0806 08:40:10.619871   78922 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0806 08:40:10.619916   78922 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0806 08:40:10.619968   78922 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0806 08:40:10.620028   78922 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0806 08:40:10.620097   78922 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0806 08:40:10.620153   78922 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0806 08:40:10.621538   78922 out.go:204]   - Booting up control plane ...
	I0806 08:40:10.621632   78922 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0806 08:40:10.621739   78922 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0806 08:40:10.621802   78922 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0806 08:40:10.621898   78922 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0806 08:40:10.621979   78922 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0806 08:40:10.622011   78922 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0806 08:40:10.622117   78922 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0806 08:40:10.622225   78922 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0806 08:40:10.622291   78922 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.639662ms
	I0806 08:40:10.622351   78922 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0806 08:40:10.622399   78922 kubeadm.go:310] [api-check] The API server is healthy after 5.503393689s
	I0806 08:40:10.622505   78922 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0806 08:40:10.622639   78922 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0806 08:40:10.622692   78922 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0806 08:40:10.622843   78922 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-703340 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0806 08:40:10.622895   78922 kubeadm.go:310] [bootstrap-token] Using token: 5r8726.myu0qo9tvx4sxjd3
	I0806 08:40:10.624254   78922 out.go:204]   - Configuring RBAC rules ...
	I0806 08:40:10.624381   78922 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0806 08:40:10.624463   78922 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0806 08:40:10.624613   78922 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0806 08:40:10.624813   78922 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0806 08:40:10.624990   78922 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0806 08:40:10.625114   78922 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0806 08:40:10.625253   78922 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0806 08:40:10.625312   78922 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0806 08:40:10.625378   78922 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0806 08:40:10.625387   78922 kubeadm.go:310] 
	I0806 08:40:10.625464   78922 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0806 08:40:10.625472   78922 kubeadm.go:310] 
	I0806 08:40:10.625580   78922 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0806 08:40:10.625595   78922 kubeadm.go:310] 
	I0806 08:40:10.625615   78922 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0806 08:40:10.625668   78922 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0806 08:40:10.625708   78922 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0806 08:40:10.625712   78922 kubeadm.go:310] 
	I0806 08:40:10.625822   78922 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0806 08:40:10.625846   78922 kubeadm.go:310] 
	I0806 08:40:10.625891   78922 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0806 08:40:10.625898   78922 kubeadm.go:310] 
	I0806 08:40:10.625940   78922 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0806 08:40:10.626007   78922 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0806 08:40:10.626068   78922 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0806 08:40:10.626074   78922 kubeadm.go:310] 
	I0806 08:40:10.626173   78922 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0806 08:40:10.626272   78922 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0806 08:40:10.626282   78922 kubeadm.go:310] 
	I0806 08:40:10.626381   78922 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 5r8726.myu0qo9tvx4sxjd3 \
	I0806 08:40:10.626499   78922 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d36e5c1dfe989c7d561c809d7c06e0fc13926ac8f6d664cc5d6865ea5f79931b \
	I0806 08:40:10.626529   78922 kubeadm.go:310] 	--control-plane 
	I0806 08:40:10.626538   78922 kubeadm.go:310] 
	I0806 08:40:10.626639   78922 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0806 08:40:10.626647   78922 kubeadm.go:310] 
	I0806 08:40:10.626745   78922 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 5r8726.myu0qo9tvx4sxjd3 \
	I0806 08:40:10.626893   78922 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d36e5c1dfe989c7d561c809d7c06e0fc13926ac8f6d664cc5d6865ea5f79931b 
	I0806 08:40:10.626920   78922 cni.go:84] Creating CNI manager for ""
	I0806 08:40:10.626929   78922 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0806 08:40:10.628682   78922 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0806 08:40:10.630096   78922 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0806 08:40:10.642492   78922 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0806 08:40:10.667673   78922 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0806 08:40:10.667763   78922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 08:40:10.667833   78922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-703340 minikube.k8s.io/updated_at=2024_08_06T08_40_10_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=e92cb06692f5ea1ba801d10d148e5e92e807f9c8 minikube.k8s.io/name=no-preload-703340 minikube.k8s.io/primary=true
	I0806 08:40:10.705177   78922 ops.go:34] apiserver oom_adj: -16
	I0806 08:40:10.901976   78922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 08:40:11.401990   78922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 08:40:11.902906   78922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 08:40:12.402461   78922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 08:40:12.902909   78922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 08:40:13.402018   78922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 08:40:13.902098   78922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 08:40:14.402869   78922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 08:40:14.902034   78922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 08:40:14.999571   78922 kubeadm.go:1113] duration metric: took 4.331874752s to wait for elevateKubeSystemPrivileges
	I0806 08:40:14.999615   78922 kubeadm.go:394] duration metric: took 4m59.975766448s to StartCluster
	I0806 08:40:14.999636   78922 settings.go:142] acquiring lock: {Name:mkb0bfbcae3b4205ef43487a9de84cfd8afd2e5e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 08:40:14.999722   78922 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19370-13057/kubeconfig
	I0806 08:40:15.001429   78922 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-13057/kubeconfig: {Name:mk5c066914acf8e36556b29792f75b98076ca34a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 08:40:15.001696   78922 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.126 Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0806 08:40:15.001753   78922 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0806 08:40:15.001864   78922 addons.go:69] Setting storage-provisioner=true in profile "no-preload-703340"
	I0806 08:40:15.001909   78922 addons.go:234] Setting addon storage-provisioner=true in "no-preload-703340"
	W0806 08:40:15.001921   78922 addons.go:243] addon storage-provisioner should already be in state true
	I0806 08:40:15.001919   78922 addons.go:69] Setting default-storageclass=true in profile "no-preload-703340"
	I0806 08:40:15.001945   78922 addons.go:69] Setting metrics-server=true in profile "no-preload-703340"
	I0806 08:40:15.001968   78922 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-703340"
	I0806 08:40:15.001984   78922 addons.go:234] Setting addon metrics-server=true in "no-preload-703340"
	W0806 08:40:15.001997   78922 addons.go:243] addon metrics-server should already be in state true
	I0806 08:40:15.001958   78922 host.go:66] Checking if "no-preload-703340" exists ...
	I0806 08:40:15.002030   78922 host.go:66] Checking if "no-preload-703340" exists ...
	I0806 08:40:15.001918   78922 config.go:182] Loaded profile config "no-preload-703340": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-rc.0
	I0806 08:40:15.002284   78922 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:40:15.002295   78922 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:40:15.002310   78922 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:40:15.002319   78922 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:40:15.002354   78922 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:40:15.002312   78922 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:40:15.003644   78922 out.go:177] * Verifying Kubernetes components...
	I0806 08:40:15.005157   78922 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 08:40:15.018061   78922 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35339
	I0806 08:40:15.018548   78922 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:40:15.018866   78922 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44375
	I0806 08:40:15.019201   78922 main.go:141] libmachine: Using API Version  1
	I0806 08:40:15.019226   78922 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:40:15.019270   78922 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:40:15.019582   78922 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:40:15.019743   78922 main.go:141] libmachine: Using API Version  1
	I0806 08:40:15.019748   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetState
	I0806 08:40:15.019765   78922 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:40:15.020089   78922 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:40:15.020617   78922 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:40:15.020664   78922 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:40:15.020815   78922 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32773
	I0806 08:40:15.021303   78922 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:40:15.021775   78922 main.go:141] libmachine: Using API Version  1
	I0806 08:40:15.021796   78922 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:40:15.022125   78922 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:40:15.022549   78922 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:40:15.022577   78922 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:40:15.023653   78922 addons.go:234] Setting addon default-storageclass=true in "no-preload-703340"
	W0806 08:40:15.023678   78922 addons.go:243] addon default-storageclass should already be in state true
	I0806 08:40:15.023708   78922 host.go:66] Checking if "no-preload-703340" exists ...
	I0806 08:40:15.024066   78922 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:40:15.024120   78922 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:40:15.036138   78922 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44447
	I0806 08:40:15.036586   78922 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:40:15.037132   78922 main.go:141] libmachine: Using API Version  1
	I0806 08:40:15.037153   78922 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:40:15.037609   78922 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:40:15.037921   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetState
	I0806 08:40:15.038640   78922 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42341
	I0806 08:40:15.038777   78922 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34793
	I0806 08:40:15.039003   78922 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:40:15.039098   78922 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:40:15.039473   78922 main.go:141] libmachine: Using API Version  1
	I0806 08:40:15.039489   78922 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:40:15.039607   78922 main.go:141] libmachine: Using API Version  1
	I0806 08:40:15.039623   78922 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:40:15.039820   78922 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:40:15.040041   78922 main.go:141] libmachine: (no-preload-703340) Calling .DriverName
	I0806 08:40:15.040181   78922 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:40:15.040414   78922 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:40:15.040456   78922 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:40:15.040477   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetState
	I0806 08:40:15.042137   78922 main.go:141] libmachine: (no-preload-703340) Calling .DriverName
	I0806 08:40:15.042223   78922 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0806 08:40:15.043522   78922 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0806 08:40:15.043585   78922 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0806 08:40:15.043601   78922 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0806 08:40:15.043619   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHHostname
	I0806 08:40:15.044812   78922 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0806 08:40:15.044829   78922 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0806 08:40:15.044854   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHHostname
	I0806 08:40:15.047376   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:40:15.047978   78922 main.go:141] libmachine: (no-preload-703340) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fa:d3", ip: ""} in network mk-no-preload-703340: {Iface:virbr4 ExpiryTime:2024-08-06 09:34:49 +0000 UTC Type:0 Mac:52:54:00:a1:fa:d3 Iaid: IPaddr:192.168.72.126 Prefix:24 Hostname:no-preload-703340 Clientid:01:52:54:00:a1:fa:d3}
	I0806 08:40:15.048001   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined IP address 192.168.72.126 and MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:40:15.048228   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:40:15.048269   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHPort
	I0806 08:40:15.048485   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHKeyPath
	I0806 08:40:15.048617   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHUsername
	I0806 08:40:15.048667   78922 main.go:141] libmachine: (no-preload-703340) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fa:d3", ip: ""} in network mk-no-preload-703340: {Iface:virbr4 ExpiryTime:2024-08-06 09:34:49 +0000 UTC Type:0 Mac:52:54:00:a1:fa:d3 Iaid: IPaddr:192.168.72.126 Prefix:24 Hostname:no-preload-703340 Clientid:01:52:54:00:a1:fa:d3}
	I0806 08:40:15.048688   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined IP address 192.168.72.126 and MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:40:15.048800   78922 sshutil.go:53] new ssh client: &{IP:192.168.72.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/no-preload-703340/id_rsa Username:docker}
	I0806 08:40:15.049107   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHPort
	I0806 08:40:15.049267   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHKeyPath
	I0806 08:40:15.049403   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHUsername
	I0806 08:40:15.049518   78922 sshutil.go:53] new ssh client: &{IP:192.168.72.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/no-preload-703340/id_rsa Username:docker}
	I0806 08:40:15.059440   78922 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33377
	I0806 08:40:15.059910   78922 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:40:15.060423   78922 main.go:141] libmachine: Using API Version  1
	I0806 08:40:15.060441   78922 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:40:15.060741   78922 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:40:15.060922   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetState
	I0806 08:40:15.062303   78922 main.go:141] libmachine: (no-preload-703340) Calling .DriverName
	I0806 08:40:15.062491   78922 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0806 08:40:15.062503   78922 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0806 08:40:15.062516   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHHostname
	I0806 08:40:15.065588   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:40:15.066008   78922 main.go:141] libmachine: (no-preload-703340) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fa:d3", ip: ""} in network mk-no-preload-703340: {Iface:virbr4 ExpiryTime:2024-08-06 09:34:49 +0000 UTC Type:0 Mac:52:54:00:a1:fa:d3 Iaid: IPaddr:192.168.72.126 Prefix:24 Hostname:no-preload-703340 Clientid:01:52:54:00:a1:fa:d3}
	I0806 08:40:15.066024   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined IP address 192.168.72.126 and MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:40:15.066147   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHPort
	I0806 08:40:15.066302   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHKeyPath
	I0806 08:40:15.066407   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHUsername
	I0806 08:40:15.066500   78922 sshutil.go:53] new ssh client: &{IP:192.168.72.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/no-preload-703340/id_rsa Username:docker}
	I0806 08:40:15.209763   78922 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0806 08:40:15.231626   78922 node_ready.go:35] waiting up to 6m0s for node "no-preload-703340" to be "Ready" ...
	I0806 08:40:15.239924   78922 node_ready.go:49] node "no-preload-703340" has status "Ready":"True"
	I0806 08:40:15.239952   78922 node_ready.go:38] duration metric: took 8.292674ms for node "no-preload-703340" to be "Ready" ...
	I0806 08:40:15.239964   78922 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0806 08:40:15.247043   78922 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6f6b679f8f-9dt4q" in "kube-system" namespace to be "Ready" ...
	I0806 08:40:15.320295   78922 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0806 08:40:15.343811   78922 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0806 08:40:15.343838   78922 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0806 08:40:15.367466   78922 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0806 08:40:15.392170   78922 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0806 08:40:15.392200   78922 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0806 08:40:15.440711   78922 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0806 08:40:15.440741   78922 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0806 08:40:15.483982   78922 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0806 08:40:16.105714   78922 main.go:141] libmachine: Making call to close driver server
	I0806 08:40:16.105739   78922 main.go:141] libmachine: (no-preload-703340) Calling .Close
	I0806 08:40:16.105768   78922 main.go:141] libmachine: Making call to close driver server
	I0806 08:40:16.105789   78922 main.go:141] libmachine: (no-preload-703340) Calling .Close
	I0806 08:40:16.106051   78922 main.go:141] libmachine: (no-preload-703340) DBG | Closing plugin on server side
	I0806 08:40:16.106086   78922 main.go:141] libmachine: (no-preload-703340) DBG | Closing plugin on server side
	I0806 08:40:16.106097   78922 main.go:141] libmachine: Successfully made call to close driver server
	I0806 08:40:16.106113   78922 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 08:40:16.106133   78922 main.go:141] libmachine: Making call to close driver server
	I0806 08:40:16.106144   78922 main.go:141] libmachine: (no-preload-703340) Calling .Close
	I0806 08:40:16.106176   78922 main.go:141] libmachine: Successfully made call to close driver server
	I0806 08:40:16.106254   78922 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 08:40:16.106280   78922 main.go:141] libmachine: Making call to close driver server
	I0806 08:40:16.106287   78922 main.go:141] libmachine: (no-preload-703340) Calling .Close
	I0806 08:40:16.107761   78922 main.go:141] libmachine: Successfully made call to close driver server
	I0806 08:40:16.107779   78922 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 08:40:16.107767   78922 main.go:141] libmachine: (no-preload-703340) DBG | Closing plugin on server side
	I0806 08:40:16.107805   78922 main.go:141] libmachine: Successfully made call to close driver server
	I0806 08:40:16.107820   78922 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 08:40:16.107874   78922 main.go:141] libmachine: (no-preload-703340) DBG | Closing plugin on server side
	I0806 08:40:16.133295   78922 main.go:141] libmachine: Making call to close driver server
	I0806 08:40:16.133318   78922 main.go:141] libmachine: (no-preload-703340) Calling .Close
	I0806 08:40:16.133642   78922 main.go:141] libmachine: Successfully made call to close driver server
	I0806 08:40:16.133660   78922 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 08:40:16.133680   78922 main.go:141] libmachine: (no-preload-703340) DBG | Closing plugin on server side
	I0806 08:40:16.628763   78922 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.144732374s)
	I0806 08:40:16.628864   78922 main.go:141] libmachine: Making call to close driver server
	I0806 08:40:16.628891   78922 main.go:141] libmachine: (no-preload-703340) Calling .Close
	I0806 08:40:16.629348   78922 main.go:141] libmachine: Successfully made call to close driver server
	I0806 08:40:16.629375   78922 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 08:40:16.629385   78922 main.go:141] libmachine: Making call to close driver server
	I0806 08:40:16.629396   78922 main.go:141] libmachine: (no-preload-703340) Calling .Close
	I0806 08:40:16.629406   78922 main.go:141] libmachine: (no-preload-703340) DBG | Closing plugin on server side
	I0806 08:40:16.629673   78922 main.go:141] libmachine: Successfully made call to close driver server
	I0806 08:40:16.629690   78922 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 08:40:16.629702   78922 addons.go:475] Verifying addon metrics-server=true in "no-preload-703340"
	I0806 08:40:16.631683   78922 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0806 08:40:16.632723   78922 addons.go:510] duration metric: took 1.630970284s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0806 08:40:17.264829   78922 pod_ready.go:102] pod "coredns-6f6b679f8f-9dt4q" in "kube-system" namespace has status "Ready":"False"
	I0806 08:40:19.754652   78922 pod_ready.go:102] pod "coredns-6f6b679f8f-9dt4q" in "kube-system" namespace has status "Ready":"False"
	I0806 08:40:21.753116   78922 pod_ready.go:92] pod "coredns-6f6b679f8f-9dt4q" in "kube-system" namespace has status "Ready":"True"
	I0806 08:40:21.753139   78922 pod_ready.go:81] duration metric: took 6.506065964s for pod "coredns-6f6b679f8f-9dt4q" in "kube-system" namespace to be "Ready" ...
	I0806 08:40:21.753149   78922 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6f6b679f8f-mpfwj" in "kube-system" namespace to be "Ready" ...
	I0806 08:40:21.757521   78922 pod_ready.go:92] pod "coredns-6f6b679f8f-mpfwj" in "kube-system" namespace has status "Ready":"True"
	I0806 08:40:21.757542   78922 pod_ready.go:81] duration metric: took 4.3867ms for pod "coredns-6f6b679f8f-mpfwj" in "kube-system" namespace to be "Ready" ...
	I0806 08:40:21.757553   78922 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-703340" in "kube-system" namespace to be "Ready" ...
	I0806 08:40:21.762161   78922 pod_ready.go:92] pod "etcd-no-preload-703340" in "kube-system" namespace has status "Ready":"True"
	I0806 08:40:21.762179   78922 pod_ready.go:81] duration metric: took 4.619377ms for pod "etcd-no-preload-703340" in "kube-system" namespace to be "Ready" ...
	I0806 08:40:21.762188   78922 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-703340" in "kube-system" namespace to be "Ready" ...
	I0806 08:40:21.766394   78922 pod_ready.go:92] pod "kube-apiserver-no-preload-703340" in "kube-system" namespace has status "Ready":"True"
	I0806 08:40:21.766411   78922 pod_ready.go:81] duration metric: took 4.216869ms for pod "kube-apiserver-no-preload-703340" in "kube-system" namespace to be "Ready" ...
	I0806 08:40:21.766421   78922 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-703340" in "kube-system" namespace to be "Ready" ...
	I0806 08:40:21.770569   78922 pod_ready.go:92] pod "kube-controller-manager-no-preload-703340" in "kube-system" namespace has status "Ready":"True"
	I0806 08:40:21.770592   78922 pod_ready.go:81] duration metric: took 4.160997ms for pod "kube-controller-manager-no-preload-703340" in "kube-system" namespace to be "Ready" ...
	I0806 08:40:21.770600   78922 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zsv9x" in "kube-system" namespace to be "Ready" ...
	I0806 08:40:22.151795   78922 pod_ready.go:92] pod "kube-proxy-zsv9x" in "kube-system" namespace has status "Ready":"True"
	I0806 08:40:22.151819   78922 pod_ready.go:81] duration metric: took 381.212951ms for pod "kube-proxy-zsv9x" in "kube-system" namespace to be "Ready" ...
	I0806 08:40:22.151829   78922 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-703340" in "kube-system" namespace to be "Ready" ...
	I0806 08:40:22.550798   78922 pod_ready.go:92] pod "kube-scheduler-no-preload-703340" in "kube-system" namespace has status "Ready":"True"
	I0806 08:40:22.550821   78922 pod_ready.go:81] duration metric: took 398.985533ms for pod "kube-scheduler-no-preload-703340" in "kube-system" namespace to be "Ready" ...
	I0806 08:40:22.550830   78922 pod_ready.go:38] duration metric: took 7.310852309s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0806 08:40:22.550842   78922 api_server.go:52] waiting for apiserver process to appear ...
	I0806 08:40:22.550888   78922 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:40:22.567410   78922 api_server.go:72] duration metric: took 7.565683505s to wait for apiserver process to appear ...
	I0806 08:40:22.567441   78922 api_server.go:88] waiting for apiserver healthz status ...
	I0806 08:40:22.567464   78922 api_server.go:253] Checking apiserver healthz at https://192.168.72.126:8443/healthz ...
	I0806 08:40:22.572514   78922 api_server.go:279] https://192.168.72.126:8443/healthz returned 200:
	ok
	I0806 08:40:22.573572   78922 api_server.go:141] control plane version: v1.31.0-rc.0
	I0806 08:40:22.573596   78922 api_server.go:131] duration metric: took 6.147703ms to wait for apiserver health ...
	I0806 08:40:22.573606   78922 system_pods.go:43] waiting for kube-system pods to appear ...
	I0806 08:40:22.754910   78922 system_pods.go:59] 9 kube-system pods found
	I0806 08:40:22.754937   78922 system_pods.go:61] "coredns-6f6b679f8f-9dt4q" [a62f9c28-2e06-4072-b459-3a879cba65d6] Running
	I0806 08:40:22.754942   78922 system_pods.go:61] "coredns-6f6b679f8f-mpfwj" [9da40ee8-73c2-4479-a2aa-32a2dce7e21d] Running
	I0806 08:40:22.754945   78922 system_pods.go:61] "etcd-no-preload-703340" [2b89a18a-69f6-429b-8616-b84bf9e666b4] Running
	I0806 08:40:22.754950   78922 system_pods.go:61] "kube-apiserver-no-preload-703340" [8b863a60-03ad-4d18-855a-12d29dc63612] Running
	I0806 08:40:22.754953   78922 system_pods.go:61] "kube-controller-manager-no-preload-703340" [8490ec19-da74-4c39-9a69-17918506ea10] Running
	I0806 08:40:22.754956   78922 system_pods.go:61] "kube-proxy-zsv9x" [2859a7f2-f880-4be4-806e-163eb9caf535] Running
	I0806 08:40:22.754959   78922 system_pods.go:61] "kube-scheduler-no-preload-703340" [e003ccd3-48e7-4e4e-9f05-166003935575] Running
	I0806 08:40:22.754964   78922 system_pods.go:61] "metrics-server-6867b74b74-tqnjm" [4ab2bdd5-bd70-445a-84ac-6a45bdaf06a7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0806 08:40:22.754972   78922 system_pods.go:61] "storage-provisioner" [5497ddbc-8ed5-4b42-80e8-81aa7c1b8fa7] Running
	I0806 08:40:22.754980   78922 system_pods.go:74] duration metric: took 181.367063ms to wait for pod list to return data ...
	I0806 08:40:22.754990   78922 default_sa.go:34] waiting for default service account to be created ...
	I0806 08:40:22.951215   78922 default_sa.go:45] found service account: "default"
	I0806 08:40:22.951242   78922 default_sa.go:55] duration metric: took 196.244443ms for default service account to be created ...
	I0806 08:40:22.951250   78922 system_pods.go:116] waiting for k8s-apps to be running ...
	I0806 08:40:23.154822   78922 system_pods.go:86] 9 kube-system pods found
	I0806 08:40:23.154852   78922 system_pods.go:89] "coredns-6f6b679f8f-9dt4q" [a62f9c28-2e06-4072-b459-3a879cba65d6] Running
	I0806 08:40:23.154858   78922 system_pods.go:89] "coredns-6f6b679f8f-mpfwj" [9da40ee8-73c2-4479-a2aa-32a2dce7e21d] Running
	I0806 08:40:23.154862   78922 system_pods.go:89] "etcd-no-preload-703340" [2b89a18a-69f6-429b-8616-b84bf9e666b4] Running
	I0806 08:40:23.154865   78922 system_pods.go:89] "kube-apiserver-no-preload-703340" [8b863a60-03ad-4d18-855a-12d29dc63612] Running
	I0806 08:40:23.154870   78922 system_pods.go:89] "kube-controller-manager-no-preload-703340" [8490ec19-da74-4c39-9a69-17918506ea10] Running
	I0806 08:40:23.154875   78922 system_pods.go:89] "kube-proxy-zsv9x" [2859a7f2-f880-4be4-806e-163eb9caf535] Running
	I0806 08:40:23.154879   78922 system_pods.go:89] "kube-scheduler-no-preload-703340" [e003ccd3-48e7-4e4e-9f05-166003935575] Running
	I0806 08:40:23.154886   78922 system_pods.go:89] "metrics-server-6867b74b74-tqnjm" [4ab2bdd5-bd70-445a-84ac-6a45bdaf06a7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0806 08:40:23.154890   78922 system_pods.go:89] "storage-provisioner" [5497ddbc-8ed5-4b42-80e8-81aa7c1b8fa7] Running
	I0806 08:40:23.154898   78922 system_pods.go:126] duration metric: took 203.642884ms to wait for k8s-apps to be running ...
	I0806 08:40:23.154905   78922 system_svc.go:44] waiting for kubelet service to be running ....
	I0806 08:40:23.154948   78922 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0806 08:40:23.170577   78922 system_svc.go:56] duration metric: took 15.661926ms WaitForService to wait for kubelet
	I0806 08:40:23.170609   78922 kubeadm.go:582] duration metric: took 8.168885934s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0806 08:40:23.170631   78922 node_conditions.go:102] verifying NodePressure condition ...
	I0806 08:40:23.352727   78922 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0806 08:40:23.352753   78922 node_conditions.go:123] node cpu capacity is 2
	I0806 08:40:23.352763   78922 node_conditions.go:105] duration metric: took 182.12736ms to run NodePressure ...
	I0806 08:40:23.352774   78922 start.go:241] waiting for startup goroutines ...
	I0806 08:40:23.352780   78922 start.go:246] waiting for cluster config update ...
	I0806 08:40:23.352790   78922 start.go:255] writing updated cluster config ...
	I0806 08:40:23.353072   78922 ssh_runner.go:195] Run: rm -f paused
	I0806 08:40:23.401609   78922 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-rc.0 (minor skew: 1)
	I0806 08:40:23.403639   78922 out.go:177] * Done! kubectl is now configured to use "no-preload-703340" cluster and "default" namespace by default
	I0806 08:40:46.706828   79927 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0806 08:40:46.707130   79927 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0806 08:40:46.707156   79927 kubeadm.go:310] 
	I0806 08:40:46.707218   79927 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0806 08:40:46.707278   79927 kubeadm.go:310] 		timed out waiting for the condition
	I0806 08:40:46.707304   79927 kubeadm.go:310] 
	I0806 08:40:46.707381   79927 kubeadm.go:310] 	This error is likely caused by:
	I0806 08:40:46.707439   79927 kubeadm.go:310] 		- The kubelet is not running
	I0806 08:40:46.707569   79927 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0806 08:40:46.707580   79927 kubeadm.go:310] 
	I0806 08:40:46.707705   79927 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0806 08:40:46.707756   79927 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0806 08:40:46.707805   79927 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0806 08:40:46.707814   79927 kubeadm.go:310] 
	I0806 08:40:46.707974   79927 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0806 08:40:46.708118   79927 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0806 08:40:46.708130   79927 kubeadm.go:310] 
	I0806 08:40:46.708297   79927 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0806 08:40:46.708448   79927 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0806 08:40:46.708577   79927 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0806 08:40:46.708673   79927 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0806 08:40:46.708684   79927 kubeadm.go:310] 
	I0806 08:40:46.709441   79927 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0806 08:40:46.709593   79927 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0806 08:40:46.709791   79927 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0806 08:40:46.709862   79927 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0806 08:40:46.709928   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0806 08:40:47.169443   79927 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0806 08:40:47.187784   79927 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0806 08:40:47.199239   79927 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0806 08:40:47.199267   79927 kubeadm.go:157] found existing configuration files:
	
	I0806 08:40:47.199325   79927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0806 08:40:47.209294   79927 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0806 08:40:47.209361   79927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0806 08:40:47.219483   79927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0806 08:40:47.229674   79927 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0806 08:40:47.229732   79927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0806 08:40:47.240729   79927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0806 08:40:47.250964   79927 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0806 08:40:47.251018   79927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0806 08:40:47.261322   79927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0806 08:40:47.271411   79927 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0806 08:40:47.271502   79927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0806 08:40:47.281986   79927 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0806 08:40:47.514409   79927 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0806 08:42:43.760446   79927 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0806 08:42:43.760586   79927 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0806 08:42:43.762233   79927 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0806 08:42:43.762301   79927 kubeadm.go:310] [preflight] Running pre-flight checks
	I0806 08:42:43.762391   79927 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0806 08:42:43.762504   79927 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0806 08:42:43.762602   79927 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0806 08:42:43.762653   79927 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0806 08:42:43.764435   79927 out.go:204]   - Generating certificates and keys ...
	I0806 08:42:43.764523   79927 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0806 08:42:43.764587   79927 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0806 08:42:43.764684   79927 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0806 08:42:43.764761   79927 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0806 08:42:43.764833   79927 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0806 08:42:43.764905   79927 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0806 08:42:43.765002   79927 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0806 08:42:43.765100   79927 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0806 08:42:43.765202   79927 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0806 08:42:43.765338   79927 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0806 08:42:43.765400   79927 kubeadm.go:310] [certs] Using the existing "sa" key
	I0806 08:42:43.765516   79927 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0806 08:42:43.765601   79927 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0806 08:42:43.765680   79927 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0806 08:42:43.765774   79927 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0806 08:42:43.765854   79927 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0806 08:42:43.766002   79927 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0806 08:42:43.766133   79927 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0806 08:42:43.766194   79927 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0806 08:42:43.766273   79927 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0806 08:42:43.767861   79927 out.go:204]   - Booting up control plane ...
	I0806 08:42:43.767957   79927 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0806 08:42:43.768051   79927 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0806 08:42:43.768136   79927 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0806 08:42:43.768232   79927 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0806 08:42:43.768487   79927 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0806 08:42:43.768559   79927 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0806 08:42:43.768635   79927 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0806 08:42:43.768862   79927 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0806 08:42:43.768945   79927 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0806 08:42:43.769124   79927 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0806 08:42:43.769195   79927 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0806 08:42:43.769379   79927 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0806 08:42:43.769463   79927 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0806 08:42:43.769721   79927 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0806 08:42:43.769823   79927 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0806 08:42:43.770089   79927 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0806 08:42:43.770102   79927 kubeadm.go:310] 
	I0806 08:42:43.770161   79927 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0806 08:42:43.770223   79927 kubeadm.go:310] 		timed out waiting for the condition
	I0806 08:42:43.770234   79927 kubeadm.go:310] 
	I0806 08:42:43.770274   79927 kubeadm.go:310] 	This error is likely caused by:
	I0806 08:42:43.770303   79927 kubeadm.go:310] 		- The kubelet is not running
	I0806 08:42:43.770389   79927 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0806 08:42:43.770399   79927 kubeadm.go:310] 
	I0806 08:42:43.770487   79927 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0806 08:42:43.770519   79927 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0806 08:42:43.770547   79927 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0806 08:42:43.770554   79927 kubeadm.go:310] 
	I0806 08:42:43.770657   79927 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0806 08:42:43.770756   79927 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0806 08:42:43.770769   79927 kubeadm.go:310] 
	I0806 08:42:43.770932   79927 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0806 08:42:43.771079   79927 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0806 08:42:43.771178   79927 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0806 08:42:43.771266   79927 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0806 08:42:43.771315   79927 kubeadm.go:310] 
	I0806 08:42:43.771336   79927 kubeadm.go:394] duration metric: took 7m57.719144545s to StartCluster
	I0806 08:42:43.771381   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:42:43.771451   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:42:43.817981   79927 cri.go:89] found id: ""
	I0806 08:42:43.818010   79927 logs.go:276] 0 containers: []
	W0806 08:42:43.818021   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:42:43.818028   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:42:43.818087   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:42:43.855481   79927 cri.go:89] found id: ""
	I0806 08:42:43.855506   79927 logs.go:276] 0 containers: []
	W0806 08:42:43.855517   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:42:43.855524   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:42:43.855584   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:42:43.893110   79927 cri.go:89] found id: ""
	I0806 08:42:43.893141   79927 logs.go:276] 0 containers: []
	W0806 08:42:43.893152   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:42:43.893172   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:42:43.893255   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:42:43.929785   79927 cri.go:89] found id: ""
	I0806 08:42:43.929844   79927 logs.go:276] 0 containers: []
	W0806 08:42:43.929853   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:42:43.929858   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:42:43.929921   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:42:43.970501   79927 cri.go:89] found id: ""
	I0806 08:42:43.970533   79927 logs.go:276] 0 containers: []
	W0806 08:42:43.970541   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:42:43.970546   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:42:43.970590   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:42:44.007344   79927 cri.go:89] found id: ""
	I0806 08:42:44.007373   79927 logs.go:276] 0 containers: []
	W0806 08:42:44.007382   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:42:44.007388   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:42:44.007434   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:42:44.043724   79927 cri.go:89] found id: ""
	I0806 08:42:44.043747   79927 logs.go:276] 0 containers: []
	W0806 08:42:44.043755   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:42:44.043761   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:42:44.043810   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:42:44.084642   79927 cri.go:89] found id: ""
	I0806 08:42:44.084682   79927 logs.go:276] 0 containers: []
	W0806 08:42:44.084694   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:42:44.084709   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:42:44.084724   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:42:44.157081   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:42:44.157125   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:42:44.175703   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:42:44.175746   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:42:44.275665   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:42:44.275699   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:42:44.275717   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:42:44.385031   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:42:44.385076   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0806 08:42:44.425573   79927 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0806 08:42:44.425616   79927 out.go:239] * 
	W0806 08:42:44.425683   79927 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0806 08:42:44.425713   79927 out.go:239] * 
	W0806 08:42:44.426633   79927 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0806 08:42:44.429961   79927 out.go:177] 
	W0806 08:42:44.431352   79927 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0806 08:42:44.431398   79927 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0806 08:42:44.431414   79927 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0806 08:42:44.432963   79927 out.go:177] 
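The exit message above suggests retrying with the kubelet cgroup driver forced to systemd. A sketch of that retry is below; the '--extra-config' flag and the Kubernetes version come straight from the log, while '<profile>' is a placeholder for the affected minikube profile and the runtime flag is inferred from the cri-o sockets visible in the output, not copied from it:

    # Hypothetical retry with the workaround from the suggestion above.
    # <profile> stands for the affected minikube profile and is not taken from the log.
    minikube start -p <profile> \
      --container-runtime=crio \
      --kubernetes-version=v1.20.0 \
      --extra-config=kubelet.cgroup-driver=systemd

If the retry still fails, the issue referenced above (kubernetes/minikube#4172) is the one minikube itself points at for this failure mode.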
	
	
	==> CRI-O <==
	Aug 06 08:47:37 embed-certs-324808 crio[728]: time="2024-08-06 08:47:37.347353543Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722934057347332438,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b245751c-a683-4a7f-9597-ef3086f2e4cc name=/runtime.v1.ImageService/ImageFsInfo
	Aug 06 08:47:37 embed-certs-324808 crio[728]: time="2024-08-06 08:47:37.348155194Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=572045f1-495e-402d-80ec-657aa93d8868 name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 08:47:37 embed-certs-324808 crio[728]: time="2024-08-06 08:47:37.348204455Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=572045f1-495e-402d-80ec-657aa93d8868 name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 08:47:37 embed-certs-324808 crio[728]: time="2024-08-06 08:47:37.348395232Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:367eb6487187a37acafb1fa74364a2d9ec5d9241888ed2e871fb14f2af161975,PodSandboxId:27950acba30f699b26c3086801707c13702e0c69ec58aa1582ce326e756490e2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722933278910752050,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15c7d89a-e994-4cca-931c-f646157a15e8,},Annotations:map[string]string{io.kubernetes.container.hash: 4540f957,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b29f8b66a54410c4d859c7dd605288a33ddbdf5d1c48722c718f2dbbc9ed9f3,PodSandboxId:3683ed34d915eac01c70be437fd87c3e3d9efe834c554a0ff79b4830d0aa0f62,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722933258423173197,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 41718f5d-fd79-4af8-9464-04a00be13504,},Annotations:map[string]string{io.kubernetes.container.hash: 58ebd90e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e1c91bc3a920b6f7db4477b8219f13fa092c68ff53abe2390325a68dad253e5,PodSandboxId:ed059b4ceda167186ee96ca5e73360b595dac7a59f59f8bd89423f739dadbb0c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722933255844194079,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-4xxwm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eeb47dea-32e9-458f-9ad9-2af6d4e68cc0,},Annotations:map[string]string{io.kubernetes.container.hash: f3899951,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:612fc6a8f2c1208097a6d6287c5fecb20026608a278af5d0d4efc9662e065eb9,PodSandboxId:ab1bd703c0510dbbe4c386c1d76510a87f45de355b7a86c1c84201ffe2f6730d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722933248127519730,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8lbzv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c45e6c0-9b7f-4c48-b
ff5-38d4a5949bec,},Annotations:map[string]string{io.kubernetes.container.hash: e30e5c8f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78abe2efa92d2d6c9171ed65bd4d941693be3207554f9318a7cefb91544eeb20,PodSandboxId:27950acba30f699b26c3086801707c13702e0c69ec58aa1582ce326e756490e2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722933248067976839,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15c7d89a-e994-4cca-931c-f646157a1
5e8,},Annotations:map[string]string{io.kubernetes.container.hash: 4540f957,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7edb6bece3347d88a263663a1d6549165dafdcb67ad723494b937766cba6da6,PodSandboxId:11616bc5ed6d0b08a352623e9a760f17c020d30ca828c7811afe1737dfcc2ca3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722933243338331603,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-324808,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18189df0a2512fa0a9393883a718fa75,},Annota
tions:map[string]string{io.kubernetes.container.hash: fafde40b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a25c5660a3a2b7226bf308b44307a3912efa7a9a73e79ec63f83133b80a44683,PodSandboxId:e6dd8d4a1944eb1c32f8b7427d808ec1abbab695ee962c045256ef57d96231c6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722933243355896420,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-324808,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91054f20d0444b47ed2a594ff97bc641,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df2efdb62a4af10f89a1b37599d4258ac3b72660353795f8c2141f3ef70c1f65,PodSandboxId:963ae0e923498a287d076541863b72249288ed25de0b24e686c3d10d12681101,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722933243328499057,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-324808,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8157c47ede204e65846d42268b1da9e0,},Annotations:map[string]string{io.kubernetes.container.hash:
9461fdd0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8651a7ed6615e826b950eab003841c2ec872b48160f207e77dd32a684125a23,PodSandboxId:6dbf61a7e83176b57abac02851c6bd7772ea711eb7641b03791397e466cb2127,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722933243294289542,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-324808,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af4bee3c7061a5d622ebea0a07e8602e,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=572045f1-495e-402d-80ec-657aa93d8868 name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 08:47:37 embed-certs-324808 crio[728]: time="2024-08-06 08:47:37.392572202Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=117989da-8c7a-4142-8a5c-5e04ea450d35 name=/runtime.v1.RuntimeService/Version
	Aug 06 08:47:37 embed-certs-324808 crio[728]: time="2024-08-06 08:47:37.392668493Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=117989da-8c7a-4142-8a5c-5e04ea450d35 name=/runtime.v1.RuntimeService/Version
	Aug 06 08:47:37 embed-certs-324808 crio[728]: time="2024-08-06 08:47:37.394379702Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=74b7719f-7f2a-4051-8b3f-a56864a57c66 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 06 08:47:37 embed-certs-324808 crio[728]: time="2024-08-06 08:47:37.394866359Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722934057394789325,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=74b7719f-7f2a-4051-8b3f-a56864a57c66 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 06 08:47:37 embed-certs-324808 crio[728]: time="2024-08-06 08:47:37.395776741Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6ff77f02-a6a9-43b6-8222-879368f8704e name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 08:47:37 embed-certs-324808 crio[728]: time="2024-08-06 08:47:37.395852565Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6ff77f02-a6a9-43b6-8222-879368f8704e name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 08:47:37 embed-certs-324808 crio[728]: time="2024-08-06 08:47:37.396464027Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:367eb6487187a37acafb1fa74364a2d9ec5d9241888ed2e871fb14f2af161975,PodSandboxId:27950acba30f699b26c3086801707c13702e0c69ec58aa1582ce326e756490e2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722933278910752050,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15c7d89a-e994-4cca-931c-f646157a15e8,},Annotations:map[string]string{io.kubernetes.container.hash: 4540f957,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b29f8b66a54410c4d859c7dd605288a33ddbdf5d1c48722c718f2dbbc9ed9f3,PodSandboxId:3683ed34d915eac01c70be437fd87c3e3d9efe834c554a0ff79b4830d0aa0f62,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722933258423173197,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 41718f5d-fd79-4af8-9464-04a00be13504,},Annotations:map[string]string{io.kubernetes.container.hash: 58ebd90e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e1c91bc3a920b6f7db4477b8219f13fa092c68ff53abe2390325a68dad253e5,PodSandboxId:ed059b4ceda167186ee96ca5e73360b595dac7a59f59f8bd89423f739dadbb0c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722933255844194079,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-4xxwm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eeb47dea-32e9-458f-9ad9-2af6d4e68cc0,},Annotations:map[string]string{io.kubernetes.container.hash: f3899951,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:612fc6a8f2c1208097a6d6287c5fecb20026608a278af5d0d4efc9662e065eb9,PodSandboxId:ab1bd703c0510dbbe4c386c1d76510a87f45de355b7a86c1c84201ffe2f6730d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722933248127519730,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8lbzv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c45e6c0-9b7f-4c48-b
ff5-38d4a5949bec,},Annotations:map[string]string{io.kubernetes.container.hash: e30e5c8f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78abe2efa92d2d6c9171ed65bd4d941693be3207554f9318a7cefb91544eeb20,PodSandboxId:27950acba30f699b26c3086801707c13702e0c69ec58aa1582ce326e756490e2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722933248067976839,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15c7d89a-e994-4cca-931c-f646157a1
5e8,},Annotations:map[string]string{io.kubernetes.container.hash: 4540f957,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7edb6bece3347d88a263663a1d6549165dafdcb67ad723494b937766cba6da6,PodSandboxId:11616bc5ed6d0b08a352623e9a760f17c020d30ca828c7811afe1737dfcc2ca3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722933243338331603,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-324808,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18189df0a2512fa0a9393883a718fa75,},Annota
tions:map[string]string{io.kubernetes.container.hash: fafde40b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a25c5660a3a2b7226bf308b44307a3912efa7a9a73e79ec63f83133b80a44683,PodSandboxId:e6dd8d4a1944eb1c32f8b7427d808ec1abbab695ee962c045256ef57d96231c6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722933243355896420,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-324808,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91054f20d0444b47ed2a594ff97bc641,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df2efdb62a4af10f89a1b37599d4258ac3b72660353795f8c2141f3ef70c1f65,PodSandboxId:963ae0e923498a287d076541863b72249288ed25de0b24e686c3d10d12681101,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722933243328499057,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-324808,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8157c47ede204e65846d42268b1da9e0,},Annotations:map[string]string{io.kubernetes.container.hash:
9461fdd0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8651a7ed6615e826b950eab003841c2ec872b48160f207e77dd32a684125a23,PodSandboxId:6dbf61a7e83176b57abac02851c6bd7772ea711eb7641b03791397e466cb2127,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722933243294289542,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-324808,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af4bee3c7061a5d622ebea0a07e8602e,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6ff77f02-a6a9-43b6-8222-879368f8704e name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 08:47:37 embed-certs-324808 crio[728]: time="2024-08-06 08:47:37.440671220Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c7603d6e-58de-4f1f-9f70-39d0910ee3dc name=/runtime.v1.RuntimeService/Version
	Aug 06 08:47:37 embed-certs-324808 crio[728]: time="2024-08-06 08:47:37.440772852Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c7603d6e-58de-4f1f-9f70-39d0910ee3dc name=/runtime.v1.RuntimeService/Version
	Aug 06 08:47:37 embed-certs-324808 crio[728]: time="2024-08-06 08:47:37.442227259Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8ce61d66-f3bf-4586-a773-f3d07e9042ae name=/runtime.v1.ImageService/ImageFsInfo
	Aug 06 08:47:37 embed-certs-324808 crio[728]: time="2024-08-06 08:47:37.442723107Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722934057442698022,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8ce61d66-f3bf-4586-a773-f3d07e9042ae name=/runtime.v1.ImageService/ImageFsInfo
	Aug 06 08:47:37 embed-certs-324808 crio[728]: time="2024-08-06 08:47:37.443827896Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=eef22920-13eb-4c33-8fb7-90701d643f81 name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 08:47:37 embed-certs-324808 crio[728]: time="2024-08-06 08:47:37.443908757Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=eef22920-13eb-4c33-8fb7-90701d643f81 name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 08:47:37 embed-certs-324808 crio[728]: time="2024-08-06 08:47:37.444188731Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:367eb6487187a37acafb1fa74364a2d9ec5d9241888ed2e871fb14f2af161975,PodSandboxId:27950acba30f699b26c3086801707c13702e0c69ec58aa1582ce326e756490e2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722933278910752050,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15c7d89a-e994-4cca-931c-f646157a15e8,},Annotations:map[string]string{io.kubernetes.container.hash: 4540f957,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b29f8b66a54410c4d859c7dd605288a33ddbdf5d1c48722c718f2dbbc9ed9f3,PodSandboxId:3683ed34d915eac01c70be437fd87c3e3d9efe834c554a0ff79b4830d0aa0f62,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722933258423173197,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 41718f5d-fd79-4af8-9464-04a00be13504,},Annotations:map[string]string{io.kubernetes.container.hash: 58ebd90e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e1c91bc3a920b6f7db4477b8219f13fa092c68ff53abe2390325a68dad253e5,PodSandboxId:ed059b4ceda167186ee96ca5e73360b595dac7a59f59f8bd89423f739dadbb0c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722933255844194079,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-4xxwm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eeb47dea-32e9-458f-9ad9-2af6d4e68cc0,},Annotations:map[string]string{io.kubernetes.container.hash: f3899951,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:612fc6a8f2c1208097a6d6287c5fecb20026608a278af5d0d4efc9662e065eb9,PodSandboxId:ab1bd703c0510dbbe4c386c1d76510a87f45de355b7a86c1c84201ffe2f6730d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722933248127519730,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8lbzv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c45e6c0-9b7f-4c48-b
ff5-38d4a5949bec,},Annotations:map[string]string{io.kubernetes.container.hash: e30e5c8f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78abe2efa92d2d6c9171ed65bd4d941693be3207554f9318a7cefb91544eeb20,PodSandboxId:27950acba30f699b26c3086801707c13702e0c69ec58aa1582ce326e756490e2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722933248067976839,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15c7d89a-e994-4cca-931c-f646157a1
5e8,},Annotations:map[string]string{io.kubernetes.container.hash: 4540f957,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7edb6bece3347d88a263663a1d6549165dafdcb67ad723494b937766cba6da6,PodSandboxId:11616bc5ed6d0b08a352623e9a760f17c020d30ca828c7811afe1737dfcc2ca3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722933243338331603,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-324808,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18189df0a2512fa0a9393883a718fa75,},Annota
tions:map[string]string{io.kubernetes.container.hash: fafde40b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a25c5660a3a2b7226bf308b44307a3912efa7a9a73e79ec63f83133b80a44683,PodSandboxId:e6dd8d4a1944eb1c32f8b7427d808ec1abbab695ee962c045256ef57d96231c6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722933243355896420,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-324808,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91054f20d0444b47ed2a594ff97bc641,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df2efdb62a4af10f89a1b37599d4258ac3b72660353795f8c2141f3ef70c1f65,PodSandboxId:963ae0e923498a287d076541863b72249288ed25de0b24e686c3d10d12681101,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722933243328499057,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-324808,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8157c47ede204e65846d42268b1da9e0,},Annotations:map[string]string{io.kubernetes.container.hash:
9461fdd0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8651a7ed6615e826b950eab003841c2ec872b48160f207e77dd32a684125a23,PodSandboxId:6dbf61a7e83176b57abac02851c6bd7772ea711eb7641b03791397e466cb2127,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722933243294289542,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-324808,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af4bee3c7061a5d622ebea0a07e8602e,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=eef22920-13eb-4c33-8fb7-90701d643f81 name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 08:47:37 embed-certs-324808 crio[728]: time="2024-08-06 08:47:37.479913530Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a3d9f567-0de4-4311-9ebe-fab37da0eac1 name=/runtime.v1.RuntimeService/Version
	Aug 06 08:47:37 embed-certs-324808 crio[728]: time="2024-08-06 08:47:37.480013886Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a3d9f567-0de4-4311-9ebe-fab37da0eac1 name=/runtime.v1.RuntimeService/Version
	Aug 06 08:47:37 embed-certs-324808 crio[728]: time="2024-08-06 08:47:37.481577369Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d7428bd8-f6e1-4a86-85ea-419935b50144 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 06 08:47:37 embed-certs-324808 crio[728]: time="2024-08-06 08:47:37.481990371Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722934057481966232,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d7428bd8-f6e1-4a86-85ea-419935b50144 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 06 08:47:37 embed-certs-324808 crio[728]: time="2024-08-06 08:47:37.482563848Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bbfaef35-0d79-4702-8534-6205c6bb6a0e name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 08:47:37 embed-certs-324808 crio[728]: time="2024-08-06 08:47:37.482621889Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bbfaef35-0d79-4702-8534-6205c6bb6a0e name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 08:47:37 embed-certs-324808 crio[728]: time="2024-08-06 08:47:37.483134133Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:367eb6487187a37acafb1fa74364a2d9ec5d9241888ed2e871fb14f2af161975,PodSandboxId:27950acba30f699b26c3086801707c13702e0c69ec58aa1582ce326e756490e2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722933278910752050,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15c7d89a-e994-4cca-931c-f646157a15e8,},Annotations:map[string]string{io.kubernetes.container.hash: 4540f957,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b29f8b66a54410c4d859c7dd605288a33ddbdf5d1c48722c718f2dbbc9ed9f3,PodSandboxId:3683ed34d915eac01c70be437fd87c3e3d9efe834c554a0ff79b4830d0aa0f62,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722933258423173197,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 41718f5d-fd79-4af8-9464-04a00be13504,},Annotations:map[string]string{io.kubernetes.container.hash: 58ebd90e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e1c91bc3a920b6f7db4477b8219f13fa092c68ff53abe2390325a68dad253e5,PodSandboxId:ed059b4ceda167186ee96ca5e73360b595dac7a59f59f8bd89423f739dadbb0c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722933255844194079,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-4xxwm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eeb47dea-32e9-458f-9ad9-2af6d4e68cc0,},Annotations:map[string]string{io.kubernetes.container.hash: f3899951,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:612fc6a8f2c1208097a6d6287c5fecb20026608a278af5d0d4efc9662e065eb9,PodSandboxId:ab1bd703c0510dbbe4c386c1d76510a87f45de355b7a86c1c84201ffe2f6730d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722933248127519730,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8lbzv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c45e6c0-9b7f-4c48-b
ff5-38d4a5949bec,},Annotations:map[string]string{io.kubernetes.container.hash: e30e5c8f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78abe2efa92d2d6c9171ed65bd4d941693be3207554f9318a7cefb91544eeb20,PodSandboxId:27950acba30f699b26c3086801707c13702e0c69ec58aa1582ce326e756490e2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722933248067976839,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15c7d89a-e994-4cca-931c-f646157a1
5e8,},Annotations:map[string]string{io.kubernetes.container.hash: 4540f957,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7edb6bece3347d88a263663a1d6549165dafdcb67ad723494b937766cba6da6,PodSandboxId:11616bc5ed6d0b08a352623e9a760f17c020d30ca828c7811afe1737dfcc2ca3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722933243338331603,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-324808,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18189df0a2512fa0a9393883a718fa75,},Annota
tions:map[string]string{io.kubernetes.container.hash: fafde40b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a25c5660a3a2b7226bf308b44307a3912efa7a9a73e79ec63f83133b80a44683,PodSandboxId:e6dd8d4a1944eb1c32f8b7427d808ec1abbab695ee962c045256ef57d96231c6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722933243355896420,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-324808,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91054f20d0444b47ed2a594ff97bc641,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df2efdb62a4af10f89a1b37599d4258ac3b72660353795f8c2141f3ef70c1f65,PodSandboxId:963ae0e923498a287d076541863b72249288ed25de0b24e686c3d10d12681101,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722933243328499057,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-324808,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8157c47ede204e65846d42268b1da9e0,},Annotations:map[string]string{io.kubernetes.container.hash:
9461fdd0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8651a7ed6615e826b950eab003841c2ec872b48160f207e77dd32a684125a23,PodSandboxId:6dbf61a7e83176b57abac02851c6bd7772ea711eb7641b03791397e466cb2127,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722933243294289542,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-324808,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af4bee3c7061a5d622ebea0a07e8602e,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bbfaef35-0d79-4702-8534-6205c6bb6a0e name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	367eb6487187a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 minutes ago      Running             storage-provisioner       2                   27950acba30f6       storage-provisioner
	1b29f8b66a544       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   3683ed34d915e       busybox
	9e1c91bc3a920       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago      Running             coredns                   1                   ed059b4ceda16       coredns-7db6d8ff4d-4xxwm
	612fc6a8f2c12       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      13 minutes ago      Running             kube-proxy                1                   ab1bd703c0510       kube-proxy-8lbzv
	78abe2efa92d2       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       1                   27950acba30f6       storage-provisioner
	a25c5660a3a2b       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      13 minutes ago      Running             kube-scheduler            1                   e6dd8d4a1944e       kube-scheduler-embed-certs-324808
	f7edb6bece334       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      13 minutes ago      Running             kube-apiserver            1                   11616bc5ed6d0       kube-apiserver-embed-certs-324808
	df2efdb62a4af       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      13 minutes ago      Running             etcd                      1                   963ae0e923498       etcd-embed-certs-324808
	d8651a7ed6615       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      13 minutes ago      Running             kube-controller-manager   1                   6dbf61a7e8317       kube-controller-manager-embed-certs-324808
	
	
	==> coredns [9e1c91bc3a920b6f7db4477b8219f13fa092c68ff53abe2390325a68dad253e5] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:46236 - 38099 "HINFO IN 2530163823123408894.8164630948987810677. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.016562482s
	
	
	==> describe nodes <==
	Name:               embed-certs-324808
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-324808
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e92cb06692f5ea1ba801d10d148e5e92e807f9c8
	                    minikube.k8s.io/name=embed-certs-324808
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_06T08_26_16_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 06 Aug 2024 08:26:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-324808
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 06 Aug 2024 08:47:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 06 Aug 2024 08:44:49 +0000   Tue, 06 Aug 2024 08:26:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 06 Aug 2024 08:44:49 +0000   Tue, 06 Aug 2024 08:26:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 06 Aug 2024 08:44:49 +0000   Tue, 06 Aug 2024 08:26:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 06 Aug 2024 08:44:49 +0000   Tue, 06 Aug 2024 08:34:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.190
	  Hostname:    embed-certs-324808
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c408910bdcf648538fd8ba76b3291141
	  System UUID:                c408910b-dcf6-4853-8fd8-ba76b3291141
	  Boot ID:                    0201bd3e-74f3-48df-8c90-ae37873684ef
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 coredns-7db6d8ff4d-4xxwm                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	  kube-system                 etcd-embed-certs-324808                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         21m
	  kube-system                 kube-apiserver-embed-certs-324808             250m (12%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-controller-manager-embed-certs-324808    200m (10%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-proxy-8lbzv                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-embed-certs-324808             100m (5%)     0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 metrics-server-569cc877fc-8xb9k               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         20m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 21m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  21m (x2 over 21m)  kubelet          Node embed-certs-324808 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m (x2 over 21m)  kubelet          Node embed-certs-324808 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21m (x2 over 21m)  kubelet          Node embed-certs-324808 status is now: NodeHasSufficientPID
	  Normal  Starting                 21m                kubelet          Starting kubelet.
	  Normal  NodeReady                21m                kubelet          Node embed-certs-324808 status is now: NodeReady
	  Normal  RegisteredNode           21m                node-controller  Node embed-certs-324808 event: Registered Node embed-certs-324808 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node embed-certs-324808 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node embed-certs-324808 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node embed-certs-324808 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node embed-certs-324808 event: Registered Node embed-certs-324808 in Controller
	
	
	==> dmesg <==
	[Aug 6 08:33] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050723] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040216] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.779708] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.594729] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.589448] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.558208] systemd-fstab-generator[643]: Ignoring "noauto" option for root device
	[  +0.061291] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.052531] systemd-fstab-generator[656]: Ignoring "noauto" option for root device
	[  +0.208472] systemd-fstab-generator[670]: Ignoring "noauto" option for root device
	[  +0.117471] systemd-fstab-generator[682]: Ignoring "noauto" option for root device
	[  +0.305461] systemd-fstab-generator[712]: Ignoring "noauto" option for root device
	[  +4.460210] systemd-fstab-generator[808]: Ignoring "noauto" option for root device
	[  +0.060310] kauditd_printk_skb: 130 callbacks suppressed
	[Aug 6 08:34] systemd-fstab-generator[931]: Ignoring "noauto" option for root device
	[  +5.632385] kauditd_printk_skb: 97 callbacks suppressed
	[  +3.960408] systemd-fstab-generator[1542]: Ignoring "noauto" option for root device
	[  +1.735312] kauditd_printk_skb: 62 callbacks suppressed
	[  +9.389115] kauditd_printk_skb: 43 callbacks suppressed
	
	
	==> etcd [df2efdb62a4af10f89a1b37599d4258ac3b72660353795f8c2141f3ef70c1f65] <==
	{"level":"info","ts":"2024-08-06T08:34:22.973259Z","caller":"traceutil/trace.go:171","msg":"trace[1866380291] range","detail":"{range_begin:/registry/daemonsets/kube-system/kube-proxy; range_end:; response_count:1; response_revision:535; }","duration":"881.361455ms","start":"2024-08-06T08:34:22.091889Z","end":"2024-08-06T08:34:22.973251Z","steps":["trace[1866380291] 'agreement among raft nodes before linearized reading'  (duration: 689.187892ms)","trace[1866380291] 'range keys from in-memory index tree'  (duration: 191.273588ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-06T08:34:22.973298Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-06T08:34:22.09188Z","time spent":"881.410123ms","remote":"127.0.0.1:42654","response type":"/etcdserverpb.KV/Range","request count":0,"request size":45,"response count":1,"response size":2918,"request content":"key:\"/registry/daemonsets/kube-system/kube-proxy\" "}
	{"level":"warn","ts":"2024-08-06T08:34:22.973228Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"1.564711774s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/legacy-service-account-token-cleaner\" ","response":"range_response_count:1 size:238"}
	{"level":"info","ts":"2024-08-06T08:34:22.973505Z","caller":"traceutil/trace.go:171","msg":"trace[1065217615] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/legacy-service-account-token-cleaner; range_end:; response_count:1; response_revision:535; }","duration":"1.564995974s","start":"2024-08-06T08:34:21.408494Z","end":"2024-08-06T08:34:22.97349Z","steps":["trace[1065217615] 'agreement among raft nodes before linearized reading'  (duration: 1.372792805s)","trace[1065217615] 'range keys from in-memory index tree'  (duration: 191.926971ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-06T08:34:22.973547Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-06T08:34:21.40848Z","time spent":"1.56505946s","remote":"127.0.0.1:42340","response type":"/etcdserverpb.KV/Range","request count":0,"request size":76,"response count":1,"response size":261,"request content":"key:\"/registry/serviceaccounts/kube-system/legacy-service-account-token-cleaner\" "}
	{"level":"warn","ts":"2024-08-06T08:34:22.9724Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"880.572216ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/specs/kube-system/metrics-server\" ","response":"range_response_count:1 size:1656"}
	{"level":"info","ts":"2024-08-06T08:34:22.973794Z","caller":"traceutil/trace.go:171","msg":"trace[1933367346] range","detail":"{range_begin:/registry/services/specs/kube-system/metrics-server; range_end:; response_count:1; response_revision:535; }","duration":"881.966102ms","start":"2024-08-06T08:34:22.091818Z","end":"2024-08-06T08:34:22.973784Z","steps":["trace[1933367346] 'agreement among raft nodes before linearized reading'  (duration: 689.265196ms)","trace[1933367346] 'range keys from in-memory index tree'  (duration: 191.291748ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-06T08:34:22.973853Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-06T08:34:22.091814Z","time spent":"882.029438ms","remote":"127.0.0.1:42326","response type":"/etcdserverpb.KV/Range","request count":0,"request size":53,"response count":1,"response size":1679,"request content":"key:\"/registry/services/specs/kube-system/metrics-server\" "}
	{"level":"info","ts":"2024-08-06T08:34:45.388651Z","caller":"traceutil/trace.go:171","msg":"trace[752656019] transaction","detail":"{read_only:false; response_revision:566; number_of_response:1; }","duration":"272.066085ms","start":"2024-08-06T08:34:45.11656Z","end":"2024-08-06T08:34:45.388626Z","steps":["trace[752656019] 'process raft request'  (duration: 271.94346ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-06T08:34:45.902937Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"386.870464ms","expected-duration":"100ms","prefix":"","request":"header:<ID:7465438928253715769 > lease_revoke:<id:679a9126d2ef84bc>","response":"size:28"}
	{"level":"info","ts":"2024-08-06T08:34:45.903073Z","caller":"traceutil/trace.go:171","msg":"trace[2110889150] linearizableReadLoop","detail":"{readStateIndex:606; appliedIndex:605; }","duration":"301.129605ms","start":"2024-08-06T08:34:45.601923Z","end":"2024-08-06T08:34:45.903053Z","steps":["trace[2110889150] 'read index received'  (duration: 65.358µs)","trace[2110889150] 'applied index is now lower than readState.Index'  (duration: 301.06277ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-06T08:34:45.903289Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"301.332338ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-569cc877fc-8xb9k\" ","response":"range_response_count:1 size:4283"}
	{"level":"info","ts":"2024-08-06T08:34:45.90334Z","caller":"traceutil/trace.go:171","msg":"trace[1685802118] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-569cc877fc-8xb9k; range_end:; response_count:1; response_revision:566; }","duration":"301.44565ms","start":"2024-08-06T08:34:45.601882Z","end":"2024-08-06T08:34:45.903327Z","steps":["trace[1685802118] 'agreement among raft nodes before linearized reading'  (duration: 301.291654ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-06T08:34:45.903369Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-06T08:34:45.601864Z","time spent":"301.496171ms","remote":"127.0.0.1:42312","response type":"/etcdserverpb.KV/Range","request count":0,"request size":60,"response count":1,"response size":4306,"request content":"key:\"/registry/pods/kube-system/metrics-server-569cc877fc-8xb9k\" "}
	{"level":"warn","ts":"2024-08-06T08:34:45.903286Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"227.892052ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-06T08:34:45.903601Z","caller":"traceutil/trace.go:171","msg":"trace[52243845] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:566; }","duration":"228.267737ms","start":"2024-08-06T08:34:45.67532Z","end":"2024-08-06T08:34:45.903588Z","steps":["trace[52243845] 'agreement among raft nodes before linearized reading'  (duration: 227.882969ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-06T08:35:03.642932Z","caller":"traceutil/trace.go:171","msg":"trace[919814308] transaction","detail":"{read_only:false; response_revision:585; number_of_response:1; }","duration":"148.222386ms","start":"2024-08-06T08:35:03.494685Z","end":"2024-08-06T08:35:03.642908Z","steps":["trace[919814308] 'process raft request'  (duration: 147.814265ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-06T08:35:08.888386Z","caller":"traceutil/trace.go:171","msg":"trace[574368570] transaction","detail":"{read_only:false; response_revision:590; number_of_response:1; }","duration":"200.69712ms","start":"2024-08-06T08:35:08.687669Z","end":"2024-08-06T08:35:08.888366Z","steps":["trace[574368570] 'process raft request'  (duration: 200.610643ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-06T08:35:08.922381Z","caller":"traceutil/trace.go:171","msg":"trace[332391516] linearizableReadLoop","detail":"{readStateIndex:635; appliedIndex:634; }","duration":"222.552251ms","start":"2024-08-06T08:35:08.699813Z","end":"2024-08-06T08:35:08.922365Z","steps":["trace[332391516] 'read index received'  (duration: 188.883517ms)","trace[332391516] 'applied index is now lower than readState.Index'  (duration: 33.668025ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-06T08:35:08.92256Z","caller":"traceutil/trace.go:171","msg":"trace[2018374232] transaction","detail":"{read_only:false; response_revision:591; number_of_response:1; }","duration":"227.946012ms","start":"2024-08-06T08:35:08.694607Z","end":"2024-08-06T08:35:08.922553Z","steps":["trace[2018374232] 'process raft request'  (duration: 225.975678ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-06T08:35:08.922826Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"222.967334ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterrolebindings/\" range_end:\"/registry/clusterrolebindings0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-08-06T08:35:08.922892Z","caller":"traceutil/trace.go:171","msg":"trace[1230724112] range","detail":"{range_begin:/registry/clusterrolebindings/; range_end:/registry/clusterrolebindings0; response_count:0; response_revision:591; }","duration":"223.086639ms","start":"2024-08-06T08:35:08.699792Z","end":"2024-08-06T08:35:08.922879Z","steps":["trace[1230724112] 'agreement among raft nodes before linearized reading'  (duration: 222.956595ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-06T08:44:05.511962Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":794}
	{"level":"info","ts":"2024-08-06T08:44:05.522777Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":794,"took":"10.282003ms","hash":3987429953,"current-db-size-bytes":2174976,"current-db-size":"2.2 MB","current-db-size-in-use-bytes":2174976,"current-db-size-in-use":"2.2 MB"}
	{"level":"info","ts":"2024-08-06T08:44:05.522891Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3987429953,"revision":794,"compact-revision":-1}
	
	
	==> kernel <==
	 08:47:37 up 14 min,  0 users,  load average: 0.31, 0.19, 0.16
	Linux embed-certs-324808 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [f7edb6bece3347d88a263663a1d6549165dafdcb67ad723494b937766cba6da6] <==
	I0806 08:42:07.830500       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0806 08:44:06.834071       1 handler_proxy.go:93] no RequestInfo found in the context
	E0806 08:44:06.834230       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0806 08:44:07.835344       1 handler_proxy.go:93] no RequestInfo found in the context
	E0806 08:44:07.835544       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0806 08:44:07.835672       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0806 08:44:07.835379       1 handler_proxy.go:93] no RequestInfo found in the context
	E0806 08:44:07.835805       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0806 08:44:07.837113       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0806 08:45:07.836009       1 handler_proxy.go:93] no RequestInfo found in the context
	E0806 08:45:07.836082       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0806 08:45:07.836091       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0806 08:45:07.837262       1 handler_proxy.go:93] no RequestInfo found in the context
	E0806 08:45:07.837340       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0806 08:45:07.837348       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0806 08:47:07.837274       1 handler_proxy.go:93] no RequestInfo found in the context
	E0806 08:47:07.837377       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0806 08:47:07.837387       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0806 08:47:07.837484       1 handler_proxy.go:93] no RequestInfo found in the context
	E0806 08:47:07.837541       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0806 08:47:07.839366       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [d8651a7ed6615e826b950eab003841c2ec872b48160f207e77dd32a684125a23] <==
	I0806 08:41:52.201869       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0806 08:42:21.724080       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0806 08:42:22.209636       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0806 08:42:51.728519       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0806 08:42:52.217096       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0806 08:43:21.733458       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0806 08:43:22.226629       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0806 08:43:51.738398       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0806 08:43:52.235355       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0806 08:44:21.743722       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0806 08:44:22.244350       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0806 08:44:51.749344       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0806 08:44:52.252172       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0806 08:45:21.754636       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0806 08:45:22.260870       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0806 08:45:24.704072       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="348.606µs"
	I0806 08:45:37.696769       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="222.718µs"
	E0806 08:45:51.759801       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0806 08:45:52.269880       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0806 08:46:21.765792       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0806 08:46:22.277718       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0806 08:46:51.771165       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0806 08:46:52.285384       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0806 08:47:21.777039       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0806 08:47:22.294292       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [612fc6a8f2c1208097a6d6287c5fecb20026608a278af5d0d4efc9662e065eb9] <==
	I0806 08:34:08.276218       1 server_linux.go:69] "Using iptables proxy"
	I0806 08:34:08.285065       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.190"]
	I0806 08:34:08.318635       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0806 08:34:08.318703       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0806 08:34:08.318728       1 server_linux.go:165] "Using iptables Proxier"
	I0806 08:34:08.321292       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0806 08:34:08.321649       1 server.go:872] "Version info" version="v1.30.3"
	I0806 08:34:08.321676       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0806 08:34:08.323039       1 config.go:192] "Starting service config controller"
	I0806 08:34:08.323069       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0806 08:34:08.323102       1 config.go:101] "Starting endpoint slice config controller"
	I0806 08:34:08.323106       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0806 08:34:08.325796       1 config.go:319] "Starting node config controller"
	I0806 08:34:08.325830       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0806 08:34:08.423192       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0806 08:34:08.423281       1 shared_informer.go:320] Caches are synced for service config
	I0806 08:34:08.426078       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [a25c5660a3a2b7226bf308b44307a3912efa7a9a73e79ec63f83133b80a44683] <==
	I0806 08:34:04.328149       1 serving.go:380] Generated self-signed cert in-memory
	W0806 08:34:06.787589       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0806 08:34:06.787645       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0806 08:34:06.787657       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0806 08:34:06.787664       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0806 08:34:06.818807       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0806 08:34:06.819498       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0806 08:34:06.821043       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0806 08:34:06.823683       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0806 08:34:06.823774       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0806 08:34:06.823855       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0806 08:34:06.924536       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 06 08:45:10 embed-certs-324808 kubelet[938]: E0806 08:45:10.698355     938 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Aug 06 08:45:10 embed-certs-324808 kubelet[938]: E0806 08:45:10.698714     938 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Aug 06 08:45:10 embed-certs-324808 kubelet[938]: E0806 08:45:10.698976     938 kuberuntime_manager.go:1256] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-54ztt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,Recurs
iveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false
,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-569cc877fc-8xb9k_kube-system(7a5fb326-11a7-48fa-bcde-a22026a1dccb): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Aug 06 08:45:10 embed-certs-324808 kubelet[938]: E0806 08:45:10.699254     938 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-569cc877fc-8xb9k" podUID="7a5fb326-11a7-48fa-bcde-a22026a1dccb"
	Aug 06 08:45:24 embed-certs-324808 kubelet[938]: E0806 08:45:24.685770     938 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-8xb9k" podUID="7a5fb326-11a7-48fa-bcde-a22026a1dccb"
	Aug 06 08:45:37 embed-certs-324808 kubelet[938]: E0806 08:45:37.682517     938 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-8xb9k" podUID="7a5fb326-11a7-48fa-bcde-a22026a1dccb"
	Aug 06 08:45:50 embed-certs-324808 kubelet[938]: E0806 08:45:50.684059     938 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-8xb9k" podUID="7a5fb326-11a7-48fa-bcde-a22026a1dccb"
	Aug 06 08:46:02 embed-certs-324808 kubelet[938]: E0806 08:46:02.720800     938 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 06 08:46:02 embed-certs-324808 kubelet[938]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 06 08:46:02 embed-certs-324808 kubelet[938]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 06 08:46:02 embed-certs-324808 kubelet[938]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 06 08:46:02 embed-certs-324808 kubelet[938]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 06 08:46:04 embed-certs-324808 kubelet[938]: E0806 08:46:04.689511     938 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-8xb9k" podUID="7a5fb326-11a7-48fa-bcde-a22026a1dccb"
	Aug 06 08:46:17 embed-certs-324808 kubelet[938]: E0806 08:46:17.683015     938 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-8xb9k" podUID="7a5fb326-11a7-48fa-bcde-a22026a1dccb"
	Aug 06 08:46:29 embed-certs-324808 kubelet[938]: E0806 08:46:29.683284     938 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-8xb9k" podUID="7a5fb326-11a7-48fa-bcde-a22026a1dccb"
	Aug 06 08:46:43 embed-certs-324808 kubelet[938]: E0806 08:46:43.682354     938 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-8xb9k" podUID="7a5fb326-11a7-48fa-bcde-a22026a1dccb"
	Aug 06 08:46:55 embed-certs-324808 kubelet[938]: E0806 08:46:55.683056     938 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-8xb9k" podUID="7a5fb326-11a7-48fa-bcde-a22026a1dccb"
	Aug 06 08:47:02 embed-certs-324808 kubelet[938]: E0806 08:47:02.716213     938 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 06 08:47:02 embed-certs-324808 kubelet[938]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 06 08:47:02 embed-certs-324808 kubelet[938]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 06 08:47:02 embed-certs-324808 kubelet[938]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 06 08:47:02 embed-certs-324808 kubelet[938]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 06 08:47:09 embed-certs-324808 kubelet[938]: E0806 08:47:09.682994     938 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-8xb9k" podUID="7a5fb326-11a7-48fa-bcde-a22026a1dccb"
	Aug 06 08:47:23 embed-certs-324808 kubelet[938]: E0806 08:47:23.682578     938 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-8xb9k" podUID="7a5fb326-11a7-48fa-bcde-a22026a1dccb"
	Aug 06 08:47:35 embed-certs-324808 kubelet[938]: E0806 08:47:35.682206     938 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-8xb9k" podUID="7a5fb326-11a7-48fa-bcde-a22026a1dccb"
	
	
	==> storage-provisioner [367eb6487187a37acafb1fa74364a2d9ec5d9241888ed2e871fb14f2af161975] <==
	I0806 08:34:39.042609       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0806 08:34:39.054272       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0806 08:34:39.054538       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0806 08:34:39.065607       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0806 08:34:39.066085       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-324808_113a14c2-2344-4cc7-8a7d-b76920a40753!
	I0806 08:34:39.067612       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"059f55b2-f290-4d6f-b0a5-25d3b1db6b88", APIVersion:"v1", ResourceVersion:"557", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-324808_113a14c2-2344-4cc7-8a7d-b76920a40753 became leader
	I0806 08:34:39.167331       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-324808_113a14c2-2344-4cc7-8a7d-b76920a40753!
	
	
	==> storage-provisioner [78abe2efa92d2d6c9171ed65bd4d941693be3207554f9318a7cefb91544eeb20] <==
	I0806 08:34:08.185137       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0806 08:34:38.188167       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-324808 -n embed-certs-324808
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-324808 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-8xb9k
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-324808 describe pod metrics-server-569cc877fc-8xb9k
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-324808 describe pod metrics-server-569cc877fc-8xb9k: exit status 1 (62.534918ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-8xb9k" not found

** /stderr **
helpers_test.go:279: kubectl --context embed-certs-324808 describe pod metrics-server-569cc877fc-8xb9k: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.42s)

x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.44s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0806 08:39:13.203001   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/addons-505457/client.crt: no such file or directory
E0806 08:39:26.450302   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/enable-default-cni-405163/client.crt: no such file or directory
E0806 08:40:01.380329   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/flannel-405163/client.crt: no such file or directory
E0806 08:40:04.748543   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/bridge-405163/client.crt: no such file or directory
E0806 08:40:20.597952   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/auto-405163/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-990751 -n default-k8s-diff-port-990751
start_stop_delete_test.go:274: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-08-06 08:47:55.380678192 +0000 UTC m=+6198.605488627
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-990751 -n default-k8s-diff-port-990751
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-990751 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-990751 logs -n 25: (2.195486463s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p flannel-405163 sudo cat                             | flannel-405163               | jenkins | v1.33.1 | 06 Aug 24 08:25 UTC | 06 Aug 24 08:25 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p flannel-405163 sudo                                 | flannel-405163               | jenkins | v1.33.1 | 06 Aug 24 08:25 UTC | 06 Aug 24 08:25 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p flannel-405163 sudo                                 | flannel-405163               | jenkins | v1.33.1 | 06 Aug 24 08:25 UTC | 06 Aug 24 08:25 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p flannel-405163 sudo                                 | flannel-405163               | jenkins | v1.33.1 | 06 Aug 24 08:25 UTC | 06 Aug 24 08:25 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p flannel-405163 sudo find                            | flannel-405163               | jenkins | v1.33.1 | 06 Aug 24 08:25 UTC | 06 Aug 24 08:25 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p flannel-405163 sudo crio                            | flannel-405163               | jenkins | v1.33.1 | 06 Aug 24 08:25 UTC | 06 Aug 24 08:25 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p flannel-405163                                      | flannel-405163               | jenkins | v1.33.1 | 06 Aug 24 08:25 UTC | 06 Aug 24 08:25 UTC |
	| delete  | -p                                                     | disable-driver-mounts-607762 | jenkins | v1.33.1 | 06 Aug 24 08:25 UTC | 06 Aug 24 08:25 UTC |
	|         | disable-driver-mounts-607762                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-990751 | jenkins | v1.33.1 | 06 Aug 24 08:25 UTC | 06 Aug 24 08:27 UTC |
	|         | default-k8s-diff-port-990751                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-703340             | no-preload-703340            | jenkins | v1.33.1 | 06 Aug 24 08:26 UTC | 06 Aug 24 08:26 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-703340                                   | no-preload-703340            | jenkins | v1.33.1 | 06 Aug 24 08:26 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-324808            | embed-certs-324808           | jenkins | v1.33.1 | 06 Aug 24 08:26 UTC | 06 Aug 24 08:26 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-324808                                  | embed-certs-324808           | jenkins | v1.33.1 | 06 Aug 24 08:26 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-990751  | default-k8s-diff-port-990751 | jenkins | v1.33.1 | 06 Aug 24 08:27 UTC | 06 Aug 24 08:27 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-990751 | jenkins | v1.33.1 | 06 Aug 24 08:27 UTC |                     |
	|         | default-k8s-diff-port-990751                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-703340                  | no-preload-703340            | jenkins | v1.33.1 | 06 Aug 24 08:28 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-703340                                   | no-preload-703340            | jenkins | v1.33.1 | 06 Aug 24 08:28 UTC | 06 Aug 24 08:40 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0                      |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-409903        | old-k8s-version-409903       | jenkins | v1.33.1 | 06 Aug 24 08:28 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-324808                 | embed-certs-324808           | jenkins | v1.33.1 | 06 Aug 24 08:29 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-324808                                  | embed-certs-324808           | jenkins | v1.33.1 | 06 Aug 24 08:29 UTC | 06 Aug 24 08:38 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-990751       | default-k8s-diff-port-990751 | jenkins | v1.33.1 | 06 Aug 24 08:30 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-990751 | jenkins | v1.33.1 | 06 Aug 24 08:30 UTC | 06 Aug 24 08:38 UTC |
	|         | default-k8s-diff-port-990751                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-409903                              | old-k8s-version-409903       | jenkins | v1.33.1 | 06 Aug 24 08:30 UTC | 06 Aug 24 08:30 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-409903             | old-k8s-version-409903       | jenkins | v1.33.1 | 06 Aug 24 08:30 UTC | 06 Aug 24 08:30 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-409903                              | old-k8s-version-409903       | jenkins | v1.33.1 | 06 Aug 24 08:30 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/06 08:30:58
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0806 08:30:58.757517   79927 out.go:291] Setting OutFile to fd 1 ...
	I0806 08:30:58.757772   79927 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 08:30:58.757780   79927 out.go:304] Setting ErrFile to fd 2...
	I0806 08:30:58.757784   79927 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 08:30:58.757992   79927 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19370-13057/.minikube/bin
	I0806 08:30:58.758527   79927 out.go:298] Setting JSON to false
	I0806 08:30:58.759405   79927 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":8005,"bootTime":1722925054,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0806 08:30:58.759463   79927 start.go:139] virtualization: kvm guest
	I0806 08:30:58.762200   79927 out.go:177] * [old-k8s-version-409903] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0806 08:30:58.763461   79927 out.go:177]   - MINIKUBE_LOCATION=19370
	I0806 08:30:58.763514   79927 notify.go:220] Checking for updates...
	I0806 08:30:58.766017   79927 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0806 08:30:58.767180   79927 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19370-13057/kubeconfig
	I0806 08:30:58.768270   79927 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19370-13057/.minikube
	I0806 08:30:58.769303   79927 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0806 08:30:58.770385   79927 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0806 08:30:58.771931   79927 config.go:182] Loaded profile config "old-k8s-version-409903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0806 08:30:58.772319   79927 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:30:58.772407   79927 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:30:58.787845   79927 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39563
	I0806 08:30:58.788270   79927 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:30:58.788863   79927 main.go:141] libmachine: Using API Version  1
	I0806 08:30:58.788879   79927 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:30:58.789169   79927 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:30:58.789314   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .DriverName
	I0806 08:30:58.791190   79927 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0806 08:30:58.792487   79927 driver.go:392] Setting default libvirt URI to qemu:///system
	I0806 08:30:58.792776   79927 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:30:58.792811   79927 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:30:58.807347   79927 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33093
	I0806 08:30:58.807788   79927 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:30:58.808260   79927 main.go:141] libmachine: Using API Version  1
	I0806 08:30:58.808276   79927 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:30:58.808633   79927 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:30:58.808849   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .DriverName
	I0806 08:30:58.845305   79927 out.go:177] * Using the kvm2 driver based on existing profile
	I0806 08:30:58.846560   79927 start.go:297] selected driver: kvm2
	I0806 08:30:58.846580   79927 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-409903 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-409903 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.158 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 08:30:58.846684   79927 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0806 08:30:58.847335   79927 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 08:30:58.847395   79927 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19370-13057/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0806 08:30:58.862013   79927 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0806 08:30:58.862398   79927 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0806 08:30:58.862431   79927 cni.go:84] Creating CNI manager for ""
	I0806 08:30:58.862441   79927 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0806 08:30:58.862493   79927 start.go:340] cluster config:
	{Name:old-k8s-version-409903 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-409903 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.158 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 08:30:58.862609   79927 iso.go:125] acquiring lock: {Name:mke58d71de28babe305c5eb0c31c274e9713bb3f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 08:30:58.864484   79927 out.go:177] * Starting "old-k8s-version-409903" primary control-plane node in "old-k8s-version-409903" cluster
	I0806 08:30:58.865763   79927 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0806 08:30:58.865796   79927 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19370-13057/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0806 08:30:58.865803   79927 cache.go:56] Caching tarball of preloaded images
	I0806 08:30:58.865912   79927 preload.go:172] Found /home/jenkins/minikube-integration/19370-13057/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0806 08:30:58.865928   79927 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0806 08:30:58.866013   79927 profile.go:143] Saving config to /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/old-k8s-version-409903/config.json ...
	I0806 08:30:58.866228   79927 start.go:360] acquireMachinesLock for old-k8s-version-409903: {Name:mkc602ac9c8cc3360dcc0fcb8104303837aaab61 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0806 08:31:01.700664   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:31:04.772608   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:31:10.852634   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:31:13.924605   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:31:20.004661   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:31:23.076602   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:31:29.156621   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:31:32.228616   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:31:38.308684   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:31:41.380622   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:31:47.460632   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:31:50.532638   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:31:56.612610   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:31:59.684696   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:32:05.764642   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:32:08.836560   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:32:14.916653   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:32:17.988720   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:32:24.068597   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:32:27.140686   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:32:33.220630   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:32:36.292609   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:32:42.372654   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:32:45.444619   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:32:51.524637   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:32:54.596677   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:33:00.676664   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:33:03.748647   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:33:09.828643   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:33:12.900674   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:33:18.980642   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:33:22.052616   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:33:28.132639   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:33:31.204678   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:33:34.209491   79219 start.go:364] duration metric: took 4m19.788239383s to acquireMachinesLock for "embed-certs-324808"
	I0806 08:33:34.209554   79219 start.go:96] Skipping create...Using existing machine configuration
	I0806 08:33:34.209563   79219 fix.go:54] fixHost starting: 
	I0806 08:33:34.209914   79219 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:33:34.209945   79219 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:33:34.225180   79219 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44037
	I0806 08:33:34.225607   79219 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:33:34.226014   79219 main.go:141] libmachine: Using API Version  1
	I0806 08:33:34.226033   79219 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:33:34.226323   79219 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:33:34.226522   79219 main.go:141] libmachine: (embed-certs-324808) Calling .DriverName
	I0806 08:33:34.226684   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetState
	I0806 08:33:34.228172   79219 fix.go:112] recreateIfNeeded on embed-certs-324808: state=Stopped err=<nil>
	I0806 08:33:34.228200   79219 main.go:141] libmachine: (embed-certs-324808) Calling .DriverName
	W0806 08:33:34.228355   79219 fix.go:138] unexpected machine state, will restart: <nil>
	I0806 08:33:34.231261   79219 out.go:177] * Restarting existing kvm2 VM for "embed-certs-324808" ...
	I0806 08:33:34.232543   79219 main.go:141] libmachine: (embed-certs-324808) Calling .Start
	I0806 08:33:34.232728   79219 main.go:141] libmachine: (embed-certs-324808) Ensuring networks are active...
	I0806 08:33:34.233475   79219 main.go:141] libmachine: (embed-certs-324808) Ensuring network default is active
	I0806 08:33:34.233819   79219 main.go:141] libmachine: (embed-certs-324808) Ensuring network mk-embed-certs-324808 is active
	I0806 08:33:34.234291   79219 main.go:141] libmachine: (embed-certs-324808) Getting domain xml...
	I0806 08:33:34.235129   79219 main.go:141] libmachine: (embed-certs-324808) Creating domain...
	I0806 08:33:34.206591   78922 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0806 08:33:34.206636   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetMachineName
	I0806 08:33:34.206950   78922 buildroot.go:166] provisioning hostname "no-preload-703340"
	I0806 08:33:34.206975   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetMachineName
	I0806 08:33:34.207187   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHHostname
	I0806 08:33:34.209379   78922 machine.go:97] duration metric: took 4m37.416826471s to provisionDockerMachine
	I0806 08:33:34.209416   78922 fix.go:56] duration metric: took 4m37.443014145s for fixHost
	I0806 08:33:34.209424   78922 start.go:83] releasing machines lock for "no-preload-703340", held for 4m37.443037097s
	W0806 08:33:34.209442   78922 start.go:714] error starting host: provision: host is not running
	W0806 08:33:34.209538   78922 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0806 08:33:34.209545   78922 start.go:729] Will try again in 5 seconds ...
	I0806 08:33:35.434516   79219 main.go:141] libmachine: (embed-certs-324808) Waiting to get IP...
	I0806 08:33:35.435546   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:35.435973   79219 main.go:141] libmachine: (embed-certs-324808) DBG | unable to find current IP address of domain embed-certs-324808 in network mk-embed-certs-324808
	I0806 08:33:35.436045   79219 main.go:141] libmachine: (embed-certs-324808) DBG | I0806 08:33:35.435966   80472 retry.go:31] will retry after 190.247547ms: waiting for machine to come up
	I0806 08:33:35.627362   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:35.627768   79219 main.go:141] libmachine: (embed-certs-324808) DBG | unable to find current IP address of domain embed-certs-324808 in network mk-embed-certs-324808
	I0806 08:33:35.627803   79219 main.go:141] libmachine: (embed-certs-324808) DBG | I0806 08:33:35.627742   80472 retry.go:31] will retry after 306.927129ms: waiting for machine to come up
	I0806 08:33:35.936242   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:35.936714   79219 main.go:141] libmachine: (embed-certs-324808) DBG | unable to find current IP address of domain embed-certs-324808 in network mk-embed-certs-324808
	I0806 08:33:35.936738   79219 main.go:141] libmachine: (embed-certs-324808) DBG | I0806 08:33:35.936661   80472 retry.go:31] will retry after 363.399025ms: waiting for machine to come up
	I0806 08:33:36.301454   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:36.301934   79219 main.go:141] libmachine: (embed-certs-324808) DBG | unable to find current IP address of domain embed-certs-324808 in network mk-embed-certs-324808
	I0806 08:33:36.301967   79219 main.go:141] libmachine: (embed-certs-324808) DBG | I0806 08:33:36.301879   80472 retry.go:31] will retry after 495.07719ms: waiting for machine to come up
	I0806 08:33:36.798489   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:36.798930   79219 main.go:141] libmachine: (embed-certs-324808) DBG | unable to find current IP address of domain embed-certs-324808 in network mk-embed-certs-324808
	I0806 08:33:36.798958   79219 main.go:141] libmachine: (embed-certs-324808) DBG | I0806 08:33:36.798870   80472 retry.go:31] will retry after 755.286107ms: waiting for machine to come up
	I0806 08:33:37.555799   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:37.556155   79219 main.go:141] libmachine: (embed-certs-324808) DBG | unable to find current IP address of domain embed-certs-324808 in network mk-embed-certs-324808
	I0806 08:33:37.556187   79219 main.go:141] libmachine: (embed-certs-324808) DBG | I0806 08:33:37.556094   80472 retry.go:31] will retry after 600.530399ms: waiting for machine to come up
	I0806 08:33:38.157817   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:38.158303   79219 main.go:141] libmachine: (embed-certs-324808) DBG | unable to find current IP address of domain embed-certs-324808 in network mk-embed-certs-324808
	I0806 08:33:38.158334   79219 main.go:141] libmachine: (embed-certs-324808) DBG | I0806 08:33:38.158241   80472 retry.go:31] will retry after 943.262354ms: waiting for machine to come up
	I0806 08:33:39.103226   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:39.103615   79219 main.go:141] libmachine: (embed-certs-324808) DBG | unable to find current IP address of domain embed-certs-324808 in network mk-embed-certs-324808
	I0806 08:33:39.103645   79219 main.go:141] libmachine: (embed-certs-324808) DBG | I0806 08:33:39.103563   80472 retry.go:31] will retry after 1.100862399s: waiting for machine to come up
	I0806 08:33:39.211204   78922 start.go:360] acquireMachinesLock for no-preload-703340: {Name:mkc602ac9c8cc3360dcc0fcb8104303837aaab61 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0806 08:33:40.205796   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:40.206130   79219 main.go:141] libmachine: (embed-certs-324808) DBG | unable to find current IP address of domain embed-certs-324808 in network mk-embed-certs-324808
	I0806 08:33:40.206160   79219 main.go:141] libmachine: (embed-certs-324808) DBG | I0806 08:33:40.206077   80472 retry.go:31] will retry after 1.405550913s: waiting for machine to come up
	I0806 08:33:41.613629   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:41.614060   79219 main.go:141] libmachine: (embed-certs-324808) DBG | unable to find current IP address of domain embed-certs-324808 in network mk-embed-certs-324808
	I0806 08:33:41.614092   79219 main.go:141] libmachine: (embed-certs-324808) DBG | I0806 08:33:41.614001   80472 retry.go:31] will retry after 2.190867783s: waiting for machine to come up
	I0806 08:33:43.807417   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:43.808017   79219 main.go:141] libmachine: (embed-certs-324808) DBG | unable to find current IP address of domain embed-certs-324808 in network mk-embed-certs-324808
	I0806 08:33:43.808049   79219 main.go:141] libmachine: (embed-certs-324808) DBG | I0806 08:33:43.807958   80472 retry.go:31] will retry after 2.233558066s: waiting for machine to come up
	I0806 08:33:46.043427   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:46.043803   79219 main.go:141] libmachine: (embed-certs-324808) DBG | unable to find current IP address of domain embed-certs-324808 in network mk-embed-certs-324808
	I0806 08:33:46.043830   79219 main.go:141] libmachine: (embed-certs-324808) DBG | I0806 08:33:46.043754   80472 retry.go:31] will retry after 3.171710196s: waiting for machine to come up
	I0806 08:33:49.218967   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:49.219319   79219 main.go:141] libmachine: (embed-certs-324808) DBG | unable to find current IP address of domain embed-certs-324808 in network mk-embed-certs-324808
	I0806 08:33:49.219349   79219 main.go:141] libmachine: (embed-certs-324808) DBG | I0806 08:33:49.219266   80472 retry.go:31] will retry after 4.141473633s: waiting for machine to come up
	I0806 08:33:53.361887   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:53.362300   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has current primary IP address 192.168.39.190 and MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:53.362315   79219 main.go:141] libmachine: (embed-certs-324808) Found IP for machine: 192.168.39.190
	I0806 08:33:53.362324   79219 main.go:141] libmachine: (embed-certs-324808) Reserving static IP address...
	I0806 08:33:53.362762   79219 main.go:141] libmachine: (embed-certs-324808) Reserved static IP address: 192.168.39.190
	I0806 08:33:53.362812   79219 main.go:141] libmachine: (embed-certs-324808) DBG | found host DHCP lease matching {name: "embed-certs-324808", mac: "52:54:00:b9:df:8c", ip: "192.168.39.190"} in network mk-embed-certs-324808: {Iface:virbr2 ExpiryTime:2024-08-06 09:33:45 +0000 UTC Type:0 Mac:52:54:00:b9:df:8c Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-324808 Clientid:01:52:54:00:b9:df:8c}
	I0806 08:33:53.362826   79219 main.go:141] libmachine: (embed-certs-324808) Waiting for SSH to be available...
	I0806 08:33:53.362858   79219 main.go:141] libmachine: (embed-certs-324808) DBG | skip adding static IP to network mk-embed-certs-324808 - found existing host DHCP lease matching {name: "embed-certs-324808", mac: "52:54:00:b9:df:8c", ip: "192.168.39.190"}
	I0806 08:33:53.362871   79219 main.go:141] libmachine: (embed-certs-324808) DBG | Getting to WaitForSSH function...
	I0806 08:33:53.364761   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:53.365170   79219 main.go:141] libmachine: (embed-certs-324808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:df:8c", ip: ""} in network mk-embed-certs-324808: {Iface:virbr2 ExpiryTime:2024-08-06 09:33:45 +0000 UTC Type:0 Mac:52:54:00:b9:df:8c Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-324808 Clientid:01:52:54:00:b9:df:8c}
	I0806 08:33:53.365204   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined IP address 192.168.39.190 and MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:53.365301   79219 main.go:141] libmachine: (embed-certs-324808) DBG | Using SSH client type: external
	I0806 08:33:53.365333   79219 main.go:141] libmachine: (embed-certs-324808) DBG | Using SSH private key: /home/jenkins/minikube-integration/19370-13057/.minikube/machines/embed-certs-324808/id_rsa (-rw-------)
	I0806 08:33:53.365366   79219 main.go:141] libmachine: (embed-certs-324808) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.190 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19370-13057/.minikube/machines/embed-certs-324808/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0806 08:33:53.365378   79219 main.go:141] libmachine: (embed-certs-324808) DBG | About to run SSH command:
	I0806 08:33:53.365386   79219 main.go:141] libmachine: (embed-certs-324808) DBG | exit 0
	I0806 08:33:53.492385   79219 main.go:141] libmachine: (embed-certs-324808) DBG | SSH cmd err, output: <nil>: 
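
The WaitForSSH step above shells out to /usr/bin/ssh with host-key checking disabled and runs "exit 0" until the command succeeds. A minimal sketch of that reachability probe (illustration only, not the libmachine implementation; the address and key path are copied from the log):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Probe SSH by asking the remote shell to run "exit 0"; a zero exit
        // status means the machine is accepting connections. Options mirror
        // the command line shown in the log.
        cmd := exec.Command("/usr/bin/ssh",
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-o", "ConnectTimeout=10",
            "-o", "IdentitiesOnly=yes",
            "-i", "/home/jenkins/minikube-integration/19370-13057/.minikube/machines/embed-certs-324808/id_rsa",
            "-p", "22",
            "docker@192.168.39.190",
            "exit 0")
        if err := cmd.Run(); err != nil {
            fmt.Println("SSH not ready yet:", err)
            return
        }
        fmt.Println("SSH is available")
    }
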
	I0806 08:33:53.492739   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetConfigRaw
	I0806 08:33:53.493342   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetIP
	I0806 08:33:53.495371   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:53.495663   79219 main.go:141] libmachine: (embed-certs-324808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:df:8c", ip: ""} in network mk-embed-certs-324808: {Iface:virbr2 ExpiryTime:2024-08-06 09:33:45 +0000 UTC Type:0 Mac:52:54:00:b9:df:8c Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-324808 Clientid:01:52:54:00:b9:df:8c}
	I0806 08:33:53.495693   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined IP address 192.168.39.190 and MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:53.495982   79219 profile.go:143] Saving config to /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/embed-certs-324808/config.json ...
	I0806 08:33:53.496215   79219 machine.go:94] provisionDockerMachine start ...
	I0806 08:33:53.496238   79219 main.go:141] libmachine: (embed-certs-324808) Calling .DriverName
	I0806 08:33:53.496460   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHHostname
	I0806 08:33:53.498625   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:53.498924   79219 main.go:141] libmachine: (embed-certs-324808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:df:8c", ip: ""} in network mk-embed-certs-324808: {Iface:virbr2 ExpiryTime:2024-08-06 09:33:45 +0000 UTC Type:0 Mac:52:54:00:b9:df:8c Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-324808 Clientid:01:52:54:00:b9:df:8c}
	I0806 08:33:53.498945   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined IP address 192.168.39.190 and MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:53.499096   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHPort
	I0806 08:33:53.499265   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHKeyPath
	I0806 08:33:53.499394   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHKeyPath
	I0806 08:33:53.499593   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHUsername
	I0806 08:33:53.499815   79219 main.go:141] libmachine: Using SSH client type: native
	I0806 08:33:53.500050   79219 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.190 22 <nil> <nil>}
	I0806 08:33:53.500063   79219 main.go:141] libmachine: About to run SSH command:
	hostname
	I0806 08:33:53.612698   79219 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0806 08:33:53.612725   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetMachineName
	I0806 08:33:53.612979   79219 buildroot.go:166] provisioning hostname "embed-certs-324808"
	I0806 08:33:53.613002   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetMachineName
	I0806 08:33:53.613164   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHHostname
	I0806 08:33:53.615731   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:53.616123   79219 main.go:141] libmachine: (embed-certs-324808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:df:8c", ip: ""} in network mk-embed-certs-324808: {Iface:virbr2 ExpiryTime:2024-08-06 09:33:45 +0000 UTC Type:0 Mac:52:54:00:b9:df:8c Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-324808 Clientid:01:52:54:00:b9:df:8c}
	I0806 08:33:53.616157   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined IP address 192.168.39.190 and MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:53.616316   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHPort
	I0806 08:33:53.616530   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHKeyPath
	I0806 08:33:53.616676   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHKeyPath
	I0806 08:33:53.616810   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHUsername
	I0806 08:33:53.616961   79219 main.go:141] libmachine: Using SSH client type: native
	I0806 08:33:53.617124   79219 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.190 22 <nil> <nil>}
	I0806 08:33:53.617138   79219 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-324808 && echo "embed-certs-324808" | sudo tee /etc/hostname
	I0806 08:33:53.742677   79219 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-324808
	
	I0806 08:33:53.742722   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHHostname
	I0806 08:33:53.745670   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:53.745984   79219 main.go:141] libmachine: (embed-certs-324808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:df:8c", ip: ""} in network mk-embed-certs-324808: {Iface:virbr2 ExpiryTime:2024-08-06 09:33:45 +0000 UTC Type:0 Mac:52:54:00:b9:df:8c Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-324808 Clientid:01:52:54:00:b9:df:8c}
	I0806 08:33:53.746015   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined IP address 192.168.39.190 and MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:53.746195   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHPort
	I0806 08:33:53.746380   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHKeyPath
	I0806 08:33:53.746545   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHKeyPath
	I0806 08:33:53.746676   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHUsername
	I0806 08:33:53.746867   79219 main.go:141] libmachine: Using SSH client type: native
	I0806 08:33:53.747033   79219 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.190 22 <nil> <nil>}
	I0806 08:33:53.747048   79219 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-324808' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-324808/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-324808' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0806 08:33:53.869596   79219 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0806 08:33:53.869629   79219 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19370-13057/.minikube CaCertPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19370-13057/.minikube}
	I0806 08:33:53.869652   79219 buildroot.go:174] setting up certificates
	I0806 08:33:53.869663   79219 provision.go:84] configureAuth start
	I0806 08:33:53.869680   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetMachineName
	I0806 08:33:53.869953   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetIP
	I0806 08:33:53.872701   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:53.873071   79219 main.go:141] libmachine: (embed-certs-324808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:df:8c", ip: ""} in network mk-embed-certs-324808: {Iface:virbr2 ExpiryTime:2024-08-06 09:33:45 +0000 UTC Type:0 Mac:52:54:00:b9:df:8c Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-324808 Clientid:01:52:54:00:b9:df:8c}
	I0806 08:33:53.873105   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined IP address 192.168.39.190 and MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:53.873246   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHHostname
	I0806 08:33:53.875521   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:53.875928   79219 main.go:141] libmachine: (embed-certs-324808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:df:8c", ip: ""} in network mk-embed-certs-324808: {Iface:virbr2 ExpiryTime:2024-08-06 09:33:45 +0000 UTC Type:0 Mac:52:54:00:b9:df:8c Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-324808 Clientid:01:52:54:00:b9:df:8c}
	I0806 08:33:53.875959   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined IP address 192.168.39.190 and MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:53.876009   79219 provision.go:143] copyHostCerts
	I0806 08:33:53.876077   79219 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-13057/.minikube/cert.pem, removing ...
	I0806 08:33:53.876089   79219 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-13057/.minikube/cert.pem
	I0806 08:33:53.876170   79219 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19370-13057/.minikube/cert.pem (1123 bytes)
	I0806 08:33:53.876287   79219 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-13057/.minikube/key.pem, removing ...
	I0806 08:33:53.876297   79219 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-13057/.minikube/key.pem
	I0806 08:33:53.876337   79219 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19370-13057/.minikube/key.pem (1675 bytes)
	I0806 08:33:53.876450   79219 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-13057/.minikube/ca.pem, removing ...
	I0806 08:33:53.876460   79219 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-13057/.minikube/ca.pem
	I0806 08:33:53.876499   79219 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19370-13057/.minikube/ca.pem (1078 bytes)
	I0806 08:33:53.876578   79219 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19370-13057/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca-key.pem org=jenkins.embed-certs-324808 san=[127.0.0.1 192.168.39.190 embed-certs-324808 localhost minikube]
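
provision.go here generates machines/server.pem from the minikube CA with the SANs [127.0.0.1 192.168.39.190 embed-certs-324808 localhost minikube]. As a rough illustration of a CA-signed server certificate carrying those SANs (not minikube's code; a throwaway CA is generated in-process and error handling is elided for brevity):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Stand-ins for ca-key.pem / ca.pem: a throwaway CA generated in-process.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().AddDate(3, 0, 0),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Server certificate carrying the SANs listed in the log line above.
        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-324808"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(3, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"embed-certs-324808", "localhost", "minikube"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.190")},
        }
        srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
    }
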
	I0806 08:33:53.961275   79219 provision.go:177] copyRemoteCerts
	I0806 08:33:53.961344   79219 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0806 08:33:53.961375   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHHostname
	I0806 08:33:53.964004   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:53.964356   79219 main.go:141] libmachine: (embed-certs-324808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:df:8c", ip: ""} in network mk-embed-certs-324808: {Iface:virbr2 ExpiryTime:2024-08-06 09:33:45 +0000 UTC Type:0 Mac:52:54:00:b9:df:8c Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-324808 Clientid:01:52:54:00:b9:df:8c}
	I0806 08:33:53.964404   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined IP address 192.168.39.190 and MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:53.964596   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHPort
	I0806 08:33:53.964829   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHKeyPath
	I0806 08:33:53.964983   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHUsername
	I0806 08:33:53.965115   79219 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/embed-certs-324808/id_rsa Username:docker}
	I0806 08:33:54.050335   79219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0806 08:33:54.074977   79219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0806 08:33:54.100201   79219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0806 08:33:54.123356   79219 provision.go:87] duration metric: took 253.6775ms to configureAuth
	I0806 08:33:54.123386   79219 buildroot.go:189] setting minikube options for container-runtime
	I0806 08:33:54.123556   79219 config.go:182] Loaded profile config "embed-certs-324808": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0806 08:33:54.123634   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHHostname
	I0806 08:33:54.126279   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:54.126721   79219 main.go:141] libmachine: (embed-certs-324808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:df:8c", ip: ""} in network mk-embed-certs-324808: {Iface:virbr2 ExpiryTime:2024-08-06 09:33:45 +0000 UTC Type:0 Mac:52:54:00:b9:df:8c Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-324808 Clientid:01:52:54:00:b9:df:8c}
	I0806 08:33:54.126746   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined IP address 192.168.39.190 and MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:54.126941   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHPort
	I0806 08:33:54.127143   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHKeyPath
	I0806 08:33:54.127309   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHKeyPath
	I0806 08:33:54.127444   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHUsername
	I0806 08:33:54.127611   79219 main.go:141] libmachine: Using SSH client type: native
	I0806 08:33:54.127785   79219 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.190 22 <nil> <nil>}
	I0806 08:33:54.127800   79219 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0806 08:33:54.653395   79614 start.go:364] duration metric: took 3m35.719137455s to acquireMachinesLock for "default-k8s-diff-port-990751"
	I0806 08:33:54.653444   79614 start.go:96] Skipping create...Using existing machine configuration
	I0806 08:33:54.653452   79614 fix.go:54] fixHost starting: 
	I0806 08:33:54.653812   79614 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:33:54.653844   79614 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:33:54.673686   79614 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35867
	I0806 08:33:54.674123   79614 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:33:54.674581   79614 main.go:141] libmachine: Using API Version  1
	I0806 08:33:54.674604   79614 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:33:54.674950   79614 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:33:54.675108   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .DriverName
	I0806 08:33:54.675246   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetState
	I0806 08:33:54.676908   79614 fix.go:112] recreateIfNeeded on default-k8s-diff-port-990751: state=Stopped err=<nil>
	I0806 08:33:54.676939   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .DriverName
	W0806 08:33:54.677097   79614 fix.go:138] unexpected machine state, will restart: <nil>
	I0806 08:33:54.678912   79614 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-990751" ...
	I0806 08:33:54.403758   79219 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0806 08:33:54.403785   79219 machine.go:97] duration metric: took 907.554493ms to provisionDockerMachine
	I0806 08:33:54.403799   79219 start.go:293] postStartSetup for "embed-certs-324808" (driver="kvm2")
	I0806 08:33:54.403814   79219 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0806 08:33:54.403854   79219 main.go:141] libmachine: (embed-certs-324808) Calling .DriverName
	I0806 08:33:54.404139   79219 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0806 08:33:54.404168   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHHostname
	I0806 08:33:54.406768   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:54.407114   79219 main.go:141] libmachine: (embed-certs-324808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:df:8c", ip: ""} in network mk-embed-certs-324808: {Iface:virbr2 ExpiryTime:2024-08-06 09:33:45 +0000 UTC Type:0 Mac:52:54:00:b9:df:8c Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-324808 Clientid:01:52:54:00:b9:df:8c}
	I0806 08:33:54.407142   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined IP address 192.168.39.190 and MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:54.407328   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHPort
	I0806 08:33:54.407510   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHKeyPath
	I0806 08:33:54.407647   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHUsername
	I0806 08:33:54.407765   79219 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/embed-certs-324808/id_rsa Username:docker}
	I0806 08:33:54.495482   79219 ssh_runner.go:195] Run: cat /etc/os-release
	I0806 08:33:54.499691   79219 info.go:137] Remote host: Buildroot 2023.02.9
	I0806 08:33:54.499725   79219 filesync.go:126] Scanning /home/jenkins/minikube-integration/19370-13057/.minikube/addons for local assets ...
	I0806 08:33:54.499915   79219 filesync.go:126] Scanning /home/jenkins/minikube-integration/19370-13057/.minikube/files for local assets ...
	I0806 08:33:54.500104   79219 filesync.go:149] local asset: /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem -> 202462.pem in /etc/ssl/certs
	I0806 08:33:54.500243   79219 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0806 08:33:54.509690   79219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem --> /etc/ssl/certs/202462.pem (1708 bytes)
	I0806 08:33:54.534233   79219 start.go:296] duration metric: took 130.420444ms for postStartSetup
	I0806 08:33:54.534282   79219 fix.go:56] duration metric: took 20.324719602s for fixHost
	I0806 08:33:54.534307   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHHostname
	I0806 08:33:54.536871   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:54.537202   79219 main.go:141] libmachine: (embed-certs-324808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:df:8c", ip: ""} in network mk-embed-certs-324808: {Iface:virbr2 ExpiryTime:2024-08-06 09:33:45 +0000 UTC Type:0 Mac:52:54:00:b9:df:8c Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-324808 Clientid:01:52:54:00:b9:df:8c}
	I0806 08:33:54.537234   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined IP address 192.168.39.190 and MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:54.537393   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHPort
	I0806 08:33:54.537562   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHKeyPath
	I0806 08:33:54.537746   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHKeyPath
	I0806 08:33:54.537911   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHUsername
	I0806 08:33:54.538105   79219 main.go:141] libmachine: Using SSH client type: native
	I0806 08:33:54.538285   79219 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.190 22 <nil> <nil>}
	I0806 08:33:54.538295   79219 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0806 08:33:54.653264   79219 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722933234.630593565
	
	I0806 08:33:54.653288   79219 fix.go:216] guest clock: 1722933234.630593565
	I0806 08:33:54.653298   79219 fix.go:229] Guest: 2024-08-06 08:33:54.630593565 +0000 UTC Remote: 2024-08-06 08:33:54.534287484 +0000 UTC m=+280.251105524 (delta=96.306081ms)
	I0806 08:33:54.653327   79219 fix.go:200] guest clock delta is within tolerance: 96.306081ms
	I0806 08:33:54.653337   79219 start.go:83] releasing machines lock for "embed-certs-324808", held for 20.443802529s
	I0806 08:33:54.653361   79219 main.go:141] libmachine: (embed-certs-324808) Calling .DriverName
	I0806 08:33:54.653640   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetIP
	I0806 08:33:54.656332   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:54.656741   79219 main.go:141] libmachine: (embed-certs-324808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:df:8c", ip: ""} in network mk-embed-certs-324808: {Iface:virbr2 ExpiryTime:2024-08-06 09:33:45 +0000 UTC Type:0 Mac:52:54:00:b9:df:8c Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-324808 Clientid:01:52:54:00:b9:df:8c}
	I0806 08:33:54.656766   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined IP address 192.168.39.190 and MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:54.656933   79219 main.go:141] libmachine: (embed-certs-324808) Calling .DriverName
	I0806 08:33:54.657443   79219 main.go:141] libmachine: (embed-certs-324808) Calling .DriverName
	I0806 08:33:54.657623   79219 main.go:141] libmachine: (embed-certs-324808) Calling .DriverName
	I0806 08:33:54.657705   79219 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0806 08:33:54.657748   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHHostname
	I0806 08:33:54.657857   79219 ssh_runner.go:195] Run: cat /version.json
	I0806 08:33:54.657877   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHHostname
	I0806 08:33:54.660302   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:54.660482   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:54.660694   79219 main.go:141] libmachine: (embed-certs-324808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:df:8c", ip: ""} in network mk-embed-certs-324808: {Iface:virbr2 ExpiryTime:2024-08-06 09:33:45 +0000 UTC Type:0 Mac:52:54:00:b9:df:8c Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-324808 Clientid:01:52:54:00:b9:df:8c}
	I0806 08:33:54.660729   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined IP address 192.168.39.190 and MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:54.660823   79219 main.go:141] libmachine: (embed-certs-324808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:df:8c", ip: ""} in network mk-embed-certs-324808: {Iface:virbr2 ExpiryTime:2024-08-06 09:33:45 +0000 UTC Type:0 Mac:52:54:00:b9:df:8c Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-324808 Clientid:01:52:54:00:b9:df:8c}
	I0806 08:33:54.660831   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHPort
	I0806 08:33:54.660844   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined IP address 192.168.39.190 and MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:54.661011   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHPort
	I0806 08:33:54.661094   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHKeyPath
	I0806 08:33:54.661165   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHKeyPath
	I0806 08:33:54.661223   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHUsername
	I0806 08:33:54.661280   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHUsername
	I0806 08:33:54.661338   79219 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/embed-certs-324808/id_rsa Username:docker}
	I0806 08:33:54.661386   79219 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/embed-certs-324808/id_rsa Username:docker}
	I0806 08:33:54.770935   79219 ssh_runner.go:195] Run: systemctl --version
	I0806 08:33:54.777166   79219 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0806 08:33:54.920252   79219 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0806 08:33:54.927740   79219 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0806 08:33:54.927829   79219 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0806 08:33:54.949691   79219 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0806 08:33:54.949720   79219 start.go:495] detecting cgroup driver to use...
	I0806 08:33:54.949794   79219 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0806 08:33:54.967575   79219 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0806 08:33:54.982019   79219 docker.go:217] disabling cri-docker service (if available) ...
	I0806 08:33:54.982094   79219 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0806 08:33:55.001629   79219 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0806 08:33:55.018332   79219 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0806 08:33:55.139207   79219 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0806 08:33:55.302225   79219 docker.go:233] disabling docker service ...
	I0806 08:33:55.302300   79219 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0806 08:33:55.321470   79219 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0806 08:33:55.334925   79219 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0806 08:33:55.456388   79219 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0806 08:33:55.579725   79219 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0806 08:33:55.594362   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0806 08:33:55.613084   79219 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0806 08:33:55.613165   79219 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:33:55.623414   79219 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0806 08:33:55.623487   79219 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:33:55.633837   79219 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:33:55.645893   79219 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:33:55.658623   79219 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0806 08:33:55.670016   79219 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:33:55.681227   79219 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:33:55.700047   79219 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
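
The run of sed commands above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pause image, cgroup manager, conmon cgroup, and the unprivileged-port sysctl. As a small illustration of the same line-oriented rewrite in Go (not how minikube does it), the cgroup_manager edit could look like this:

    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        conf := []byte("[crio.runtime]\ncgroup_manager = \"systemd\"\nconmon_cgroup = \"system.slice\"\n")
        // Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
        re := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
        conf = re.ReplaceAll(conf, []byte(`cgroup_manager = "cgroupfs"`))
        fmt.Print(string(conf))
    }
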
	I0806 08:33:55.711409   79219 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0806 08:33:55.721076   79219 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0806 08:33:55.721138   79219 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0806 08:33:55.734470   79219 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0806 08:33:55.750919   79219 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 08:33:55.880306   79219 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0806 08:33:56.031323   79219 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0806 08:33:56.031401   79219 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0806 08:33:56.036456   79219 start.go:563] Will wait 60s for crictl version
	I0806 08:33:56.036522   79219 ssh_runner.go:195] Run: which crictl
	I0806 08:33:56.040542   79219 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0806 08:33:56.081166   79219 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0806 08:33:56.081258   79219 ssh_runner.go:195] Run: crio --version
	I0806 08:33:56.109963   79219 ssh_runner.go:195] Run: crio --version
	I0806 08:33:56.141118   79219 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0806 08:33:54.680267   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .Start
	I0806 08:33:54.680448   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Ensuring networks are active...
	I0806 08:33:54.681411   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Ensuring network default is active
	I0806 08:33:54.681778   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Ensuring network mk-default-k8s-diff-port-990751 is active
	I0806 08:33:54.682205   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Getting domain xml...
	I0806 08:33:54.683028   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Creating domain...
	I0806 08:33:55.964313   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Waiting to get IP...
	I0806 08:33:55.965483   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:33:55.965965   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | unable to find current IP address of domain default-k8s-diff-port-990751 in network mk-default-k8s-diff-port-990751
	I0806 08:33:55.966053   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | I0806 08:33:55.965953   80608 retry.go:31] will retry after 294.177981ms: waiting for machine to come up
	I0806 08:33:56.261404   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:33:56.262032   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | unable to find current IP address of domain default-k8s-diff-port-990751 in network mk-default-k8s-diff-port-990751
	I0806 08:33:56.262059   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | I0806 08:33:56.261969   80608 retry.go:31] will retry after 375.14393ms: waiting for machine to come up
	I0806 08:33:56.638714   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:33:56.639150   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | unable to find current IP address of domain default-k8s-diff-port-990751 in network mk-default-k8s-diff-port-990751
	I0806 08:33:56.639178   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | I0806 08:33:56.639118   80608 retry.go:31] will retry after 439.582122ms: waiting for machine to come up
	I0806 08:33:57.080785   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:33:57.081335   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | unable to find current IP address of domain default-k8s-diff-port-990751 in network mk-default-k8s-diff-port-990751
	I0806 08:33:57.081373   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | I0806 08:33:57.081299   80608 retry.go:31] will retry after 523.875054ms: waiting for machine to come up
	I0806 08:33:57.606731   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:33:57.607220   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | unable to find current IP address of domain default-k8s-diff-port-990751 in network mk-default-k8s-diff-port-990751
	I0806 08:33:57.607250   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | I0806 08:33:57.607178   80608 retry.go:31] will retry after 727.363483ms: waiting for machine to come up
	I0806 08:33:58.336352   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:33:58.336852   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | unable to find current IP address of domain default-k8s-diff-port-990751 in network mk-default-k8s-diff-port-990751
	I0806 08:33:58.336885   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | I0806 08:33:58.336782   80608 retry.go:31] will retry after 662.304027ms: waiting for machine to come up
	I0806 08:33:56.142523   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetIP
	I0806 08:33:56.145641   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:56.145997   79219 main.go:141] libmachine: (embed-certs-324808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:df:8c", ip: ""} in network mk-embed-certs-324808: {Iface:virbr2 ExpiryTime:2024-08-06 09:33:45 +0000 UTC Type:0 Mac:52:54:00:b9:df:8c Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-324808 Clientid:01:52:54:00:b9:df:8c}
	I0806 08:33:56.146023   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined IP address 192.168.39.190 and MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:56.146239   79219 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0806 08:33:56.150763   79219 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0806 08:33:56.165617   79219 kubeadm.go:883] updating cluster {Name:embed-certs-324808 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.3 ClusterName:embed-certs-324808 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.190 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0806 08:33:56.165775   79219 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0806 08:33:56.165837   79219 ssh_runner.go:195] Run: sudo crictl images --output json
	I0806 08:33:56.205302   79219 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0806 08:33:56.205393   79219 ssh_runner.go:195] Run: which lz4
	I0806 08:33:56.209791   79219 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0806 08:33:56.214338   79219 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0806 08:33:56.214369   79219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0806 08:33:57.735973   79219 crio.go:462] duration metric: took 1.526221921s to copy over tarball
	I0806 08:33:57.736049   79219 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0806 08:33:59.000478   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:33:59.000905   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | unable to find current IP address of domain default-k8s-diff-port-990751 in network mk-default-k8s-diff-port-990751
	I0806 08:33:59.000971   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | I0806 08:33:59.000829   80608 retry.go:31] will retry after 1.110230284s: waiting for machine to come up
	I0806 08:34:00.112658   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:00.113097   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | unable to find current IP address of domain default-k8s-diff-port-990751 in network mk-default-k8s-diff-port-990751
	I0806 08:34:00.113124   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | I0806 08:34:00.113053   80608 retry.go:31] will retry after 1.081213279s: waiting for machine to come up
	I0806 08:34:01.195646   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:01.196062   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | unable to find current IP address of domain default-k8s-diff-port-990751 in network mk-default-k8s-diff-port-990751
	I0806 08:34:01.196097   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | I0806 08:34:01.196014   80608 retry.go:31] will retry after 1.334553019s: waiting for machine to come up
	I0806 08:34:02.532009   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:02.532492   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | unable to find current IP address of domain default-k8s-diff-port-990751 in network mk-default-k8s-diff-port-990751
	I0806 08:34:02.532520   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | I0806 08:34:02.532449   80608 retry.go:31] will retry after 1.42743583s: waiting for machine to come up
	I0806 08:33:59.991340   79219 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.255258522s)
	I0806 08:33:59.991368   79219 crio.go:469] duration metric: took 2.255367027s to extract the tarball
	I0806 08:33:59.991378   79219 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0806 08:34:00.030261   79219 ssh_runner.go:195] Run: sudo crictl images --output json
	I0806 08:34:00.075217   79219 crio.go:514] all images are preloaded for cri-o runtime.
	I0806 08:34:00.075243   79219 cache_images.go:84] Images are preloaded, skipping loading
	I0806 08:34:00.075253   79219 kubeadm.go:934] updating node { 192.168.39.190 8443 v1.30.3 crio true true} ...
	I0806 08:34:00.075396   79219 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-324808 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.190
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:embed-certs-324808 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0806 08:34:00.075477   79219 ssh_runner.go:195] Run: crio config
	I0806 08:34:00.124134   79219 cni.go:84] Creating CNI manager for ""
	I0806 08:34:00.124160   79219 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0806 08:34:00.124174   79219 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0806 08:34:00.124195   79219 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.190 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-324808 NodeName:embed-certs-324808 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.190"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.190 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0806 08:34:00.124348   79219 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.190
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-324808"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.190
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.190"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0806 08:34:00.124446   79219 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0806 08:34:00.134782   79219 binaries.go:44] Found k8s binaries, skipping transfer
	I0806 08:34:00.134874   79219 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0806 08:34:00.144892   79219 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0806 08:34:00.164614   79219 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0806 08:34:00.183911   79219 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
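
The 2162-byte kubeadm.yaml.new written here is the multi-document file printed above: InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration separated by "---". A tiny sketch (illustrative only) that splits such a file and reports the kind of each document:

    package main

    import (
        "fmt"
        "strings"
    )

    func main() {
        // Shortened stand-in for the generated kubeadm.yaml; only the kind
        // lines matter for this illustration.
        raw := "apiVersion: kubeadm.k8s.io/v1beta3\nkind: InitConfiguration\n---\n" +
            "apiVersion: kubeadm.k8s.io/v1beta3\nkind: ClusterConfiguration\n---\n" +
            "apiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\n---\n" +
            "apiVersion: kubeproxy.config.k8s.io/v1alpha1\nkind: KubeProxyConfiguration\n"
        for i, doc := range strings.Split(raw, "\n---\n") {
            for _, line := range strings.Split(doc, "\n") {
                if strings.HasPrefix(strings.TrimSpace(line), "kind:") {
                    fmt.Printf("document %d: %s\n", i+1, strings.TrimSpace(line))
                }
            }
        }
    }
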
	I0806 08:34:00.203821   79219 ssh_runner.go:195] Run: grep 192.168.39.190	control-plane.minikube.internal$ /etc/hosts
	I0806 08:34:00.207980   79219 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.190	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0806 08:34:00.220566   79219 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 08:34:00.339428   79219 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0806 08:34:00.357441   79219 certs.go:68] Setting up /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/embed-certs-324808 for IP: 192.168.39.190
	I0806 08:34:00.357470   79219 certs.go:194] generating shared ca certs ...
	I0806 08:34:00.357486   79219 certs.go:226] acquiring lock for ca certs: {Name:mkf5c5aed91f4108054d9f2c742cad66c86afbed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 08:34:00.357648   79219 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19370-13057/.minikube/ca.key
	I0806 08:34:00.357694   79219 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.key
	I0806 08:34:00.357702   79219 certs.go:256] generating profile certs ...
	I0806 08:34:00.357788   79219 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/embed-certs-324808/client.key
	I0806 08:34:00.357870   79219 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/embed-certs-324808/apiserver.key.078f53b0
	I0806 08:34:00.357907   79219 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/embed-certs-324808/proxy-client.key
	I0806 08:34:00.358033   79219 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/20246.pem (1338 bytes)
	W0806 08:34:00.358070   79219 certs.go:480] ignoring /home/jenkins/minikube-integration/19370-13057/.minikube/certs/20246_empty.pem, impossibly tiny 0 bytes
	I0806 08:34:00.358085   79219 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca-key.pem (1675 bytes)
	I0806 08:34:00.358122   79219 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem (1078 bytes)
	I0806 08:34:00.358151   79219 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/cert.pem (1123 bytes)
	I0806 08:34:00.358174   79219 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/key.pem (1675 bytes)
	I0806 08:34:00.358231   79219 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem (1708 bytes)
	I0806 08:34:00.359196   79219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0806 08:34:00.397544   79219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0806 08:34:00.436824   79219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0806 08:34:00.478633   79219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0806 08:34:00.525641   79219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/embed-certs-324808/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0806 08:34:00.571003   79219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/embed-certs-324808/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0806 08:34:00.596423   79219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/embed-certs-324808/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0806 08:34:00.621129   79219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/embed-certs-324808/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0806 08:34:00.646297   79219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem --> /usr/share/ca-certificates/202462.pem (1708 bytes)
	I0806 08:34:00.672396   79219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0806 08:34:00.698468   79219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/certs/20246.pem --> /usr/share/ca-certificates/20246.pem (1338 bytes)
	I0806 08:34:00.724115   79219 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0806 08:34:00.742200   79219 ssh_runner.go:195] Run: openssl version
	I0806 08:34:00.748418   79219 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202462.pem && ln -fs /usr/share/ca-certificates/202462.pem /etc/ssl/certs/202462.pem"
	I0806 08:34:00.760234   79219 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202462.pem
	I0806 08:34:00.764947   79219 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  6 07:17 /usr/share/ca-certificates/202462.pem
	I0806 08:34:00.765013   79219 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202462.pem
	I0806 08:34:00.771065   79219 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202462.pem /etc/ssl/certs/3ec20f2e.0"
	I0806 08:34:00.782995   79219 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0806 08:34:00.795092   79219 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0806 08:34:00.800011   79219 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  6 07:07 /usr/share/ca-certificates/minikubeCA.pem
	I0806 08:34:00.800079   79219 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0806 08:34:00.806038   79219 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0806 08:34:00.817660   79219 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20246.pem && ln -fs /usr/share/ca-certificates/20246.pem /etc/ssl/certs/20246.pem"
	I0806 08:34:00.829045   79219 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20246.pem
	I0806 08:34:00.833640   79219 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  6 07:17 /usr/share/ca-certificates/20246.pem
	I0806 08:34:00.833704   79219 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20246.pem
	I0806 08:34:00.839618   79219 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20246.pem /etc/ssl/certs/51391683.0"
	I0806 08:34:00.850938   79219 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0806 08:34:00.855577   79219 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0806 08:34:00.861960   79219 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0806 08:34:00.868626   79219 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0806 08:34:00.875291   79219 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0806 08:34:00.881553   79219 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0806 08:34:00.887650   79219 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0806 08:34:00.894134   79219 kubeadm.go:392] StartCluster: {Name:embed-certs-324808 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-324808 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.190 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 08:34:00.894223   79219 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0806 08:34:00.894272   79219 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0806 08:34:00.938980   79219 cri.go:89] found id: ""
	I0806 08:34:00.939046   79219 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0806 08:34:00.951339   79219 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0806 08:34:00.951369   79219 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0806 08:34:00.951430   79219 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0806 08:34:00.962789   79219 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0806 08:34:00.964165   79219 kubeconfig.go:125] found "embed-certs-324808" server: "https://192.168.39.190:8443"
	I0806 08:34:00.967221   79219 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0806 08:34:00.977500   79219 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.190
	I0806 08:34:00.977535   79219 kubeadm.go:1160] stopping kube-system containers ...
	I0806 08:34:00.977547   79219 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0806 08:34:00.977603   79219 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0806 08:34:01.017218   79219 cri.go:89] found id: ""
	I0806 08:34:01.017315   79219 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0806 08:34:01.035305   79219 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0806 08:34:01.046210   79219 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0806 08:34:01.046233   79219 kubeadm.go:157] found existing configuration files:
	
	I0806 08:34:01.046290   79219 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0806 08:34:01.056904   79219 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0806 08:34:01.056974   79219 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0806 08:34:01.066640   79219 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0806 08:34:01.076042   79219 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0806 08:34:01.076114   79219 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0806 08:34:01.085735   79219 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0806 08:34:01.094602   79219 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0806 08:34:01.094661   79219 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0806 08:34:01.104160   79219 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0806 08:34:01.113176   79219 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0806 08:34:01.113249   79219 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0806 08:34:01.122679   79219 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0806 08:34:01.133032   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0806 08:34:01.259256   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0806 08:34:02.252011   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0806 08:34:02.481629   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0806 08:34:02.570215   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0806 08:34:02.710597   79219 api_server.go:52] waiting for apiserver process to appear ...
	I0806 08:34:02.710700   79219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:03.210803   79219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:03.711111   79219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:03.738971   79219 api_server.go:72] duration metric: took 1.028375323s to wait for apiserver process to appear ...
	I0806 08:34:03.739003   79219 api_server.go:88] waiting for apiserver healthz status ...
	I0806 08:34:03.739025   79219 api_server.go:253] Checking apiserver healthz at https://192.168.39.190:8443/healthz ...
	I0806 08:34:03.962232   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:03.962678   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | unable to find current IP address of domain default-k8s-diff-port-990751 in network mk-default-k8s-diff-port-990751
	I0806 08:34:03.962709   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | I0806 08:34:03.962638   80608 retry.go:31] will retry after 2.113116057s: waiting for machine to come up
	I0806 08:34:06.078010   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:06.078537   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | unable to find current IP address of domain default-k8s-diff-port-990751 in network mk-default-k8s-diff-port-990751
	I0806 08:34:06.078570   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | I0806 08:34:06.078503   80608 retry.go:31] will retry after 2.580076274s: waiting for machine to come up
	I0806 08:34:08.662313   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:08.662651   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | unable to find current IP address of domain default-k8s-diff-port-990751 in network mk-default-k8s-diff-port-990751
	I0806 08:34:08.662673   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | I0806 08:34:08.662609   80608 retry.go:31] will retry after 4.365982677s: waiting for machine to come up
	I0806 08:34:06.791233   79219 api_server.go:279] https://192.168.39.190:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0806 08:34:06.791272   79219 api_server.go:103] status: https://192.168.39.190:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0806 08:34:06.791311   79219 api_server.go:253] Checking apiserver healthz at https://192.168.39.190:8443/healthz ...
	I0806 08:34:06.804038   79219 api_server.go:279] https://192.168.39.190:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0806 08:34:06.804074   79219 api_server.go:103] status: https://192.168.39.190:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0806 08:34:07.239192   79219 api_server.go:253] Checking apiserver healthz at https://192.168.39.190:8443/healthz ...
	I0806 08:34:07.244973   79219 api_server.go:279] https://192.168.39.190:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0806 08:34:07.245005   79219 api_server.go:103] status: https://192.168.39.190:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0806 08:34:07.739556   79219 api_server.go:253] Checking apiserver healthz at https://192.168.39.190:8443/healthz ...
	I0806 08:34:07.754354   79219 api_server.go:279] https://192.168.39.190:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0806 08:34:07.754382   79219 api_server.go:103] status: https://192.168.39.190:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0806 08:34:08.240114   79219 api_server.go:253] Checking apiserver healthz at https://192.168.39.190:8443/healthz ...
	I0806 08:34:08.244799   79219 api_server.go:279] https://192.168.39.190:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0806 08:34:08.244829   79219 api_server.go:103] status: https://192.168.39.190:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0806 08:34:08.739407   79219 api_server.go:253] Checking apiserver healthz at https://192.168.39.190:8443/healthz ...
	I0806 08:34:08.743770   79219 api_server.go:279] https://192.168.39.190:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0806 08:34:08.743804   79219 api_server.go:103] status: https://192.168.39.190:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0806 08:34:09.239855   79219 api_server.go:253] Checking apiserver healthz at https://192.168.39.190:8443/healthz ...
	I0806 08:34:09.244078   79219 api_server.go:279] https://192.168.39.190:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0806 08:34:09.244135   79219 api_server.go:103] status: https://192.168.39.190:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0806 08:34:09.739859   79219 api_server.go:253] Checking apiserver healthz at https://192.168.39.190:8443/healthz ...
	I0806 08:34:09.744236   79219 api_server.go:279] https://192.168.39.190:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0806 08:34:09.744272   79219 api_server.go:103] status: https://192.168.39.190:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0806 08:34:10.240036   79219 api_server.go:253] Checking apiserver healthz at https://192.168.39.190:8443/healthz ...
	I0806 08:34:10.244217   79219 api_server.go:279] https://192.168.39.190:8443/healthz returned 200:
	ok
	I0806 08:34:10.250333   79219 api_server.go:141] control plane version: v1.30.3
	I0806 08:34:10.250361   79219 api_server.go:131] duration metric: took 6.511350709s to wait for apiserver health ...
	I0806 08:34:10.250370   79219 cni.go:84] Creating CNI manager for ""
	I0806 08:34:10.250376   79219 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0806 08:34:10.252472   79219 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0806 08:34:10.253779   79219 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0806 08:34:10.264772   79219 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0806 08:34:10.282904   79219 system_pods.go:43] waiting for kube-system pods to appear ...
	I0806 08:34:10.292343   79219 system_pods.go:59] 8 kube-system pods found
	I0806 08:34:10.292397   79219 system_pods.go:61] "coredns-7db6d8ff4d-4xxwm" [eeb47dea-32e9-458f-9ad9-2af6d4e68cc0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0806 08:34:10.292408   79219 system_pods.go:61] "etcd-embed-certs-324808" [1e0a82cd-6704-4da3-8992-15c68a7aaf70] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0806 08:34:10.292419   79219 system_pods.go:61] "kube-apiserver-embed-certs-324808" [a33c54d9-1fc5-4243-8fe9-d535c40908c3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0806 08:34:10.292426   79219 system_pods.go:61] "kube-controller-manager-embed-certs-324808" [b1c519b8-9c96-4c87-90f3-cc277cfe7763] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0806 08:34:10.292432   79219 system_pods.go:61] "kube-proxy-8lbzv" [0c45e6c0-9b7f-4c48-bff5-38d4a5949bec] Running
	I0806 08:34:10.292438   79219 system_pods.go:61] "kube-scheduler-embed-certs-324808" [78c2827f-341a-4441-bf7e-eff947fc3ea8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0806 08:34:10.292445   79219 system_pods.go:61] "metrics-server-569cc877fc-8xb9k" [7a5fb326-11a7-48fa-bcde-a22026a1dccb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0806 08:34:10.292452   79219 system_pods.go:61] "storage-provisioner" [15c7d89a-e994-4cca-931c-f646157a15e8] Running
	I0806 08:34:10.292465   79219 system_pods.go:74] duration metric: took 9.536183ms to wait for pod list to return data ...
	I0806 08:34:10.292474   79219 node_conditions.go:102] verifying NodePressure condition ...
	I0806 08:34:10.295584   79219 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0806 08:34:10.295616   79219 node_conditions.go:123] node cpu capacity is 2
	I0806 08:34:10.295632   79219 node_conditions.go:105] duration metric: took 3.152403ms to run NodePressure ...
	I0806 08:34:10.295654   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0806 08:34:10.566724   79219 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0806 08:34:10.571777   79219 kubeadm.go:739] kubelet initialised
	I0806 08:34:10.571801   79219 kubeadm.go:740] duration metric: took 5.042307ms waiting for restarted kubelet to initialise ...
	I0806 08:34:10.571812   79219 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0806 08:34:10.576738   79219 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-4xxwm" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:10.581943   79219 pod_ready.go:97] node "embed-certs-324808" hosting pod "coredns-7db6d8ff4d-4xxwm" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-324808" has status "Ready":"False"
	I0806 08:34:10.581976   79219 pod_ready.go:81] duration metric: took 5.207743ms for pod "coredns-7db6d8ff4d-4xxwm" in "kube-system" namespace to be "Ready" ...
	E0806 08:34:10.581989   79219 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-324808" hosting pod "coredns-7db6d8ff4d-4xxwm" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-324808" has status "Ready":"False"
	I0806 08:34:10.581997   79219 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-324808" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:10.586983   79219 pod_ready.go:97] node "embed-certs-324808" hosting pod "etcd-embed-certs-324808" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-324808" has status "Ready":"False"
	I0806 08:34:10.587007   79219 pod_ready.go:81] duration metric: took 5.000306ms for pod "etcd-embed-certs-324808" in "kube-system" namespace to be "Ready" ...
	E0806 08:34:10.587016   79219 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-324808" hosting pod "etcd-embed-certs-324808" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-324808" has status "Ready":"False"
	I0806 08:34:10.587022   79219 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-324808" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:10.592042   79219 pod_ready.go:97] node "embed-certs-324808" hosting pod "kube-apiserver-embed-certs-324808" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-324808" has status "Ready":"False"
	I0806 08:34:10.592066   79219 pod_ready.go:81] duration metric: took 5.035031ms for pod "kube-apiserver-embed-certs-324808" in "kube-system" namespace to be "Ready" ...
	E0806 08:34:10.592074   79219 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-324808" hosting pod "kube-apiserver-embed-certs-324808" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-324808" has status "Ready":"False"
	I0806 08:34:10.592080   79219 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-324808" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:10.687156   79219 pod_ready.go:97] node "embed-certs-324808" hosting pod "kube-controller-manager-embed-certs-324808" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-324808" has status "Ready":"False"
	I0806 08:34:10.687193   79219 pod_ready.go:81] duration metric: took 95.100916ms for pod "kube-controller-manager-embed-certs-324808" in "kube-system" namespace to be "Ready" ...
	E0806 08:34:10.687207   79219 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-324808" hosting pod "kube-controller-manager-embed-certs-324808" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-324808" has status "Ready":"False"
	I0806 08:34:10.687217   79219 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-8lbzv" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:11.086180   79219 pod_ready.go:97] node "embed-certs-324808" hosting pod "kube-proxy-8lbzv" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-324808" has status "Ready":"False"
	I0806 08:34:11.086213   79219 pod_ready.go:81] duration metric: took 398.984678ms for pod "kube-proxy-8lbzv" in "kube-system" namespace to be "Ready" ...
	E0806 08:34:11.086224   79219 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-324808" hosting pod "kube-proxy-8lbzv" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-324808" has status "Ready":"False"
	I0806 08:34:11.086230   79219 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-324808" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:11.486971   79219 pod_ready.go:97] node "embed-certs-324808" hosting pod "kube-scheduler-embed-certs-324808" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-324808" has status "Ready":"False"
	I0806 08:34:11.486995   79219 pod_ready.go:81] duration metric: took 400.759449ms for pod "kube-scheduler-embed-certs-324808" in "kube-system" namespace to be "Ready" ...
	E0806 08:34:11.487005   79219 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-324808" hosting pod "kube-scheduler-embed-certs-324808" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-324808" has status "Ready":"False"
	I0806 08:34:11.487011   79219 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:11.887030   79219 pod_ready.go:97] node "embed-certs-324808" hosting pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-324808" has status "Ready":"False"
	I0806 08:34:11.887052   79219 pod_ready.go:81] duration metric: took 400.033763ms for pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace to be "Ready" ...
	E0806 08:34:11.887061   79219 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-324808" hosting pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-324808" has status "Ready":"False"
	I0806 08:34:11.887068   79219 pod_ready.go:38] duration metric: took 1.315245982s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0806 08:34:11.887084   79219 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0806 08:34:11.899633   79219 ops.go:34] apiserver oom_adj: -16
	I0806 08:34:11.899649   79219 kubeadm.go:597] duration metric: took 10.948272769s to restartPrimaryControlPlane
	I0806 08:34:11.899657   79219 kubeadm.go:394] duration metric: took 11.005530169s to StartCluster
	I0806 08:34:11.899676   79219 settings.go:142] acquiring lock: {Name:mkb0bfbcae3b4205ef43487a9de84cfd8afd2e5e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 08:34:11.899745   79219 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19370-13057/kubeconfig
	I0806 08:34:11.901341   79219 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-13057/kubeconfig: {Name:mk5c066914acf8e36556b29792f75b98076ca34a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 08:34:11.901603   79219 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.190 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0806 08:34:11.901676   79219 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0806 08:34:11.901753   79219 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-324808"
	I0806 08:34:11.901785   79219 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-324808"
	W0806 08:34:11.901796   79219 addons.go:243] addon storage-provisioner should already be in state true
	I0806 08:34:11.901788   79219 addons.go:69] Setting default-storageclass=true in profile "embed-certs-324808"
	I0806 08:34:11.901834   79219 host.go:66] Checking if "embed-certs-324808" exists ...
	I0806 08:34:11.901851   79219 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-324808"
	I0806 08:34:11.901854   79219 config.go:182] Loaded profile config "embed-certs-324808": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0806 08:34:11.901842   79219 addons.go:69] Setting metrics-server=true in profile "embed-certs-324808"
	I0806 08:34:11.901921   79219 addons.go:234] Setting addon metrics-server=true in "embed-certs-324808"
	W0806 08:34:11.901940   79219 addons.go:243] addon metrics-server should already be in state true
	I0806 08:34:11.901998   79219 host.go:66] Checking if "embed-certs-324808" exists ...
	I0806 08:34:11.902216   79219 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:34:11.902252   79219 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:34:11.902280   79219 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:34:11.902308   79219 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:34:11.902383   79219 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:34:11.902411   79219 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:34:11.904343   79219 out.go:177] * Verifying Kubernetes components...
	I0806 08:34:11.905934   79219 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 08:34:11.917271   79219 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32971
	I0806 08:34:11.917375   79219 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38975
	I0806 08:34:11.917833   79219 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:34:11.917874   79219 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:34:11.917981   79219 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33569
	I0806 08:34:11.918394   79219 main.go:141] libmachine: Using API Version  1
	I0806 08:34:11.918416   79219 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:34:11.918453   79219 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:34:11.918530   79219 main.go:141] libmachine: Using API Version  1
	I0806 08:34:11.918552   79219 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:34:11.918761   79219 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:34:11.918819   79219 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:34:11.918913   79219 main.go:141] libmachine: Using API Version  1
	I0806 08:34:11.918939   79219 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:34:11.918965   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetState
	I0806 08:34:11.919280   79219 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:34:11.919322   79219 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:34:11.919367   79219 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:34:11.919873   79219 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:34:11.919901   79219 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:34:11.921815   79219 addons.go:234] Setting addon default-storageclass=true in "embed-certs-324808"
	W0806 08:34:11.921834   79219 addons.go:243] addon default-storageclass should already be in state true
	I0806 08:34:11.921856   79219 host.go:66] Checking if "embed-certs-324808" exists ...
	I0806 08:34:11.922132   79219 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:34:11.922168   79219 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:34:11.936191   79219 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40029
	I0806 08:34:11.936678   79219 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:34:11.937209   79219 main.go:141] libmachine: Using API Version  1
	I0806 08:34:11.937234   79219 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:34:11.940428   79219 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35087
	I0806 08:34:11.940508   79219 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38421
	I0806 08:34:11.940845   79219 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:34:11.941345   79219 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:34:11.941346   79219 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:34:11.941707   79219 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:34:11.941752   79219 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:34:11.941831   79219 main.go:141] libmachine: Using API Version  1
	I0806 08:34:11.941872   79219 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:34:11.941904   79219 main.go:141] libmachine: Using API Version  1
	I0806 08:34:11.941917   79219 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:34:11.942273   79219 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:34:11.942359   79219 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:34:11.942514   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetState
	I0806 08:34:11.942532   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetState
	I0806 08:34:11.944317   79219 main.go:141] libmachine: (embed-certs-324808) Calling .DriverName
	I0806 08:34:11.944503   79219 main.go:141] libmachine: (embed-certs-324808) Calling .DriverName
	I0806 08:34:11.946511   79219 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0806 08:34:11.946525   79219 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0806 08:34:11.947929   79219 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0806 08:34:11.947948   79219 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0806 08:34:11.947976   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHHostname
	I0806 08:34:11.948044   79219 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0806 08:34:11.948066   79219 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0806 08:34:11.948085   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHHostname
	I0806 08:34:11.951758   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:34:11.951988   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:34:11.952181   79219 main.go:141] libmachine: (embed-certs-324808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:df:8c", ip: ""} in network mk-embed-certs-324808: {Iface:virbr2 ExpiryTime:2024-08-06 09:33:45 +0000 UTC Type:0 Mac:52:54:00:b9:df:8c Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-324808 Clientid:01:52:54:00:b9:df:8c}
	I0806 08:34:11.952205   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined IP address 192.168.39.190 and MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:34:11.952530   79219 main.go:141] libmachine: (embed-certs-324808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:df:8c", ip: ""} in network mk-embed-certs-324808: {Iface:virbr2 ExpiryTime:2024-08-06 09:33:45 +0000 UTC Type:0 Mac:52:54:00:b9:df:8c Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-324808 Clientid:01:52:54:00:b9:df:8c}
	I0806 08:34:11.952565   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHPort
	I0806 08:34:11.952566   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined IP address 192.168.39.190 and MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:34:11.952693   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHPort
	I0806 08:34:11.952819   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHKeyPath
	I0806 08:34:11.952858   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHKeyPath
	I0806 08:34:11.952970   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHUsername
	I0806 08:34:11.952974   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHUsername
	I0806 08:34:11.953143   79219 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/embed-certs-324808/id_rsa Username:docker}
	I0806 08:34:11.953207   79219 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/embed-certs-324808/id_rsa Username:docker}
	I0806 08:34:11.959949   79219 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35275
	I0806 08:34:11.960353   79219 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:34:11.960900   79219 main.go:141] libmachine: Using API Version  1
	I0806 08:34:11.960929   79219 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:34:11.961241   79219 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:34:11.961451   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetState
	I0806 08:34:11.963031   79219 main.go:141] libmachine: (embed-certs-324808) Calling .DriverName
	I0806 08:34:11.963333   79219 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0806 08:34:11.963349   79219 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0806 08:34:11.963366   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHHostname
	I0806 08:34:11.966079   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:34:11.966448   79219 main.go:141] libmachine: (embed-certs-324808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:df:8c", ip: ""} in network mk-embed-certs-324808: {Iface:virbr2 ExpiryTime:2024-08-06 09:33:45 +0000 UTC Type:0 Mac:52:54:00:b9:df:8c Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-324808 Clientid:01:52:54:00:b9:df:8c}
	I0806 08:34:11.966481   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined IP address 192.168.39.190 and MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:34:11.966559   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHPort
	I0806 08:34:11.966737   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHKeyPath
	I0806 08:34:11.966881   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHUsername
	I0806 08:34:11.967013   79219 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/embed-certs-324808/id_rsa Username:docker}
	I0806 08:34:12.081433   79219 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0806 08:34:12.100701   79219 node_ready.go:35] waiting up to 6m0s for node "embed-certs-324808" to be "Ready" ...
	I0806 08:34:12.196131   79219 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0806 08:34:12.196151   79219 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0806 08:34:12.201603   79219 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0806 08:34:12.217590   79219 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0806 08:34:12.217626   79219 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0806 08:34:12.239354   79219 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0806 08:34:12.239377   79219 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0806 08:34:12.254418   79219 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0806 08:34:12.266126   79219 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0806 08:34:13.441725   79219 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.187269206s)
	I0806 08:34:13.441780   79219 main.go:141] libmachine: Making call to close driver server
	I0806 08:34:13.441791   79219 main.go:141] libmachine: (embed-certs-324808) Calling .Close
	I0806 08:34:13.441856   79219 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.240222324s)
	I0806 08:34:13.441895   79219 main.go:141] libmachine: Making call to close driver server
	I0806 08:34:13.441910   79219 main.go:141] libmachine: (embed-certs-324808) Calling .Close
	I0806 08:34:13.442096   79219 main.go:141] libmachine: (embed-certs-324808) DBG | Closing plugin on server side
	I0806 08:34:13.442116   79219 main.go:141] libmachine: Successfully made call to close driver server
	I0806 08:34:13.442128   79219 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 08:34:13.442136   79219 main.go:141] libmachine: Making call to close driver server
	I0806 08:34:13.442147   79219 main.go:141] libmachine: (embed-certs-324808) Calling .Close
	I0806 08:34:13.442211   79219 main.go:141] libmachine: Successfully made call to close driver server
	I0806 08:34:13.442225   79219 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 08:34:13.442235   79219 main.go:141] libmachine: Making call to close driver server
	I0806 08:34:13.442245   79219 main.go:141] libmachine: (embed-certs-324808) Calling .Close
	I0806 08:34:13.442414   79219 main.go:141] libmachine: Successfully made call to close driver server
	I0806 08:34:13.442424   79219 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 08:34:13.443716   79219 main.go:141] libmachine: (embed-certs-324808) DBG | Closing plugin on server side
	I0806 08:34:13.443727   79219 main.go:141] libmachine: Successfully made call to close driver server
	I0806 08:34:13.443748   79219 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 08:34:13.448919   79219 main.go:141] libmachine: Making call to close driver server
	I0806 08:34:13.448945   79219 main.go:141] libmachine: (embed-certs-324808) Calling .Close
	I0806 08:34:13.449225   79219 main.go:141] libmachine: Successfully made call to close driver server
	I0806 08:34:13.449242   79219 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 08:34:13.521481   79219 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.255292404s)
	I0806 08:34:13.521542   79219 main.go:141] libmachine: Making call to close driver server
	I0806 08:34:13.521561   79219 main.go:141] libmachine: (embed-certs-324808) Calling .Close
	I0806 08:34:13.521875   79219 main.go:141] libmachine: Successfully made call to close driver server
	I0806 08:34:13.521898   79219 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 08:34:13.521899   79219 main.go:141] libmachine: (embed-certs-324808) DBG | Closing plugin on server side
	I0806 08:34:13.521916   79219 main.go:141] libmachine: Making call to close driver server
	I0806 08:34:13.521927   79219 main.go:141] libmachine: (embed-certs-324808) Calling .Close
	I0806 08:34:13.522147   79219 main.go:141] libmachine: Successfully made call to close driver server
	I0806 08:34:13.522165   79219 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 08:34:13.522175   79219 addons.go:475] Verifying addon metrics-server=true in "embed-certs-324808"
	I0806 08:34:13.522185   79219 main.go:141] libmachine: (embed-certs-324808) DBG | Closing plugin on server side
	I0806 08:34:13.524209   79219 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
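	For reference, the metrics-server apply step above reduces to a single kubectl invocation on the guest; reformatted here for readability (a sketch of the command logged at 08:34:12.266, not additional captured output):
	  sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	    /var/lib/minikube/binaries/v1.30.3/kubectl apply \
	      -f /etc/kubernetes/addons/metrics-apiservice.yaml \
	      -f /etc/kubernetes/addons/metrics-server-deployment.yaml \
	      -f /etc/kubernetes/addons/metrics-server-rbac.yaml \
	      -f /etc/kubernetes/addons/metrics-server-service.yaml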
	I0806 08:34:13.032813   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:13.033366   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Found IP for machine: 192.168.50.235
	I0806 08:34:13.033394   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Reserving static IP address...
	I0806 08:34:13.033414   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has current primary IP address 192.168.50.235 and MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:13.033956   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-990751", mac: "52:54:00:fb:51:bd", ip: "192.168.50.235"} in network mk-default-k8s-diff-port-990751: {Iface:virbr3 ExpiryTime:2024-08-06 09:34:06 +0000 UTC Type:0 Mac:52:54:00:fb:51:bd Iaid: IPaddr:192.168.50.235 Prefix:24 Hostname:default-k8s-diff-port-990751 Clientid:01:52:54:00:fb:51:bd}
	I0806 08:34:13.033992   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Reserved static IP address: 192.168.50.235
	I0806 08:34:13.034012   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | skip adding static IP to network mk-default-k8s-diff-port-990751 - found existing host DHCP lease matching {name: "default-k8s-diff-port-990751", mac: "52:54:00:fb:51:bd", ip: "192.168.50.235"}
	I0806 08:34:13.034031   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | Getting to WaitForSSH function...
	I0806 08:34:13.034046   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Waiting for SSH to be available...
	I0806 08:34:13.036483   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:13.036843   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:51:bd", ip: ""} in network mk-default-k8s-diff-port-990751: {Iface:virbr3 ExpiryTime:2024-08-06 09:34:06 +0000 UTC Type:0 Mac:52:54:00:fb:51:bd Iaid: IPaddr:192.168.50.235 Prefix:24 Hostname:default-k8s-diff-port-990751 Clientid:01:52:54:00:fb:51:bd}
	I0806 08:34:13.036874   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined IP address 192.168.50.235 and MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:13.037013   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | Using SSH client type: external
	I0806 08:34:13.037042   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | Using SSH private key: /home/jenkins/minikube-integration/19370-13057/.minikube/machines/default-k8s-diff-port-990751/id_rsa (-rw-------)
	I0806 08:34:13.037066   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.235 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19370-13057/.minikube/machines/default-k8s-diff-port-990751/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0806 08:34:13.037078   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | About to run SSH command:
	I0806 08:34:13.037119   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | exit 0
	I0806 08:34:13.164932   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | SSH cmd err, output: <nil>: 
	I0806 08:34:13.165301   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetConfigRaw
	I0806 08:34:13.165892   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetIP
	I0806 08:34:13.169003   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:13.169408   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:51:bd", ip: ""} in network mk-default-k8s-diff-port-990751: {Iface:virbr3 ExpiryTime:2024-08-06 09:34:06 +0000 UTC Type:0 Mac:52:54:00:fb:51:bd Iaid: IPaddr:192.168.50.235 Prefix:24 Hostname:default-k8s-diff-port-990751 Clientid:01:52:54:00:fb:51:bd}
	I0806 08:34:13.169449   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined IP address 192.168.50.235 and MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:13.169783   79614 profile.go:143] Saving config to /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/default-k8s-diff-port-990751/config.json ...
	I0806 08:34:13.170431   79614 machine.go:94] provisionDockerMachine start ...
	I0806 08:34:13.170452   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .DriverName
	I0806 08:34:13.170703   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHHostname
	I0806 08:34:13.173536   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:13.173994   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:51:bd", ip: ""} in network mk-default-k8s-diff-port-990751: {Iface:virbr3 ExpiryTime:2024-08-06 09:34:06 +0000 UTC Type:0 Mac:52:54:00:fb:51:bd Iaid: IPaddr:192.168.50.235 Prefix:24 Hostname:default-k8s-diff-port-990751 Clientid:01:52:54:00:fb:51:bd}
	I0806 08:34:13.174022   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined IP address 192.168.50.235 and MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:13.174241   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHPort
	I0806 08:34:13.174456   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHKeyPath
	I0806 08:34:13.174632   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHKeyPath
	I0806 08:34:13.174789   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHUsername
	I0806 08:34:13.175056   79614 main.go:141] libmachine: Using SSH client type: native
	I0806 08:34:13.175329   79614 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.235 22 <nil> <nil>}
	I0806 08:34:13.175344   79614 main.go:141] libmachine: About to run SSH command:
	hostname
	I0806 08:34:13.281288   79614 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0806 08:34:13.281322   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetMachineName
	I0806 08:34:13.281573   79614 buildroot.go:166] provisioning hostname "default-k8s-diff-port-990751"
	I0806 08:34:13.281606   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetMachineName
	I0806 08:34:13.281808   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHHostname
	I0806 08:34:13.284463   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:13.284825   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:51:bd", ip: ""} in network mk-default-k8s-diff-port-990751: {Iface:virbr3 ExpiryTime:2024-08-06 09:34:06 +0000 UTC Type:0 Mac:52:54:00:fb:51:bd Iaid: IPaddr:192.168.50.235 Prefix:24 Hostname:default-k8s-diff-port-990751 Clientid:01:52:54:00:fb:51:bd}
	I0806 08:34:13.284860   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined IP address 192.168.50.235 and MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:13.284988   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHPort
	I0806 08:34:13.285178   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHKeyPath
	I0806 08:34:13.285350   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHKeyPath
	I0806 08:34:13.285468   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHUsername
	I0806 08:34:13.285614   79614 main.go:141] libmachine: Using SSH client type: native
	I0806 08:34:13.285807   79614 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.235 22 <nil> <nil>}
	I0806 08:34:13.285821   79614 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-990751 && echo "default-k8s-diff-port-990751" | sudo tee /etc/hostname
	I0806 08:34:13.413970   79614 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-990751
	
	I0806 08:34:13.414005   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHHostname
	I0806 08:34:13.417279   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:13.417708   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:51:bd", ip: ""} in network mk-default-k8s-diff-port-990751: {Iface:virbr3 ExpiryTime:2024-08-06 09:34:06 +0000 UTC Type:0 Mac:52:54:00:fb:51:bd Iaid: IPaddr:192.168.50.235 Prefix:24 Hostname:default-k8s-diff-port-990751 Clientid:01:52:54:00:fb:51:bd}
	I0806 08:34:13.417757   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined IP address 192.168.50.235 and MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:13.417947   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHPort
	I0806 08:34:13.418153   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHKeyPath
	I0806 08:34:13.418337   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHKeyPath
	I0806 08:34:13.418495   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHUsername
	I0806 08:34:13.418683   79614 main.go:141] libmachine: Using SSH client type: native
	I0806 08:34:13.418935   79614 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.235 22 <nil> <nil>}
	I0806 08:34:13.418970   79614 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-990751' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-990751/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-990751' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0806 08:34:13.534550   79614 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0806 08:34:13.534578   79614 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19370-13057/.minikube CaCertPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19370-13057/.minikube}
	I0806 08:34:13.534630   79614 buildroot.go:174] setting up certificates
	I0806 08:34:13.534647   79614 provision.go:84] configureAuth start
	I0806 08:34:13.534664   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetMachineName
	I0806 08:34:13.534962   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetIP
	I0806 08:34:13.537582   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:13.538144   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:51:bd", ip: ""} in network mk-default-k8s-diff-port-990751: {Iface:virbr3 ExpiryTime:2024-08-06 09:34:06 +0000 UTC Type:0 Mac:52:54:00:fb:51:bd Iaid: IPaddr:192.168.50.235 Prefix:24 Hostname:default-k8s-diff-port-990751 Clientid:01:52:54:00:fb:51:bd}
	I0806 08:34:13.538173   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined IP address 192.168.50.235 and MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:13.538329   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHHostname
	I0806 08:34:13.541132   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:13.541422   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:51:bd", ip: ""} in network mk-default-k8s-diff-port-990751: {Iface:virbr3 ExpiryTime:2024-08-06 09:34:06 +0000 UTC Type:0 Mac:52:54:00:fb:51:bd Iaid: IPaddr:192.168.50.235 Prefix:24 Hostname:default-k8s-diff-port-990751 Clientid:01:52:54:00:fb:51:bd}
	I0806 08:34:13.541447   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined IP address 192.168.50.235 and MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:13.541638   79614 provision.go:143] copyHostCerts
	I0806 08:34:13.541693   79614 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-13057/.minikube/ca.pem, removing ...
	I0806 08:34:13.541703   79614 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-13057/.minikube/ca.pem
	I0806 08:34:13.541761   79614 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19370-13057/.minikube/ca.pem (1078 bytes)
	I0806 08:34:13.541862   79614 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-13057/.minikube/cert.pem, removing ...
	I0806 08:34:13.541870   79614 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-13057/.minikube/cert.pem
	I0806 08:34:13.541891   79614 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19370-13057/.minikube/cert.pem (1123 bytes)
	I0806 08:34:13.541943   79614 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-13057/.minikube/key.pem, removing ...
	I0806 08:34:13.541950   79614 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-13057/.minikube/key.pem
	I0806 08:34:13.541966   79614 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19370-13057/.minikube/key.pem (1675 bytes)
	I0806 08:34:13.542013   79614 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19370-13057/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-990751 san=[127.0.0.1 192.168.50.235 default-k8s-diff-port-990751 localhost minikube]
	I0806 08:34:13.632290   79614 provision.go:177] copyRemoteCerts
	I0806 08:34:13.632345   79614 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0806 08:34:13.632389   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHHostname
	I0806 08:34:13.635332   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:13.635619   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:51:bd", ip: ""} in network mk-default-k8s-diff-port-990751: {Iface:virbr3 ExpiryTime:2024-08-06 09:34:06 +0000 UTC Type:0 Mac:52:54:00:fb:51:bd Iaid: IPaddr:192.168.50.235 Prefix:24 Hostname:default-k8s-diff-port-990751 Clientid:01:52:54:00:fb:51:bd}
	I0806 08:34:13.635644   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined IP address 192.168.50.235 and MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:13.635868   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHPort
	I0806 08:34:13.636069   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHKeyPath
	I0806 08:34:13.636208   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHUsername
	I0806 08:34:13.636381   79614 sshutil.go:53] new ssh client: &{IP:192.168.50.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/default-k8s-diff-port-990751/id_rsa Username:docker}
	I0806 08:34:13.719415   79614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0806 08:34:13.745981   79614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0806 08:34:13.775906   79614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0806 08:34:13.806948   79614 provision.go:87] duration metric: took 272.283009ms to configureAuth
	I0806 08:34:13.806986   79614 buildroot.go:189] setting minikube options for container-runtime
	I0806 08:34:13.807234   79614 config.go:182] Loaded profile config "default-k8s-diff-port-990751": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0806 08:34:13.807328   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHHostname
	I0806 08:34:13.810500   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:13.810895   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:51:bd", ip: ""} in network mk-default-k8s-diff-port-990751: {Iface:virbr3 ExpiryTime:2024-08-06 09:34:06 +0000 UTC Type:0 Mac:52:54:00:fb:51:bd Iaid: IPaddr:192.168.50.235 Prefix:24 Hostname:default-k8s-diff-port-990751 Clientid:01:52:54:00:fb:51:bd}
	I0806 08:34:13.810928   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined IP address 192.168.50.235 and MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:13.811074   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHPort
	I0806 08:34:13.811302   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHKeyPath
	I0806 08:34:13.811497   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHKeyPath
	I0806 08:34:13.811627   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHUsername
	I0806 08:34:13.811896   79614 main.go:141] libmachine: Using SSH client type: native
	I0806 08:34:13.812124   79614 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.235 22 <nil> <nil>}
	I0806 08:34:13.812148   79614 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0806 08:34:13.525425   79219 addons.go:510] duration metric: took 1.623751418s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0806 08:34:14.104863   79219 node_ready.go:53] node "embed-certs-324808" has status "Ready":"False"
	I0806 08:34:14.353523   79927 start.go:364] duration metric: took 3m15.487259208s to acquireMachinesLock for "old-k8s-version-409903"
	I0806 08:34:14.353614   79927 start.go:96] Skipping create...Using existing machine configuration
	I0806 08:34:14.353629   79927 fix.go:54] fixHost starting: 
	I0806 08:34:14.354059   79927 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:34:14.354096   79927 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:34:14.373696   79927 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33985
	I0806 08:34:14.374148   79927 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:34:14.374646   79927 main.go:141] libmachine: Using API Version  1
	I0806 08:34:14.374672   79927 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:34:14.374995   79927 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:34:14.375277   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .DriverName
	I0806 08:34:14.375472   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetState
	I0806 08:34:14.377209   79927 fix.go:112] recreateIfNeeded on old-k8s-version-409903: state=Stopped err=<nil>
	I0806 08:34:14.377251   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .DriverName
	W0806 08:34:14.377387   79927 fix.go:138] unexpected machine state, will restart: <nil>
	I0806 08:34:14.379029   79927 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-409903" ...
	I0806 08:34:14.113926   79614 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0806 08:34:14.113953   79614 machine.go:97] duration metric: took 943.50747ms to provisionDockerMachine
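	For reference, the container-runtime provisioning step logged at 08:34:13.812 writes a one-line drop-in and then restarts CRI-O; the resulting file (shown here as a sketch assembled from the command and its echoed SSH output above, not as separately captured file content) is:
	  # /etc/sysconfig/crio.minikube
	  CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '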
	I0806 08:34:14.113967   79614 start.go:293] postStartSetup for "default-k8s-diff-port-990751" (driver="kvm2")
	I0806 08:34:14.113982   79614 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0806 08:34:14.114004   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .DriverName
	I0806 08:34:14.114313   79614 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0806 08:34:14.114339   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHHostname
	I0806 08:34:14.116618   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:14.117028   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:51:bd", ip: ""} in network mk-default-k8s-diff-port-990751: {Iface:virbr3 ExpiryTime:2024-08-06 09:34:06 +0000 UTC Type:0 Mac:52:54:00:fb:51:bd Iaid: IPaddr:192.168.50.235 Prefix:24 Hostname:default-k8s-diff-port-990751 Clientid:01:52:54:00:fb:51:bd}
	I0806 08:34:14.117067   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined IP address 192.168.50.235 and MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:14.117162   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHPort
	I0806 08:34:14.117371   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHKeyPath
	I0806 08:34:14.117545   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHUsername
	I0806 08:34:14.117738   79614 sshutil.go:53] new ssh client: &{IP:192.168.50.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/default-k8s-diff-port-990751/id_rsa Username:docker}
	I0806 08:34:14.199514   79614 ssh_runner.go:195] Run: cat /etc/os-release
	I0806 08:34:14.204063   79614 info.go:137] Remote host: Buildroot 2023.02.9
	I0806 08:34:14.204084   79614 filesync.go:126] Scanning /home/jenkins/minikube-integration/19370-13057/.minikube/addons for local assets ...
	I0806 08:34:14.204139   79614 filesync.go:126] Scanning /home/jenkins/minikube-integration/19370-13057/.minikube/files for local assets ...
	I0806 08:34:14.204219   79614 filesync.go:149] local asset: /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem -> 202462.pem in /etc/ssl/certs
	I0806 08:34:14.204308   79614 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0806 08:34:14.213847   79614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem --> /etc/ssl/certs/202462.pem (1708 bytes)
	I0806 08:34:14.241524   79614 start.go:296] duration metric: took 127.543182ms for postStartSetup
	I0806 08:34:14.241570   79614 fix.go:56] duration metric: took 19.588116399s for fixHost
	I0806 08:34:14.241590   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHHostname
	I0806 08:34:14.244333   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:14.244702   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:51:bd", ip: ""} in network mk-default-k8s-diff-port-990751: {Iface:virbr3 ExpiryTime:2024-08-06 09:34:06 +0000 UTC Type:0 Mac:52:54:00:fb:51:bd Iaid: IPaddr:192.168.50.235 Prefix:24 Hostname:default-k8s-diff-port-990751 Clientid:01:52:54:00:fb:51:bd}
	I0806 08:34:14.244733   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined IP address 192.168.50.235 and MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:14.244936   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHPort
	I0806 08:34:14.245178   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHKeyPath
	I0806 08:34:14.245338   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHKeyPath
	I0806 08:34:14.245497   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHUsername
	I0806 08:34:14.245637   79614 main.go:141] libmachine: Using SSH client type: native
	I0806 08:34:14.245783   79614 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.235 22 <nil> <nil>}
	I0806 08:34:14.245793   79614 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0806 08:34:14.353301   79614 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722933254.326161993
	
	I0806 08:34:14.353325   79614 fix.go:216] guest clock: 1722933254.326161993
	I0806 08:34:14.353339   79614 fix.go:229] Guest: 2024-08-06 08:34:14.326161993 +0000 UTC Remote: 2024-08-06 08:34:14.2415744 +0000 UTC m=+235.446489438 (delta=84.587593ms)
	I0806 08:34:14.353389   79614 fix.go:200] guest clock delta is within tolerance: 84.587593ms
	I0806 08:34:14.353398   79614 start.go:83] releasing machines lock for "default-k8s-diff-port-990751", held for 19.699973637s
	I0806 08:34:14.353432   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .DriverName
	I0806 08:34:14.353751   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetIP
	I0806 08:34:14.356590   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:14.357031   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:51:bd", ip: ""} in network mk-default-k8s-diff-port-990751: {Iface:virbr3 ExpiryTime:2024-08-06 09:34:06 +0000 UTC Type:0 Mac:52:54:00:fb:51:bd Iaid: IPaddr:192.168.50.235 Prefix:24 Hostname:default-k8s-diff-port-990751 Clientid:01:52:54:00:fb:51:bd}
	I0806 08:34:14.357065   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined IP address 192.168.50.235 and MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:14.357349   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .DriverName
	I0806 08:34:14.357896   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .DriverName
	I0806 08:34:14.358125   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .DriverName
	I0806 08:34:14.358222   79614 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0806 08:34:14.358265   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHHostname
	I0806 08:34:14.358387   79614 ssh_runner.go:195] Run: cat /version.json
	I0806 08:34:14.358415   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHHostname
	I0806 08:34:14.361131   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:14.361444   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:14.361590   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:51:bd", ip: ""} in network mk-default-k8s-diff-port-990751: {Iface:virbr3 ExpiryTime:2024-08-06 09:34:06 +0000 UTC Type:0 Mac:52:54:00:fb:51:bd Iaid: IPaddr:192.168.50.235 Prefix:24 Hostname:default-k8s-diff-port-990751 Clientid:01:52:54:00:fb:51:bd}
	I0806 08:34:14.361629   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined IP address 192.168.50.235 and MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:14.361801   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHPort
	I0806 08:34:14.361905   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:51:bd", ip: ""} in network mk-default-k8s-diff-port-990751: {Iface:virbr3 ExpiryTime:2024-08-06 09:34:06 +0000 UTC Type:0 Mac:52:54:00:fb:51:bd Iaid: IPaddr:192.168.50.235 Prefix:24 Hostname:default-k8s-diff-port-990751 Clientid:01:52:54:00:fb:51:bd}
	I0806 08:34:14.361940   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined IP address 192.168.50.235 and MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:14.362025   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHKeyPath
	I0806 08:34:14.362228   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHUsername
	I0806 08:34:14.362242   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHPort
	I0806 08:34:14.362420   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHKeyPath
	I0806 08:34:14.362424   79614 sshutil.go:53] new ssh client: &{IP:192.168.50.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/default-k8s-diff-port-990751/id_rsa Username:docker}
	I0806 08:34:14.362600   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHUsername
	I0806 08:34:14.362717   79614 sshutil.go:53] new ssh client: &{IP:192.168.50.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/default-k8s-diff-port-990751/id_rsa Username:docker}
	I0806 08:34:14.469232   79614 ssh_runner.go:195] Run: systemctl --version
	I0806 08:34:14.476929   79614 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0806 08:34:14.629487   79614 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0806 08:34:14.637276   79614 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0806 08:34:14.637349   79614 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0806 08:34:14.654612   79614 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0806 08:34:14.654634   79614 start.go:495] detecting cgroup driver to use...
	I0806 08:34:14.654699   79614 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0806 08:34:14.672255   79614 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0806 08:34:14.689135   79614 docker.go:217] disabling cri-docker service (if available) ...
	I0806 08:34:14.689203   79614 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0806 08:34:14.705745   79614 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0806 08:34:14.721267   79614 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0806 08:34:14.841773   79614 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0806 08:34:14.982757   79614 docker.go:233] disabling docker service ...
	I0806 08:34:14.982837   79614 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0806 08:34:14.997637   79614 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0806 08:34:15.011404   79614 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0806 08:34:15.167108   79614 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0806 08:34:15.286749   79614 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0806 08:34:15.301818   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0806 08:34:15.320848   79614 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0806 08:34:15.320920   79614 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:34:15.331331   79614 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0806 08:34:15.331414   79614 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:34:15.342763   79614 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:34:15.353534   79614 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:34:15.365593   79614 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0806 08:34:15.379804   79614 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:34:15.390200   79614 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:34:15.410545   79614 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
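	For reference, the sed commands above edit /etc/crio/crio.conf.d/02-crio.conf in place; inferring from the expressions in the log (a reconstruction, not captured file content), the file ends up containing roughly:
	  pause_image = "registry.k8s.io/pause:3.9"
	  cgroup_manager = "cgroupfs"
	  conmon_cgroup = "pod"
	  default_sysctls = [
	    "net.ipv4.ip_unprivileged_port_start=0",
	  ]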
	I0806 08:34:15.423050   79614 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0806 08:34:15.432810   79614 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0806 08:34:15.432878   79614 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0806 08:34:15.447298   79614 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0806 08:34:15.457992   79614 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 08:34:15.606288   79614 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0806 08:34:15.792322   79614 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0806 08:34:15.792411   79614 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0806 08:34:15.798556   79614 start.go:563] Will wait 60s for crictl version
	I0806 08:34:15.798615   79614 ssh_runner.go:195] Run: which crictl
	I0806 08:34:15.802732   79614 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0806 08:34:15.856880   79614 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0806 08:34:15.856963   79614 ssh_runner.go:195] Run: crio --version
	I0806 08:34:15.890496   79614 ssh_runner.go:195] Run: crio --version
	I0806 08:34:15.924486   79614 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0806 08:34:14.380306   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .Start
	I0806 08:34:14.380486   79927 main.go:141] libmachine: (old-k8s-version-409903) Ensuring networks are active...
	I0806 08:34:14.381155   79927 main.go:141] libmachine: (old-k8s-version-409903) Ensuring network default is active
	I0806 08:34:14.381514   79927 main.go:141] libmachine: (old-k8s-version-409903) Ensuring network mk-old-k8s-version-409903 is active
	I0806 08:34:14.381820   79927 main.go:141] libmachine: (old-k8s-version-409903) Getting domain xml...
	I0806 08:34:14.382679   79927 main.go:141] libmachine: (old-k8s-version-409903) Creating domain...
	I0806 08:34:15.725416   79927 main.go:141] libmachine: (old-k8s-version-409903) Waiting to get IP...
	I0806 08:34:15.726492   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:15.727078   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | unable to find current IP address of domain old-k8s-version-409903 in network mk-old-k8s-version-409903
	I0806 08:34:15.727276   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | I0806 08:34:15.727179   80849 retry.go:31] will retry after 303.235731ms: waiting for machine to come up
	I0806 08:34:16.031996   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:16.032592   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | unable to find current IP address of domain old-k8s-version-409903 in network mk-old-k8s-version-409903
	I0806 08:34:16.032621   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | I0806 08:34:16.032542   80849 retry.go:31] will retry after 355.466914ms: waiting for machine to come up
	I0806 08:34:16.390212   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:16.390736   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | unable to find current IP address of domain old-k8s-version-409903 in network mk-old-k8s-version-409903
	I0806 08:34:16.390768   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | I0806 08:34:16.390674   80849 retry.go:31] will retry after 432.942387ms: waiting for machine to come up
	I0806 08:34:16.825739   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:16.826366   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | unable to find current IP address of domain old-k8s-version-409903 in network mk-old-k8s-version-409903
	I0806 08:34:16.826390   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | I0806 08:34:16.826317   80849 retry.go:31] will retry after 481.17687ms: waiting for machine to come up
	I0806 08:34:17.309005   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:17.309768   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | unable to find current IP address of domain old-k8s-version-409903 in network mk-old-k8s-version-409903
	I0806 08:34:17.309799   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | I0806 08:34:17.309745   80849 retry.go:31] will retry after 631.184181ms: waiting for machine to come up
	I0806 08:34:17.942220   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:17.942742   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | unable to find current IP address of domain old-k8s-version-409903 in network mk-old-k8s-version-409903
	I0806 08:34:17.942766   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | I0806 08:34:17.942701   80849 retry.go:31] will retry after 602.253362ms: waiting for machine to come up
	I0806 08:34:18.546333   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:18.546918   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | unable to find current IP address of domain old-k8s-version-409903 in network mk-old-k8s-version-409903
	I0806 08:34:18.546944   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | I0806 08:34:18.546855   80849 retry.go:31] will retry after 1.061939923s: waiting for machine to come up
	I0806 08:34:15.925709   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetIP
	I0806 08:34:15.929420   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:15.929901   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:51:bd", ip: ""} in network mk-default-k8s-diff-port-990751: {Iface:virbr3 ExpiryTime:2024-08-06 09:34:06 +0000 UTC Type:0 Mac:52:54:00:fb:51:bd Iaid: IPaddr:192.168.50.235 Prefix:24 Hostname:default-k8s-diff-port-990751 Clientid:01:52:54:00:fb:51:bd}
	I0806 08:34:15.929931   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined IP address 192.168.50.235 and MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:15.930159   79614 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0806 08:34:15.934916   79614 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0806 08:34:15.949815   79614 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-990751 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-990751 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.235 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0806 08:34:15.949971   79614 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0806 08:34:15.950054   79614 ssh_runner.go:195] Run: sudo crictl images --output json
	I0806 08:34:16.004551   79614 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0806 08:34:16.004631   79614 ssh_runner.go:195] Run: which lz4
	I0806 08:34:16.011388   79614 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0806 08:34:16.017475   79614 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0806 08:34:16.017511   79614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0806 08:34:17.564750   79614 crio.go:462] duration metric: took 1.553401521s to copy over tarball
	I0806 08:34:17.564840   79614 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
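The preload sequence above stats /preloaded.tar.lz4, copies the cached image tarball into the guest, and unpacks it under /var. A rough local sketch of the copy-and-extract step, using plain os/exec instead of minikube's ssh_runner; the paths and error handling are assumptions:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// extractPreload unpacks an lz4-compressed image tarball, using the same tar flags
// as the logged command (preserve xattrs, decompress with lz4).
func extractPreload(tarball, dest string) error {
	if _, err := os.Stat(tarball); err != nil {
		// Mirrors the existence check in the log: missing tarball means nothing to extract.
		return fmt.Errorf("preload tarball not found: %w", err)
	}
	cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", dest, "-xf", tarball)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	if err := extractPreload("/preloaded.tar.lz4", "/var"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}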
	I0806 08:34:16.106922   79219 node_ready.go:53] node "embed-certs-324808" has status "Ready":"False"
	I0806 08:34:17.104735   79219 node_ready.go:49] node "embed-certs-324808" has status "Ready":"True"
	I0806 08:34:17.104778   79219 node_ready.go:38] duration metric: took 5.004030652s for node "embed-certs-324808" to be "Ready" ...
	I0806 08:34:17.104790   79219 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0806 08:34:17.113668   79219 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-4xxwm" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:17.120777   79219 pod_ready.go:92] pod "coredns-7db6d8ff4d-4xxwm" in "kube-system" namespace has status "Ready":"True"
	I0806 08:34:17.120801   79219 pod_ready.go:81] duration metric: took 7.097875ms for pod "coredns-7db6d8ff4d-4xxwm" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:17.120811   79219 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-324808" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:17.126581   79219 pod_ready.go:92] pod "etcd-embed-certs-324808" in "kube-system" namespace has status "Ready":"True"
	I0806 08:34:17.126606   79219 pod_ready.go:81] duration metric: took 5.788426ms for pod "etcd-embed-certs-324808" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:17.126619   79219 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-324808" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:18.136739   79219 pod_ready.go:92] pod "kube-apiserver-embed-certs-324808" in "kube-system" namespace has status "Ready":"True"
	I0806 08:34:18.136767   79219 pod_ready.go:81] duration metric: took 1.010139212s for pod "kube-apiserver-embed-certs-324808" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:18.136781   79219 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-324808" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:19.610165   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:19.610567   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | unable to find current IP address of domain old-k8s-version-409903 in network mk-old-k8s-version-409903
	I0806 08:34:19.610598   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | I0806 08:34:19.610542   80849 retry.go:31] will retry after 1.339464672s: waiting for machine to come up
	I0806 08:34:20.951505   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:20.952049   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | unable to find current IP address of domain old-k8s-version-409903 in network mk-old-k8s-version-409903
	I0806 08:34:20.952080   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | I0806 08:34:20.951975   80849 retry.go:31] will retry after 1.622574331s: waiting for machine to come up
	I0806 08:34:22.576150   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:22.576559   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | unable to find current IP address of domain old-k8s-version-409903 in network mk-old-k8s-version-409903
	I0806 08:34:22.576582   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | I0806 08:34:22.576508   80849 retry.go:31] will retry after 1.408827574s: waiting for machine to come up
	I0806 08:34:19.920581   79614 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.355705846s)
	I0806 08:34:19.920609   79614 crio.go:469] duration metric: took 2.355825462s to extract the tarball
	I0806 08:34:19.920619   79614 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0806 08:34:19.959118   79614 ssh_runner.go:195] Run: sudo crictl images --output json
	I0806 08:34:20.002413   79614 crio.go:514] all images are preloaded for cri-o runtime.
	I0806 08:34:20.002449   79614 cache_images.go:84] Images are preloaded, skipping loading
	I0806 08:34:20.002461   79614 kubeadm.go:934] updating node { 192.168.50.235 8444 v1.30.3 crio true true} ...
	I0806 08:34:20.002588   79614 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-990751 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.235
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-990751 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0806 08:34:20.002655   79614 ssh_runner.go:195] Run: crio config
	I0806 08:34:20.058943   79614 cni.go:84] Creating CNI manager for ""
	I0806 08:34:20.058970   79614 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0806 08:34:20.058981   79614 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0806 08:34:20.059010   79614 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.235 APIServerPort:8444 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-990751 NodeName:default-k8s-diff-port-990751 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.235"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.235 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0806 08:34:20.059247   79614 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.235
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-990751"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.235
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.235"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0806 08:34:20.059337   79614 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0806 08:34:20.072766   79614 binaries.go:44] Found k8s binaries, skipping transfer
	I0806 08:34:20.072876   79614 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0806 08:34:20.085730   79614 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0806 08:34:20.105230   79614 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0806 08:34:20.124800   79614 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
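The kubeadm, kubelet and kube-proxy documents shown earlier are rendered from the cluster config and written to /var/tmp/minikube/kubeadm.yaml.new (2172 bytes in this run) before the restart. A minimal sketch of rendering a stanza like that with text/template; the params struct and field names are illustrative, not minikube's actual template:

package main

import (
	"os"
	"text/template"
)

// params is a hypothetical subset of the values threaded through the generated config.
type params struct {
	AdvertiseAddress string
	BindPort         int
	ClusterName      string
}

const stanza = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
clusterName: {{.ClusterName}}
controlPlaneEndpoint: control-plane.minikube.internal:{{.BindPort}}
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(stanza))
	// Values match the default-k8s-diff-port-990751 run shown in the log.
	_ = t.Execute(os.Stdout, params{AdvertiseAddress: "192.168.50.235", BindPort: 8444, ClusterName: "mk"})
}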
	I0806 08:34:20.146547   79614 ssh_runner.go:195] Run: grep 192.168.50.235	control-plane.minikube.internal$ /etc/hosts
	I0806 08:34:20.150692   79614 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.235	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0806 08:34:20.164006   79614 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 08:34:20.286820   79614 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0806 08:34:20.304141   79614 certs.go:68] Setting up /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/default-k8s-diff-port-990751 for IP: 192.168.50.235
	I0806 08:34:20.304166   79614 certs.go:194] generating shared ca certs ...
	I0806 08:34:20.304187   79614 certs.go:226] acquiring lock for ca certs: {Name:mkf5c5aed91f4108054d9f2c742cad66c86afbed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 08:34:20.304386   79614 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19370-13057/.minikube/ca.key
	I0806 08:34:20.304448   79614 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.key
	I0806 08:34:20.304464   79614 certs.go:256] generating profile certs ...
	I0806 08:34:20.304567   79614 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/default-k8s-diff-port-990751/client.key
	I0806 08:34:20.304648   79614 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/default-k8s-diff-port-990751/apiserver.key.aa7eccb7
	I0806 08:34:20.304715   79614 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/default-k8s-diff-port-990751/proxy-client.key
	I0806 08:34:20.304872   79614 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/20246.pem (1338 bytes)
	W0806 08:34:20.304919   79614 certs.go:480] ignoring /home/jenkins/minikube-integration/19370-13057/.minikube/certs/20246_empty.pem, impossibly tiny 0 bytes
	I0806 08:34:20.304929   79614 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca-key.pem (1675 bytes)
	I0806 08:34:20.304960   79614 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem (1078 bytes)
	I0806 08:34:20.304989   79614 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/cert.pem (1123 bytes)
	I0806 08:34:20.305019   79614 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/key.pem (1675 bytes)
	I0806 08:34:20.305089   79614 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem (1708 bytes)
	I0806 08:34:20.305821   79614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0806 08:34:20.341201   79614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0806 08:34:20.375406   79614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0806 08:34:20.407409   79614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0806 08:34:20.449555   79614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/default-k8s-diff-port-990751/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0806 08:34:20.479479   79614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/default-k8s-diff-port-990751/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0806 08:34:20.518719   79614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/default-k8s-diff-port-990751/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0806 08:34:20.544212   79614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/default-k8s-diff-port-990751/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0806 08:34:20.569413   79614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0806 08:34:20.594573   79614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/certs/20246.pem --> /usr/share/ca-certificates/20246.pem (1338 bytes)
	I0806 08:34:20.620183   79614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem --> /usr/share/ca-certificates/202462.pem (1708 bytes)
	I0806 08:34:20.648311   79614 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0806 08:34:20.670845   79614 ssh_runner.go:195] Run: openssl version
	I0806 08:34:20.677437   79614 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202462.pem && ln -fs /usr/share/ca-certificates/202462.pem /etc/ssl/certs/202462.pem"
	I0806 08:34:20.688990   79614 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202462.pem
	I0806 08:34:20.694512   79614 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  6 07:17 /usr/share/ca-certificates/202462.pem
	I0806 08:34:20.694580   79614 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202462.pem
	I0806 08:34:20.701362   79614 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202462.pem /etc/ssl/certs/3ec20f2e.0"
	I0806 08:34:20.712966   79614 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0806 08:34:20.724455   79614 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0806 08:34:20.729111   79614 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  6 07:07 /usr/share/ca-certificates/minikubeCA.pem
	I0806 08:34:20.729165   79614 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0806 08:34:20.734973   79614 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0806 08:34:20.746387   79614 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20246.pem && ln -fs /usr/share/ca-certificates/20246.pem /etc/ssl/certs/20246.pem"
	I0806 08:34:20.757763   79614 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20246.pem
	I0806 08:34:20.762396   79614 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  6 07:17 /usr/share/ca-certificates/20246.pem
	I0806 08:34:20.762454   79614 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20246.pem
	I0806 08:34:20.768261   79614 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20246.pem /etc/ssl/certs/51391683.0"
	I0806 08:34:20.779553   79614 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0806 08:34:20.785854   79614 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0806 08:34:20.794054   79614 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0806 08:34:20.802652   79614 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0806 08:34:20.809929   79614 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0806 08:34:20.816886   79614 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0806 08:34:20.823667   79614 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
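The openssl -checkend 86400 calls above verify that each control-plane certificate remains valid for at least another 24 hours. The same check sketched in Go with crypto/x509; the certificate path in main is just one of the files checked in the log:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// mirroring `openssl x509 -noout -in <path> -checkend <seconds>`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}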
	I0806 08:34:20.830355   79614 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-990751 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-990751 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.235 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 08:34:20.830456   79614 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0806 08:34:20.830527   79614 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0806 08:34:20.878368   79614 cri.go:89] found id: ""
	I0806 08:34:20.878455   79614 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0806 08:34:20.889175   79614 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0806 08:34:20.889196   79614 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0806 08:34:20.889250   79614 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0806 08:34:20.899842   79614 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0806 08:34:20.901037   79614 kubeconfig.go:125] found "default-k8s-diff-port-990751" server: "https://192.168.50.235:8444"
	I0806 08:34:20.902781   79614 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0806 08:34:20.914810   79614 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.235
	I0806 08:34:20.914847   79614 kubeadm.go:1160] stopping kube-system containers ...
	I0806 08:34:20.914859   79614 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0806 08:34:20.914905   79614 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0806 08:34:20.956719   79614 cri.go:89] found id: ""
	I0806 08:34:20.956772   79614 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0806 08:34:20.977367   79614 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0806 08:34:20.989491   79614 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0806 08:34:20.989513   79614 kubeadm.go:157] found existing configuration files:
	
	I0806 08:34:20.989565   79614 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0806 08:34:21.000759   79614 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0806 08:34:21.000820   79614 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0806 08:34:21.012404   79614 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0806 08:34:21.023449   79614 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0806 08:34:21.023536   79614 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0806 08:34:21.035354   79614 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0806 08:34:21.045285   79614 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0806 08:34:21.045347   79614 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0806 08:34:21.057182   79614 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0806 08:34:21.067431   79614 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0806 08:34:21.067497   79614 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0806 08:34:21.077955   79614 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0806 08:34:21.088824   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0806 08:34:21.209627   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0806 08:34:21.772624   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0806 08:34:22.022615   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0806 08:34:22.107829   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0806 08:34:22.235446   79614 api_server.go:52] waiting for apiserver process to appear ...
	I0806 08:34:22.235552   79614 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:22.735843   79614 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:23.236385   79614 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:23.314994   79614 api_server.go:72] duration metric: took 1.079547638s to wait for apiserver process to appear ...
	I0806 08:34:23.315025   79614 api_server.go:88] waiting for apiserver healthz status ...
	I0806 08:34:23.315071   79614 api_server.go:253] Checking apiserver healthz at https://192.168.50.235:8444/healthz ...
	I0806 08:34:23.315581   79614 api_server.go:269] stopped: https://192.168.50.235:8444/healthz: Get "https://192.168.50.235:8444/healthz": dial tcp 192.168.50.235:8444: connect: connection refused
	I0806 08:34:23.815428   79614 api_server.go:253] Checking apiserver healthz at https://192.168.50.235:8444/healthz ...
	I0806 08:34:20.145118   79219 pod_ready.go:102] pod "kube-controller-manager-embed-certs-324808" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:23.051556   79219 pod_ready.go:102] pod "kube-controller-manager-embed-certs-324808" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:23.077556   79219 pod_ready.go:92] pod "kube-controller-manager-embed-certs-324808" in "kube-system" namespace has status "Ready":"True"
	I0806 08:34:23.077581   79219 pod_ready.go:81] duration metric: took 4.94079164s for pod "kube-controller-manager-embed-certs-324808" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:23.077594   79219 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8lbzv" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:23.091745   79219 pod_ready.go:92] pod "kube-proxy-8lbzv" in "kube-system" namespace has status "Ready":"True"
	I0806 08:34:23.091773   79219 pod_ready.go:81] duration metric: took 14.170363ms for pod "kube-proxy-8lbzv" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:23.091789   79219 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-324808" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:23.105567   79219 pod_ready.go:92] pod "kube-scheduler-embed-certs-324808" in "kube-system" namespace has status "Ready":"True"
	I0806 08:34:23.105597   79219 pod_ready.go:81] duration metric: took 13.798762ms for pod "kube-scheduler-embed-certs-324808" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:23.105612   79219 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace to be "Ready" ...
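The pod_ready checks above poll each system-critical pod until its Ready condition becomes True or the 6m0s budget is exhausted. A minimal client-go sketch of the same wait, assuming a local kubeconfig path and one of the pod names from the log:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // hypothetical kubeconfig path
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-embed-certs-324808", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to be Ready")
}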
	I0806 08:34:23.987256   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:23.987658   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | unable to find current IP address of domain old-k8s-version-409903 in network mk-old-k8s-version-409903
	I0806 08:34:23.987683   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | I0806 08:34:23.987617   80849 retry.go:31] will retry after 2.000071561s: waiting for machine to come up
	I0806 08:34:25.989501   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:25.989880   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | unable to find current IP address of domain old-k8s-version-409903 in network mk-old-k8s-version-409903
	I0806 08:34:25.989906   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | I0806 08:34:25.989838   80849 retry.go:31] will retry after 2.508269501s: waiting for machine to come up
	I0806 08:34:28.501449   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:28.501842   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | unable to find current IP address of domain old-k8s-version-409903 in network mk-old-k8s-version-409903
	I0806 08:34:28.501874   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | I0806 08:34:28.501802   80849 retry.go:31] will retry after 2.7583106s: waiting for machine to come up
	I0806 08:34:26.696273   79614 api_server.go:279] https://192.168.50.235:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0806 08:34:26.696325   79614 api_server.go:103] status: https://192.168.50.235:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0806 08:34:26.696342   79614 api_server.go:253] Checking apiserver healthz at https://192.168.50.235:8444/healthz ...
	I0806 08:34:26.704047   79614 api_server.go:279] https://192.168.50.235:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0806 08:34:26.704078   79614 api_server.go:103] status: https://192.168.50.235:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0806 08:34:26.815341   79614 api_server.go:253] Checking apiserver healthz at https://192.168.50.235:8444/healthz ...
	I0806 08:34:26.819686   79614 api_server.go:279] https://192.168.50.235:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0806 08:34:26.819718   79614 api_server.go:103] status: https://192.168.50.235:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0806 08:34:27.315242   79614 api_server.go:253] Checking apiserver healthz at https://192.168.50.235:8444/healthz ...
	I0806 08:34:27.320753   79614 api_server.go:279] https://192.168.50.235:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0806 08:34:27.320789   79614 api_server.go:103] status: https://192.168.50.235:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0806 08:34:27.815164   79614 api_server.go:253] Checking apiserver healthz at https://192.168.50.235:8444/healthz ...
	I0806 08:34:27.821943   79614 api_server.go:279] https://192.168.50.235:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0806 08:34:27.821967   79614 api_server.go:103] status: https://192.168.50.235:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0806 08:34:28.315169   79614 api_server.go:253] Checking apiserver healthz at https://192.168.50.235:8444/healthz ...
	I0806 08:34:28.321921   79614 api_server.go:279] https://192.168.50.235:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0806 08:34:28.321945   79614 api_server.go:103] status: https://192.168.50.235:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0806 08:34:28.815717   79614 api_server.go:253] Checking apiserver healthz at https://192.168.50.235:8444/healthz ...
	I0806 08:34:28.819946   79614 api_server.go:279] https://192.168.50.235:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0806 08:34:28.819973   79614 api_server.go:103] status: https://192.168.50.235:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0806 08:34:25.112333   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:27.112488   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:29.315643   79614 api_server.go:253] Checking apiserver healthz at https://192.168.50.235:8444/healthz ...
	I0806 08:34:29.319784   79614 api_server.go:279] https://192.168.50.235:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0806 08:34:29.319813   79614 api_server.go:103] status: https://192.168.50.235:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0806 08:34:29.815945   79614 api_server.go:253] Checking apiserver healthz at https://192.168.50.235:8444/healthz ...
	I0806 08:34:29.821441   79614 api_server.go:279] https://192.168.50.235:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0806 08:34:29.821476   79614 api_server.go:103] status: https://192.168.50.235:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0806 08:34:30.315997   79614 api_server.go:253] Checking apiserver healthz at https://192.168.50.235:8444/healthz ...
	I0806 08:34:30.321179   79614 api_server.go:279] https://192.168.50.235:8444/healthz returned 200:
	ok
	I0806 08:34:30.327083   79614 api_server.go:141] control plane version: v1.30.3
	I0806 08:34:30.327113   79614 api_server.go:131] duration metric: took 7.012080775s to wait for apiserver health ...
	I0806 08:34:30.327122   79614 cni.go:84] Creating CNI manager for ""
	I0806 08:34:30.327128   79614 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0806 08:34:30.328860   79614 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0806 08:34:30.330232   79614 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0806 08:34:30.341069   79614 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0806 08:34:30.359470   79614 system_pods.go:43] waiting for kube-system pods to appear ...
	I0806 08:34:30.369165   79614 system_pods.go:59] 8 kube-system pods found
	I0806 08:34:30.369209   79614 system_pods.go:61] "coredns-7db6d8ff4d-gx8tt" [48f83ba8-a238-4baf-94af-88370fefcb02] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0806 08:34:30.369227   79614 system_pods.go:61] "etcd-default-k8s-diff-port-990751" [bf342ba4-1e11-40bf-80fd-02b402043ecf] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0806 08:34:30.369246   79614 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-990751" [11ee42c4-c9e9-41cf-ace2-deac0f30153a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0806 08:34:30.369256   79614 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-990751" [f17fca30-0afc-4ed8-a8cc-cb64e69723f3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0806 08:34:30.369265   79614 system_pods.go:61] "kube-proxy-lpf88" [866c10f0-2c44-4c03-b460-ff2194bd1ec6] Running
	I0806 08:34:30.369272   79614 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-990751" [2e591ea7-07bd-464f-bf0e-bc24bfcca016] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0806 08:34:30.369282   79614 system_pods.go:61] "metrics-server-569cc877fc-hnxd4" [f369ac70-49b7-448a-9852-89232fb66da9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0806 08:34:30.369290   79614 system_pods.go:61] "storage-provisioner" [0faff3fc-d34b-418e-972b-95a2e36b577a] Running
	I0806 08:34:30.369299   79614 system_pods.go:74] duration metric: took 9.80217ms to wait for pod list to return data ...
	I0806 08:34:30.369311   79614 node_conditions.go:102] verifying NodePressure condition ...
	I0806 08:34:30.372222   79614 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0806 08:34:30.372250   79614 node_conditions.go:123] node cpu capacity is 2
	I0806 08:34:30.372261   79614 node_conditions.go:105] duration metric: took 2.944552ms to run NodePressure ...
	I0806 08:34:30.372279   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0806 08:34:30.639677   79614 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0806 08:34:30.643923   79614 kubeadm.go:739] kubelet initialised
	I0806 08:34:30.643952   79614 kubeadm.go:740] duration metric: took 4.248408ms waiting for restarted kubelet to initialise ...
	I0806 08:34:30.643962   79614 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0806 08:34:30.648766   79614 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-gx8tt" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:30.653672   79614 pod_ready.go:97] node "default-k8s-diff-port-990751" hosting pod "coredns-7db6d8ff4d-gx8tt" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-990751" has status "Ready":"False"
	I0806 08:34:30.653703   79614 pod_ready.go:81] duration metric: took 4.90961ms for pod "coredns-7db6d8ff4d-gx8tt" in "kube-system" namespace to be "Ready" ...
	E0806 08:34:30.653714   79614 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-990751" hosting pod "coredns-7db6d8ff4d-gx8tt" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-990751" has status "Ready":"False"
	I0806 08:34:30.653723   79614 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-990751" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:30.658080   79614 pod_ready.go:97] node "default-k8s-diff-port-990751" hosting pod "etcd-default-k8s-diff-port-990751" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-990751" has status "Ready":"False"
	I0806 08:34:30.658106   79614 pod_ready.go:81] duration metric: took 4.370083ms for pod "etcd-default-k8s-diff-port-990751" in "kube-system" namespace to be "Ready" ...
	E0806 08:34:30.658119   79614 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-990751" hosting pod "etcd-default-k8s-diff-port-990751" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-990751" has status "Ready":"False"
	I0806 08:34:30.658125   79614 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-990751" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:30.662246   79614 pod_ready.go:97] node "default-k8s-diff-port-990751" hosting pod "kube-apiserver-default-k8s-diff-port-990751" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-990751" has status "Ready":"False"
	I0806 08:34:30.662271   79614 pod_ready.go:81] duration metric: took 4.139553ms for pod "kube-apiserver-default-k8s-diff-port-990751" in "kube-system" namespace to be "Ready" ...
	E0806 08:34:30.662280   79614 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-990751" hosting pod "kube-apiserver-default-k8s-diff-port-990751" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-990751" has status "Ready":"False"
	I0806 08:34:30.662286   79614 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-990751" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:30.763246   79614 pod_ready.go:97] node "default-k8s-diff-port-990751" hosting pod "kube-controller-manager-default-k8s-diff-port-990751" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-990751" has status "Ready":"False"
	I0806 08:34:30.763286   79614 pod_ready.go:81] duration metric: took 100.991212ms for pod "kube-controller-manager-default-k8s-diff-port-990751" in "kube-system" namespace to be "Ready" ...
	E0806 08:34:30.763302   79614 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-990751" hosting pod "kube-controller-manager-default-k8s-diff-port-990751" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-990751" has status "Ready":"False"
	I0806 08:34:30.763311   79614 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-lpf88" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:31.162972   79614 pod_ready.go:97] node "default-k8s-diff-port-990751" hosting pod "kube-proxy-lpf88" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-990751" has status "Ready":"False"
	I0806 08:34:31.162998   79614 pod_ready.go:81] duration metric: took 399.678401ms for pod "kube-proxy-lpf88" in "kube-system" namespace to be "Ready" ...
	E0806 08:34:31.163006   79614 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-990751" hosting pod "kube-proxy-lpf88" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-990751" has status "Ready":"False"
	I0806 08:34:31.163012   79614 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-990751" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:31.562538   79614 pod_ready.go:97] node "default-k8s-diff-port-990751" hosting pod "kube-scheduler-default-k8s-diff-port-990751" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-990751" has status "Ready":"False"
	I0806 08:34:31.562564   79614 pod_ready.go:81] duration metric: took 399.544055ms for pod "kube-scheduler-default-k8s-diff-port-990751" in "kube-system" namespace to be "Ready" ...
	E0806 08:34:31.562575   79614 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-990751" hosting pod "kube-scheduler-default-k8s-diff-port-990751" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-990751" has status "Ready":"False"
	I0806 08:34:31.562582   79614 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:31.963410   79614 pod_ready.go:97] node "default-k8s-diff-port-990751" hosting pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-990751" has status "Ready":"False"
	I0806 08:34:31.963438   79614 pod_ready.go:81] duration metric: took 400.849931ms for pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace to be "Ready" ...
	E0806 08:34:31.963449   79614 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-990751" hosting pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-990751" has status "Ready":"False"
	I0806 08:34:31.963456   79614 pod_ready.go:38] duration metric: took 1.319483278s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0806 08:34:31.963472   79614 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0806 08:34:31.975982   79614 ops.go:34] apiserver oom_adj: -16
	I0806 08:34:31.976006   79614 kubeadm.go:597] duration metric: took 11.086801084s to restartPrimaryControlPlane
	I0806 08:34:31.976016   79614 kubeadm.go:394] duration metric: took 11.145668347s to StartCluster
	I0806 08:34:31.976035   79614 settings.go:142] acquiring lock: {Name:mkb0bfbcae3b4205ef43487a9de84cfd8afd2e5e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 08:34:31.976131   79614 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19370-13057/kubeconfig
	I0806 08:34:31.977626   79614 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-13057/kubeconfig: {Name:mk5c066914acf8e36556b29792f75b98076ca34a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 08:34:31.977871   79614 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.235 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0806 08:34:31.977927   79614 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0806 08:34:31.978004   79614 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-990751"
	I0806 08:34:31.978058   79614 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-990751"
	W0806 08:34:31.978070   79614 addons.go:243] addon storage-provisioner should already be in state true
	I0806 08:34:31.978060   79614 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-990751"
	I0806 08:34:31.978092   79614 config.go:182] Loaded profile config "default-k8s-diff-port-990751": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0806 08:34:31.978076   79614 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-990751"
	I0806 08:34:31.978124   79614 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-990751"
	I0806 08:34:31.978154   79614 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-990751"
	I0806 08:34:31.978111   79614 host.go:66] Checking if "default-k8s-diff-port-990751" exists ...
	W0806 08:34:31.978172   79614 addons.go:243] addon metrics-server should already be in state true
	I0806 08:34:31.978250   79614 host.go:66] Checking if "default-k8s-diff-port-990751" exists ...
	I0806 08:34:31.978595   79614 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:34:31.978651   79614 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:34:31.978602   79614 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:34:31.978691   79614 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:34:31.978650   79614 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:34:31.978806   79614 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:34:31.979749   79614 out.go:177] * Verifying Kubernetes components...
	I0806 08:34:31.981485   79614 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 08:34:31.994362   79614 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36289
	I0806 08:34:31.995043   79614 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:34:31.995594   79614 main.go:141] libmachine: Using API Version  1
	I0806 08:34:31.995610   79614 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:34:31.995940   79614 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:34:31.996152   79614 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36601
	I0806 08:34:31.996485   79614 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:34:31.996569   79614 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:34:31.996613   79614 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:34:31.996785   79614 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42273
	I0806 08:34:31.997043   79614 main.go:141] libmachine: Using API Version  1
	I0806 08:34:31.997062   79614 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:34:31.997335   79614 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:34:31.997424   79614 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:34:31.997601   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetState
	I0806 08:34:31.997759   79614 main.go:141] libmachine: Using API Version  1
	I0806 08:34:31.997775   79614 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:34:31.998047   79614 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:34:31.998511   79614 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:34:31.998546   79614 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:34:32.001381   79614 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-990751"
	W0806 08:34:32.001397   79614 addons.go:243] addon default-storageclass should already be in state true
	I0806 08:34:32.001427   79614 host.go:66] Checking if "default-k8s-diff-port-990751" exists ...
	I0806 08:34:32.001690   79614 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:34:32.001721   79614 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:34:32.015095   79614 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41633
	I0806 08:34:32.015597   79614 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:34:32.016197   79614 main.go:141] libmachine: Using API Version  1
	I0806 08:34:32.016220   79614 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:34:32.016245   79614 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33731
	I0806 08:34:32.016951   79614 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:34:32.016953   79614 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:34:32.017491   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetState
	I0806 08:34:32.017572   79614 main.go:141] libmachine: Using API Version  1
	I0806 08:34:32.017601   79614 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:34:32.017951   79614 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:34:32.018222   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetState
	I0806 08:34:32.020339   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .DriverName
	I0806 08:34:32.020347   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .DriverName
	I0806 08:34:32.021417   79614 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40949
	I0806 08:34:32.021814   79614 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:34:32.022232   79614 main.go:141] libmachine: Using API Version  1
	I0806 08:34:32.022254   79614 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:34:32.022609   79614 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:34:32.022690   79614 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0806 08:34:32.022690   79614 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0806 08:34:32.023365   79614 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:34:32.023410   79614 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:34:32.024304   79614 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0806 08:34:32.024326   79614 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0806 08:34:32.024343   79614 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0806 08:34:32.024358   79614 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0806 08:34:32.024387   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHHostname
	I0806 08:34:32.024346   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHHostname
	I0806 08:34:32.029046   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:32.029292   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:32.029490   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:51:bd", ip: ""} in network mk-default-k8s-diff-port-990751: {Iface:virbr3 ExpiryTime:2024-08-06 09:34:06 +0000 UTC Type:0 Mac:52:54:00:fb:51:bd Iaid: IPaddr:192.168.50.235 Prefix:24 Hostname:default-k8s-diff-port-990751 Clientid:01:52:54:00:fb:51:bd}
	I0806 08:34:32.029516   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined IP address 192.168.50.235 and MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:32.029672   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:51:bd", ip: ""} in network mk-default-k8s-diff-port-990751: {Iface:virbr3 ExpiryTime:2024-08-06 09:34:06 +0000 UTC Type:0 Mac:52:54:00:fb:51:bd Iaid: IPaddr:192.168.50.235 Prefix:24 Hostname:default-k8s-diff-port-990751 Clientid:01:52:54:00:fb:51:bd}
	I0806 08:34:32.029676   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHPort
	I0806 08:34:32.029694   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined IP address 192.168.50.235 and MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:32.029840   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHPort
	I0806 08:34:32.029866   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHKeyPath
	I0806 08:34:32.030000   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHKeyPath
	I0806 08:34:32.030122   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHUsername
	I0806 08:34:32.030124   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHUsername
	I0806 08:34:32.030265   79614 sshutil.go:53] new ssh client: &{IP:192.168.50.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/default-k8s-diff-port-990751/id_rsa Username:docker}
	I0806 08:34:32.030265   79614 sshutil.go:53] new ssh client: &{IP:192.168.50.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/default-k8s-diff-port-990751/id_rsa Username:docker}
	I0806 08:34:32.043523   79614 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40021
	I0806 08:34:32.043915   79614 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:34:32.044560   79614 main.go:141] libmachine: Using API Version  1
	I0806 08:34:32.044583   79614 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:34:32.044969   79614 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:34:32.045214   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetState
	I0806 08:34:32.046980   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .DriverName
	I0806 08:34:32.047261   79614 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0806 08:34:32.047277   79614 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0806 08:34:32.047296   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHHostname
	I0806 08:34:32.050654   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:32.050991   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:51:bd", ip: ""} in network mk-default-k8s-diff-port-990751: {Iface:virbr3 ExpiryTime:2024-08-06 09:34:06 +0000 UTC Type:0 Mac:52:54:00:fb:51:bd Iaid: IPaddr:192.168.50.235 Prefix:24 Hostname:default-k8s-diff-port-990751 Clientid:01:52:54:00:fb:51:bd}
	I0806 08:34:32.051020   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined IP address 192.168.50.235 and MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:32.051152   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHPort
	I0806 08:34:32.051329   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHKeyPath
	I0806 08:34:32.051495   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHUsername
	I0806 08:34:32.051613   79614 sshutil.go:53] new ssh client: &{IP:192.168.50.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/default-k8s-diff-port-990751/id_rsa Username:docker}
	I0806 08:34:32.192056   79614 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0806 08:34:32.231428   79614 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-990751" to be "Ready" ...
	I0806 08:34:32.278049   79614 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0806 08:34:32.378290   79614 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0806 08:34:32.378317   79614 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0806 08:34:32.399728   79614 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0806 08:34:32.419412   79614 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0806 08:34:32.419441   79614 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0806 08:34:32.470691   79614 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0806 08:34:32.470726   79614 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0806 08:34:32.514191   79614 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0806 08:34:32.590737   79614 main.go:141] libmachine: Making call to close driver server
	I0806 08:34:32.590772   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .Close
	I0806 08:34:32.591110   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | Closing plugin on server side
	I0806 08:34:32.591177   79614 main.go:141] libmachine: Successfully made call to close driver server
	I0806 08:34:32.591190   79614 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 08:34:32.591202   79614 main.go:141] libmachine: Making call to close driver server
	I0806 08:34:32.591212   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .Close
	I0806 08:34:32.591465   79614 main.go:141] libmachine: Successfully made call to close driver server
	I0806 08:34:32.591492   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | Closing plugin on server side
	I0806 08:34:32.591505   79614 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 08:34:32.597276   79614 main.go:141] libmachine: Making call to close driver server
	I0806 08:34:32.597297   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .Close
	I0806 08:34:32.597547   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | Closing plugin on server side
	I0806 08:34:32.597564   79614 main.go:141] libmachine: Successfully made call to close driver server
	I0806 08:34:32.597579   79614 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 08:34:33.401000   79614 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.001229332s)
	I0806 08:34:33.401045   79614 main.go:141] libmachine: Making call to close driver server
	I0806 08:34:33.401062   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .Close
	I0806 08:34:33.401346   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | Closing plugin on server side
	I0806 08:34:33.401402   79614 main.go:141] libmachine: Successfully made call to close driver server
	I0806 08:34:33.401426   79614 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 08:34:33.401443   79614 main.go:141] libmachine: Making call to close driver server
	I0806 08:34:33.401452   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .Close
	I0806 08:34:33.401692   79614 main.go:141] libmachine: Successfully made call to close driver server
	I0806 08:34:33.401720   79614 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 08:34:33.455633   79614 main.go:141] libmachine: Making call to close driver server
	I0806 08:34:33.455663   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .Close
	I0806 08:34:33.455947   79614 main.go:141] libmachine: Successfully made call to close driver server
	I0806 08:34:33.455964   79614 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 08:34:33.455964   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | Closing plugin on server side
	I0806 08:34:33.455972   79614 main.go:141] libmachine: Making call to close driver server
	I0806 08:34:33.455980   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .Close
	I0806 08:34:33.456288   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | Closing plugin on server side
	I0806 08:34:33.456286   79614 main.go:141] libmachine: Successfully made call to close driver server
	I0806 08:34:33.456328   79614 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 08:34:33.456347   79614 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-990751"
	I0806 08:34:33.459411   79614 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0806 08:34:31.262960   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:31.263282   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | unable to find current IP address of domain old-k8s-version-409903 in network mk-old-k8s-version-409903
	I0806 08:34:31.263304   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | I0806 08:34:31.263238   80849 retry.go:31] will retry after 4.76596973s: waiting for machine to come up
	I0806 08:34:33.460780   79614 addons.go:510] duration metric: took 1.482853001s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0806 08:34:29.611714   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:31.612385   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:34.111936   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:37.353538   78922 start.go:364] duration metric: took 58.142260904s to acquireMachinesLock for "no-preload-703340"
	I0806 08:34:37.353587   78922 start.go:96] Skipping create...Using existing machine configuration
	I0806 08:34:37.353598   78922 fix.go:54] fixHost starting: 
	I0806 08:34:37.354016   78922 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:34:37.354050   78922 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:34:37.373838   78922 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37209
	I0806 08:34:37.374283   78922 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:34:37.374874   78922 main.go:141] libmachine: Using API Version  1
	I0806 08:34:37.374910   78922 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:34:37.375250   78922 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:34:37.375448   78922 main.go:141] libmachine: (no-preload-703340) Calling .DriverName
	I0806 08:34:37.375584   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetState
	I0806 08:34:37.377251   78922 fix.go:112] recreateIfNeeded on no-preload-703340: state=Stopped err=<nil>
	I0806 08:34:37.377275   78922 main.go:141] libmachine: (no-preload-703340) Calling .DriverName
	W0806 08:34:37.377421   78922 fix.go:138] unexpected machine state, will restart: <nil>
	I0806 08:34:37.379343   78922 out.go:177] * Restarting existing kvm2 VM for "no-preload-703340" ...
	I0806 08:34:36.030512   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:36.030950   79927 main.go:141] libmachine: (old-k8s-version-409903) Found IP for machine: 192.168.61.158
	I0806 08:34:36.030972   79927 main.go:141] libmachine: (old-k8s-version-409903) Reserving static IP address...
	I0806 08:34:36.030989   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has current primary IP address 192.168.61.158 and MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:36.031349   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | found host DHCP lease matching {name: "old-k8s-version-409903", mac: "52:54:00:12:30:88", ip: "192.168.61.158"} in network mk-old-k8s-version-409903: {Iface:virbr1 ExpiryTime:2024-08-06 09:34:26 +0000 UTC Type:0 Mac:52:54:00:12:30:88 Iaid: IPaddr:192.168.61.158 Prefix:24 Hostname:old-k8s-version-409903 Clientid:01:52:54:00:12:30:88}
	I0806 08:34:36.031376   79927 main.go:141] libmachine: (old-k8s-version-409903) Reserved static IP address: 192.168.61.158
	I0806 08:34:36.031398   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | skip adding static IP to network mk-old-k8s-version-409903 - found existing host DHCP lease matching {name: "old-k8s-version-409903", mac: "52:54:00:12:30:88", ip: "192.168.61.158"}
	I0806 08:34:36.031416   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | Getting to WaitForSSH function...
	I0806 08:34:36.031428   79927 main.go:141] libmachine: (old-k8s-version-409903) Waiting for SSH to be available...
	I0806 08:34:36.033684   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:36.034094   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:30:88", ip: ""} in network mk-old-k8s-version-409903: {Iface:virbr1 ExpiryTime:2024-08-06 09:34:26 +0000 UTC Type:0 Mac:52:54:00:12:30:88 Iaid: IPaddr:192.168.61.158 Prefix:24 Hostname:old-k8s-version-409903 Clientid:01:52:54:00:12:30:88}
	I0806 08:34:36.034124   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined IP address 192.168.61.158 and MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:36.034327   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | Using SSH client type: external
	I0806 08:34:36.034366   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | Using SSH private key: /home/jenkins/minikube-integration/19370-13057/.minikube/machines/old-k8s-version-409903/id_rsa (-rw-------)
	I0806 08:34:36.034395   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.158 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19370-13057/.minikube/machines/old-k8s-version-409903/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0806 08:34:36.034407   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | About to run SSH command:
	I0806 08:34:36.034415   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | exit 0
	I0806 08:34:36.164605   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | SSH cmd err, output: <nil>: 
	I0806 08:34:36.164952   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetConfigRaw
	I0806 08:34:36.165509   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetIP
	I0806 08:34:36.168135   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:36.168471   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:30:88", ip: ""} in network mk-old-k8s-version-409903: {Iface:virbr1 ExpiryTime:2024-08-06 09:34:26 +0000 UTC Type:0 Mac:52:54:00:12:30:88 Iaid: IPaddr:192.168.61.158 Prefix:24 Hostname:old-k8s-version-409903 Clientid:01:52:54:00:12:30:88}
	I0806 08:34:36.168499   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined IP address 192.168.61.158 and MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:36.168756   79927 profile.go:143] Saving config to /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/old-k8s-version-409903/config.json ...
	I0806 08:34:36.168956   79927 machine.go:94] provisionDockerMachine start ...
	I0806 08:34:36.168974   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .DriverName
	I0806 08:34:36.169186   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHHostname
	I0806 08:34:36.171367   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:36.171702   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:30:88", ip: ""} in network mk-old-k8s-version-409903: {Iface:virbr1 ExpiryTime:2024-08-06 09:34:26 +0000 UTC Type:0 Mac:52:54:00:12:30:88 Iaid: IPaddr:192.168.61.158 Prefix:24 Hostname:old-k8s-version-409903 Clientid:01:52:54:00:12:30:88}
	I0806 08:34:36.171723   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined IP address 192.168.61.158 and MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:36.171902   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHPort
	I0806 08:34:36.172080   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHKeyPath
	I0806 08:34:36.172240   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHKeyPath
	I0806 08:34:36.172400   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHUsername
	I0806 08:34:36.172569   79927 main.go:141] libmachine: Using SSH client type: native
	I0806 08:34:36.172798   79927 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.158 22 <nil> <nil>}
	I0806 08:34:36.172812   79927 main.go:141] libmachine: About to run SSH command:
	hostname
	I0806 08:34:36.288738   79927 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0806 08:34:36.288770   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetMachineName
	I0806 08:34:36.289073   79927 buildroot.go:166] provisioning hostname "old-k8s-version-409903"
	I0806 08:34:36.289088   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetMachineName
	I0806 08:34:36.289272   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHHostname
	I0806 08:34:36.291958   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:36.292326   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:30:88", ip: ""} in network mk-old-k8s-version-409903: {Iface:virbr1 ExpiryTime:2024-08-06 09:34:26 +0000 UTC Type:0 Mac:52:54:00:12:30:88 Iaid: IPaddr:192.168.61.158 Prefix:24 Hostname:old-k8s-version-409903 Clientid:01:52:54:00:12:30:88}
	I0806 08:34:36.292354   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined IP address 192.168.61.158 and MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:36.292462   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHPort
	I0806 08:34:36.292662   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHKeyPath
	I0806 08:34:36.292807   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHKeyPath
	I0806 08:34:36.292953   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHUsername
	I0806 08:34:36.293122   79927 main.go:141] libmachine: Using SSH client type: native
	I0806 08:34:36.293287   79927 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.158 22 <nil> <nil>}
	I0806 08:34:36.293300   79927 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-409903 && echo "old-k8s-version-409903" | sudo tee /etc/hostname
	I0806 08:34:36.427591   79927 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-409903
	
	I0806 08:34:36.427620   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHHostname
	I0806 08:34:36.430395   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:36.430724   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:30:88", ip: ""} in network mk-old-k8s-version-409903: {Iface:virbr1 ExpiryTime:2024-08-06 09:34:26 +0000 UTC Type:0 Mac:52:54:00:12:30:88 Iaid: IPaddr:192.168.61.158 Prefix:24 Hostname:old-k8s-version-409903 Clientid:01:52:54:00:12:30:88}
	I0806 08:34:36.430758   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined IP address 192.168.61.158 and MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:36.430900   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHPort
	I0806 08:34:36.431133   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHKeyPath
	I0806 08:34:36.431311   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHKeyPath
	I0806 08:34:36.431476   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHUsername
	I0806 08:34:36.431662   79927 main.go:141] libmachine: Using SSH client type: native
	I0806 08:34:36.431876   79927 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.158 22 <nil> <nil>}
	I0806 08:34:36.431895   79927 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-409903' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-409903/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-409903' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0806 08:34:36.554542   79927 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0806 08:34:36.554574   79927 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19370-13057/.minikube CaCertPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19370-13057/.minikube}
	I0806 08:34:36.554613   79927 buildroot.go:174] setting up certificates
	I0806 08:34:36.554629   79927 provision.go:84] configureAuth start
	I0806 08:34:36.554646   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetMachineName
	I0806 08:34:36.554932   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetIP
	I0806 08:34:36.557899   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:36.558351   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:30:88", ip: ""} in network mk-old-k8s-version-409903: {Iface:virbr1 ExpiryTime:2024-08-06 09:34:26 +0000 UTC Type:0 Mac:52:54:00:12:30:88 Iaid: IPaddr:192.168.61.158 Prefix:24 Hostname:old-k8s-version-409903 Clientid:01:52:54:00:12:30:88}
	I0806 08:34:36.558373   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined IP address 192.168.61.158 and MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:36.558530   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHHostname
	I0806 08:34:36.560878   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:36.561153   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:30:88", ip: ""} in network mk-old-k8s-version-409903: {Iface:virbr1 ExpiryTime:2024-08-06 09:34:26 +0000 UTC Type:0 Mac:52:54:00:12:30:88 Iaid: IPaddr:192.168.61.158 Prefix:24 Hostname:old-k8s-version-409903 Clientid:01:52:54:00:12:30:88}
	I0806 08:34:36.561182   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined IP address 192.168.61.158 and MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:36.561339   79927 provision.go:143] copyHostCerts
	I0806 08:34:36.561400   79927 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-13057/.minikube/ca.pem, removing ...
	I0806 08:34:36.561410   79927 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-13057/.minikube/ca.pem
	I0806 08:34:36.561462   79927 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19370-13057/.minikube/ca.pem (1078 bytes)
	I0806 08:34:36.561563   79927 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-13057/.minikube/cert.pem, removing ...
	I0806 08:34:36.561572   79927 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-13057/.minikube/cert.pem
	I0806 08:34:36.561594   79927 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19370-13057/.minikube/cert.pem (1123 bytes)
	I0806 08:34:36.561647   79927 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-13057/.minikube/key.pem, removing ...
	I0806 08:34:36.561653   79927 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-13057/.minikube/key.pem
	I0806 08:34:36.561670   79927 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19370-13057/.minikube/key.pem (1675 bytes)
	I0806 08:34:36.561752   79927 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19370-13057/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-409903 san=[127.0.0.1 192.168.61.158 localhost minikube old-k8s-version-409903]
	I0806 08:34:36.646180   79927 provision.go:177] copyRemoteCerts
	I0806 08:34:36.646233   79927 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0806 08:34:36.646257   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHHostname
	I0806 08:34:36.649020   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:36.649380   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:30:88", ip: ""} in network mk-old-k8s-version-409903: {Iface:virbr1 ExpiryTime:2024-08-06 09:34:26 +0000 UTC Type:0 Mac:52:54:00:12:30:88 Iaid: IPaddr:192.168.61.158 Prefix:24 Hostname:old-k8s-version-409903 Clientid:01:52:54:00:12:30:88}
	I0806 08:34:36.649405   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined IP address 192.168.61.158 and MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:36.649564   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHPort
	I0806 08:34:36.649791   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHKeyPath
	I0806 08:34:36.649962   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHUsername
	I0806 08:34:36.650092   79927 sshutil.go:53] new ssh client: &{IP:192.168.61.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/old-k8s-version-409903/id_rsa Username:docker}
	I0806 08:34:36.738988   79927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0806 08:34:36.765104   79927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0806 08:34:36.791946   79927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0806 08:34:36.817128   79927 provision.go:87] duration metric: took 262.482885ms to configureAuth
	I0806 08:34:36.817157   79927 buildroot.go:189] setting minikube options for container-runtime
	I0806 08:34:36.817352   79927 config.go:182] Loaded profile config "old-k8s-version-409903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0806 08:34:36.817435   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHHostname
	I0806 08:34:36.820142   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:36.820480   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:30:88", ip: ""} in network mk-old-k8s-version-409903: {Iface:virbr1 ExpiryTime:2024-08-06 09:34:26 +0000 UTC Type:0 Mac:52:54:00:12:30:88 Iaid: IPaddr:192.168.61.158 Prefix:24 Hostname:old-k8s-version-409903 Clientid:01:52:54:00:12:30:88}
	I0806 08:34:36.820520   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined IP address 192.168.61.158 and MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:36.820659   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHPort
	I0806 08:34:36.820892   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHKeyPath
	I0806 08:34:36.821095   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHKeyPath
	I0806 08:34:36.821289   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHUsername
	I0806 08:34:36.821459   79927 main.go:141] libmachine: Using SSH client type: native
	I0806 08:34:36.821703   79927 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.158 22 <nil> <nil>}
	I0806 08:34:36.821722   79927 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0806 08:34:37.099949   79927 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0806 08:34:37.099979   79927 machine.go:97] duration metric: took 931.010538ms to provisionDockerMachine
	I0806 08:34:37.099993   79927 start.go:293] postStartSetup for "old-k8s-version-409903" (driver="kvm2")
	I0806 08:34:37.100007   79927 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0806 08:34:37.100028   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .DriverName
	I0806 08:34:37.100379   79927 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0806 08:34:37.100414   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHHostname
	I0806 08:34:37.103297   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:37.103691   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:30:88", ip: ""} in network mk-old-k8s-version-409903: {Iface:virbr1 ExpiryTime:2024-08-06 09:34:26 +0000 UTC Type:0 Mac:52:54:00:12:30:88 Iaid: IPaddr:192.168.61.158 Prefix:24 Hostname:old-k8s-version-409903 Clientid:01:52:54:00:12:30:88}
	I0806 08:34:37.103717   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined IP address 192.168.61.158 and MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:37.103887   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHPort
	I0806 08:34:37.104106   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHKeyPath
	I0806 08:34:37.104263   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHUsername
	I0806 08:34:37.104402   79927 sshutil.go:53] new ssh client: &{IP:192.168.61.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/old-k8s-version-409903/id_rsa Username:docker}
	I0806 08:34:37.192035   79927 ssh_runner.go:195] Run: cat /etc/os-release
	I0806 08:34:37.196693   79927 info.go:137] Remote host: Buildroot 2023.02.9
	I0806 08:34:37.196712   79927 filesync.go:126] Scanning /home/jenkins/minikube-integration/19370-13057/.minikube/addons for local assets ...
	I0806 08:34:37.196776   79927 filesync.go:126] Scanning /home/jenkins/minikube-integration/19370-13057/.minikube/files for local assets ...
	I0806 08:34:37.196861   79927 filesync.go:149] local asset: /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem -> 202462.pem in /etc/ssl/certs
	I0806 08:34:37.196957   79927 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0806 08:34:37.206906   79927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem --> /etc/ssl/certs/202462.pem (1708 bytes)
	I0806 08:34:37.232270   79927 start.go:296] duration metric: took 132.262362ms for postStartSetup
	I0806 08:34:37.232311   79927 fix.go:56] duration metric: took 22.878683419s for fixHost
	I0806 08:34:37.232332   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHHostname
	I0806 08:34:37.235278   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:37.235721   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:30:88", ip: ""} in network mk-old-k8s-version-409903: {Iface:virbr1 ExpiryTime:2024-08-06 09:34:26 +0000 UTC Type:0 Mac:52:54:00:12:30:88 Iaid: IPaddr:192.168.61.158 Prefix:24 Hostname:old-k8s-version-409903 Clientid:01:52:54:00:12:30:88}
	I0806 08:34:37.235751   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined IP address 192.168.61.158 and MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:37.235946   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHPort
	I0806 08:34:37.236168   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHKeyPath
	I0806 08:34:37.236337   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHKeyPath
	I0806 08:34:37.236491   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHUsername
	I0806 08:34:37.236640   79927 main.go:141] libmachine: Using SSH client type: native
	I0806 08:34:37.236793   79927 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.158 22 <nil> <nil>}
	I0806 08:34:37.236803   79927 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0806 08:34:37.353370   79927 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722933277.327882532
	
	I0806 08:34:37.353397   79927 fix.go:216] guest clock: 1722933277.327882532
	I0806 08:34:37.353408   79927 fix.go:229] Guest: 2024-08-06 08:34:37.327882532 +0000 UTC Remote: 2024-08-06 08:34:37.232315481 +0000 UTC m=+218.506631326 (delta=95.567051ms)
	I0806 08:34:37.353439   79927 fix.go:200] guest clock delta is within tolerance: 95.567051ms
	I0806 08:34:37.353451   79927 start.go:83] releasing machines lock for "old-k8s-version-409903", held for 22.999876259s
	I0806 08:34:37.353490   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .DriverName
	I0806 08:34:37.353775   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetIP
	I0806 08:34:37.356788   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:37.357197   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:30:88", ip: ""} in network mk-old-k8s-version-409903: {Iface:virbr1 ExpiryTime:2024-08-06 09:34:26 +0000 UTC Type:0 Mac:52:54:00:12:30:88 Iaid: IPaddr:192.168.61.158 Prefix:24 Hostname:old-k8s-version-409903 Clientid:01:52:54:00:12:30:88}
	I0806 08:34:37.357224   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined IP address 192.168.61.158 and MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:37.357434   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .DriverName
	I0806 08:34:37.358035   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .DriverName
	I0806 08:34:37.358238   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .DriverName
	I0806 08:34:37.358320   79927 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0806 08:34:37.358360   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHHostname
	I0806 08:34:37.358478   79927 ssh_runner.go:195] Run: cat /version.json
	I0806 08:34:37.358505   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHHostname
	I0806 08:34:37.361125   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:37.361429   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:37.361464   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:30:88", ip: ""} in network mk-old-k8s-version-409903: {Iface:virbr1 ExpiryTime:2024-08-06 09:34:26 +0000 UTC Type:0 Mac:52:54:00:12:30:88 Iaid: IPaddr:192.168.61.158 Prefix:24 Hostname:old-k8s-version-409903 Clientid:01:52:54:00:12:30:88}
	I0806 08:34:37.361490   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined IP address 192.168.61.158 and MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:37.361681   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHPort
	I0806 08:34:37.361846   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHKeyPath
	I0806 08:34:37.361889   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:30:88", ip: ""} in network mk-old-k8s-version-409903: {Iface:virbr1 ExpiryTime:2024-08-06 09:34:26 +0000 UTC Type:0 Mac:52:54:00:12:30:88 Iaid: IPaddr:192.168.61.158 Prefix:24 Hostname:old-k8s-version-409903 Clientid:01:52:54:00:12:30:88}
	I0806 08:34:37.361913   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined IP address 192.168.61.158 and MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:37.362018   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHUsername
	I0806 08:34:37.362078   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHPort
	I0806 08:34:37.362188   79927 sshutil.go:53] new ssh client: &{IP:192.168.61.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/old-k8s-version-409903/id_rsa Username:docker}
	I0806 08:34:37.362319   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHKeyPath
	I0806 08:34:37.362452   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHUsername
	I0806 08:34:37.362596   79927 sshutil.go:53] new ssh client: &{IP:192.168.61.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/old-k8s-version-409903/id_rsa Username:docker}
	I0806 08:34:37.468465   79927 ssh_runner.go:195] Run: systemctl --version
	I0806 08:34:37.475436   79927 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0806 08:34:37.627725   79927 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0806 08:34:37.635346   79927 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0806 08:34:37.635432   79927 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0806 08:34:37.652546   79927 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0806 08:34:37.652571   79927 start.go:495] detecting cgroup driver to use...
	I0806 08:34:37.652642   79927 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0806 08:34:37.671386   79927 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0806 08:34:37.686408   79927 docker.go:217] disabling cri-docker service (if available) ...
	I0806 08:34:37.686473   79927 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0806 08:34:37.701731   79927 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0806 08:34:37.717528   79927 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0806 08:34:37.845160   79927 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0806 08:34:37.999278   79927 docker.go:233] disabling docker service ...
	I0806 08:34:37.999361   79927 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0806 08:34:38.017101   79927 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0806 08:34:38.033189   79927 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0806 08:34:38.213620   79927 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0806 08:34:38.374790   79927 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0806 08:34:38.393394   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0806 08:34:38.419807   79927 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0806 08:34:38.419878   79927 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:34:38.434870   79927 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0806 08:34:38.434946   79927 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:34:38.449029   79927 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:34:38.460440   79927 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:34:38.473654   79927 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0806 08:34:38.486032   79927 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0806 08:34:38.495861   79927 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0806 08:34:38.495927   79927 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0806 08:34:38.511021   79927 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0806 08:34:38.521985   79927 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 08:34:38.651919   79927 ssh_runner.go:195] Run: sudo systemctl restart crio
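For reference, the CRI-O preparation logged between 08:34:38.39 and 08:34:38.65 reduces to a handful of shell commands on the guest. The sketch below is condensed from the Run: lines above; the paths, pause-image tag and cgroup manager are copied from the log, not taken from minikube source:

    # point crictl at the CRI-O socket
    printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
    # pin the pause image and switch CRI-O to the cgroupfs driver
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
    # the sysctl probe above failed, so the module is loaded explicitly
    sudo modprobe br_netfilter
    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
    sudo systemctl daemon-reload && sudo systemctl restart crio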
	I0806 08:34:34.235598   79614 node_ready.go:53] node "default-k8s-diff-port-990751" has status "Ready":"False"
	I0806 08:34:36.734741   79614 node_ready.go:53] node "default-k8s-diff-port-990751" has status "Ready":"False"
	I0806 08:34:37.235469   79614 node_ready.go:49] node "default-k8s-diff-port-990751" has status "Ready":"True"
	I0806 08:34:37.235496   79614 node_ready.go:38] duration metric: took 5.004029432s for node "default-k8s-diff-port-990751" to be "Ready" ...
	I0806 08:34:37.235506   79614 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0806 08:34:37.242299   79614 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-gx8tt" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:37.247917   79614 pod_ready.go:92] pod "coredns-7db6d8ff4d-gx8tt" in "kube-system" namespace has status "Ready":"True"
	I0806 08:34:37.247936   79614 pod_ready.go:81] duration metric: took 5.613775ms for pod "coredns-7db6d8ff4d-gx8tt" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:37.247944   79614 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-990751" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:38.822367   79927 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0806 08:34:38.822444   79927 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0806 08:34:38.828148   79927 start.go:563] Will wait 60s for crictl version
	I0806 08:34:38.828211   79927 ssh_runner.go:195] Run: which crictl
	I0806 08:34:38.832084   79927 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0806 08:34:38.872571   79927 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0806 08:34:38.872658   79927 ssh_runner.go:195] Run: crio --version
	I0806 08:34:38.903507   79927 ssh_runner.go:195] Run: crio --version
	I0806 08:34:38.934955   79927 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
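The crictl probe at 08:34:38.87 can be reproduced on the guest with the command below; the expected fields are the ones printed in the log (RuntimeName cri-o, RuntimeVersion 1.29.1, RuntimeApiVersion v1). The explicit endpoint flag is optional here because /etc/crictl.yaml was just written:

    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version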
	I0806 08:34:36.612552   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:38.614644   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:37.380808   78922 main.go:141] libmachine: (no-preload-703340) Calling .Start
	I0806 08:34:37.381005   78922 main.go:141] libmachine: (no-preload-703340) Ensuring networks are active...
	I0806 08:34:37.381749   78922 main.go:141] libmachine: (no-preload-703340) Ensuring network default is active
	I0806 08:34:37.382052   78922 main.go:141] libmachine: (no-preload-703340) Ensuring network mk-no-preload-703340 is active
	I0806 08:34:37.382556   78922 main.go:141] libmachine: (no-preload-703340) Getting domain xml...
	I0806 08:34:37.383386   78922 main.go:141] libmachine: (no-preload-703340) Creating domain...
	I0806 08:34:38.779397   78922 main.go:141] libmachine: (no-preload-703340) Waiting to get IP...
	I0806 08:34:38.780262   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:38.780737   78922 main.go:141] libmachine: (no-preload-703340) DBG | unable to find current IP address of domain no-preload-703340 in network mk-no-preload-703340
	I0806 08:34:38.780809   78922 main.go:141] libmachine: (no-preload-703340) DBG | I0806 08:34:38.780723   81046 retry.go:31] will retry after 194.059215ms: waiting for machine to come up
	I0806 08:34:38.976254   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:38.976734   78922 main.go:141] libmachine: (no-preload-703340) DBG | unable to find current IP address of domain no-preload-703340 in network mk-no-preload-703340
	I0806 08:34:38.976769   78922 main.go:141] libmachine: (no-preload-703340) DBG | I0806 08:34:38.976653   81046 retry.go:31] will retry after 288.140112ms: waiting for machine to come up
	I0806 08:34:39.266297   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:39.267031   78922 main.go:141] libmachine: (no-preload-703340) DBG | unable to find current IP address of domain no-preload-703340 in network mk-no-preload-703340
	I0806 08:34:39.267060   78922 main.go:141] libmachine: (no-preload-703340) DBG | I0806 08:34:39.266948   81046 retry.go:31] will retry after 354.804756ms: waiting for machine to come up
	I0806 08:34:39.623548   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:39.624083   78922 main.go:141] libmachine: (no-preload-703340) DBG | unable to find current IP address of domain no-preload-703340 in network mk-no-preload-703340
	I0806 08:34:39.624111   78922 main.go:141] libmachine: (no-preload-703340) DBG | I0806 08:34:39.623994   81046 retry.go:31] will retry after 418.851823ms: waiting for machine to come up
	I0806 08:34:40.044734   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:40.045142   78922 main.go:141] libmachine: (no-preload-703340) DBG | unable to find current IP address of domain no-preload-703340 in network mk-no-preload-703340
	I0806 08:34:40.045165   78922 main.go:141] libmachine: (no-preload-703340) DBG | I0806 08:34:40.045103   81046 retry.go:31] will retry after 525.29917ms: waiting for machine to come up
	I0806 08:34:40.571928   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:40.572395   78922 main.go:141] libmachine: (no-preload-703340) DBG | unable to find current IP address of domain no-preload-703340 in network mk-no-preload-703340
	I0806 08:34:40.572431   78922 main.go:141] libmachine: (no-preload-703340) DBG | I0806 08:34:40.572332   81046 retry.go:31] will retry after 609.965809ms: waiting for machine to come up
	I0806 08:34:41.184209   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:41.184793   78922 main.go:141] libmachine: (no-preload-703340) DBG | unable to find current IP address of domain no-preload-703340 in network mk-no-preload-703340
	I0806 08:34:41.184820   78922 main.go:141] libmachine: (no-preload-703340) DBG | I0806 08:34:41.184739   81046 retry.go:31] will retry after 1.120596s: waiting for machine to come up
	I0806 08:34:38.936206   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetIP
	I0806 08:34:38.939581   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:38.940049   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:30:88", ip: ""} in network mk-old-k8s-version-409903: {Iface:virbr1 ExpiryTime:2024-08-06 09:34:26 +0000 UTC Type:0 Mac:52:54:00:12:30:88 Iaid: IPaddr:192.168.61.158 Prefix:24 Hostname:old-k8s-version-409903 Clientid:01:52:54:00:12:30:88}
	I0806 08:34:38.940076   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined IP address 192.168.61.158 and MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:38.940336   79927 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0806 08:34:38.944801   79927 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0806 08:34:38.959454   79927 kubeadm.go:883] updating cluster {Name:old-k8s-version-409903 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.20.0 ClusterName:old-k8s-version-409903 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.158 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fa
lse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0806 08:34:38.959581   79927 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0806 08:34:38.959647   79927 ssh_runner.go:195] Run: sudo crictl images --output json
	I0806 08:34:39.014798   79927 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0806 08:34:39.014908   79927 ssh_runner.go:195] Run: which lz4
	I0806 08:34:39.019642   79927 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0806 08:34:39.025125   79927 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0806 08:34:39.025165   79927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0806 08:34:40.772902   79927 crio.go:462] duration metric: took 1.753273519s to copy over tarball
	I0806 08:34:40.772994   79927 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
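Because no preloaded images were found in the runtime, the ~473 MB preload tarball is copied to the guest and unpacked straight into /var, where CRI-O keeps its image store (typically under /var/lib/containers). A minimal sketch of that step, assuming the tarball has already been transferred to /preloaded.tar.lz4:

    # confirm the tarball made it across (size and mtime)
    stat -c "%s %y" /preloaded.tar.lz4
    # unpack image layers and metadata into /var; lz4 must be available in the guest image
    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4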
	I0806 08:34:39.256627   79614 pod_ready.go:102] pod "etcd-default-k8s-diff-port-990751" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:41.257158   79614 pod_ready.go:102] pod "etcd-default-k8s-diff-port-990751" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:42.754699   79614 pod_ready.go:92] pod "etcd-default-k8s-diff-port-990751" in "kube-system" namespace has status "Ready":"True"
	I0806 08:34:42.754729   79614 pod_ready.go:81] duration metric: took 5.506777292s for pod "etcd-default-k8s-diff-port-990751" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:42.754742   79614 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-990751" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:42.761260   79614 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-990751" in "kube-system" namespace has status "Ready":"True"
	I0806 08:34:42.761282   79614 pod_ready.go:81] duration metric: took 6.531767ms for pod "kube-apiserver-default-k8s-diff-port-990751" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:42.761292   79614 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-990751" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:42.769860   79614 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-990751" in "kube-system" namespace has status "Ready":"True"
	I0806 08:34:42.769884   79614 pod_ready.go:81] duration metric: took 8.585216ms for pod "kube-controller-manager-default-k8s-diff-port-990751" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:42.769898   79614 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-lpf88" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:42.775420   79614 pod_ready.go:92] pod "kube-proxy-lpf88" in "kube-system" namespace has status "Ready":"True"
	I0806 08:34:42.775445   79614 pod_ready.go:81] duration metric: took 5.540587ms for pod "kube-proxy-lpf88" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:42.775453   79614 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-990751" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:42.781317   79614 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-990751" in "kube-system" namespace has status "Ready":"True"
	I0806 08:34:42.781341   79614 pod_ready.go:81] duration metric: took 5.88019ms for pod "kube-scheduler-default-k8s-diff-port-990751" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:42.781353   79614 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:40.614686   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:43.114736   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:42.306869   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:42.307378   78922 main.go:141] libmachine: (no-preload-703340) DBG | unable to find current IP address of domain no-preload-703340 in network mk-no-preload-703340
	I0806 08:34:42.307408   78922 main.go:141] libmachine: (no-preload-703340) DBG | I0806 08:34:42.307341   81046 retry.go:31] will retry after 1.26489185s: waiting for machine to come up
	I0806 08:34:43.574072   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:43.574494   78922 main.go:141] libmachine: (no-preload-703340) DBG | unable to find current IP address of domain no-preload-703340 in network mk-no-preload-703340
	I0806 08:34:43.574517   78922 main.go:141] libmachine: (no-preload-703340) DBG | I0806 08:34:43.574440   81046 retry.go:31] will retry after 1.547737608s: waiting for machine to come up
	I0806 08:34:45.124236   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:45.124794   78922 main.go:141] libmachine: (no-preload-703340) DBG | unable to find current IP address of domain no-preload-703340 in network mk-no-preload-703340
	I0806 08:34:45.124822   78922 main.go:141] libmachine: (no-preload-703340) DBG | I0806 08:34:45.124745   81046 retry.go:31] will retry after 1.491615644s: waiting for machine to come up
	I0806 08:34:46.617663   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:46.618159   78922 main.go:141] libmachine: (no-preload-703340) DBG | unable to find current IP address of domain no-preload-703340 in network mk-no-preload-703340
	I0806 08:34:46.618183   78922 main.go:141] libmachine: (no-preload-703340) DBG | I0806 08:34:46.618117   81046 retry.go:31] will retry after 2.284217057s: waiting for machine to come up
	I0806 08:34:43.977375   79927 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.204351244s)
	I0806 08:34:43.977404   79927 crio.go:469] duration metric: took 3.204457213s to extract the tarball
	I0806 08:34:43.977412   79927 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0806 08:34:44.023819   79927 ssh_runner.go:195] Run: sudo crictl images --output json
	I0806 08:34:44.070991   79927 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0806 08:34:44.071016   79927 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0806 08:34:44.071089   79927 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0806 08:34:44.071129   79927 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0806 08:34:44.071138   79927 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0806 08:34:44.071090   79927 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0806 08:34:44.071184   79927 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0806 08:34:44.071192   79927 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0806 08:34:44.071210   79927 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0806 08:34:44.071255   79927 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0806 08:34:44.072641   79927 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0806 08:34:44.072728   79927 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0806 08:34:44.072743   79927 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0806 08:34:44.072642   79927 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0806 08:34:44.072749   79927 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0806 08:34:44.072811   79927 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0806 08:34:44.073085   79927 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0806 08:34:44.073298   79927 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0806 08:34:44.239872   79927 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0806 08:34:44.287412   79927 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0806 08:34:44.287458   79927 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0806 08:34:44.287504   79927 ssh_runner.go:195] Run: which crictl
	I0806 08:34:44.292050   79927 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0806 08:34:44.309229   79927 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0806 08:34:44.336611   79927 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0806 08:34:44.371389   79927 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0806 08:34:44.371447   79927 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0806 08:34:44.371500   79927 ssh_runner.go:195] Run: which crictl
	I0806 08:34:44.375821   79927 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0806 08:34:44.412139   79927 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0806 08:34:44.415159   79927 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0806 08:34:44.415339   79927 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0806 08:34:44.418296   79927 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0806 08:34:44.423718   79927 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0806 08:34:44.437866   79927 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0806 08:34:44.494716   79927 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0806 08:34:44.494765   79927 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0806 08:34:44.494819   79927 ssh_runner.go:195] Run: which crictl
	I0806 08:34:44.526849   79927 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0806 08:34:44.526898   79927 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0806 08:34:44.526961   79927 ssh_runner.go:195] Run: which crictl
	I0806 08:34:44.555530   79927 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0806 08:34:44.555587   79927 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0806 08:34:44.555620   79927 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0806 08:34:44.555634   79927 ssh_runner.go:195] Run: which crictl
	I0806 08:34:44.555655   79927 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0806 08:34:44.555692   79927 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0806 08:34:44.555707   79927 ssh_runner.go:195] Run: which crictl
	I0806 08:34:44.555714   79927 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0806 08:34:44.555743   79927 ssh_runner.go:195] Run: which crictl
	I0806 08:34:44.555748   79927 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0806 08:34:44.555792   79927 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0806 08:34:44.561072   79927 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0806 08:34:44.624216   79927 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0806 08:34:44.639877   79927 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0806 08:34:44.639895   79927 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0806 08:34:44.639979   79927 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0806 08:34:44.653684   79927 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0806 08:34:44.711774   79927 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0806 08:34:44.711804   79927 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0806 08:34:44.964733   79927 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0806 08:34:45.112727   79927 cache_images.go:92] duration metric: took 1.041682516s to LoadCachedImages
	W0806 08:34:45.112813   79927 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
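The image checks above repeat the same pattern for every cached image: ask podman for the image ID the runtime holds, and if it does not match the expected digest, remove it with crictl so the cached copy can be loaded instead. A sketch for the pause image only, using the hash quoted in the log at 08:34:44.287:

    expected=80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c
    have=$(sudo podman image inspect --format '{{.Id}}' registry.k8s.io/pause:3.2 2>/dev/null)
    if [ "$have" != "$expected" ]; then
      sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
      # minikube would now load pause_3.2 from its local cache directory; that file
      # is missing on this host, which is what triggers the warning above.
    fi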
	I0806 08:34:45.112837   79927 kubeadm.go:934] updating node { 192.168.61.158 8443 v1.20.0 crio true true} ...
	I0806 08:34:45.112964   79927 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-409903 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.158
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-409903 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0806 08:34:45.113047   79927 ssh_runner.go:195] Run: crio config
	I0806 08:34:45.165383   79927 cni.go:84] Creating CNI manager for ""
	I0806 08:34:45.165413   79927 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0806 08:34:45.165429   79927 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0806 08:34:45.165455   79927 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.158 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-409903 NodeName:old-k8s-version-409903 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.158"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.158 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt St
aticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0806 08:34:45.165687   79927 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.158
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-409903"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.158
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.158"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0806 08:34:45.165760   79927 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0806 08:34:45.177296   79927 binaries.go:44] Found k8s binaries, skipping transfer
	I0806 08:34:45.177371   79927 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0806 08:34:45.187299   79927 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0806 08:34:45.205986   79927 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0806 08:34:45.225072   79927 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0806 08:34:45.244222   79927 ssh_runner.go:195] Run: grep 192.168.61.158	control-plane.minikube.internal$ /etc/hosts
	I0806 08:34:45.248728   79927 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.158	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
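Both hosts-file edits above (host.minikube.internal at 08:34:38.94 and control-plane.minikube.internal here) use the same idiom: filter out any stale entry, append the current mapping, and copy the temp file back over /etc/hosts. Written out as plain shell:

    { grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts;
      printf '192.168.61.158\tcontrol-plane.minikube.internal\n'; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts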
	I0806 08:34:45.263094   79927 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 08:34:45.399106   79927 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0806 08:34:45.417589   79927 certs.go:68] Setting up /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/old-k8s-version-409903 for IP: 192.168.61.158
	I0806 08:34:45.417611   79927 certs.go:194] generating shared ca certs ...
	I0806 08:34:45.417630   79927 certs.go:226] acquiring lock for ca certs: {Name:mkf5c5aed91f4108054d9f2c742cad66c86afbed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 08:34:45.417779   79927 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19370-13057/.minikube/ca.key
	I0806 08:34:45.417820   79927 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.key
	I0806 08:34:45.417829   79927 certs.go:256] generating profile certs ...
	I0806 08:34:45.417913   79927 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/old-k8s-version-409903/client.key
	I0806 08:34:45.417970   79927 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/old-k8s-version-409903/apiserver.key.1ffd6785
	I0806 08:34:45.418060   79927 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/old-k8s-version-409903/proxy-client.key
	I0806 08:34:45.418242   79927 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/20246.pem (1338 bytes)
	W0806 08:34:45.418273   79927 certs.go:480] ignoring /home/jenkins/minikube-integration/19370-13057/.minikube/certs/20246_empty.pem, impossibly tiny 0 bytes
	I0806 08:34:45.418282   79927 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca-key.pem (1675 bytes)
	I0806 08:34:45.418302   79927 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem (1078 bytes)
	I0806 08:34:45.418322   79927 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/cert.pem (1123 bytes)
	I0806 08:34:45.418350   79927 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/key.pem (1675 bytes)
	I0806 08:34:45.418389   79927 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem (1708 bytes)
	I0806 08:34:45.419018   79927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0806 08:34:45.464550   79927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0806 08:34:45.500764   79927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0806 08:34:45.549225   79927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0806 08:34:45.597480   79927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/old-k8s-version-409903/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0806 08:34:45.639834   79927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/old-k8s-version-409903/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0806 08:34:45.680789   79927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/old-k8s-version-409903/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0806 08:34:45.737986   79927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/old-k8s-version-409903/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0806 08:34:45.766471   79927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem --> /usr/share/ca-certificates/202462.pem (1708 bytes)
	I0806 08:34:45.797047   79927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0806 08:34:45.825789   79927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/certs/20246.pem --> /usr/share/ca-certificates/20246.pem (1338 bytes)
	I0806 08:34:45.860217   79927 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0806 08:34:45.883841   79927 ssh_runner.go:195] Run: openssl version
	I0806 08:34:45.891103   79927 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202462.pem && ln -fs /usr/share/ca-certificates/202462.pem /etc/ssl/certs/202462.pem"
	I0806 08:34:45.905260   79927 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202462.pem
	I0806 08:34:45.910759   79927 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  6 07:17 /usr/share/ca-certificates/202462.pem
	I0806 08:34:45.910837   79927 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202462.pem
	I0806 08:34:45.918440   79927 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202462.pem /etc/ssl/certs/3ec20f2e.0"
	I0806 08:34:45.932479   79927 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0806 08:34:45.945678   79927 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0806 08:34:45.951612   79927 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  6 07:07 /usr/share/ca-certificates/minikubeCA.pem
	I0806 08:34:45.951691   79927 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0806 08:34:45.957749   79927 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0806 08:34:45.969987   79927 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20246.pem && ln -fs /usr/share/ca-certificates/20246.pem /etc/ssl/certs/20246.pem"
	I0806 08:34:45.982878   79927 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20246.pem
	I0806 08:34:45.987953   79927 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  6 07:17 /usr/share/ca-certificates/20246.pem
	I0806 08:34:45.988032   79927 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20246.pem
	I0806 08:34:45.994219   79927 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20246.pem /etc/ssl/certs/51391683.0"
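The three symlinks created above take their names from OpenSSL's subject hash of the certificate plus a ".0" suffix, which is how OpenSSL-style trust directories are indexed. For the minikube CA, for example:

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # prints b5213941 per the log
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/"$h".0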
	I0806 08:34:46.006666   79927 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0806 08:34:46.011624   79927 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0806 08:34:46.018416   79927 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0806 08:34:46.025220   79927 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0806 08:34:46.031854   79927 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0806 08:34:46.038570   79927 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0806 08:34:46.045142   79927 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
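Each control-plane certificate is then probed with -checkend 86400, which makes openssl exit non-zero if the certificate expires within the next 24 hours (86400 seconds). The same check can be run by hand against any of the files listed above, e.g.:

    for c in apiserver-kubelet-client apiserver-etcd-client front-proxy-client; do
      openssl x509 -noout -in /var/lib/minikube/certs/$c.crt -checkend 86400 \
        || echo "$c.crt expires within 24h"
    done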
	I0806 08:34:46.052198   79927 kubeadm.go:392] StartCluster: {Name:old-k8s-version-409903 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.20.0 ClusterName:old-k8s-version-409903 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.158 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 08:34:46.052313   79927 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0806 08:34:46.052372   79927 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0806 08:34:46.098382   79927 cri.go:89] found id: ""
	I0806 08:34:46.098454   79927 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0806 08:34:46.110861   79927 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0806 08:34:46.110888   79927 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0806 08:34:46.110956   79927 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0806 08:34:46.123420   79927 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0806 08:34:46.124617   79927 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-409903" does not appear in /home/jenkins/minikube-integration/19370-13057/kubeconfig
	I0806 08:34:46.125562   79927 kubeconfig.go:62] /home/jenkins/minikube-integration/19370-13057/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-409903" cluster setting kubeconfig missing "old-k8s-version-409903" context setting]
	I0806 08:34:46.127017   79927 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-13057/kubeconfig: {Name:mk5c066914acf8e36556b29792f75b98076ca34a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 08:34:46.173767   79927 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0806 08:34:46.186472   79927 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.158
	I0806 08:34:46.186517   79927 kubeadm.go:1160] stopping kube-system containers ...
	I0806 08:34:46.186532   79927 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0806 08:34:46.186592   79927 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0806 08:34:46.227817   79927 cri.go:89] found id: ""
	I0806 08:34:46.227914   79927 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0806 08:34:46.248248   79927 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0806 08:34:46.259555   79927 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0806 08:34:46.259578   79927 kubeadm.go:157] found existing configuration files:
	
	I0806 08:34:46.259647   79927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0806 08:34:46.271246   79927 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0806 08:34:46.271311   79927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0806 08:34:46.283559   79927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0806 08:34:46.295584   79927 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0806 08:34:46.295660   79927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0806 08:34:46.307223   79927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0806 08:34:46.318046   79927 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0806 08:34:46.318136   79927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0806 08:34:46.328827   79927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0806 08:34:46.339577   79927 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0806 08:34:46.339665   79927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0806 08:34:46.352107   79927 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0806 08:34:46.369776   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0806 08:34:46.505082   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0806 08:34:47.036627   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0806 08:34:47.285779   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0806 08:34:47.397777   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0806 08:34:47.517913   79927 api_server.go:52] waiting for apiserver process to appear ...
	I0806 08:34:47.518028   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:48.018399   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:48.518708   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:44.788404   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:46.789746   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:45.115046   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:47.611343   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:48.905056   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:48.905522   78922 main.go:141] libmachine: (no-preload-703340) DBG | unable to find current IP address of domain no-preload-703340 in network mk-no-preload-703340
	I0806 08:34:48.905548   78922 main.go:141] libmachine: (no-preload-703340) DBG | I0806 08:34:48.905484   81046 retry.go:31] will retry after 2.647234658s: waiting for machine to come up
	I0806 08:34:51.556304   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:51.556689   78922 main.go:141] libmachine: (no-preload-703340) DBG | unable to find current IP address of domain no-preload-703340 in network mk-no-preload-703340
	I0806 08:34:51.556712   78922 main.go:141] libmachine: (no-preload-703340) DBG | I0806 08:34:51.556663   81046 retry.go:31] will retry after 4.275379711s: waiting for machine to come up
	I0806 08:34:49.018572   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:49.518975   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:50.018355   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:50.518866   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:51.018156   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:51.518855   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:52.018270   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:52.518584   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:53.018606   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:53.518118   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:49.288691   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:51.787435   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:53.787824   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:49.612621   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:52.112274   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:55.834423   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:55.834961   78922 main.go:141] libmachine: (no-preload-703340) Found IP for machine: 192.168.72.126
	I0806 08:34:55.834980   78922 main.go:141] libmachine: (no-preload-703340) Reserving static IP address...
	I0806 08:34:55.834991   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has current primary IP address 192.168.72.126 and MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:55.835314   78922 main.go:141] libmachine: (no-preload-703340) DBG | found host DHCP lease matching {name: "no-preload-703340", mac: "52:54:00:a1:fa:d3", ip: "192.168.72.126"} in network mk-no-preload-703340: {Iface:virbr4 ExpiryTime:2024-08-06 09:34:49 +0000 UTC Type:0 Mac:52:54:00:a1:fa:d3 Iaid: IPaddr:192.168.72.126 Prefix:24 Hostname:no-preload-703340 Clientid:01:52:54:00:a1:fa:d3}
	I0806 08:34:55.835348   78922 main.go:141] libmachine: (no-preload-703340) Reserved static IP address: 192.168.72.126
	I0806 08:34:55.835363   78922 main.go:141] libmachine: (no-preload-703340) DBG | skip adding static IP to network mk-no-preload-703340 - found existing host DHCP lease matching {name: "no-preload-703340", mac: "52:54:00:a1:fa:d3", ip: "192.168.72.126"}
	I0806 08:34:55.835376   78922 main.go:141] libmachine: (no-preload-703340) DBG | Getting to WaitForSSH function...
	I0806 08:34:55.835388   78922 main.go:141] libmachine: (no-preload-703340) Waiting for SSH to be available...
	I0806 08:34:55.837645   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:55.838001   78922 main.go:141] libmachine: (no-preload-703340) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fa:d3", ip: ""} in network mk-no-preload-703340: {Iface:virbr4 ExpiryTime:2024-08-06 09:34:49 +0000 UTC Type:0 Mac:52:54:00:a1:fa:d3 Iaid: IPaddr:192.168.72.126 Prefix:24 Hostname:no-preload-703340 Clientid:01:52:54:00:a1:fa:d3}
	I0806 08:34:55.838030   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined IP address 192.168.72.126 and MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:55.838167   78922 main.go:141] libmachine: (no-preload-703340) DBG | Using SSH client type: external
	I0806 08:34:55.838193   78922 main.go:141] libmachine: (no-preload-703340) DBG | Using SSH private key: /home/jenkins/minikube-integration/19370-13057/.minikube/machines/no-preload-703340/id_rsa (-rw-------)
	I0806 08:34:55.838234   78922 main.go:141] libmachine: (no-preload-703340) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.126 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19370-13057/.minikube/machines/no-preload-703340/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0806 08:34:55.838256   78922 main.go:141] libmachine: (no-preload-703340) DBG | About to run SSH command:
	I0806 08:34:55.838272   78922 main.go:141] libmachine: (no-preload-703340) DBG | exit 0
	I0806 08:34:55.968557   78922 main.go:141] libmachine: (no-preload-703340) DBG | SSH cmd err, output: <nil>: 
	I0806 08:34:55.968893   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetConfigRaw
	I0806 08:34:55.969485   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetIP
	I0806 08:34:55.971998   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:55.972457   78922 main.go:141] libmachine: (no-preload-703340) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fa:d3", ip: ""} in network mk-no-preload-703340: {Iface:virbr4 ExpiryTime:2024-08-06 09:34:49 +0000 UTC Type:0 Mac:52:54:00:a1:fa:d3 Iaid: IPaddr:192.168.72.126 Prefix:24 Hostname:no-preload-703340 Clientid:01:52:54:00:a1:fa:d3}
	I0806 08:34:55.972487   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined IP address 192.168.72.126 and MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:55.972694   78922 profile.go:143] Saving config to /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/no-preload-703340/config.json ...
	I0806 08:34:55.972889   78922 machine.go:94] provisionDockerMachine start ...
	I0806 08:34:55.972908   78922 main.go:141] libmachine: (no-preload-703340) Calling .DriverName
	I0806 08:34:55.973115   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHHostname
	I0806 08:34:55.975361   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:55.975734   78922 main.go:141] libmachine: (no-preload-703340) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fa:d3", ip: ""} in network mk-no-preload-703340: {Iface:virbr4 ExpiryTime:2024-08-06 09:34:49 +0000 UTC Type:0 Mac:52:54:00:a1:fa:d3 Iaid: IPaddr:192.168.72.126 Prefix:24 Hostname:no-preload-703340 Clientid:01:52:54:00:a1:fa:d3}
	I0806 08:34:55.975753   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined IP address 192.168.72.126 and MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:55.975905   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHPort
	I0806 08:34:55.976051   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHKeyPath
	I0806 08:34:55.976265   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHKeyPath
	I0806 08:34:55.976424   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHUsername
	I0806 08:34:55.976635   78922 main.go:141] libmachine: Using SSH client type: native
	I0806 08:34:55.976891   78922 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.126 22 <nil> <nil>}
	I0806 08:34:55.976905   78922 main.go:141] libmachine: About to run SSH command:
	hostname
	I0806 08:34:56.084844   78922 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0806 08:34:56.084905   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetMachineName
	I0806 08:34:56.085226   78922 buildroot.go:166] provisioning hostname "no-preload-703340"
	I0806 08:34:56.085253   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetMachineName
	I0806 08:34:56.085477   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHHostname
	I0806 08:34:56.088358   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:56.088775   78922 main.go:141] libmachine: (no-preload-703340) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fa:d3", ip: ""} in network mk-no-preload-703340: {Iface:virbr4 ExpiryTime:2024-08-06 09:34:49 +0000 UTC Type:0 Mac:52:54:00:a1:fa:d3 Iaid: IPaddr:192.168.72.126 Prefix:24 Hostname:no-preload-703340 Clientid:01:52:54:00:a1:fa:d3}
	I0806 08:34:56.088806   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined IP address 192.168.72.126 and MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:56.088904   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHPort
	I0806 08:34:56.089070   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHKeyPath
	I0806 08:34:56.089262   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHKeyPath
	I0806 08:34:56.089417   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHUsername
	I0806 08:34:56.089600   78922 main.go:141] libmachine: Using SSH client type: native
	I0806 08:34:56.089810   78922 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.126 22 <nil> <nil>}
	I0806 08:34:56.089840   78922 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-703340 && echo "no-preload-703340" | sudo tee /etc/hostname
	I0806 08:34:56.216217   78922 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-703340
	
	I0806 08:34:56.216257   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHHostname
	I0806 08:34:56.218949   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:56.219274   78922 main.go:141] libmachine: (no-preload-703340) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fa:d3", ip: ""} in network mk-no-preload-703340: {Iface:virbr4 ExpiryTime:2024-08-06 09:34:49 +0000 UTC Type:0 Mac:52:54:00:a1:fa:d3 Iaid: IPaddr:192.168.72.126 Prefix:24 Hostname:no-preload-703340 Clientid:01:52:54:00:a1:fa:d3}
	I0806 08:34:56.219309   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined IP address 192.168.72.126 and MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:56.219488   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHPort
	I0806 08:34:56.219699   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHKeyPath
	I0806 08:34:56.219899   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHKeyPath
	I0806 08:34:56.220054   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHUsername
	I0806 08:34:56.220231   78922 main.go:141] libmachine: Using SSH client type: native
	I0806 08:34:56.220513   78922 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.126 22 <nil> <nil>}
	I0806 08:34:56.220544   78922 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-703340' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-703340/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-703340' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0806 08:34:56.337357   78922 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0806 08:34:56.337390   78922 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19370-13057/.minikube CaCertPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19370-13057/.minikube}
	I0806 08:34:56.337428   78922 buildroot.go:174] setting up certificates
	I0806 08:34:56.337439   78922 provision.go:84] configureAuth start
	I0806 08:34:56.337454   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetMachineName
	I0806 08:34:56.337736   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetIP
	I0806 08:34:56.340297   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:56.340655   78922 main.go:141] libmachine: (no-preload-703340) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fa:d3", ip: ""} in network mk-no-preload-703340: {Iface:virbr4 ExpiryTime:2024-08-06 09:34:49 +0000 UTC Type:0 Mac:52:54:00:a1:fa:d3 Iaid: IPaddr:192.168.72.126 Prefix:24 Hostname:no-preload-703340 Clientid:01:52:54:00:a1:fa:d3}
	I0806 08:34:56.340680   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined IP address 192.168.72.126 and MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:56.340898   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHHostname
	I0806 08:34:56.343013   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:56.343356   78922 main.go:141] libmachine: (no-preload-703340) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fa:d3", ip: ""} in network mk-no-preload-703340: {Iface:virbr4 ExpiryTime:2024-08-06 09:34:49 +0000 UTC Type:0 Mac:52:54:00:a1:fa:d3 Iaid: IPaddr:192.168.72.126 Prefix:24 Hostname:no-preload-703340 Clientid:01:52:54:00:a1:fa:d3}
	I0806 08:34:56.343395   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined IP address 192.168.72.126 and MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:56.343544   78922 provision.go:143] copyHostCerts
	I0806 08:34:56.343613   78922 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-13057/.minikube/ca.pem, removing ...
	I0806 08:34:56.343626   78922 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-13057/.minikube/ca.pem
	I0806 08:34:56.343692   78922 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19370-13057/.minikube/ca.pem (1078 bytes)
	I0806 08:34:56.343806   78922 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-13057/.minikube/cert.pem, removing ...
	I0806 08:34:56.343815   78922 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-13057/.minikube/cert.pem
	I0806 08:34:56.343838   78922 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19370-13057/.minikube/cert.pem (1123 bytes)
	I0806 08:34:56.343915   78922 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-13057/.minikube/key.pem, removing ...
	I0806 08:34:56.343929   78922 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-13057/.minikube/key.pem
	I0806 08:34:56.343948   78922 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19370-13057/.minikube/key.pem (1675 bytes)
	I0806 08:34:56.344044   78922 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19370-13057/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca-key.pem org=jenkins.no-preload-703340 san=[127.0.0.1 192.168.72.126 localhost minikube no-preload-703340]
	I0806 08:34:56.441531   78922 provision.go:177] copyRemoteCerts
	I0806 08:34:56.441588   78922 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0806 08:34:56.441610   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHHostname
	I0806 08:34:56.444202   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:56.444549   78922 main.go:141] libmachine: (no-preload-703340) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fa:d3", ip: ""} in network mk-no-preload-703340: {Iface:virbr4 ExpiryTime:2024-08-06 09:34:49 +0000 UTC Type:0 Mac:52:54:00:a1:fa:d3 Iaid: IPaddr:192.168.72.126 Prefix:24 Hostname:no-preload-703340 Clientid:01:52:54:00:a1:fa:d3}
	I0806 08:34:56.444574   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined IP address 192.168.72.126 and MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:56.444786   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHPort
	I0806 08:34:56.444977   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHKeyPath
	I0806 08:34:56.445125   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHUsername
	I0806 08:34:56.445327   78922 sshutil.go:53] new ssh client: &{IP:192.168.72.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/no-preload-703340/id_rsa Username:docker}
	I0806 08:34:56.530980   78922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0806 08:34:56.557880   78922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0806 08:34:56.584108   78922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0806 08:34:56.608415   78922 provision.go:87] duration metric: took 270.961494ms to configureAuth
	I0806 08:34:56.608449   78922 buildroot.go:189] setting minikube options for container-runtime
	I0806 08:34:56.608675   78922 config.go:182] Loaded profile config "no-preload-703340": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-rc.0
	I0806 08:34:56.608794   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHHostname
	I0806 08:34:56.611937   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:56.612383   78922 main.go:141] libmachine: (no-preload-703340) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fa:d3", ip: ""} in network mk-no-preload-703340: {Iface:virbr4 ExpiryTime:2024-08-06 09:34:49 +0000 UTC Type:0 Mac:52:54:00:a1:fa:d3 Iaid: IPaddr:192.168.72.126 Prefix:24 Hostname:no-preload-703340 Clientid:01:52:54:00:a1:fa:d3}
	I0806 08:34:56.612409   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined IP address 192.168.72.126 and MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:56.612585   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHPort
	I0806 08:34:56.612774   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHKeyPath
	I0806 08:34:56.612953   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHKeyPath
	I0806 08:34:56.613134   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHUsername
	I0806 08:34:56.613282   78922 main.go:141] libmachine: Using SSH client type: native
	I0806 08:34:56.613436   78922 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.126 22 <nil> <nil>}
	I0806 08:34:56.613450   78922 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0806 08:34:56.904446   78922 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0806 08:34:56.904474   78922 machine.go:97] duration metric: took 931.572346ms to provisionDockerMachine
	I0806 08:34:56.904484   78922 start.go:293] postStartSetup for "no-preload-703340" (driver="kvm2")
	I0806 08:34:56.904496   78922 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0806 08:34:56.904517   78922 main.go:141] libmachine: (no-preload-703340) Calling .DriverName
	I0806 08:34:56.905043   78922 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0806 08:34:56.905076   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHHostname
	I0806 08:34:56.907707   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:56.908101   78922 main.go:141] libmachine: (no-preload-703340) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fa:d3", ip: ""} in network mk-no-preload-703340: {Iface:virbr4 ExpiryTime:2024-08-06 09:34:49 +0000 UTC Type:0 Mac:52:54:00:a1:fa:d3 Iaid: IPaddr:192.168.72.126 Prefix:24 Hostname:no-preload-703340 Clientid:01:52:54:00:a1:fa:d3}
	I0806 08:34:56.908124   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined IP address 192.168.72.126 and MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:56.908356   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHPort
	I0806 08:34:56.908586   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHKeyPath
	I0806 08:34:56.908753   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHUsername
	I0806 08:34:56.908924   78922 sshutil.go:53] new ssh client: &{IP:192.168.72.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/no-preload-703340/id_rsa Username:docker}
	I0806 08:34:56.995555   78922 ssh_runner.go:195] Run: cat /etc/os-release
	I0806 08:34:56.999971   78922 info.go:137] Remote host: Buildroot 2023.02.9
	I0806 08:34:57.000004   78922 filesync.go:126] Scanning /home/jenkins/minikube-integration/19370-13057/.minikube/addons for local assets ...
	I0806 08:34:57.000079   78922 filesync.go:126] Scanning /home/jenkins/minikube-integration/19370-13057/.minikube/files for local assets ...
	I0806 08:34:57.000150   78922 filesync.go:149] local asset: /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem -> 202462.pem in /etc/ssl/certs
	I0806 08:34:57.000243   78922 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0806 08:34:57.010178   78922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem --> /etc/ssl/certs/202462.pem (1708 bytes)
	I0806 08:34:57.035704   78922 start.go:296] duration metric: took 131.206204ms for postStartSetup
	I0806 08:34:57.035751   78922 fix.go:56] duration metric: took 19.682153204s for fixHost
	I0806 08:34:57.035775   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHHostname
	I0806 08:34:57.038230   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:57.038565   78922 main.go:141] libmachine: (no-preload-703340) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fa:d3", ip: ""} in network mk-no-preload-703340: {Iface:virbr4 ExpiryTime:2024-08-06 09:34:49 +0000 UTC Type:0 Mac:52:54:00:a1:fa:d3 Iaid: IPaddr:192.168.72.126 Prefix:24 Hostname:no-preload-703340 Clientid:01:52:54:00:a1:fa:d3}
	I0806 08:34:57.038596   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined IP address 192.168.72.126 and MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:57.038809   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHPort
	I0806 08:34:57.039016   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHKeyPath
	I0806 08:34:57.039197   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHKeyPath
	I0806 08:34:57.039308   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHUsername
	I0806 08:34:57.039486   78922 main.go:141] libmachine: Using SSH client type: native
	I0806 08:34:57.039697   78922 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.126 22 <nil> <nil>}
	I0806 08:34:57.039712   78922 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0806 08:34:57.149272   78922 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722933297.121300330
	
	I0806 08:34:57.149291   78922 fix.go:216] guest clock: 1722933297.121300330
	I0806 08:34:57.149298   78922 fix.go:229] Guest: 2024-08-06 08:34:57.12130033 +0000 UTC Remote: 2024-08-06 08:34:57.035756139 +0000 UTC m=+360.415626547 (delta=85.544191ms)
	I0806 08:34:57.149319   78922 fix.go:200] guest clock delta is within tolerance: 85.544191ms
	I0806 08:34:57.149326   78922 start.go:83] releasing machines lock for "no-preload-703340", held for 19.795759576s
	I0806 08:34:57.149348   78922 main.go:141] libmachine: (no-preload-703340) Calling .DriverName
	I0806 08:34:57.149624   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetIP
	I0806 08:34:57.152286   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:57.152699   78922 main.go:141] libmachine: (no-preload-703340) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fa:d3", ip: ""} in network mk-no-preload-703340: {Iface:virbr4 ExpiryTime:2024-08-06 09:34:49 +0000 UTC Type:0 Mac:52:54:00:a1:fa:d3 Iaid: IPaddr:192.168.72.126 Prefix:24 Hostname:no-preload-703340 Clientid:01:52:54:00:a1:fa:d3}
	I0806 08:34:57.152731   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined IP address 192.168.72.126 and MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:57.152905   78922 main.go:141] libmachine: (no-preload-703340) Calling .DriverName
	I0806 08:34:57.153488   78922 main.go:141] libmachine: (no-preload-703340) Calling .DriverName
	I0806 08:34:57.153680   78922 main.go:141] libmachine: (no-preload-703340) Calling .DriverName
	I0806 08:34:57.153764   78922 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0806 08:34:57.153803   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHHostname
	I0806 08:34:57.154068   78922 ssh_runner.go:195] Run: cat /version.json
	I0806 08:34:57.154090   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHHostname
	I0806 08:34:57.156678   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:57.157026   78922 main.go:141] libmachine: (no-preload-703340) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fa:d3", ip: ""} in network mk-no-preload-703340: {Iface:virbr4 ExpiryTime:2024-08-06 09:34:49 +0000 UTC Type:0 Mac:52:54:00:a1:fa:d3 Iaid: IPaddr:192.168.72.126 Prefix:24 Hostname:no-preload-703340 Clientid:01:52:54:00:a1:fa:d3}
	I0806 08:34:57.157053   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined IP address 192.168.72.126 and MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:57.157123   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:57.157327   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHPort
	I0806 08:34:57.157492   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHKeyPath
	I0806 08:34:57.157580   78922 main.go:141] libmachine: (no-preload-703340) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fa:d3", ip: ""} in network mk-no-preload-703340: {Iface:virbr4 ExpiryTime:2024-08-06 09:34:49 +0000 UTC Type:0 Mac:52:54:00:a1:fa:d3 Iaid: IPaddr:192.168.72.126 Prefix:24 Hostname:no-preload-703340 Clientid:01:52:54:00:a1:fa:d3}
	I0806 08:34:57.157621   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHUsername
	I0806 08:34:57.157602   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined IP address 192.168.72.126 and MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:57.157701   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHPort
	I0806 08:34:57.157919   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHKeyPath
	I0806 08:34:57.157933   78922 sshutil.go:53] new ssh client: &{IP:192.168.72.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/no-preload-703340/id_rsa Username:docker}
	I0806 08:34:57.158089   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHUsername
	I0806 08:34:57.158243   78922 sshutil.go:53] new ssh client: &{IP:192.168.72.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/no-preload-703340/id_rsa Username:docker}
	I0806 08:34:57.237574   78922 ssh_runner.go:195] Run: systemctl --version
	I0806 08:34:57.260498   78922 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0806 08:34:57.403619   78922 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0806 08:34:57.410198   78922 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0806 08:34:57.410267   78922 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0806 08:34:57.426321   78922 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0806 08:34:57.426345   78922 start.go:495] detecting cgroup driver to use...
	I0806 08:34:57.426414   78922 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0806 08:34:57.442340   78922 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0806 08:34:57.458008   78922 docker.go:217] disabling cri-docker service (if available) ...
	I0806 08:34:57.458061   78922 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0806 08:34:57.471516   78922 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0806 08:34:57.484996   78922 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0806 08:34:57.604605   78922 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0806 08:34:57.753590   78922 docker.go:233] disabling docker service ...
	I0806 08:34:57.753651   78922 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0806 08:34:57.768534   78922 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0806 08:34:57.782187   78922 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0806 08:34:57.936046   78922 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0806 08:34:58.076402   78922 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0806 08:34:58.091386   78922 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0806 08:34:58.112810   78922 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0806 08:34:58.112873   78922 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:34:58.123376   78922 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0806 08:34:58.123438   78922 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:34:58.133981   78922 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:34:58.144891   78922 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:34:58.156276   78922 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0806 08:34:58.166927   78922 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:34:58.176956   78922 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:34:58.193969   78922 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:34:58.204122   78922 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0806 08:34:58.213745   78922 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0806 08:34:58.213859   78922 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0806 08:34:58.226556   78922 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0806 08:34:58.236413   78922 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 08:34:58.359776   78922 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0806 08:34:58.505947   78922 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0806 08:34:58.506016   78922 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0806 08:34:58.511591   78922 start.go:563] Will wait 60s for crictl version
	I0806 08:34:58.511649   78922 ssh_runner.go:195] Run: which crictl
	I0806 08:34:58.516259   78922 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0806 08:34:58.568580   78922 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0806 08:34:58.568658   78922 ssh_runner.go:195] Run: crio --version
	I0806 08:34:58.607964   78922 ssh_runner.go:195] Run: crio --version
	I0806 08:34:58.642564   78922 out.go:177] * Preparing Kubernetes v1.31.0-rc.0 on CRI-O 1.29.1 ...
	I0806 08:34:54.018818   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:54.518987   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:55.018584   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:55.518848   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:56.018094   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:56.518052   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:57.018765   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:57.518128   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:58.018977   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:58.518233   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:55.789183   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:58.288863   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:54.612520   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:56.614634   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:59.115966   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:58.643802   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetIP
	I0806 08:34:58.646395   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:58.646714   78922 main.go:141] libmachine: (no-preload-703340) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fa:d3", ip: ""} in network mk-no-preload-703340: {Iface:virbr4 ExpiryTime:2024-08-06 09:34:49 +0000 UTC Type:0 Mac:52:54:00:a1:fa:d3 Iaid: IPaddr:192.168.72.126 Prefix:24 Hostname:no-preload-703340 Clientid:01:52:54:00:a1:fa:d3}
	I0806 08:34:58.646744   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined IP address 192.168.72.126 and MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:58.646948   78922 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0806 08:34:58.651136   78922 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0806 08:34:58.664347   78922 kubeadm.go:883] updating cluster {Name:no-preload-703340 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:no-preload-703340 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.126 Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0806 08:34:58.664479   78922 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime crio
	I0806 08:34:58.664511   78922 ssh_runner.go:195] Run: sudo crictl images --output json
	I0806 08:34:58.705043   78922 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-rc.0". assuming images are not preloaded.
	I0806 08:34:58.705080   78922 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0-rc.0 registry.k8s.io/kube-controller-manager:v1.31.0-rc.0 registry.k8s.io/kube-scheduler:v1.31.0-rc.0 registry.k8s.io/kube-proxy:v1.31.0-rc.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0806 08:34:58.705157   78922 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0806 08:34:58.705170   78922 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0806 08:34:58.705177   78922 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-rc.0
	I0806 08:34:58.705245   78922 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-rc.0
	I0806 08:34:58.705272   78922 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-rc.0
	I0806 08:34:58.705281   78922 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0806 08:34:58.705160   78922 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-rc.0
	I0806 08:34:58.705469   78922 image.go:134] retrieving image: registry.k8s.io/pause:3.10
	I0806 08:34:58.706962   78922 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-rc.0
	I0806 08:34:58.706993   78922 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0806 08:34:58.706986   78922 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-rc.0
	I0806 08:34:58.707018   78922 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-rc.0
	I0806 08:34:58.706968   78922 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0806 08:34:58.706989   78922 image.go:177] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0806 08:34:58.707066   78922 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0806 08:34:58.707315   78922 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-rc.0
	I0806 08:34:58.835957   78922 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0-rc.0
	I0806 08:34:58.839572   78922 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0806 08:34:58.846955   78922 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0806 08:34:58.865707   78922 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0-rc.0
	I0806 08:34:58.872729   78922 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0806 08:34:58.878484   78922 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0-rc.0
	I0806 08:34:58.898935   78922 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0-rc.0
	I0806 08:34:58.919621   78922 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0-rc.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0-rc.0" does not exist at hash "fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c" in container runtime
	I0806 08:34:58.919654   78922 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0-rc.0
	I0806 08:34:58.919691   78922 ssh_runner.go:195] Run: which crictl
	I0806 08:34:58.949493   78922 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0806 08:34:58.949544   78922 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0806 08:34:58.949597   78922 ssh_runner.go:195] Run: which crictl
	I0806 08:34:58.971429   78922 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0806 08:34:58.971476   78922 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0806 08:34:58.971522   78922 ssh_runner.go:195] Run: which crictl
	I0806 08:34:59.015321   78922 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0-rc.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0-rc.0" does not exist at hash "0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c" in container runtime
	I0806 08:34:59.015359   78922 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0-rc.0
	I0806 08:34:59.015397   78922 ssh_runner.go:195] Run: which crictl
	I0806 08:34:59.107718   78922 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0-rc.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0-rc.0" does not exist at hash "41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318" in container runtime
	I0806 08:34:59.107761   78922 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0-rc.0
	I0806 08:34:59.107797   78922 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0-rc.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0-rc.0" does not exist at hash "c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0" in container runtime
	I0806 08:34:59.107867   78922 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0-rc.0
	I0806 08:34:59.107904   78922 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0806 08:34:59.107987   78922 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0-rc.0
	I0806 08:34:59.107809   78922 ssh_runner.go:195] Run: which crictl
	I0806 08:34:59.108037   78922 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-rc.0
	I0806 08:34:59.107994   78922 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0806 08:34:59.108037   78922 ssh_runner.go:195] Run: which crictl
	I0806 08:34:59.120323   78922 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-rc.0
	I0806 08:34:59.218864   78922 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-rc.0
	I0806 08:34:59.218982   78922 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0-rc.0
	I0806 08:34:59.222696   78922 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0-rc.0
	I0806 08:34:59.222735   78922 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0806 08:34:59.222794   78922 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-rc.0
	I0806 08:34:59.222820   78922 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0806 08:34:59.222854   78922 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0806 08:34:59.222882   78922 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0-rc.0
	I0806 08:34:59.222900   78922 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-rc.0
	I0806 08:34:59.222933   78922 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0806 08:34:59.222964   78922 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0-rc.0
	I0806 08:34:59.227260   78922 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0-rc.0 (exists)
	I0806 08:34:59.227280   78922 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0-rc.0
	I0806 08:34:59.227319   78922 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-rc.0
	I0806 08:34:59.268220   78922 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0-rc.0 (exists)
	I0806 08:34:59.268240   78922 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-rc.0
	I0806 08:34:59.268291   78922 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0806 08:34:59.268325   78922 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0806 08:34:59.268332   78922 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0-rc.0
	I0806 08:34:59.268349   78922 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0-rc.0 (exists)
	I0806 08:34:59.613734   78922 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0806 08:35:01.623115   78922 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-rc.0: (2.395772878s)
	I0806 08:35:01.623140   78922 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0-rc.0: (2.354795538s)
	I0806 08:35:01.623152   78922 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-rc.0 from cache
	I0806 08:35:01.623161   78922 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0-rc.0 (exists)
	I0806 08:35:01.623168   78922 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0-rc.0
	I0806 08:35:01.623217   78922 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-rc.0
	I0806 08:35:01.623218   78922 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.009459709s)
	I0806 08:35:01.623310   78922 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0806 08:35:01.623340   78922 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0806 08:35:01.623366   78922 ssh_runner.go:195] Run: which crictl
	I0806 08:34:59.019117   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:59.518121   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:00.018102   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:00.518402   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:01.019125   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:01.518141   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:02.018063   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:02.518834   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:03.018074   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:03.518904   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:00.789181   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:02.790704   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:01.612449   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:03.655402   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:03.781865   78922 ssh_runner.go:235] Completed: which crictl: (2.158478864s)
	I0806 08:35:03.781942   78922 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0806 08:35:03.782003   78922 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-rc.0: (2.158736373s)
	I0806 08:35:03.782023   78922 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-rc.0 from cache
	I0806 08:35:03.782042   78922 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0-rc.0
	I0806 08:35:03.782112   78922 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-rc.0
	I0806 08:35:05.247976   78922 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.466007397s)
	I0806 08:35:05.248028   78922 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-rc.0: (1.465889339s)
	I0806 08:35:05.248049   78922 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0806 08:35:05.248054   78922 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-rc.0 from cache
	I0806 08:35:05.248065   78922 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0806 08:35:05.248117   78922 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0806 08:35:05.248166   78922 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0806 08:35:04.018047   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:04.518891   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:05.018447   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:05.518506   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:06.019133   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:06.518070   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:07.018362   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:07.518320   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:08.019034   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:08.518740   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:05.288137   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:07.288969   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:06.113473   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:08.612266   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:09.135551   78922 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.887407478s)
	I0806 08:35:09.135591   78922 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0806 08:35:09.135592   78922 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (3.887405091s)
	I0806 08:35:09.135605   78922 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0806 08:35:09.135618   78922 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0806 08:35:09.135645   78922 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0806 08:35:11.131984   78922 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.996319459s)
	I0806 08:35:11.132013   78922 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0806 08:35:11.132024   78922 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0-rc.0
	I0806 08:35:11.132070   78922 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-rc.0
	I0806 08:35:09.018349   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:09.518189   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:10.018824   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:10.518260   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:11.018914   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:11.518709   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:12.018349   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:12.518719   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:13.019022   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:13.518318   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:09.787576   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:11.789165   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:10.614185   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:13.112391   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:13.395512   78922 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-rc.0: (2.263417409s)
	I0806 08:35:13.395548   78922 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-rc.0 from cache
	I0806 08:35:13.395560   78922 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0806 08:35:13.395608   78922 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0806 08:35:14.150250   78922 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0806 08:35:14.150301   78922 cache_images.go:123] Successfully loaded all cached images
	I0806 08:35:14.150309   78922 cache_images.go:92] duration metric: took 15.445214072s to LoadCachedImages
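The block above is the image-cache path for this profile: each required image is inspected by ID in the container runtime, removed via crictl when the stored ID does not match the expected one, and then re-loaded from the tarball cache under /var/lib/minikube/images with `podman load -i`. Below is a minimal Go sketch of that check-then-load loop under those assumptions; the helper name loadIfMissing and the <expected-id> placeholders are illustrative and not minikube's actual code.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// loadIfMissing mirrors the sequence in the log: inspect the image ID in the
// runtime, remove the image with crictl when the ID does not match what the
// cache was built from, then load the cached tarball with podman.
func loadIfMissing(image, wantID, tarball string) error {
	out, _ := exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", image).Output()
	if strings.TrimSpace(string(out)) == wantID {
		return nil // already present with the expected ID, nothing to transfer
	}
	_ = exec.Command("sudo", "/usr/bin/crictl", "rmi", image).Run() // drop any stale copy
	if err := exec.Command("sudo", "podman", "load", "-i", tarball).Run(); err != nil {
		return fmt.Errorf("podman load %s: %w", tarball, err)
	}
	return nil
}

func main() {
	// <expected-id> stands in for the image IDs quoted in the log above.
	images := map[string][2]string{
		"registry.k8s.io/etcd:3.5.15-0":           {"<expected-id>", "/var/lib/minikube/images/etcd_3.5.15-0"},
		"registry.k8s.io/coredns/coredns:v1.11.1": {"<expected-id>", "/var/lib/minikube/images/coredns_v1.11.1"},
	}
	for img, v := range images {
		if err := loadIfMissing(img, v[0], v[1]); err != nil {
			fmt.Println(err)
		}
	}
}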
	I0806 08:35:14.150322   78922 kubeadm.go:934] updating node { 192.168.72.126 8443 v1.31.0-rc.0 crio true true} ...
	I0806 08:35:14.150540   78922 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-rc.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-703340 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.126
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-rc.0 ClusterName:no-preload-703340 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0806 08:35:14.150640   78922 ssh_runner.go:195] Run: crio config
	I0806 08:35:14.210763   78922 cni.go:84] Creating CNI manager for ""
	I0806 08:35:14.210794   78922 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0806 08:35:14.210806   78922 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0806 08:35:14.210842   78922 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.126 APIServerPort:8443 KubernetesVersion:v1.31.0-rc.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-703340 NodeName:no-preload-703340 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.126"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.126 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0806 08:35:14.210979   78922 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.126
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-703340"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.126
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.126"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-rc.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0806 08:35:14.211036   78922 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-rc.0
	I0806 08:35:14.222106   78922 binaries.go:44] Found k8s binaries, skipping transfer
	I0806 08:35:14.222189   78922 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0806 08:35:14.231944   78922 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I0806 08:35:14.250491   78922 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0806 08:35:14.268955   78922 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2166 bytes)
	I0806 08:35:14.289661   78922 ssh_runner.go:195] Run: grep 192.168.72.126	control-plane.minikube.internal$ /etc/hosts
	I0806 08:35:14.294005   78922 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.126	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0806 08:35:14.307700   78922 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 08:35:14.451462   78922 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0806 08:35:14.478155   78922 certs.go:68] Setting up /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/no-preload-703340 for IP: 192.168.72.126
	I0806 08:35:14.478177   78922 certs.go:194] generating shared ca certs ...
	I0806 08:35:14.478191   78922 certs.go:226] acquiring lock for ca certs: {Name:mkf5c5aed91f4108054d9f2c742cad66c86afbed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 08:35:14.478365   78922 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19370-13057/.minikube/ca.key
	I0806 08:35:14.478420   78922 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.key
	I0806 08:35:14.478434   78922 certs.go:256] generating profile certs ...
	I0806 08:35:14.478529   78922 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/no-preload-703340/client.key
	I0806 08:35:14.478615   78922 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/no-preload-703340/apiserver.key.0b170e14
	I0806 08:35:14.478665   78922 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/no-preload-703340/proxy-client.key
	I0806 08:35:14.478822   78922 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/20246.pem (1338 bytes)
	W0806 08:35:14.478871   78922 certs.go:480] ignoring /home/jenkins/minikube-integration/19370-13057/.minikube/certs/20246_empty.pem, impossibly tiny 0 bytes
	I0806 08:35:14.478885   78922 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca-key.pem (1675 bytes)
	I0806 08:35:14.478941   78922 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem (1078 bytes)
	I0806 08:35:14.478978   78922 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/cert.pem (1123 bytes)
	I0806 08:35:14.479014   78922 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/key.pem (1675 bytes)
	I0806 08:35:14.479078   78922 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem (1708 bytes)
	I0806 08:35:14.479773   78922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0806 08:35:14.527838   78922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0806 08:35:14.579788   78922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0806 08:35:14.614739   78922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0806 08:35:14.657559   78922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/no-preload-703340/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0806 08:35:14.699354   78922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/no-preload-703340/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0806 08:35:14.724875   78922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/no-preload-703340/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0806 08:35:14.751020   78922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/no-preload-703340/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0806 08:35:14.778300   78922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem --> /usr/share/ca-certificates/202462.pem (1708 bytes)
	I0806 08:35:14.805018   78922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0806 08:35:14.830791   78922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/certs/20246.pem --> /usr/share/ca-certificates/20246.pem (1338 bytes)
	I0806 08:35:14.856120   78922 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0806 08:35:14.874909   78922 ssh_runner.go:195] Run: openssl version
	I0806 08:35:14.880931   78922 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202462.pem && ln -fs /usr/share/ca-certificates/202462.pem /etc/ssl/certs/202462.pem"
	I0806 08:35:14.892741   78922 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202462.pem
	I0806 08:35:14.897430   78922 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  6 07:17 /usr/share/ca-certificates/202462.pem
	I0806 08:35:14.897489   78922 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202462.pem
	I0806 08:35:14.903324   78922 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202462.pem /etc/ssl/certs/3ec20f2e.0"
	I0806 08:35:14.914505   78922 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0806 08:35:14.925465   78922 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0806 08:35:14.930490   78922 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  6 07:07 /usr/share/ca-certificates/minikubeCA.pem
	I0806 08:35:14.930559   78922 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0806 08:35:14.936384   78922 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0806 08:35:14.947836   78922 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20246.pem && ln -fs /usr/share/ca-certificates/20246.pem /etc/ssl/certs/20246.pem"
	I0806 08:35:14.959376   78922 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20246.pem
	I0806 08:35:14.964118   78922 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  6 07:17 /usr/share/ca-certificates/20246.pem
	I0806 08:35:14.964168   78922 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20246.pem
	I0806 08:35:14.970266   78922 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20246.pem /etc/ssl/certs/51391683.0"
	I0806 08:35:14.982244   78922 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0806 08:35:14.987367   78922 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0806 08:35:14.993508   78922 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0806 08:35:14.999400   78922 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0806 08:35:15.005751   78922 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0806 08:35:15.011588   78922 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0806 08:35:15.017766   78922 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
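The certificate steps above follow two standard OpenSSL idioms: `openssl x509 -hash -noout` computes the subject hash that names the `/etc/ssl/certs/<hash>.0` symlink for a trusted CA, and `-checkend 86400` exits non-zero if a certificate expires within the next day. A small Go sketch of both checks; the paths and function names are illustrative:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hashLink computes the OpenSSL subject hash of a CA certificate and exposes it
// as /etc/ssl/certs/<hash>.0, which is the lookup name TLS clients expect.
func hashLink(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
	return exec.Command("sudo", "ln", "-fs", pemPath, link).Run()
}

// expiresWithinADay mirrors the -checkend 86400 probes: openssl exits non-zero
// when the certificate expires within the next 86400 seconds.
func expiresWithinADay(crtPath string) bool {
	return exec.Command("openssl", "x509", "-noout", "-in", crtPath, "-checkend", "86400").Run() != nil
}

func main() {
	if err := hashLink("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println("hash link failed:", err)
	}
	fmt.Println("apiserver-kubelet-client cert expiring within a day:",
		expiresWithinADay("/var/lib/minikube/certs/apiserver-kubelet-client.crt"))
}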
	I0806 08:35:15.023853   78922 kubeadm.go:392] StartCluster: {Name:no-preload-703340 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:no-preload-703340 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.126 Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 08:35:15.024060   78922 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0806 08:35:15.024113   78922 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0806 08:35:15.072318   78922 cri.go:89] found id: ""
	I0806 08:35:15.072415   78922 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0806 08:35:15.083158   78922 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0806 08:35:15.083182   78922 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0806 08:35:15.083225   78922 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0806 08:35:15.093301   78922 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0806 08:35:15.094369   78922 kubeconfig.go:125] found "no-preload-703340" server: "https://192.168.72.126:8443"
	I0806 08:35:15.096545   78922 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0806 08:35:15.106190   78922 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.126
	I0806 08:35:15.106226   78922 kubeadm.go:1160] stopping kube-system containers ...
	I0806 08:35:15.106237   78922 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0806 08:35:15.106288   78922 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0806 08:35:15.145974   78922 cri.go:89] found id: ""
	I0806 08:35:15.146060   78922 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0806 08:35:15.163821   78922 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0806 08:35:15.174473   78922 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0806 08:35:15.174498   78922 kubeadm.go:157] found existing configuration files:
	
	I0806 08:35:15.174562   78922 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0806 08:35:15.185227   78922 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0806 08:35:15.185309   78922 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0806 08:35:15.195614   78922 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0806 08:35:15.205197   78922 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0806 08:35:15.205261   78922 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0806 08:35:15.215511   78922 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0806 08:35:15.225354   78922 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0806 08:35:15.225412   78922 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0806 08:35:15.234912   78922 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0806 08:35:15.244638   78922 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0806 08:35:15.244703   78922 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0806 08:35:15.255080   78922 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0806 08:35:15.265861   78922 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0806 08:35:15.399192   78922 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0806 08:35:16.182816   78922 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0806 08:35:16.397386   78922 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0806 08:35:16.463980   78922 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
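The lines above are the restartPrimaryControlPlane path: each kubeconfig under /etc/kubernetes is grepped for the control-plane endpoint and removed when stale or missing, then the control plane is rebuilt by running the individual `kubeadm init phase` steps (certs, kubeconfig, kubelet-start, control-plane, etcd) against the regenerated /var/tmp/minikube/kubeadm.yaml. A hedged Go sketch of that sequence, with simplified error handling; constant names are illustrative:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

const (
	endpoint = "https://control-plane.minikube.internal:8443"
	cfgPath  = "/var/tmp/minikube/kubeadm.yaml"
)

func main() {
	// Remove kubeconfigs that do not reference the expected endpoint; grep exits
	// non-zero when the string (or the file itself) is missing, as in the log.
	for _, c := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
		path := "/etc/kubernetes/" + c
		if exec.Command("sudo", "grep", endpoint, path).Run() != nil {
			_ = exec.Command("sudo", "rm", "-f", path).Run()
		}
	}
	// Re-run the individual init phases against the regenerated config, in the
	// order the log shows.
	for _, phase := range []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"} {
		args := append([]string{"kubeadm", "init", "phase"}, strings.Fields(phase)...)
		args = append(args, "--config", cfgPath)
		if err := exec.Command("sudo", args...).Run(); err != nil {
			fmt.Printf("phase %q failed: %v\n", phase, err)
		}
	}
}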
	I0806 08:35:16.550229   78922 api_server.go:52] waiting for apiserver process to appear ...
	I0806 08:35:16.550347   78922 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:14.018930   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:14.518251   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:15.018981   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:15.518764   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:16.018114   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:16.518157   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:17.018124   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:17.518604   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:18.018742   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:18.518755   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:14.290050   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:16.788472   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:18.788526   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:15.112664   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:17.113214   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:17.050447   78922 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:17.551220   78922 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:17.603538   78922 api_server.go:72] duration metric: took 1.053308623s to wait for apiserver process to appear ...
	I0806 08:35:17.603574   78922 api_server.go:88] waiting for apiserver healthz status ...
	I0806 08:35:17.603595   78922 api_server.go:253] Checking apiserver healthz at https://192.168.72.126:8443/healthz ...
	I0806 08:35:17.604064   78922 api_server.go:269] stopped: https://192.168.72.126:8443/healthz: Get "https://192.168.72.126:8443/healthz": dial tcp 192.168.72.126:8443: connect: connection refused
	I0806 08:35:18.104696   78922 api_server.go:253] Checking apiserver healthz at https://192.168.72.126:8443/healthz ...
	I0806 08:35:20.458841   78922 api_server.go:279] https://192.168.72.126:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0806 08:35:20.458876   78922 api_server.go:103] status: https://192.168.72.126:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0806 08:35:20.458893   78922 api_server.go:253] Checking apiserver healthz at https://192.168.72.126:8443/healthz ...
	I0806 08:35:20.495570   78922 api_server.go:279] https://192.168.72.126:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0806 08:35:20.495605   78922 api_server.go:103] status: https://192.168.72.126:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0806 08:35:20.603728   78922 api_server.go:253] Checking apiserver healthz at https://192.168.72.126:8443/healthz ...
	I0806 08:35:20.628351   78922 api_server.go:279] https://192.168.72.126:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0806 08:35:20.628409   78922 api_server.go:103] status: https://192.168.72.126:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0806 08:35:21.103717   78922 api_server.go:253] Checking apiserver healthz at https://192.168.72.126:8443/healthz ...
	I0806 08:35:21.109031   78922 api_server.go:279] https://192.168.72.126:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0806 08:35:21.109064   78922 api_server.go:103] status: https://192.168.72.126:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0806 08:35:21.604400   78922 api_server.go:253] Checking apiserver healthz at https://192.168.72.126:8443/healthz ...
	I0806 08:35:21.613667   78922 api_server.go:279] https://192.168.72.126:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0806 08:35:21.613706   78922 api_server.go:103] status: https://192.168.72.126:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0806 08:35:22.103921   78922 api_server.go:253] Checking apiserver healthz at https://192.168.72.126:8443/healthz ...
	I0806 08:35:22.111933   78922 api_server.go:279] https://192.168.72.126:8443/healthz returned 200:
	ok
	I0806 08:35:22.118406   78922 api_server.go:141] control plane version: v1.31.0-rc.0
	I0806 08:35:22.118430   78922 api_server.go:131] duration metric: took 4.514848902s to wait for apiserver health ...
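The preceding wait loop is the apiserver health probe: /healthz is polled roughly every 500ms; a 403 means the endpoint is reachable but anonymous access to /healthz is forbidden, a 500 lists the post-start hooks ([-] entries) that have not yet completed, and the wait ends on a plain 200 "ok". A minimal Go sketch of such a poll loop, assuming an insecure TLS client since no client certificate is presented by this probe:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns 200 or the
// deadline passes. 403 and 500 responses are treated as "not ready yet", matching
// the behaviour visible in the log above.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // plain "ok" body, apiserver is healthy
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.72.126:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}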
	I0806 08:35:22.118438   78922 cni.go:84] Creating CNI manager for ""
	I0806 08:35:22.118444   78922 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0806 08:35:22.119689   78922 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0806 08:35:19.018302   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:19.518693   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:20.018804   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:20.518878   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:21.018974   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:21.518747   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:22.018530   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:22.519082   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:23.019088   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:23.518932   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:20.788766   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:23.288382   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:19.614878   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:22.113841   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:22.121321   78922 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0806 08:35:22.132760   78922 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0806 08:35:22.151770   78922 system_pods.go:43] waiting for kube-system pods to appear ...
	I0806 08:35:22.161775   78922 system_pods.go:59] 8 kube-system pods found
	I0806 08:35:22.161806   78922 system_pods.go:61] "coredns-6f6b679f8f-ft9vg" [0f9ea981-764b-4777-bf78-8798570b20d8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0806 08:35:22.161818   78922 system_pods.go:61] "etcd-no-preload-703340" [6adb80b2-0d33-4d53-a868-f83226cc7c26] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0806 08:35:22.161826   78922 system_pods.go:61] "kube-apiserver-no-preload-703340" [4a415305-2f94-42c5-901d-49f0b0114cf6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0806 08:35:22.161832   78922 system_pods.go:61] "kube-controller-manager-no-preload-703340" [0351f8f4-5046-4068-ada3-103394ee9ae2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0806 08:35:22.161837   78922 system_pods.go:61] "kube-proxy-7t2sd" [27f3aa37-eca2-450b-a98d-b3246a7e48ac] Running
	I0806 08:35:22.161841   78922 system_pods.go:61] "kube-scheduler-no-preload-703340" [ad4f6239-f5e1-4a76-91ea-8da5478e8992] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0806 08:35:22.161845   78922 system_pods.go:61] "metrics-server-6867b74b74-qz4vx" [800676b3-4940-4ae2-ad37-7adbd67ea8fa] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0806 08:35:22.161849   78922 system_pods.go:61] "storage-provisioner" [8fa414bf-3923-4c20-acd7-a48fba6f4044] Running
	I0806 08:35:22.161855   78922 system_pods.go:74] duration metric: took 10.062633ms to wait for pod list to return data ...
	I0806 08:35:22.161865   78922 node_conditions.go:102] verifying NodePressure condition ...
	I0806 08:35:22.165604   78922 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0806 08:35:22.165626   78922 node_conditions.go:123] node cpu capacity is 2
	I0806 08:35:22.165637   78922 node_conditions.go:105] duration metric: took 3.767899ms to run NodePressure ...
	I0806 08:35:22.165656   78922 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0806 08:35:22.445769   78922 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0806 08:35:22.451056   78922 kubeadm.go:739] kubelet initialised
	I0806 08:35:22.451078   78922 kubeadm.go:740] duration metric: took 5.279835ms waiting for restarted kubelet to initialise ...
	I0806 08:35:22.451085   78922 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0806 08:35:22.458199   78922 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6f6b679f8f-ft9vg" in "kube-system" namespace to be "Ready" ...
	I0806 08:35:22.463913   78922 pod_ready.go:97] node "no-preload-703340" hosting pod "coredns-6f6b679f8f-ft9vg" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-703340" has status "Ready":"False"
	I0806 08:35:22.463936   78922 pod_ready.go:81] duration metric: took 5.712258ms for pod "coredns-6f6b679f8f-ft9vg" in "kube-system" namespace to be "Ready" ...
	E0806 08:35:22.463945   78922 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-703340" hosting pod "coredns-6f6b679f8f-ft9vg" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-703340" has status "Ready":"False"
	I0806 08:35:22.463952   78922 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-703340" in "kube-system" namespace to be "Ready" ...
	I0806 08:35:22.471896   78922 pod_ready.go:97] node "no-preload-703340" hosting pod "etcd-no-preload-703340" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-703340" has status "Ready":"False"
	I0806 08:35:22.471921   78922 pod_ready.go:81] duration metric: took 7.962236ms for pod "etcd-no-preload-703340" in "kube-system" namespace to be "Ready" ...
	E0806 08:35:22.471930   78922 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-703340" hosting pod "etcd-no-preload-703340" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-703340" has status "Ready":"False"
	I0806 08:35:22.471936   78922 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-703340" in "kube-system" namespace to be "Ready" ...
	I0806 08:35:22.478365   78922 pod_ready.go:97] node "no-preload-703340" hosting pod "kube-apiserver-no-preload-703340" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-703340" has status "Ready":"False"
	I0806 08:35:22.478394   78922 pod_ready.go:81] duration metric: took 6.451038ms for pod "kube-apiserver-no-preload-703340" in "kube-system" namespace to be "Ready" ...
	E0806 08:35:22.478405   78922 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-703340" hosting pod "kube-apiserver-no-preload-703340" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-703340" has status "Ready":"False"
	I0806 08:35:22.478414   78922 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-703340" in "kube-system" namespace to be "Ready" ...
	I0806 08:35:22.559022   78922 pod_ready.go:97] node "no-preload-703340" hosting pod "kube-controller-manager-no-preload-703340" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-703340" has status "Ready":"False"
	I0806 08:35:22.559058   78922 pod_ready.go:81] duration metric: took 80.632686ms for pod "kube-controller-manager-no-preload-703340" in "kube-system" namespace to be "Ready" ...
	E0806 08:35:22.559071   78922 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-703340" hosting pod "kube-controller-manager-no-preload-703340" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-703340" has status "Ready":"False"
	I0806 08:35:22.559080   78922 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-7t2sd" in "kube-system" namespace to be "Ready" ...
	I0806 08:35:22.955121   78922 pod_ready.go:92] pod "kube-proxy-7t2sd" in "kube-system" namespace has status "Ready":"True"
	I0806 08:35:22.955142   78922 pod_ready.go:81] duration metric: took 396.053517ms for pod "kube-proxy-7t2sd" in "kube-system" namespace to be "Ready" ...
	I0806 08:35:22.955152   78922 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-703340" in "kube-system" namespace to be "Ready" ...
	I0806 08:35:24.961266   78922 pod_ready.go:102] pod "kube-scheduler-no-preload-703340" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:24.018764   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:24.519010   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:25.019090   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:25.518260   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:26.018784   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:26.518903   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:27.019011   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:27.518740   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:28.018299   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:28.518391   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:25.787771   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:27.788719   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:24.612790   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:26.612992   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:29.112518   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:26.962028   78922 pod_ready.go:102] pod "kube-scheduler-no-preload-703340" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:29.461941   78922 pod_ready.go:102] pod "kube-scheduler-no-preload-703340" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:29.018449   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:29.518446   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:30.018179   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:30.518270   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:31.018913   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:31.518140   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:32.018621   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:32.518573   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:33.018512   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:33.518604   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:29.790220   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:32.287668   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:31.112680   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:33.612484   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:31.961716   78922 pod_ready.go:102] pod "kube-scheduler-no-preload-703340" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:34.461368   78922 pod_ready.go:102] pod "kube-scheduler-no-preload-703340" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:34.961460   78922 pod_ready.go:92] pod "kube-scheduler-no-preload-703340" in "kube-system" namespace has status "Ready":"True"
	I0806 08:35:34.961483   78922 pod_ready.go:81] duration metric: took 12.006324097s for pod "kube-scheduler-no-preload-703340" in "kube-system" namespace to be "Ready" ...
	I0806 08:35:34.961495   78922 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace to be "Ready" ...
	I0806 08:35:34.018132   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:34.518476   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:35.018797   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:35.518507   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:36.018976   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:36.518062   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:37.018463   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:37.518648   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:38.018861   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:38.518236   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:34.288032   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:36.787530   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:38.789235   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:35.612530   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:38.114833   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:36.967646   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:38.969106   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:41.469056   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:39.018424   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:39.518065   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:40.018381   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:40.518979   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:41.018310   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:41.518277   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:42.018881   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:42.518157   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:43.018166   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:43.518740   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:41.287534   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:43.288666   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:40.115187   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:42.611895   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:43.967648   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:45.968277   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:44.018075   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:44.518545   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:45.018342   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:45.519066   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:46.018796   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:46.519031   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:47.018876   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:47.518649   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:35:47.518746   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:35:47.559141   79927 cri.go:89] found id: ""
	I0806 08:35:47.559175   79927 logs.go:276] 0 containers: []
	W0806 08:35:47.559188   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:35:47.559195   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:35:47.559253   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:35:47.596840   79927 cri.go:89] found id: ""
	I0806 08:35:47.596900   79927 logs.go:276] 0 containers: []
	W0806 08:35:47.596913   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:35:47.596921   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:35:47.596995   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:35:47.634932   79927 cri.go:89] found id: ""
	I0806 08:35:47.634961   79927 logs.go:276] 0 containers: []
	W0806 08:35:47.634969   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:35:47.634975   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:35:47.635031   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:35:47.672170   79927 cri.go:89] found id: ""
	I0806 08:35:47.672202   79927 logs.go:276] 0 containers: []
	W0806 08:35:47.672213   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:35:47.672220   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:35:47.672284   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:35:47.709643   79927 cri.go:89] found id: ""
	I0806 08:35:47.709672   79927 logs.go:276] 0 containers: []
	W0806 08:35:47.709683   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:35:47.709690   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:35:47.709752   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:35:47.750036   79927 cri.go:89] found id: ""
	I0806 08:35:47.750067   79927 logs.go:276] 0 containers: []
	W0806 08:35:47.750076   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:35:47.750082   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:35:47.750149   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:35:47.791837   79927 cri.go:89] found id: ""
	I0806 08:35:47.791862   79927 logs.go:276] 0 containers: []
	W0806 08:35:47.791872   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:35:47.791879   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:35:47.791938   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:35:47.831972   79927 cri.go:89] found id: ""
	I0806 08:35:47.831997   79927 logs.go:276] 0 containers: []
	W0806 08:35:47.832005   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:35:47.832014   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:35:47.832028   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:35:47.962228   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:35:47.962252   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:35:47.962267   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:35:48.031276   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:35:48.031314   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:35:48.075869   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:35:48.075912   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:35:48.131817   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:35:48.131859   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:35:45.788042   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:47.789501   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:44.612796   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:47.112009   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:48.468289   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:50.469101   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:50.647351   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:50.661916   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:35:50.662002   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:35:50.707294   79927 cri.go:89] found id: ""
	I0806 08:35:50.707323   79927 logs.go:276] 0 containers: []
	W0806 08:35:50.707335   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:35:50.707342   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:35:50.707396   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:35:50.744111   79927 cri.go:89] found id: ""
	I0806 08:35:50.744133   79927 logs.go:276] 0 containers: []
	W0806 08:35:50.744149   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:35:50.744160   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:35:50.744205   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:35:50.781482   79927 cri.go:89] found id: ""
	I0806 08:35:50.781513   79927 logs.go:276] 0 containers: []
	W0806 08:35:50.781524   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:35:50.781531   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:35:50.781591   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:35:50.828788   79927 cri.go:89] found id: ""
	I0806 08:35:50.828817   79927 logs.go:276] 0 containers: []
	W0806 08:35:50.828827   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:35:50.828834   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:35:50.828896   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:35:50.865168   79927 cri.go:89] found id: ""
	I0806 08:35:50.865199   79927 logs.go:276] 0 containers: []
	W0806 08:35:50.865211   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:35:50.865219   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:35:50.865273   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:35:50.902814   79927 cri.go:89] found id: ""
	I0806 08:35:50.902842   79927 logs.go:276] 0 containers: []
	W0806 08:35:50.902851   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:35:50.902856   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:35:50.902907   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:35:50.939861   79927 cri.go:89] found id: ""
	I0806 08:35:50.939891   79927 logs.go:276] 0 containers: []
	W0806 08:35:50.939899   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:35:50.939905   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:35:50.939961   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:35:50.984743   79927 cri.go:89] found id: ""
	I0806 08:35:50.984773   79927 logs.go:276] 0 containers: []
	W0806 08:35:50.984782   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:35:50.984793   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:35:50.984809   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:35:50.998476   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:35:50.998515   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:35:51.081100   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:35:51.081126   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:35:51.081139   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:35:51.157634   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:35:51.157679   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:35:51.210685   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:35:51.210716   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:35:50.288343   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:52.804477   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:49.612661   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:52.113950   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:54.114200   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:52.969251   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:55.468041   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:53.778620   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:53.793090   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:35:53.793154   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:35:53.837558   79927 cri.go:89] found id: ""
	I0806 08:35:53.837584   79927 logs.go:276] 0 containers: []
	W0806 08:35:53.837594   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:35:53.837599   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:35:53.837670   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:35:53.878855   79927 cri.go:89] found id: ""
	I0806 08:35:53.878884   79927 logs.go:276] 0 containers: []
	W0806 08:35:53.878893   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:35:53.878898   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:35:53.878945   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:35:53.918330   79927 cri.go:89] found id: ""
	I0806 08:35:53.918357   79927 logs.go:276] 0 containers: []
	W0806 08:35:53.918366   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:35:53.918371   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:35:53.918422   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:35:53.959156   79927 cri.go:89] found id: ""
	I0806 08:35:53.959186   79927 logs.go:276] 0 containers: []
	W0806 08:35:53.959196   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:35:53.959204   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:35:53.959309   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:35:53.996775   79927 cri.go:89] found id: ""
	I0806 08:35:53.996802   79927 logs.go:276] 0 containers: []
	W0806 08:35:53.996810   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:35:53.996816   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:35:53.996874   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:35:54.033047   79927 cri.go:89] found id: ""
	I0806 08:35:54.033072   79927 logs.go:276] 0 containers: []
	W0806 08:35:54.033082   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:35:54.033089   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:35:54.033152   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:35:54.069426   79927 cri.go:89] found id: ""
	I0806 08:35:54.069458   79927 logs.go:276] 0 containers: []
	W0806 08:35:54.069468   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:35:54.069475   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:35:54.069536   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:35:54.109114   79927 cri.go:89] found id: ""
	I0806 08:35:54.109143   79927 logs.go:276] 0 containers: []
	W0806 08:35:54.109153   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:35:54.109164   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:35:54.109179   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:35:54.161264   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:35:54.161298   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:35:54.175216   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:35:54.175247   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:35:54.254970   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:35:54.254999   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:35:54.255014   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:35:54.331561   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:35:54.331596   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:35:56.873110   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:56.889302   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:35:56.889378   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:35:56.931875   79927 cri.go:89] found id: ""
	I0806 08:35:56.931904   79927 logs.go:276] 0 containers: []
	W0806 08:35:56.931913   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:35:56.931919   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:35:56.931975   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:35:56.995549   79927 cri.go:89] found id: ""
	I0806 08:35:56.995587   79927 logs.go:276] 0 containers: []
	W0806 08:35:56.995601   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:35:56.995608   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:35:56.995675   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:35:57.042708   79927 cri.go:89] found id: ""
	I0806 08:35:57.042735   79927 logs.go:276] 0 containers: []
	W0806 08:35:57.042743   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:35:57.042749   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:35:57.042833   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:35:57.079160   79927 cri.go:89] found id: ""
	I0806 08:35:57.079187   79927 logs.go:276] 0 containers: []
	W0806 08:35:57.079197   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:35:57.079203   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:35:57.079260   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:35:57.116358   79927 cri.go:89] found id: ""
	I0806 08:35:57.116409   79927 logs.go:276] 0 containers: []
	W0806 08:35:57.116421   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:35:57.116429   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:35:57.116489   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:35:57.151051   79927 cri.go:89] found id: ""
	I0806 08:35:57.151079   79927 logs.go:276] 0 containers: []
	W0806 08:35:57.151087   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:35:57.151096   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:35:57.151148   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:35:57.189747   79927 cri.go:89] found id: ""
	I0806 08:35:57.189791   79927 logs.go:276] 0 containers: []
	W0806 08:35:57.189800   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:35:57.189806   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:35:57.189871   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:35:57.225251   79927 cri.go:89] found id: ""
	I0806 08:35:57.225277   79927 logs.go:276] 0 containers: []
	W0806 08:35:57.225287   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:35:57.225298   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:35:57.225314   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:35:57.238746   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:35:57.238773   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:35:57.312931   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:35:57.312959   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:35:57.312976   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:35:57.394104   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:35:57.394139   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:35:57.435993   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:35:57.436028   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:35:55.288712   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:57.789512   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:56.612875   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:58.613571   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:57.468429   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:59.969618   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:59.989556   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:36:00.004129   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:36:00.004205   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:36:00.042192   79927 cri.go:89] found id: ""
	I0806 08:36:00.042215   79927 logs.go:276] 0 containers: []
	W0806 08:36:00.042223   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:36:00.042228   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:36:00.042273   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:36:00.080921   79927 cri.go:89] found id: ""
	I0806 08:36:00.080956   79927 logs.go:276] 0 containers: []
	W0806 08:36:00.080966   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:36:00.080973   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:36:00.081033   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:36:00.123186   79927 cri.go:89] found id: ""
	I0806 08:36:00.123206   79927 logs.go:276] 0 containers: []
	W0806 08:36:00.123216   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:36:00.123223   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:36:00.123274   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:36:00.165252   79927 cri.go:89] found id: ""
	I0806 08:36:00.165281   79927 logs.go:276] 0 containers: []
	W0806 08:36:00.165289   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:36:00.165294   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:36:00.165340   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:36:00.201601   79927 cri.go:89] found id: ""
	I0806 08:36:00.201639   79927 logs.go:276] 0 containers: []
	W0806 08:36:00.201653   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:36:00.201662   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:36:00.201730   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:36:00.236933   79927 cri.go:89] found id: ""
	I0806 08:36:00.236964   79927 logs.go:276] 0 containers: []
	W0806 08:36:00.236975   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:36:00.236983   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:36:00.237043   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:36:00.273215   79927 cri.go:89] found id: ""
	I0806 08:36:00.273247   79927 logs.go:276] 0 containers: []
	W0806 08:36:00.273256   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:36:00.273261   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:36:00.273319   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:36:00.308075   79927 cri.go:89] found id: ""
	I0806 08:36:00.308105   79927 logs.go:276] 0 containers: []
	W0806 08:36:00.308116   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:36:00.308126   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:36:00.308141   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:36:00.360096   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:36:00.360146   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:36:00.374484   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:36:00.374521   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:36:00.454297   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:36:00.454318   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:36:00.454334   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:36:00.542665   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:36:00.542702   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:36:03.084297   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:36:03.098661   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:36:03.098737   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:36:03.134751   79927 cri.go:89] found id: ""
	I0806 08:36:03.134786   79927 logs.go:276] 0 containers: []
	W0806 08:36:03.134797   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:36:03.134805   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:36:03.134873   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:36:03.177412   79927 cri.go:89] found id: ""
	I0806 08:36:03.177442   79927 logs.go:276] 0 containers: []
	W0806 08:36:03.177453   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:36:03.177461   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:36:03.177516   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:36:03.213177   79927 cri.go:89] found id: ""
	I0806 08:36:03.213209   79927 logs.go:276] 0 containers: []
	W0806 08:36:03.213221   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:36:03.213229   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:36:03.213291   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:36:03.254569   79927 cri.go:89] found id: ""
	I0806 08:36:03.254594   79927 logs.go:276] 0 containers: []
	W0806 08:36:03.254603   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:36:03.254608   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:36:03.254656   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:36:03.292564   79927 cri.go:89] found id: ""
	I0806 08:36:03.292590   79927 logs.go:276] 0 containers: []
	W0806 08:36:03.292597   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:36:03.292610   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:36:03.292671   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:36:03.331572   79927 cri.go:89] found id: ""
	I0806 08:36:03.331596   79927 logs.go:276] 0 containers: []
	W0806 08:36:03.331605   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:36:03.331610   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:36:03.331658   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:36:03.366368   79927 cri.go:89] found id: ""
	I0806 08:36:03.366398   79927 logs.go:276] 0 containers: []
	W0806 08:36:03.366409   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:36:03.366417   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:36:03.366469   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:36:03.404345   79927 cri.go:89] found id: ""
	I0806 08:36:03.404387   79927 logs.go:276] 0 containers: []
	W0806 08:36:03.404399   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:36:03.404411   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:36:03.404425   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:36:03.454825   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:36:03.454864   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:36:03.469437   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:36:03.469466   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:36:03.553247   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:36:03.553274   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:36:03.553290   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:36:03.635461   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:36:03.635509   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:36:00.287469   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:02.787634   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:01.111573   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:03.613750   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:02.468084   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:04.468169   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:06.469036   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:06.175216   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:36:06.190083   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:36:06.190158   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:36:06.226428   79927 cri.go:89] found id: ""
	I0806 08:36:06.226459   79927 logs.go:276] 0 containers: []
	W0806 08:36:06.226468   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:36:06.226473   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:36:06.226529   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:36:06.263781   79927 cri.go:89] found id: ""
	I0806 08:36:06.263805   79927 logs.go:276] 0 containers: []
	W0806 08:36:06.263813   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:36:06.263819   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:36:06.263895   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:36:06.304335   79927 cri.go:89] found id: ""
	I0806 08:36:06.304386   79927 logs.go:276] 0 containers: []
	W0806 08:36:06.304397   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:36:06.304404   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:36:06.304466   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:36:06.341138   79927 cri.go:89] found id: ""
	I0806 08:36:06.341166   79927 logs.go:276] 0 containers: []
	W0806 08:36:06.341177   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:36:06.341184   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:36:06.341248   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:36:06.381547   79927 cri.go:89] found id: ""
	I0806 08:36:06.381577   79927 logs.go:276] 0 containers: []
	W0806 08:36:06.381587   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:36:06.381594   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:36:06.381662   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:36:06.417606   79927 cri.go:89] found id: ""
	I0806 08:36:06.417630   79927 logs.go:276] 0 containers: []
	W0806 08:36:06.417637   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:36:06.417643   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:36:06.417700   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:36:06.456258   79927 cri.go:89] found id: ""
	I0806 08:36:06.456287   79927 logs.go:276] 0 containers: []
	W0806 08:36:06.456298   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:36:06.456307   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:36:06.456404   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:36:06.494268   79927 cri.go:89] found id: ""
	I0806 08:36:06.494309   79927 logs.go:276] 0 containers: []
	W0806 08:36:06.494321   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:36:06.494332   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:36:06.494346   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:36:06.547670   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:36:06.547710   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:36:06.562020   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:36:06.562049   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:36:06.632691   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:36:06.632717   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:36:06.632729   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:36:06.721737   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:36:06.721780   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:36:04.787931   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:07.287463   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:06.111545   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:08.113637   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:08.968767   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:11.468847   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:09.265154   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:36:09.278664   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:36:09.278743   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:36:09.313464   79927 cri.go:89] found id: ""
	I0806 08:36:09.313494   79927 logs.go:276] 0 containers: []
	W0806 08:36:09.313506   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:36:09.313513   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:36:09.313561   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:36:09.352107   79927 cri.go:89] found id: ""
	I0806 08:36:09.352133   79927 logs.go:276] 0 containers: []
	W0806 08:36:09.352146   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:36:09.352153   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:36:09.352215   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:36:09.387547   79927 cri.go:89] found id: ""
	I0806 08:36:09.387581   79927 logs.go:276] 0 containers: []
	W0806 08:36:09.387594   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:36:09.387601   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:36:09.387684   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:36:09.422157   79927 cri.go:89] found id: ""
	I0806 08:36:09.422193   79927 logs.go:276] 0 containers: []
	W0806 08:36:09.422202   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:36:09.422207   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:36:09.422275   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:36:09.463065   79927 cri.go:89] found id: ""
	I0806 08:36:09.463106   79927 logs.go:276] 0 containers: []
	W0806 08:36:09.463122   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:36:09.463145   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:36:09.463209   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:36:09.499450   79927 cri.go:89] found id: ""
	I0806 08:36:09.499474   79927 logs.go:276] 0 containers: []
	W0806 08:36:09.499484   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:36:09.499491   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:36:09.499550   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:36:09.536411   79927 cri.go:89] found id: ""
	I0806 08:36:09.536442   79927 logs.go:276] 0 containers: []
	W0806 08:36:09.536453   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:36:09.536460   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:36:09.536533   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:36:09.572044   79927 cri.go:89] found id: ""
	I0806 08:36:09.572070   79927 logs.go:276] 0 containers: []
	W0806 08:36:09.572079   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:36:09.572087   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:36:09.572138   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:36:09.612385   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:36:09.612418   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:36:09.666790   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:36:09.666852   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:36:09.680707   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:36:09.680753   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:36:09.753446   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:36:09.753473   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:36:09.753491   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:36:12.336824   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:36:12.352545   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:36:12.352626   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:36:12.393158   79927 cri.go:89] found id: ""
	I0806 08:36:12.393188   79927 logs.go:276] 0 containers: []
	W0806 08:36:12.393199   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:36:12.393206   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:36:12.393255   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:36:12.428501   79927 cri.go:89] found id: ""
	I0806 08:36:12.428525   79927 logs.go:276] 0 containers: []
	W0806 08:36:12.428534   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:36:12.428539   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:36:12.428596   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:36:12.463033   79927 cri.go:89] found id: ""
	I0806 08:36:12.463062   79927 logs.go:276] 0 containers: []
	W0806 08:36:12.463072   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:36:12.463079   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:36:12.463136   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:36:12.500350   79927 cri.go:89] found id: ""
	I0806 08:36:12.500402   79927 logs.go:276] 0 containers: []
	W0806 08:36:12.500413   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:36:12.500421   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:36:12.500477   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:36:12.537323   79927 cri.go:89] found id: ""
	I0806 08:36:12.537350   79927 logs.go:276] 0 containers: []
	W0806 08:36:12.537360   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:36:12.537366   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:36:12.537414   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:36:12.575525   79927 cri.go:89] found id: ""
	I0806 08:36:12.575551   79927 logs.go:276] 0 containers: []
	W0806 08:36:12.575561   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:36:12.575568   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:36:12.575629   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:36:12.615375   79927 cri.go:89] found id: ""
	I0806 08:36:12.615436   79927 logs.go:276] 0 containers: []
	W0806 08:36:12.615451   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:36:12.615459   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:36:12.615531   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:36:12.655269   79927 cri.go:89] found id: ""
	I0806 08:36:12.655300   79927 logs.go:276] 0 containers: []
	W0806 08:36:12.655311   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:36:12.655324   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:36:12.655342   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:36:12.709097   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:36:12.709134   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:36:12.724330   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:36:12.724380   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:36:12.801075   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:36:12.801099   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:36:12.801111   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:36:12.885670   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:36:12.885704   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:36:09.288192   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:11.789794   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:10.613121   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:12.613812   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:13.969367   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:16.468276   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:15.427222   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:36:15.442092   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:36:15.442206   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:36:15.485634   79927 cri.go:89] found id: ""
	I0806 08:36:15.485657   79927 logs.go:276] 0 containers: []
	W0806 08:36:15.485665   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:36:15.485671   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:36:15.485715   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:36:15.520791   79927 cri.go:89] found id: ""
	I0806 08:36:15.520820   79927 logs.go:276] 0 containers: []
	W0806 08:36:15.520830   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:36:15.520837   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:36:15.520897   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:36:15.554041   79927 cri.go:89] found id: ""
	I0806 08:36:15.554065   79927 logs.go:276] 0 containers: []
	W0806 08:36:15.554073   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:36:15.554079   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:36:15.554143   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:36:15.587550   79927 cri.go:89] found id: ""
	I0806 08:36:15.587579   79927 logs.go:276] 0 containers: []
	W0806 08:36:15.587591   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:36:15.587598   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:36:15.587659   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:36:15.628657   79927 cri.go:89] found id: ""
	I0806 08:36:15.628681   79927 logs.go:276] 0 containers: []
	W0806 08:36:15.628689   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:36:15.628695   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:36:15.628750   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:36:15.665212   79927 cri.go:89] found id: ""
	I0806 08:36:15.665239   79927 logs.go:276] 0 containers: []
	W0806 08:36:15.665247   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:36:15.665252   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:36:15.665298   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:36:15.705675   79927 cri.go:89] found id: ""
	I0806 08:36:15.705701   79927 logs.go:276] 0 containers: []
	W0806 08:36:15.705709   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:36:15.705714   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:36:15.705762   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:36:15.739835   79927 cri.go:89] found id: ""
	I0806 08:36:15.739862   79927 logs.go:276] 0 containers: []
	W0806 08:36:15.739871   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:36:15.739879   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:36:15.739892   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:36:15.795719   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:36:15.795749   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:36:15.809940   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:36:15.809967   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:36:15.888710   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:36:15.888737   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:36:15.888753   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:36:15.972728   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:36:15.972772   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:36:18.513565   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:36:18.527480   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:36:18.527538   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:36:18.562783   79927 cri.go:89] found id: ""
	I0806 08:36:18.562812   79927 logs.go:276] 0 containers: []
	W0806 08:36:18.562824   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:36:18.562832   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:36:18.562896   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:36:18.600613   79927 cri.go:89] found id: ""
	I0806 08:36:18.600645   79927 logs.go:276] 0 containers: []
	W0806 08:36:18.600659   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:36:18.600668   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:36:18.600735   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:36:18.638074   79927 cri.go:89] found id: ""
	I0806 08:36:18.638104   79927 logs.go:276] 0 containers: []
	W0806 08:36:18.638114   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:36:18.638119   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:36:18.638181   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:36:18.679795   79927 cri.go:89] found id: ""
	I0806 08:36:18.679825   79927 logs.go:276] 0 containers: []
	W0806 08:36:18.679836   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:36:18.679844   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:36:18.679907   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:36:18.715535   79927 cri.go:89] found id: ""
	I0806 08:36:18.715568   79927 logs.go:276] 0 containers: []
	W0806 08:36:18.715577   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:36:18.715585   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:36:18.715650   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:36:18.752705   79927 cri.go:89] found id: ""
	I0806 08:36:18.752736   79927 logs.go:276] 0 containers: []
	W0806 08:36:18.752748   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:36:18.752759   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:36:18.752821   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:36:14.288048   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:16.288422   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:18.788928   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:15.112857   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:17.114585   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:18.468697   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:20.967985   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:18.791653   79927 cri.go:89] found id: ""
	I0806 08:36:18.791678   79927 logs.go:276] 0 containers: []
	W0806 08:36:18.791689   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:36:18.791696   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:36:18.791763   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:36:18.828328   79927 cri.go:89] found id: ""
	I0806 08:36:18.828357   79927 logs.go:276] 0 containers: []
	W0806 08:36:18.828378   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:36:18.828389   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:36:18.828408   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:36:18.888111   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:36:18.888148   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:36:18.902718   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:36:18.902750   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:36:18.986327   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:36:18.986352   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:36:18.986365   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:36:19.072858   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:36:19.072899   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:36:21.615497   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:36:21.631434   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:36:21.631492   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:36:21.671446   79927 cri.go:89] found id: ""
	I0806 08:36:21.671471   79927 logs.go:276] 0 containers: []
	W0806 08:36:21.671479   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:36:21.671484   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:36:21.671540   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:36:21.707598   79927 cri.go:89] found id: ""
	I0806 08:36:21.707624   79927 logs.go:276] 0 containers: []
	W0806 08:36:21.707634   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:36:21.707640   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:36:21.707686   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:36:21.753647   79927 cri.go:89] found id: ""
	I0806 08:36:21.753675   79927 logs.go:276] 0 containers: []
	W0806 08:36:21.753686   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:36:21.753692   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:36:21.753749   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:36:21.795284   79927 cri.go:89] found id: ""
	I0806 08:36:21.795311   79927 logs.go:276] 0 containers: []
	W0806 08:36:21.795321   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:36:21.795329   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:36:21.795388   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:36:21.832436   79927 cri.go:89] found id: ""
	I0806 08:36:21.832466   79927 logs.go:276] 0 containers: []
	W0806 08:36:21.832476   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:36:21.832483   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:36:21.832550   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:36:21.870295   79927 cri.go:89] found id: ""
	I0806 08:36:21.870327   79927 logs.go:276] 0 containers: []
	W0806 08:36:21.870338   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:36:21.870345   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:36:21.870412   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:36:21.908210   79927 cri.go:89] found id: ""
	I0806 08:36:21.908242   79927 logs.go:276] 0 containers: []
	W0806 08:36:21.908253   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:36:21.908261   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:36:21.908322   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:36:21.944055   79927 cri.go:89] found id: ""
	I0806 08:36:21.944084   79927 logs.go:276] 0 containers: []
	W0806 08:36:21.944095   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:36:21.944104   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:36:21.944125   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:36:22.000467   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:36:22.000506   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:36:22.014590   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:36:22.014629   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:36:22.083873   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:36:22.083907   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:36:22.083927   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:36:22.175993   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:36:22.176056   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:36:21.288435   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:23.788424   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:19.612021   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:21.612092   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:23.612557   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:22.968059   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:24.968111   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:24.720476   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:36:24.734469   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:36:24.734551   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:36:24.770663   79927 cri.go:89] found id: ""
	I0806 08:36:24.770692   79927 logs.go:276] 0 containers: []
	W0806 08:36:24.770705   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:36:24.770712   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:36:24.770768   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:36:24.807861   79927 cri.go:89] found id: ""
	I0806 08:36:24.807889   79927 logs.go:276] 0 containers: []
	W0806 08:36:24.807897   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:36:24.807908   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:36:24.807957   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:36:24.849827   79927 cri.go:89] found id: ""
	I0806 08:36:24.849857   79927 logs.go:276] 0 containers: []
	W0806 08:36:24.849869   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:36:24.849877   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:36:24.849937   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:36:24.885260   79927 cri.go:89] found id: ""
	I0806 08:36:24.885291   79927 logs.go:276] 0 containers: []
	W0806 08:36:24.885303   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:36:24.885315   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:36:24.885381   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:36:24.919918   79927 cri.go:89] found id: ""
	I0806 08:36:24.919949   79927 logs.go:276] 0 containers: []
	W0806 08:36:24.919957   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:36:24.919964   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:36:24.920026   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:36:24.956010   79927 cri.go:89] found id: ""
	I0806 08:36:24.956036   79927 logs.go:276] 0 containers: []
	W0806 08:36:24.956045   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:36:24.956050   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:36:24.956109   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:36:24.993126   79927 cri.go:89] found id: ""
	I0806 08:36:24.993154   79927 logs.go:276] 0 containers: []
	W0806 08:36:24.993165   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:36:24.993173   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:36:24.993240   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:36:25.026732   79927 cri.go:89] found id: ""
	I0806 08:36:25.026756   79927 logs.go:276] 0 containers: []
	W0806 08:36:25.026763   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:36:25.026771   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:36:25.026781   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:36:25.064076   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:36:25.064113   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:36:25.118820   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:36:25.118856   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:36:25.133640   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:36:25.133671   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:36:25.201686   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:36:25.201714   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:36:25.201729   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:36:27.801947   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:36:27.815472   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:36:27.815539   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:36:27.851073   79927 cri.go:89] found id: ""
	I0806 08:36:27.851103   79927 logs.go:276] 0 containers: []
	W0806 08:36:27.851119   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:36:27.851127   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:36:27.851184   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:36:27.886121   79927 cri.go:89] found id: ""
	I0806 08:36:27.886151   79927 logs.go:276] 0 containers: []
	W0806 08:36:27.886162   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:36:27.886169   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:36:27.886231   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:36:27.920950   79927 cri.go:89] found id: ""
	I0806 08:36:27.920979   79927 logs.go:276] 0 containers: []
	W0806 08:36:27.920990   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:36:27.920997   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:36:27.921055   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:36:27.956529   79927 cri.go:89] found id: ""
	I0806 08:36:27.956562   79927 logs.go:276] 0 containers: []
	W0806 08:36:27.956574   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:36:27.956581   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:36:27.956631   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:36:27.992272   79927 cri.go:89] found id: ""
	I0806 08:36:27.992298   79927 logs.go:276] 0 containers: []
	W0806 08:36:27.992306   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:36:27.992322   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:36:27.992405   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:36:28.027242   79927 cri.go:89] found id: ""
	I0806 08:36:28.027269   79927 logs.go:276] 0 containers: []
	W0806 08:36:28.027277   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:36:28.027283   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:36:28.027335   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:36:28.063043   79927 cri.go:89] found id: ""
	I0806 08:36:28.063068   79927 logs.go:276] 0 containers: []
	W0806 08:36:28.063076   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:36:28.063083   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:36:28.063148   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:36:28.099743   79927 cri.go:89] found id: ""
	I0806 08:36:28.099769   79927 logs.go:276] 0 containers: []
	W0806 08:36:28.099780   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:36:28.099791   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:36:28.099806   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:36:28.151354   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:36:28.151392   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:36:28.166704   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:36:28.166731   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:36:28.242621   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:36:28.242642   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:36:28.242656   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:36:28.324352   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:36:28.324405   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:36:25.789745   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:28.290656   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:26.112264   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:28.112769   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:27.466228   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:29.469998   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:30.869063   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:36:30.882309   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:36:30.882377   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:36:30.916264   79927 cri.go:89] found id: ""
	I0806 08:36:30.916296   79927 logs.go:276] 0 containers: []
	W0806 08:36:30.916306   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:36:30.916313   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:36:30.916390   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:36:30.951995   79927 cri.go:89] found id: ""
	I0806 08:36:30.952021   79927 logs.go:276] 0 containers: []
	W0806 08:36:30.952028   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:36:30.952034   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:36:30.952111   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:36:31.005229   79927 cri.go:89] found id: ""
	I0806 08:36:31.005270   79927 logs.go:276] 0 containers: []
	W0806 08:36:31.005281   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:36:31.005288   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:36:31.005350   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:36:31.040439   79927 cri.go:89] found id: ""
	I0806 08:36:31.040465   79927 logs.go:276] 0 containers: []
	W0806 08:36:31.040476   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:36:31.040482   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:36:31.040534   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:36:31.078880   79927 cri.go:89] found id: ""
	I0806 08:36:31.078910   79927 logs.go:276] 0 containers: []
	W0806 08:36:31.078920   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:36:31.078934   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:36:31.078996   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:36:31.115520   79927 cri.go:89] found id: ""
	I0806 08:36:31.115551   79927 logs.go:276] 0 containers: []
	W0806 08:36:31.115562   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:36:31.115569   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:36:31.115615   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:36:31.150450   79927 cri.go:89] found id: ""
	I0806 08:36:31.150483   79927 logs.go:276] 0 containers: []
	W0806 08:36:31.150497   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:36:31.150506   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:36:31.150565   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:36:31.187449   79927 cri.go:89] found id: ""
	I0806 08:36:31.187476   79927 logs.go:276] 0 containers: []
	W0806 08:36:31.187485   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:36:31.187493   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:36:31.187507   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:36:31.246232   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:36:31.246272   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:36:31.260691   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:36:31.260720   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:36:31.342356   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:36:31.342383   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:36:31.342401   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:36:31.426820   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:36:31.426865   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:36:30.787370   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:32.787711   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:30.611466   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:32.611550   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:31.968048   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:34.469062   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:33.974768   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:36:33.988377   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:36:33.988442   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:36:34.023284   79927 cri.go:89] found id: ""
	I0806 08:36:34.023316   79927 logs.go:276] 0 containers: []
	W0806 08:36:34.023328   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:36:34.023336   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:36:34.023397   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:36:34.058633   79927 cri.go:89] found id: ""
	I0806 08:36:34.058676   79927 logs.go:276] 0 containers: []
	W0806 08:36:34.058695   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:36:34.058704   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:36:34.058767   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:36:34.098343   79927 cri.go:89] found id: ""
	I0806 08:36:34.098379   79927 logs.go:276] 0 containers: []
	W0806 08:36:34.098390   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:36:34.098398   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:36:34.098471   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:36:34.134638   79927 cri.go:89] found id: ""
	I0806 08:36:34.134667   79927 logs.go:276] 0 containers: []
	W0806 08:36:34.134677   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:36:34.134683   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:36:34.134729   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:36:34.170118   79927 cri.go:89] found id: ""
	I0806 08:36:34.170144   79927 logs.go:276] 0 containers: []
	W0806 08:36:34.170153   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:36:34.170159   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:36:34.170207   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:36:34.204789   79927 cri.go:89] found id: ""
	I0806 08:36:34.204815   79927 logs.go:276] 0 containers: []
	W0806 08:36:34.204823   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:36:34.204829   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:36:34.204877   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:36:34.245170   79927 cri.go:89] found id: ""
	I0806 08:36:34.245194   79927 logs.go:276] 0 containers: []
	W0806 08:36:34.245203   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:36:34.245208   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:36:34.245264   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:36:34.280026   79927 cri.go:89] found id: ""
	I0806 08:36:34.280063   79927 logs.go:276] 0 containers: []
	W0806 08:36:34.280076   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:36:34.280086   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:36:34.280101   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:36:34.335363   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:36:34.335408   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:36:34.350201   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:36:34.350226   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:36:34.425205   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:36:34.425231   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:36:34.425244   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:36:34.506385   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:36:34.506419   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:36:37.047263   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:36:37.060179   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:36:37.060254   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:36:37.093168   79927 cri.go:89] found id: ""
	I0806 08:36:37.093201   79927 logs.go:276] 0 containers: []
	W0806 08:36:37.093209   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:36:37.093216   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:36:37.093264   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:36:37.131315   79927 cri.go:89] found id: ""
	I0806 08:36:37.131343   79927 logs.go:276] 0 containers: []
	W0806 08:36:37.131353   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:36:37.131360   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:36:37.131426   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:36:37.169799   79927 cri.go:89] found id: ""
	I0806 08:36:37.169829   79927 logs.go:276] 0 containers: []
	W0806 08:36:37.169839   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:36:37.169845   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:36:37.169898   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:36:37.205852   79927 cri.go:89] found id: ""
	I0806 08:36:37.205878   79927 logs.go:276] 0 containers: []
	W0806 08:36:37.205885   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:36:37.205891   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:36:37.205938   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:36:37.242469   79927 cri.go:89] found id: ""
	I0806 08:36:37.242493   79927 logs.go:276] 0 containers: []
	W0806 08:36:37.242501   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:36:37.242507   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:36:37.242554   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:36:37.278515   79927 cri.go:89] found id: ""
	I0806 08:36:37.278545   79927 logs.go:276] 0 containers: []
	W0806 08:36:37.278556   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:36:37.278563   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:36:37.278627   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:36:37.316157   79927 cri.go:89] found id: ""
	I0806 08:36:37.316181   79927 logs.go:276] 0 containers: []
	W0806 08:36:37.316189   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:36:37.316195   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:36:37.316249   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:36:37.353752   79927 cri.go:89] found id: ""
	I0806 08:36:37.353778   79927 logs.go:276] 0 containers: []
	W0806 08:36:37.353785   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:36:37.353796   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:36:37.353807   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:36:37.439435   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:36:37.439471   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:36:37.488866   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:36:37.488903   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:36:37.540031   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:36:37.540072   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:36:37.556109   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:36:37.556140   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:36:37.629023   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:36:35.288565   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:37.788336   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:35.113182   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:37.615523   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:36.967665   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:38.973884   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:41.467445   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:40.129579   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:36:40.143971   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:36:40.144050   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:36:40.180490   79927 cri.go:89] found id: ""
	I0806 08:36:40.180515   79927 logs.go:276] 0 containers: []
	W0806 08:36:40.180524   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:36:40.180530   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:36:40.180586   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:36:40.219812   79927 cri.go:89] found id: ""
	I0806 08:36:40.219860   79927 logs.go:276] 0 containers: []
	W0806 08:36:40.219869   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:36:40.219877   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:36:40.219939   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:36:40.256048   79927 cri.go:89] found id: ""
	I0806 08:36:40.256076   79927 logs.go:276] 0 containers: []
	W0806 08:36:40.256088   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:36:40.256106   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:36:40.256166   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:36:40.294693   79927 cri.go:89] found id: ""
	I0806 08:36:40.294720   79927 logs.go:276] 0 containers: []
	W0806 08:36:40.294729   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:36:40.294734   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:36:40.294792   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:36:40.331246   79927 cri.go:89] found id: ""
	I0806 08:36:40.331276   79927 logs.go:276] 0 containers: []
	W0806 08:36:40.331287   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:36:40.331295   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:36:40.331351   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:36:40.373039   79927 cri.go:89] found id: ""
	I0806 08:36:40.373068   79927 logs.go:276] 0 containers: []
	W0806 08:36:40.373095   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:36:40.373103   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:36:40.373163   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:36:40.408137   79927 cri.go:89] found id: ""
	I0806 08:36:40.408167   79927 logs.go:276] 0 containers: []
	W0806 08:36:40.408177   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:36:40.408184   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:36:40.408249   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:36:40.442996   79927 cri.go:89] found id: ""
	I0806 08:36:40.443035   79927 logs.go:276] 0 containers: []
	W0806 08:36:40.443043   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:36:40.443051   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:36:40.443063   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:36:40.499559   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:36:40.499599   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:36:40.514687   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:36:40.514716   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:36:40.586034   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:36:40.586059   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:36:40.586071   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:36:40.669756   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:36:40.669796   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
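	(Note: the probe-and-gather cycle above repeats for the rest of this log. The old-k8s-version control plane never comes back up, so every crictl query for kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, and kubernetes-dashboard returns no containers, and minikube falls back to collecting kubelet, dmesg, describe-nodes, CRI-O, and container-status output before retrying. The same probe can be reproduced by hand on the node; the commands below are copied from the log lines above, not a reconstruction of minikube internals:)

	  # probe for a control-plane container via CRI (returns nothing while the control plane is down)
	  sudo crictl ps -a --quiet --name=kube-apiserver
	  # the log collection minikube falls back to when the probe comes up empty
	  sudo journalctl -u kubelet -n 400
	  sudo journalctl -u crio -n 400
	  sudo `which crictl || echo crictl` ps -a || sudo docker ps -a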
	I0806 08:36:43.213555   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:36:43.229683   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:36:43.229751   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:36:43.267107   79927 cri.go:89] found id: ""
	I0806 08:36:43.267140   79927 logs.go:276] 0 containers: []
	W0806 08:36:43.267153   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:36:43.267161   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:36:43.267235   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:36:43.303405   79927 cri.go:89] found id: ""
	I0806 08:36:43.303435   79927 logs.go:276] 0 containers: []
	W0806 08:36:43.303443   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:36:43.303449   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:36:43.303509   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:36:43.338583   79927 cri.go:89] found id: ""
	I0806 08:36:43.338610   79927 logs.go:276] 0 containers: []
	W0806 08:36:43.338619   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:36:43.338628   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:36:43.338692   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:36:43.377413   79927 cri.go:89] found id: ""
	I0806 08:36:43.377441   79927 logs.go:276] 0 containers: []
	W0806 08:36:43.377449   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:36:43.377454   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:36:43.377501   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:36:43.413036   79927 cri.go:89] found id: ""
	I0806 08:36:43.413133   79927 logs.go:276] 0 containers: []
	W0806 08:36:43.413152   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:36:43.413160   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:36:43.413221   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:36:43.450216   79927 cri.go:89] found id: ""
	I0806 08:36:43.450245   79927 logs.go:276] 0 containers: []
	W0806 08:36:43.450253   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:36:43.450259   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:36:43.450305   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:36:43.485811   79927 cri.go:89] found id: ""
	I0806 08:36:43.485856   79927 logs.go:276] 0 containers: []
	W0806 08:36:43.485864   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:36:43.485870   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:36:43.485919   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:36:43.521586   79927 cri.go:89] found id: ""
	I0806 08:36:43.521617   79927 logs.go:276] 0 containers: []
	W0806 08:36:43.521628   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:36:43.521650   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:36:43.521673   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:36:43.561842   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:36:43.561889   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:36:43.617652   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:36:43.617683   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:36:43.631239   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:36:43.631267   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:36:43.707297   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:36:43.707317   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:36:43.707330   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:36:39.789019   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:42.287817   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:40.111420   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:42.112503   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:43.468602   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:45.974699   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:46.287309   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:36:46.301917   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:36:46.301991   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:36:46.335040   79927 cri.go:89] found id: ""
	I0806 08:36:46.335069   79927 logs.go:276] 0 containers: []
	W0806 08:36:46.335080   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:36:46.335088   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:36:46.335177   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:36:46.371902   79927 cri.go:89] found id: ""
	I0806 08:36:46.371930   79927 logs.go:276] 0 containers: []
	W0806 08:36:46.371939   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:36:46.371944   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:36:46.372000   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:36:46.411945   79927 cri.go:89] found id: ""
	I0806 08:36:46.412119   79927 logs.go:276] 0 containers: []
	W0806 08:36:46.412134   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:36:46.412145   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:36:46.412248   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:36:46.448578   79927 cri.go:89] found id: ""
	I0806 08:36:46.448612   79927 logs.go:276] 0 containers: []
	W0806 08:36:46.448624   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:36:46.448631   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:36:46.448691   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:36:46.492231   79927 cri.go:89] found id: ""
	I0806 08:36:46.492263   79927 logs.go:276] 0 containers: []
	W0806 08:36:46.492276   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:36:46.492283   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:36:46.492338   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:36:46.538502   79927 cri.go:89] found id: ""
	I0806 08:36:46.538528   79927 logs.go:276] 0 containers: []
	W0806 08:36:46.538547   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:36:46.538555   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:36:46.538610   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:36:46.577750   79927 cri.go:89] found id: ""
	I0806 08:36:46.577778   79927 logs.go:276] 0 containers: []
	W0806 08:36:46.577785   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:36:46.577790   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:36:46.577872   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:36:46.611717   79927 cri.go:89] found id: ""
	I0806 08:36:46.611742   79927 logs.go:276] 0 containers: []
	W0806 08:36:46.611750   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:36:46.611758   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:36:46.611770   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:36:46.629144   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:36:46.629178   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:36:46.714703   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:36:46.714723   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:36:46.714736   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:36:46.798978   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:36:46.799025   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:36:46.837373   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:36:46.837407   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:36:44.288387   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:46.288470   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:48.788762   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:44.611739   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:46.615083   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:49.113041   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:48.469024   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:50.967906   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:49.393231   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:36:49.407372   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:36:49.407436   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:36:49.444210   79927 cri.go:89] found id: ""
	I0806 08:36:49.444240   79927 logs.go:276] 0 containers: []
	W0806 08:36:49.444249   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:36:49.444254   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:36:49.444313   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:36:49.484165   79927 cri.go:89] found id: ""
	I0806 08:36:49.484190   79927 logs.go:276] 0 containers: []
	W0806 08:36:49.484200   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:36:49.484206   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:36:49.484278   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:36:49.519880   79927 cri.go:89] found id: ""
	I0806 08:36:49.519921   79927 logs.go:276] 0 containers: []
	W0806 08:36:49.519932   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:36:49.519939   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:36:49.519999   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:36:49.554954   79927 cri.go:89] found id: ""
	I0806 08:36:49.554979   79927 logs.go:276] 0 containers: []
	W0806 08:36:49.554988   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:36:49.554995   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:36:49.555045   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:36:49.593332   79927 cri.go:89] found id: ""
	I0806 08:36:49.593359   79927 logs.go:276] 0 containers: []
	W0806 08:36:49.593367   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:36:49.593373   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:36:49.593432   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:36:49.630736   79927 cri.go:89] found id: ""
	I0806 08:36:49.630766   79927 logs.go:276] 0 containers: []
	W0806 08:36:49.630774   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:36:49.630780   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:36:49.630839   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:36:49.667460   79927 cri.go:89] found id: ""
	I0806 08:36:49.667487   79927 logs.go:276] 0 containers: []
	W0806 08:36:49.667499   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:36:49.667505   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:36:49.667566   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:36:49.700161   79927 cri.go:89] found id: ""
	I0806 08:36:49.700190   79927 logs.go:276] 0 containers: []
	W0806 08:36:49.700199   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:36:49.700207   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:36:49.700218   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:36:49.786656   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:36:49.786704   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:36:49.825711   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:36:49.825745   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:36:49.879696   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:36:49.879731   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:36:49.893336   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:36:49.893369   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:36:49.969462   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:36:52.469939   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:36:52.483906   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:36:52.483974   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:36:52.523575   79927 cri.go:89] found id: ""
	I0806 08:36:52.523603   79927 logs.go:276] 0 containers: []
	W0806 08:36:52.523615   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:36:52.523622   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:36:52.523684   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:36:52.561590   79927 cri.go:89] found id: ""
	I0806 08:36:52.561620   79927 logs.go:276] 0 containers: []
	W0806 08:36:52.561630   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:36:52.561639   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:36:52.561698   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:36:52.598204   79927 cri.go:89] found id: ""
	I0806 08:36:52.598227   79927 logs.go:276] 0 containers: []
	W0806 08:36:52.598236   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:36:52.598241   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:36:52.598302   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:36:52.636993   79927 cri.go:89] found id: ""
	I0806 08:36:52.637019   79927 logs.go:276] 0 containers: []
	W0806 08:36:52.637027   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:36:52.637034   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:36:52.637085   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:36:52.675090   79927 cri.go:89] found id: ""
	I0806 08:36:52.675118   79927 logs.go:276] 0 containers: []
	W0806 08:36:52.675130   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:36:52.675138   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:36:52.675199   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:36:52.711045   79927 cri.go:89] found id: ""
	I0806 08:36:52.711072   79927 logs.go:276] 0 containers: []
	W0806 08:36:52.711081   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:36:52.711086   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:36:52.711151   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:36:52.749001   79927 cri.go:89] found id: ""
	I0806 08:36:52.749029   79927 logs.go:276] 0 containers: []
	W0806 08:36:52.749039   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:36:52.749046   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:36:52.749113   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:36:52.783692   79927 cri.go:89] found id: ""
	I0806 08:36:52.783720   79927 logs.go:276] 0 containers: []
	W0806 08:36:52.783737   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:36:52.783748   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:36:52.783762   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:36:52.863240   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:36:52.863281   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:36:52.902471   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:36:52.902496   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:36:52.953299   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:36:52.953341   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:36:52.968786   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:36:52.968818   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:36:53.045904   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
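	(Note: each "describe nodes" attempt fails identically because the kubeconfig used by the bundled kubectl points at localhost:8443, where no apiserver is listening, so the TCP connection is refused. A quick manual check of that endpoint, shown below, is an illustrative substitute for the kubectl call in the log; /healthz is the standard apiserver health endpoint and curl is used here only for illustration:)

	  # with no kube-apiserver container running, the connection to localhost:8443 is refused,
	  # which is exactly what the repeated kubectl errors above report
	  curl -k https://localhost:8443/healthz || echo "apiserver not reachable"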
	I0806 08:36:51.288280   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:53.289276   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:51.612945   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:54.112574   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:52.968486   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:55.468458   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:55.546407   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:36:55.560166   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:36:55.560243   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:36:55.592961   79927 cri.go:89] found id: ""
	I0806 08:36:55.592992   79927 logs.go:276] 0 containers: []
	W0806 08:36:55.593004   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:36:55.593012   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:36:55.593075   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:36:55.629546   79927 cri.go:89] found id: ""
	I0806 08:36:55.629568   79927 logs.go:276] 0 containers: []
	W0806 08:36:55.629578   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:36:55.629585   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:36:55.629639   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:36:55.668677   79927 cri.go:89] found id: ""
	I0806 08:36:55.668707   79927 logs.go:276] 0 containers: []
	W0806 08:36:55.668718   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:36:55.668726   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:36:55.668784   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:36:55.704141   79927 cri.go:89] found id: ""
	I0806 08:36:55.704173   79927 logs.go:276] 0 containers: []
	W0806 08:36:55.704183   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:36:55.704188   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:36:55.704257   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:36:55.740125   79927 cri.go:89] found id: ""
	I0806 08:36:55.740152   79927 logs.go:276] 0 containers: []
	W0806 08:36:55.740160   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:36:55.740166   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:36:55.740210   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:36:55.773709   79927 cri.go:89] found id: ""
	I0806 08:36:55.773739   79927 logs.go:276] 0 containers: []
	W0806 08:36:55.773750   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:36:55.773759   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:36:55.773827   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:36:55.813913   79927 cri.go:89] found id: ""
	I0806 08:36:55.813940   79927 logs.go:276] 0 containers: []
	W0806 08:36:55.813953   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:36:55.813959   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:36:55.814004   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:36:55.850807   79927 cri.go:89] found id: ""
	I0806 08:36:55.850841   79927 logs.go:276] 0 containers: []
	W0806 08:36:55.850853   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:36:55.850864   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:36:55.850878   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:36:55.908202   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:36:55.908247   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:36:55.922278   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:36:55.922311   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:36:55.996448   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:36:55.996476   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:36:55.996489   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:36:56.076612   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:36:56.076647   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:36:58.625610   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:36:58.640939   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:36:58.641008   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:36:58.675638   79927 cri.go:89] found id: ""
	I0806 08:36:58.675666   79927 logs.go:276] 0 containers: []
	W0806 08:36:58.675674   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:36:58.675680   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:36:58.675728   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:36:58.712020   79927 cri.go:89] found id: ""
	I0806 08:36:58.712055   79927 logs.go:276] 0 containers: []
	W0806 08:36:58.712065   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:36:58.712072   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:36:58.712126   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:36:58.749683   79927 cri.go:89] found id: ""
	I0806 08:36:58.749712   79927 logs.go:276] 0 containers: []
	W0806 08:36:58.749724   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:36:58.749731   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:36:58.749792   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:36:55.788883   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:58.288825   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:56.613051   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:59.113176   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:57.968553   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:59.968580   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:58.790005   79927 cri.go:89] found id: ""
	I0806 08:36:58.790026   79927 logs.go:276] 0 containers: []
	W0806 08:36:58.790033   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:36:58.790039   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:36:58.790089   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:36:58.828781   79927 cri.go:89] found id: ""
	I0806 08:36:58.828806   79927 logs.go:276] 0 containers: []
	W0806 08:36:58.828815   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:36:58.828821   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:36:58.828873   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:36:58.867039   79927 cri.go:89] found id: ""
	I0806 08:36:58.867067   79927 logs.go:276] 0 containers: []
	W0806 08:36:58.867075   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:36:58.867081   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:36:58.867125   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:36:58.901362   79927 cri.go:89] found id: ""
	I0806 08:36:58.901396   79927 logs.go:276] 0 containers: []
	W0806 08:36:58.901415   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:36:58.901423   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:36:58.901472   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:36:58.940351   79927 cri.go:89] found id: ""
	I0806 08:36:58.940388   79927 logs.go:276] 0 containers: []
	W0806 08:36:58.940399   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:36:58.940410   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:36:58.940425   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:36:58.995737   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:36:58.995776   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:36:59.009191   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:36:59.009222   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:36:59.085109   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:36:59.085133   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:36:59.085150   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:36:59.164778   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:36:59.164819   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:37:01.712282   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:37:01.725701   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:37:01.725759   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:37:01.762005   79927 cri.go:89] found id: ""
	I0806 08:37:01.762029   79927 logs.go:276] 0 containers: []
	W0806 08:37:01.762038   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:37:01.762044   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:37:01.762092   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:37:01.799277   79927 cri.go:89] found id: ""
	I0806 08:37:01.799304   79927 logs.go:276] 0 containers: []
	W0806 08:37:01.799316   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:37:01.799322   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:37:01.799391   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:37:01.845100   79927 cri.go:89] found id: ""
	I0806 08:37:01.845133   79927 logs.go:276] 0 containers: []
	W0806 08:37:01.845147   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:37:01.845154   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:37:01.845214   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:37:01.882019   79927 cri.go:89] found id: ""
	I0806 08:37:01.882050   79927 logs.go:276] 0 containers: []
	W0806 08:37:01.882061   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:37:01.882069   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:37:01.882160   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:37:01.919224   79927 cri.go:89] found id: ""
	I0806 08:37:01.919251   79927 logs.go:276] 0 containers: []
	W0806 08:37:01.919262   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:37:01.919269   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:37:01.919327   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:37:01.957300   79927 cri.go:89] found id: ""
	I0806 08:37:01.957327   79927 logs.go:276] 0 containers: []
	W0806 08:37:01.957335   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:37:01.957342   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:37:01.957389   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:37:02.000717   79927 cri.go:89] found id: ""
	I0806 08:37:02.000740   79927 logs.go:276] 0 containers: []
	W0806 08:37:02.000749   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:37:02.000754   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:37:02.000801   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:37:02.034710   79927 cri.go:89] found id: ""
	I0806 08:37:02.034739   79927 logs.go:276] 0 containers: []
	W0806 08:37:02.034753   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:37:02.034763   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:37:02.034776   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:37:02.090553   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:37:02.090593   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:37:02.104247   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:37:02.104276   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:37:02.181990   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:37:02.182015   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:37:02.182030   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:37:02.268127   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:37:02.268167   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:37:00.789157   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:03.288510   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:01.611956   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:04.111772   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:02.467718   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:04.968898   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:04.814144   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:37:04.829120   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:37:04.829194   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:37:04.873503   79927 cri.go:89] found id: ""
	I0806 08:37:04.873537   79927 logs.go:276] 0 containers: []
	W0806 08:37:04.873548   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:37:04.873557   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:37:04.873614   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:37:04.913878   79927 cri.go:89] found id: ""
	I0806 08:37:04.913910   79927 logs.go:276] 0 containers: []
	W0806 08:37:04.913918   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:37:04.913924   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:37:04.913984   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:37:04.950273   79927 cri.go:89] found id: ""
	I0806 08:37:04.950303   79927 logs.go:276] 0 containers: []
	W0806 08:37:04.950316   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:37:04.950323   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:37:04.950388   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:37:04.985108   79927 cri.go:89] found id: ""
	I0806 08:37:04.985135   79927 logs.go:276] 0 containers: []
	W0806 08:37:04.985145   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:37:04.985151   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:37:04.985215   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:37:05.021033   79927 cri.go:89] found id: ""
	I0806 08:37:05.021056   79927 logs.go:276] 0 containers: []
	W0806 08:37:05.021064   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:37:05.021069   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:37:05.021120   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:37:05.058416   79927 cri.go:89] found id: ""
	I0806 08:37:05.058440   79927 logs.go:276] 0 containers: []
	W0806 08:37:05.058448   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:37:05.058454   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:37:05.058510   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:37:05.093709   79927 cri.go:89] found id: ""
	I0806 08:37:05.093737   79927 logs.go:276] 0 containers: []
	W0806 08:37:05.093745   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:37:05.093751   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:37:05.093815   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:37:05.129391   79927 cri.go:89] found id: ""
	I0806 08:37:05.129416   79927 logs.go:276] 0 containers: []
	W0806 08:37:05.129424   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:37:05.129432   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:37:05.129444   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:37:05.172149   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:37:05.172176   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:37:05.224745   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:37:05.224781   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:37:05.238967   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:37:05.238997   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:37:05.314834   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:37:05.314868   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:37:05.314883   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:37:07.903732   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:37:07.918551   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:37:07.918619   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:37:07.966068   79927 cri.go:89] found id: ""
	I0806 08:37:07.966093   79927 logs.go:276] 0 containers: []
	W0806 08:37:07.966104   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:37:07.966112   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:37:07.966180   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:37:08.001557   79927 cri.go:89] found id: ""
	I0806 08:37:08.001586   79927 logs.go:276] 0 containers: []
	W0806 08:37:08.001595   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:37:08.001602   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:37:08.001663   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:37:08.034214   79927 cri.go:89] found id: ""
	I0806 08:37:08.034239   79927 logs.go:276] 0 containers: []
	W0806 08:37:08.034249   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:37:08.034256   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:37:08.034316   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:37:08.070645   79927 cri.go:89] found id: ""
	I0806 08:37:08.070673   79927 logs.go:276] 0 containers: []
	W0806 08:37:08.070683   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:37:08.070689   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:37:08.070761   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:37:08.110537   79927 cri.go:89] found id: ""
	I0806 08:37:08.110565   79927 logs.go:276] 0 containers: []
	W0806 08:37:08.110574   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:37:08.110581   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:37:08.110635   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:37:08.146179   79927 cri.go:89] found id: ""
	I0806 08:37:08.146203   79927 logs.go:276] 0 containers: []
	W0806 08:37:08.146211   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:37:08.146224   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:37:08.146272   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:37:08.182924   79927 cri.go:89] found id: ""
	I0806 08:37:08.182953   79927 logs.go:276] 0 containers: []
	W0806 08:37:08.182962   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:37:08.182968   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:37:08.183024   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:37:08.219547   79927 cri.go:89] found id: ""
	I0806 08:37:08.219570   79927 logs.go:276] 0 containers: []
	W0806 08:37:08.219576   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:37:08.219585   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:37:08.219599   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:37:08.298919   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:37:08.298956   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:37:08.338475   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:37:08.338512   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:37:08.390859   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:37:08.390898   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:37:08.404854   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:37:08.404891   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:37:08.473553   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:37:05.788534   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:08.288635   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:06.112204   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:08.612960   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:06.969444   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:09.468423   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:11.469280   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:10.973824   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:37:10.987720   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:37:10.987777   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:37:11.022637   79927 cri.go:89] found id: ""
	I0806 08:37:11.022667   79927 logs.go:276] 0 containers: []
	W0806 08:37:11.022677   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:37:11.022683   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:37:11.022739   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:37:11.062625   79927 cri.go:89] found id: ""
	I0806 08:37:11.062655   79927 logs.go:276] 0 containers: []
	W0806 08:37:11.062673   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:37:11.062681   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:37:11.062736   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:37:11.099906   79927 cri.go:89] found id: ""
	I0806 08:37:11.099937   79927 logs.go:276] 0 containers: []
	W0806 08:37:11.099945   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:37:11.099951   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:37:11.100019   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:37:11.139461   79927 cri.go:89] found id: ""
	I0806 08:37:11.139485   79927 logs.go:276] 0 containers: []
	W0806 08:37:11.139493   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:37:11.139498   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:37:11.139541   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:37:11.177916   79927 cri.go:89] found id: ""
	I0806 08:37:11.177940   79927 logs.go:276] 0 containers: []
	W0806 08:37:11.177948   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:37:11.177954   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:37:11.178007   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:37:11.216335   79927 cri.go:89] found id: ""
	I0806 08:37:11.216377   79927 logs.go:276] 0 containers: []
	W0806 08:37:11.216389   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:37:11.216398   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:37:11.216453   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:37:11.278457   79927 cri.go:89] found id: ""
	I0806 08:37:11.278479   79927 logs.go:276] 0 containers: []
	W0806 08:37:11.278487   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:37:11.278493   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:37:11.278541   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:37:11.316325   79927 cri.go:89] found id: ""
	I0806 08:37:11.316348   79927 logs.go:276] 0 containers: []
	W0806 08:37:11.316356   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:37:11.316381   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:37:11.316397   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:37:11.357435   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:37:11.357469   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:37:11.413151   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:37:11.413184   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:37:11.427149   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:37:11.427182   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:37:11.500691   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:37:11.500723   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:37:11.500738   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:37:10.787717   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:12.788263   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:11.113471   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:13.612216   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:13.968959   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:16.468099   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:14.077817   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:37:14.090888   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:37:14.090960   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:37:14.126454   79927 cri.go:89] found id: ""
	I0806 08:37:14.126476   79927 logs.go:276] 0 containers: []
	W0806 08:37:14.126484   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:37:14.126489   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:37:14.126535   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:37:14.163910   79927 cri.go:89] found id: ""
	I0806 08:37:14.163939   79927 logs.go:276] 0 containers: []
	W0806 08:37:14.163950   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:37:14.163957   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:37:14.164025   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:37:14.202918   79927 cri.go:89] found id: ""
	I0806 08:37:14.202942   79927 logs.go:276] 0 containers: []
	W0806 08:37:14.202950   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:37:14.202956   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:37:14.202999   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:37:14.240997   79927 cri.go:89] found id: ""
	I0806 08:37:14.241021   79927 logs.go:276] 0 containers: []
	W0806 08:37:14.241030   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:37:14.241036   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:37:14.241083   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:37:14.277424   79927 cri.go:89] found id: ""
	I0806 08:37:14.277448   79927 logs.go:276] 0 containers: []
	W0806 08:37:14.277465   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:37:14.277473   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:37:14.277545   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:37:14.317402   79927 cri.go:89] found id: ""
	I0806 08:37:14.317426   79927 logs.go:276] 0 containers: []
	W0806 08:37:14.317435   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:37:14.317440   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:37:14.317487   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:37:14.364310   79927 cri.go:89] found id: ""
	I0806 08:37:14.364341   79927 logs.go:276] 0 containers: []
	W0806 08:37:14.364351   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:37:14.364358   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:37:14.364445   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:37:14.398836   79927 cri.go:89] found id: ""
	I0806 08:37:14.398866   79927 logs.go:276] 0 containers: []
	W0806 08:37:14.398874   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:37:14.398891   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:37:14.398904   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:37:14.450550   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:37:14.450586   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:37:14.467190   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:37:14.467220   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:37:14.538786   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:37:14.538809   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:37:14.538824   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:37:14.615049   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:37:14.615080   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:37:17.155906   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:37:17.170401   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:37:17.170479   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:37:17.208276   79927 cri.go:89] found id: ""
	I0806 08:37:17.208311   79927 logs.go:276] 0 containers: []
	W0806 08:37:17.208322   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:37:17.208330   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:37:17.208403   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:37:17.258671   79927 cri.go:89] found id: ""
	I0806 08:37:17.258696   79927 logs.go:276] 0 containers: []
	W0806 08:37:17.258705   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:37:17.258711   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:37:17.258771   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:37:17.296999   79927 cri.go:89] found id: ""
	I0806 08:37:17.297027   79927 logs.go:276] 0 containers: []
	W0806 08:37:17.297035   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:37:17.297041   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:37:17.297098   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:37:17.335818   79927 cri.go:89] found id: ""
	I0806 08:37:17.335841   79927 logs.go:276] 0 containers: []
	W0806 08:37:17.335849   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:37:17.335855   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:37:17.335903   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:37:17.371650   79927 cri.go:89] found id: ""
	I0806 08:37:17.371674   79927 logs.go:276] 0 containers: []
	W0806 08:37:17.371682   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:37:17.371688   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:37:17.371737   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:37:17.408413   79927 cri.go:89] found id: ""
	I0806 08:37:17.408440   79927 logs.go:276] 0 containers: []
	W0806 08:37:17.408448   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:37:17.408454   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:37:17.408502   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:37:17.443877   79927 cri.go:89] found id: ""
	I0806 08:37:17.443909   79927 logs.go:276] 0 containers: []
	W0806 08:37:17.443920   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:37:17.443928   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:37:17.443988   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:37:17.484292   79927 cri.go:89] found id: ""
	I0806 08:37:17.484321   79927 logs.go:276] 0 containers: []
	W0806 08:37:17.484331   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:37:17.484341   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:37:17.484356   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:37:17.541365   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:37:17.541418   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:37:17.555961   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:37:17.555993   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:37:17.625904   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:37:17.625923   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:37:17.625941   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:37:17.708997   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:37:17.709045   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:37:14.788822   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:17.288567   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:16.112034   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:18.612595   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:18.468580   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:20.967605   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:20.250351   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:37:20.264560   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:37:20.264625   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:37:20.301216   79927 cri.go:89] found id: ""
	I0806 08:37:20.301244   79927 logs.go:276] 0 containers: []
	W0806 08:37:20.301252   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:37:20.301257   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:37:20.301317   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:37:20.345820   79927 cri.go:89] found id: ""
	I0806 08:37:20.345849   79927 logs.go:276] 0 containers: []
	W0806 08:37:20.345860   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:37:20.345867   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:37:20.345923   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:37:20.382333   79927 cri.go:89] found id: ""
	I0806 08:37:20.382357   79927 logs.go:276] 0 containers: []
	W0806 08:37:20.382365   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:37:20.382371   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:37:20.382426   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:37:20.420193   79927 cri.go:89] found id: ""
	I0806 08:37:20.420228   79927 logs.go:276] 0 containers: []
	W0806 08:37:20.420238   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:37:20.420249   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:37:20.420316   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:37:20.456200   79927 cri.go:89] found id: ""
	I0806 08:37:20.456227   79927 logs.go:276] 0 containers: []
	W0806 08:37:20.456235   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:37:20.456241   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:37:20.456298   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:37:20.493321   79927 cri.go:89] found id: ""
	I0806 08:37:20.493349   79927 logs.go:276] 0 containers: []
	W0806 08:37:20.493356   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:37:20.493362   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:37:20.493414   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:37:20.531007   79927 cri.go:89] found id: ""
	I0806 08:37:20.531034   79927 logs.go:276] 0 containers: []
	W0806 08:37:20.531044   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:37:20.531051   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:37:20.531118   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:37:20.572014   79927 cri.go:89] found id: ""
	I0806 08:37:20.572044   79927 logs.go:276] 0 containers: []
	W0806 08:37:20.572062   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:37:20.572071   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:37:20.572083   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:37:20.623368   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:37:20.623399   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:37:20.638941   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:37:20.638984   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:37:20.717287   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:37:20.717313   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:37:20.717324   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:37:20.800157   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:37:20.800192   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:37:23.342061   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:37:23.356954   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:37:23.357026   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:37:23.391415   79927 cri.go:89] found id: ""
	I0806 08:37:23.391445   79927 logs.go:276] 0 containers: []
	W0806 08:37:23.391456   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:37:23.391463   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:37:23.391526   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:37:23.426460   79927 cri.go:89] found id: ""
	I0806 08:37:23.426482   79927 logs.go:276] 0 containers: []
	W0806 08:37:23.426491   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:37:23.426496   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:37:23.426541   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:37:23.461078   79927 cri.go:89] found id: ""
	I0806 08:37:23.461111   79927 logs.go:276] 0 containers: []
	W0806 08:37:23.461122   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:37:23.461129   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:37:23.461183   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:37:23.496566   79927 cri.go:89] found id: ""
	I0806 08:37:23.496594   79927 logs.go:276] 0 containers: []
	W0806 08:37:23.496602   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:37:23.496607   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:37:23.496665   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:37:23.529097   79927 cri.go:89] found id: ""
	I0806 08:37:23.529123   79927 logs.go:276] 0 containers: []
	W0806 08:37:23.529134   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:37:23.529153   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:37:23.529243   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:37:23.563690   79927 cri.go:89] found id: ""
	I0806 08:37:23.563720   79927 logs.go:276] 0 containers: []
	W0806 08:37:23.563730   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:37:23.563737   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:37:23.563796   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:37:23.598787   79927 cri.go:89] found id: ""
	I0806 08:37:23.598861   79927 logs.go:276] 0 containers: []
	W0806 08:37:23.598875   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:37:23.598884   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:37:23.598942   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:37:23.655675   79927 cri.go:89] found id: ""
	I0806 08:37:23.655705   79927 logs.go:276] 0 containers: []
	W0806 08:37:23.655715   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:37:23.655723   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:37:23.655734   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:37:23.715853   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:37:23.715887   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:37:23.735805   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:37:23.735844   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 08:37:19.288768   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:21.788292   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:23.788808   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:20.612865   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:22.613432   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:22.968064   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:24.968171   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	W0806 08:37:23.815238   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:37:23.815261   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:37:23.815274   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:37:23.899436   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:37:23.899479   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:37:26.441837   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:37:26.456410   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:37:26.456472   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:37:26.497707   79927 cri.go:89] found id: ""
	I0806 08:37:26.497733   79927 logs.go:276] 0 containers: []
	W0806 08:37:26.497741   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:37:26.497746   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:37:26.497795   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:37:26.536101   79927 cri.go:89] found id: ""
	I0806 08:37:26.536126   79927 logs.go:276] 0 containers: []
	W0806 08:37:26.536133   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:37:26.536145   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:37:26.536190   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:37:26.573067   79927 cri.go:89] found id: ""
	I0806 08:37:26.573091   79927 logs.go:276] 0 containers: []
	W0806 08:37:26.573102   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:37:26.573110   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:37:26.573162   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:37:26.609820   79927 cri.go:89] found id: ""
	I0806 08:37:26.609846   79927 logs.go:276] 0 containers: []
	W0806 08:37:26.609854   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:37:26.609860   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:37:26.609914   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:37:26.645721   79927 cri.go:89] found id: ""
	I0806 08:37:26.645748   79927 logs.go:276] 0 containers: []
	W0806 08:37:26.645758   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:37:26.645764   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:37:26.645825   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:37:26.679941   79927 cri.go:89] found id: ""
	I0806 08:37:26.679988   79927 logs.go:276] 0 containers: []
	W0806 08:37:26.679999   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:37:26.680006   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:37:26.680068   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:37:26.717247   79927 cri.go:89] found id: ""
	I0806 08:37:26.717277   79927 logs.go:276] 0 containers: []
	W0806 08:37:26.717288   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:37:26.717295   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:37:26.717382   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:37:26.760157   79927 cri.go:89] found id: ""
	I0806 08:37:26.760184   79927 logs.go:276] 0 containers: []
	W0806 08:37:26.760194   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:37:26.760205   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:37:26.760221   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:37:26.816034   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:37:26.816071   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:37:26.830141   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:37:26.830170   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:37:26.900078   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:37:26.900103   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:37:26.900121   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:37:26.984460   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:37:26.984501   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:37:26.287692   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:28.288990   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:25.112666   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:27.611170   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:27.466925   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:29.469989   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:29.527184   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:37:29.541974   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:37:29.542052   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:37:29.576062   79927 cri.go:89] found id: ""
	I0806 08:37:29.576095   79927 logs.go:276] 0 containers: []
	W0806 08:37:29.576107   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:37:29.576114   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:37:29.576176   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:37:29.610841   79927 cri.go:89] found id: ""
	I0806 08:37:29.610865   79927 logs.go:276] 0 containers: []
	W0806 08:37:29.610881   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:37:29.610892   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:37:29.610948   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:37:29.650326   79927 cri.go:89] found id: ""
	I0806 08:37:29.650350   79927 logs.go:276] 0 containers: []
	W0806 08:37:29.650360   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:37:29.650367   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:37:29.650434   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:37:29.689382   79927 cri.go:89] found id: ""
	I0806 08:37:29.689418   79927 logs.go:276] 0 containers: []
	W0806 08:37:29.689429   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:37:29.689436   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:37:29.689495   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:37:29.725893   79927 cri.go:89] found id: ""
	I0806 08:37:29.725918   79927 logs.go:276] 0 containers: []
	W0806 08:37:29.725926   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:37:29.725932   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:37:29.725980   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:37:29.767354   79927 cri.go:89] found id: ""
	I0806 08:37:29.767379   79927 logs.go:276] 0 containers: []
	W0806 08:37:29.767386   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:37:29.767392   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:37:29.767441   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:37:29.811139   79927 cri.go:89] found id: ""
	I0806 08:37:29.811161   79927 logs.go:276] 0 containers: []
	W0806 08:37:29.811169   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:37:29.811174   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:37:29.811219   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:37:29.848252   79927 cri.go:89] found id: ""
	I0806 08:37:29.848285   79927 logs.go:276] 0 containers: []
	W0806 08:37:29.848294   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:37:29.848303   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:37:29.848315   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:37:29.899850   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:37:29.899884   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:37:29.913744   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:37:29.913774   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:37:29.988016   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:37:29.988039   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:37:29.988054   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:37:30.069701   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:37:30.069735   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:37:32.606958   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:37:32.622621   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:37:32.622684   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:37:32.658816   79927 cri.go:89] found id: ""
	I0806 08:37:32.658844   79927 logs.go:276] 0 containers: []
	W0806 08:37:32.658855   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:37:32.658862   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:37:32.658928   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:37:32.697734   79927 cri.go:89] found id: ""
	I0806 08:37:32.697769   79927 logs.go:276] 0 containers: []
	W0806 08:37:32.697780   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:37:32.697786   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:37:32.697868   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:37:32.735540   79927 cri.go:89] found id: ""
	I0806 08:37:32.735572   79927 logs.go:276] 0 containers: []
	W0806 08:37:32.735584   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:37:32.735591   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:37:32.735658   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:37:32.771319   79927 cri.go:89] found id: ""
	I0806 08:37:32.771343   79927 logs.go:276] 0 containers: []
	W0806 08:37:32.771357   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:37:32.771365   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:37:32.771421   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:37:32.808137   79927 cri.go:89] found id: ""
	I0806 08:37:32.808162   79927 logs.go:276] 0 containers: []
	W0806 08:37:32.808172   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:37:32.808179   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:37:32.808234   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:37:32.843307   79927 cri.go:89] found id: ""
	I0806 08:37:32.843333   79927 logs.go:276] 0 containers: []
	W0806 08:37:32.843343   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:37:32.843350   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:37:32.843404   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:37:32.883855   79927 cri.go:89] found id: ""
	I0806 08:37:32.883922   79927 logs.go:276] 0 containers: []
	W0806 08:37:32.883935   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:37:32.883944   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:37:32.884014   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:37:32.920965   79927 cri.go:89] found id: ""
	I0806 08:37:32.921008   79927 logs.go:276] 0 containers: []
	W0806 08:37:32.921019   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:37:32.921027   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:37:32.921039   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:37:32.974933   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:37:32.974966   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:37:32.989326   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:37:32.989352   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:37:33.058274   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:37:33.058302   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:37:33.058314   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:37:33.136632   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:37:33.136670   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:37:30.789771   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:33.289001   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:29.613189   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:32.111086   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:34.112398   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:31.973697   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:34.469358   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:35.680315   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:37:35.694438   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:37:35.694497   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:37:35.729541   79927 cri.go:89] found id: ""
	I0806 08:37:35.729569   79927 logs.go:276] 0 containers: []
	W0806 08:37:35.729580   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:37:35.729587   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:37:35.729650   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:37:35.765783   79927 cri.go:89] found id: ""
	I0806 08:37:35.765813   79927 logs.go:276] 0 containers: []
	W0806 08:37:35.765822   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:37:35.765827   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:37:35.765885   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:37:35.802975   79927 cri.go:89] found id: ""
	I0806 08:37:35.803002   79927 logs.go:276] 0 containers: []
	W0806 08:37:35.803010   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:37:35.803016   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:37:35.803062   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:37:35.838588   79927 cri.go:89] found id: ""
	I0806 08:37:35.838614   79927 logs.go:276] 0 containers: []
	W0806 08:37:35.838623   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:37:35.838629   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:37:35.838675   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:37:35.874413   79927 cri.go:89] found id: ""
	I0806 08:37:35.874442   79927 logs.go:276] 0 containers: []
	W0806 08:37:35.874452   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:37:35.874460   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:37:35.874528   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:37:35.913816   79927 cri.go:89] found id: ""
	I0806 08:37:35.913852   79927 logs.go:276] 0 containers: []
	W0806 08:37:35.913871   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:37:35.913885   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:37:35.913963   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:37:35.951719   79927 cri.go:89] found id: ""
	I0806 08:37:35.951750   79927 logs.go:276] 0 containers: []
	W0806 08:37:35.951761   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:37:35.951770   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:37:35.951828   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:37:35.987720   79927 cri.go:89] found id: ""
	I0806 08:37:35.987749   79927 logs.go:276] 0 containers: []
	W0806 08:37:35.987760   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:37:35.987769   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:37:35.987780   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:37:36.043300   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:37:36.043339   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:37:36.057138   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:37:36.057169   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:37:36.126382   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:37:36.126399   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:37:36.126415   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:37:36.205243   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:37:36.205284   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:37:38.747978   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:37:35.289346   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:37.788785   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:36.114392   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:38.612174   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:36.967654   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:38.969141   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:40.969752   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:38.762123   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:37:38.762190   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:37:38.805074   79927 cri.go:89] found id: ""
	I0806 08:37:38.805101   79927 logs.go:276] 0 containers: []
	W0806 08:37:38.805111   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:37:38.805117   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:37:38.805185   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:37:38.839305   79927 cri.go:89] found id: ""
	I0806 08:37:38.839330   79927 logs.go:276] 0 containers: []
	W0806 08:37:38.839338   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:37:38.839344   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:37:38.839406   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:37:38.874174   79927 cri.go:89] found id: ""
	I0806 08:37:38.874203   79927 logs.go:276] 0 containers: []
	W0806 08:37:38.874214   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:37:38.874221   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:37:38.874277   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:37:38.909607   79927 cri.go:89] found id: ""
	I0806 08:37:38.909630   79927 logs.go:276] 0 containers: []
	W0806 08:37:38.909639   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:37:38.909645   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:37:38.909702   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:37:38.945584   79927 cri.go:89] found id: ""
	I0806 08:37:38.945615   79927 logs.go:276] 0 containers: []
	W0806 08:37:38.945626   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:37:38.945634   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:37:38.945696   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:37:38.988993   79927 cri.go:89] found id: ""
	I0806 08:37:38.989022   79927 logs.go:276] 0 containers: []
	W0806 08:37:38.989033   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:37:38.989041   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:37:38.989100   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:37:39.028902   79927 cri.go:89] found id: ""
	I0806 08:37:39.028934   79927 logs.go:276] 0 containers: []
	W0806 08:37:39.028942   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:37:39.028948   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:37:39.029005   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:37:39.070163   79927 cri.go:89] found id: ""
	I0806 08:37:39.070192   79927 logs.go:276] 0 containers: []
	W0806 08:37:39.070202   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:37:39.070213   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:37:39.070229   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:37:39.124891   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:37:39.124927   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:37:39.139898   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:37:39.139926   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:37:39.216157   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:37:39.216181   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:37:39.216196   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:37:39.301089   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:37:39.301136   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:37:41.840165   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:37:41.855659   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:37:41.855792   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:37:41.892620   79927 cri.go:89] found id: ""
	I0806 08:37:41.892648   79927 logs.go:276] 0 containers: []
	W0806 08:37:41.892660   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:37:41.892668   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:37:41.892728   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:37:41.929064   79927 cri.go:89] found id: ""
	I0806 08:37:41.929097   79927 logs.go:276] 0 containers: []
	W0806 08:37:41.929107   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:37:41.929114   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:37:41.929165   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:37:41.968334   79927 cri.go:89] found id: ""
	I0806 08:37:41.968371   79927 logs.go:276] 0 containers: []
	W0806 08:37:41.968382   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:37:41.968389   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:37:41.968450   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:37:42.008566   79927 cri.go:89] found id: ""
	I0806 08:37:42.008587   79927 logs.go:276] 0 containers: []
	W0806 08:37:42.008595   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:37:42.008601   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:37:42.008649   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:37:42.044281   79927 cri.go:89] found id: ""
	I0806 08:37:42.044308   79927 logs.go:276] 0 containers: []
	W0806 08:37:42.044315   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:37:42.044321   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:37:42.044392   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:37:42.081987   79927 cri.go:89] found id: ""
	I0806 08:37:42.082012   79927 logs.go:276] 0 containers: []
	W0806 08:37:42.082020   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:37:42.082026   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:37:42.082072   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:37:42.120818   79927 cri.go:89] found id: ""
	I0806 08:37:42.120847   79927 logs.go:276] 0 containers: []
	W0806 08:37:42.120857   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:37:42.120865   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:37:42.120926   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:37:42.160685   79927 cri.go:89] found id: ""
	I0806 08:37:42.160719   79927 logs.go:276] 0 containers: []
	W0806 08:37:42.160731   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:37:42.160742   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:37:42.160759   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:37:42.236322   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:37:42.236344   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:37:42.236357   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:37:42.328488   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:37:42.328526   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:37:42.369520   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:37:42.369545   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:37:42.425449   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:37:42.425490   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:37:39.788990   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:41.789236   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:43.789318   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:40.614326   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:43.112061   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:43.469284   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:45.469725   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:44.941557   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:37:44.954984   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:37:44.955043   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:37:44.992186   79927 cri.go:89] found id: ""
	I0806 08:37:44.992216   79927 logs.go:276] 0 containers: []
	W0806 08:37:44.992228   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:37:44.992234   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:37:44.992285   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:37:45.027114   79927 cri.go:89] found id: ""
	I0806 08:37:45.027143   79927 logs.go:276] 0 containers: []
	W0806 08:37:45.027154   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:37:45.027161   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:37:45.027222   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:37:45.062629   79927 cri.go:89] found id: ""
	I0806 08:37:45.062660   79927 logs.go:276] 0 containers: []
	W0806 08:37:45.062672   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:37:45.062679   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:37:45.062741   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:37:45.097499   79927 cri.go:89] found id: ""
	I0806 08:37:45.097530   79927 logs.go:276] 0 containers: []
	W0806 08:37:45.097541   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:37:45.097548   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:37:45.097605   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:37:45.137130   79927 cri.go:89] found id: ""
	I0806 08:37:45.137162   79927 logs.go:276] 0 containers: []
	W0806 08:37:45.137184   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:37:45.137191   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:37:45.137264   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:37:45.170617   79927 cri.go:89] found id: ""
	I0806 08:37:45.170642   79927 logs.go:276] 0 containers: []
	W0806 08:37:45.170659   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:37:45.170667   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:37:45.170726   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:37:45.205729   79927 cri.go:89] found id: ""
	I0806 08:37:45.205758   79927 logs.go:276] 0 containers: []
	W0806 08:37:45.205768   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:37:45.205774   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:37:45.205827   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:37:45.242345   79927 cri.go:89] found id: ""
	I0806 08:37:45.242373   79927 logs.go:276] 0 containers: []
	W0806 08:37:45.242384   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:37:45.242397   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:37:45.242414   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:37:45.283778   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:37:45.283810   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:37:45.341421   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:37:45.341457   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:37:45.356581   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:37:45.356606   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:37:45.428512   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:37:45.428541   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:37:45.428559   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:37:48.011744   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:37:48.025452   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:37:48.025533   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:37:48.064315   79927 cri.go:89] found id: ""
	I0806 08:37:48.064339   79927 logs.go:276] 0 containers: []
	W0806 08:37:48.064348   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:37:48.064353   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:37:48.064430   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:37:48.103855   79927 cri.go:89] found id: ""
	I0806 08:37:48.103891   79927 logs.go:276] 0 containers: []
	W0806 08:37:48.103902   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:37:48.103909   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:37:48.103968   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:37:48.141402   79927 cri.go:89] found id: ""
	I0806 08:37:48.141427   79927 logs.go:276] 0 containers: []
	W0806 08:37:48.141435   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:37:48.141440   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:37:48.141491   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:37:48.177449   79927 cri.go:89] found id: ""
	I0806 08:37:48.177479   79927 logs.go:276] 0 containers: []
	W0806 08:37:48.177490   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:37:48.177497   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:37:48.177566   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:37:48.214613   79927 cri.go:89] found id: ""
	I0806 08:37:48.214637   79927 logs.go:276] 0 containers: []
	W0806 08:37:48.214644   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:37:48.214650   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:37:48.214695   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:37:48.254049   79927 cri.go:89] found id: ""
	I0806 08:37:48.254075   79927 logs.go:276] 0 containers: []
	W0806 08:37:48.254083   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:37:48.254089   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:37:48.254147   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:37:48.296206   79927 cri.go:89] found id: ""
	I0806 08:37:48.296230   79927 logs.go:276] 0 containers: []
	W0806 08:37:48.296240   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:37:48.296246   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:37:48.296306   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:37:48.337614   79927 cri.go:89] found id: ""
	I0806 08:37:48.337641   79927 logs.go:276] 0 containers: []
	W0806 08:37:48.337651   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:37:48.337662   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:37:48.337679   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:37:48.391932   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:37:48.391971   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:37:48.405422   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:37:48.405450   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:37:48.477374   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:37:48.477402   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:37:48.477419   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:37:48.559252   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:37:48.559288   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:37:46.287167   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:48.288652   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:45.113308   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:47.611780   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:47.968109   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:50.469370   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:51.104881   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:37:51.120113   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:37:51.120181   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:37:51.155703   79927 cri.go:89] found id: ""
	I0806 08:37:51.155733   79927 logs.go:276] 0 containers: []
	W0806 08:37:51.155741   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:37:51.155747   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:37:51.155792   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:37:51.191211   79927 cri.go:89] found id: ""
	I0806 08:37:51.191240   79927 logs.go:276] 0 containers: []
	W0806 08:37:51.191250   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:37:51.191257   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:37:51.191324   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:37:51.226740   79927 cri.go:89] found id: ""
	I0806 08:37:51.226771   79927 logs.go:276] 0 containers: []
	W0806 08:37:51.226783   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:37:51.226790   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:37:51.226852   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:37:51.269101   79927 cri.go:89] found id: ""
	I0806 08:37:51.269132   79927 logs.go:276] 0 containers: []
	W0806 08:37:51.269141   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:37:51.269146   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:37:51.269196   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:37:51.307176   79927 cri.go:89] found id: ""
	I0806 08:37:51.307206   79927 logs.go:276] 0 containers: []
	W0806 08:37:51.307216   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:37:51.307224   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:37:51.307290   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:37:51.342824   79927 cri.go:89] found id: ""
	I0806 08:37:51.342854   79927 logs.go:276] 0 containers: []
	W0806 08:37:51.342864   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:37:51.342871   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:37:51.342935   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:37:51.383309   79927 cri.go:89] found id: ""
	I0806 08:37:51.383337   79927 logs.go:276] 0 containers: []
	W0806 08:37:51.383348   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:37:51.383355   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:37:51.383420   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:37:51.424851   79927 cri.go:89] found id: ""
	I0806 08:37:51.424896   79927 logs.go:276] 0 containers: []
	W0806 08:37:51.424908   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:37:51.424918   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:37:51.424950   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:37:51.507286   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:37:51.507307   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:37:51.507318   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:37:51.590512   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:37:51.590556   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:37:51.640453   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:37:51.640482   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:37:51.694699   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:37:51.694740   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:37:50.788163   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:52.788396   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:49.616347   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:52.112323   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:54.112911   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:52.470121   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:54.473264   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:54.211106   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:37:54.225858   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:37:54.225936   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:37:54.267219   79927 cri.go:89] found id: ""
	I0806 08:37:54.267247   79927 logs.go:276] 0 containers: []
	W0806 08:37:54.267258   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:37:54.267266   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:37:54.267323   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:37:54.306734   79927 cri.go:89] found id: ""
	I0806 08:37:54.306763   79927 logs.go:276] 0 containers: []
	W0806 08:37:54.306774   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:37:54.306781   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:37:54.306839   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:37:54.342257   79927 cri.go:89] found id: ""
	I0806 08:37:54.342294   79927 logs.go:276] 0 containers: []
	W0806 08:37:54.342306   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:37:54.342314   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:37:54.342383   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:37:54.380661   79927 cri.go:89] found id: ""
	I0806 08:37:54.380685   79927 logs.go:276] 0 containers: []
	W0806 08:37:54.380693   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:37:54.380699   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:37:54.380743   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:37:54.416503   79927 cri.go:89] found id: ""
	I0806 08:37:54.416525   79927 logs.go:276] 0 containers: []
	W0806 08:37:54.416533   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:37:54.416539   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:37:54.416589   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:37:54.453470   79927 cri.go:89] found id: ""
	I0806 08:37:54.453508   79927 logs.go:276] 0 containers: []
	W0806 08:37:54.453520   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:37:54.453528   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:37:54.453591   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:37:54.497630   79927 cri.go:89] found id: ""
	I0806 08:37:54.497671   79927 logs.go:276] 0 containers: []
	W0806 08:37:54.497682   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:37:54.497691   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:37:54.497750   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:37:54.536516   79927 cri.go:89] found id: ""
	I0806 08:37:54.536546   79927 logs.go:276] 0 containers: []
	W0806 08:37:54.536557   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:37:54.536568   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:37:54.536587   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:37:54.592997   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:37:54.593032   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:37:54.607568   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:37:54.607598   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:37:54.679369   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:37:54.679402   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:37:54.679423   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:37:54.760001   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:37:54.760041   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
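The sweep above repeats on every retry while the apiserver stays down: list CRI containers for each control-plane component (all come back empty), gather kubelet, dmesg, CRI-O and container-status logs, then retry "describe nodes". A condensed sketch of the same sweep, with each command taken from the log itself (the shell loop is only an illustration):

    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      # every one of these listings in the log returns an empty ID list
      sudo crictl ps -a --quiet --name="$name"
    done
    sudo journalctl -u kubelet -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig   # fails while the apiserver is unreachable
    sudo journalctl -u crio -n 400
    sudo crictl ps -a || sudo docker ps -a        # overall container status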
	I0806 08:37:57.305405   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:37:57.319427   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:37:57.319493   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:37:57.357465   79927 cri.go:89] found id: ""
	I0806 08:37:57.357493   79927 logs.go:276] 0 containers: []
	W0806 08:37:57.357504   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:37:57.357513   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:37:57.357571   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:37:57.390131   79927 cri.go:89] found id: ""
	I0806 08:37:57.390155   79927 logs.go:276] 0 containers: []
	W0806 08:37:57.390165   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:37:57.390172   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:37:57.390238   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:37:57.428571   79927 cri.go:89] found id: ""
	I0806 08:37:57.428600   79927 logs.go:276] 0 containers: []
	W0806 08:37:57.428608   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:37:57.428613   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:37:57.428675   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:37:57.462688   79927 cri.go:89] found id: ""
	I0806 08:37:57.462717   79927 logs.go:276] 0 containers: []
	W0806 08:37:57.462727   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:37:57.462733   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:37:57.462791   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:37:57.525008   79927 cri.go:89] found id: ""
	I0806 08:37:57.525033   79927 logs.go:276] 0 containers: []
	W0806 08:37:57.525041   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:37:57.525046   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:37:57.525108   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:37:57.565992   79927 cri.go:89] found id: ""
	I0806 08:37:57.566024   79927 logs.go:276] 0 containers: []
	W0806 08:37:57.566035   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:37:57.566045   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:37:57.566123   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:37:57.610170   79927 cri.go:89] found id: ""
	I0806 08:37:57.610198   79927 logs.go:276] 0 containers: []
	W0806 08:37:57.610208   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:37:57.610215   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:37:57.610262   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:37:57.647054   79927 cri.go:89] found id: ""
	I0806 08:37:57.647085   79927 logs.go:276] 0 containers: []
	W0806 08:37:57.647096   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:37:57.647106   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:37:57.647121   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:37:57.698177   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:37:57.698214   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:37:57.712244   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:37:57.712271   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:37:57.781683   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:37:57.781707   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:37:57.781723   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:37:57.861097   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:37:57.861133   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:37:54.789102   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:57.287564   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:56.613662   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:59.112248   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:56.967534   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:59.469122   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:00.401186   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:38:00.414546   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:38:00.414602   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:38:00.454163   79927 cri.go:89] found id: ""
	I0806 08:38:00.454190   79927 logs.go:276] 0 containers: []
	W0806 08:38:00.454201   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:38:00.454208   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:38:00.454269   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:38:00.497427   79927 cri.go:89] found id: ""
	I0806 08:38:00.497455   79927 logs.go:276] 0 containers: []
	W0806 08:38:00.497466   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:38:00.497473   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:38:00.497538   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:38:00.537590   79927 cri.go:89] found id: ""
	I0806 08:38:00.537619   79927 logs.go:276] 0 containers: []
	W0806 08:38:00.537627   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:38:00.537633   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:38:00.537695   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:38:00.572337   79927 cri.go:89] found id: ""
	I0806 08:38:00.572383   79927 logs.go:276] 0 containers: []
	W0806 08:38:00.572396   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:38:00.572403   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:38:00.572460   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:38:00.607729   79927 cri.go:89] found id: ""
	I0806 08:38:00.607772   79927 logs.go:276] 0 containers: []
	W0806 08:38:00.607783   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:38:00.607791   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:38:00.607852   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:38:00.644777   79927 cri.go:89] found id: ""
	I0806 08:38:00.644843   79927 logs.go:276] 0 containers: []
	W0806 08:38:00.644856   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:38:00.644864   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:38:00.644935   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:38:00.684698   79927 cri.go:89] found id: ""
	I0806 08:38:00.684727   79927 logs.go:276] 0 containers: []
	W0806 08:38:00.684739   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:38:00.684747   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:38:00.684863   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:38:00.718472   79927 cri.go:89] found id: ""
	I0806 08:38:00.718506   79927 logs.go:276] 0 containers: []
	W0806 08:38:00.718518   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:38:00.718529   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:38:00.718546   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:38:00.770265   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:38:00.770309   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:38:00.787438   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:38:00.787469   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:38:00.854538   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:38:00.854558   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:38:00.854570   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:38:00.936810   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:38:00.936847   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:38:03.477109   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:38:03.494379   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:38:03.494439   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:38:03.536166   79927 cri.go:89] found id: ""
	I0806 08:38:03.536193   79927 logs.go:276] 0 containers: []
	W0806 08:38:03.536202   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:38:03.536208   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:38:03.536255   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:38:03.577497   79927 cri.go:89] found id: ""
	I0806 08:38:03.577522   79927 logs.go:276] 0 containers: []
	W0806 08:38:03.577531   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:38:03.577536   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:38:03.577585   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:38:03.614534   79927 cri.go:89] found id: ""
	I0806 08:38:03.614560   79927 logs.go:276] 0 containers: []
	W0806 08:38:03.614569   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:38:03.614576   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:38:03.614634   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:38:03.655824   79927 cri.go:89] found id: ""
	I0806 08:38:03.655860   79927 logs.go:276] 0 containers: []
	W0806 08:38:03.655871   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:38:03.655878   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:38:03.655952   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:38:03.699843   79927 cri.go:89] found id: ""
	I0806 08:38:03.699875   79927 logs.go:276] 0 containers: []
	W0806 08:38:03.699888   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:38:03.699896   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:38:03.699957   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:38:03.740131   79927 cri.go:89] found id: ""
	I0806 08:38:03.740159   79927 logs.go:276] 0 containers: []
	W0806 08:38:03.740168   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:38:03.740174   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:38:03.740223   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:37:59.787634   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:01.788942   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:03.790134   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:01.112582   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:03.113004   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:01.972073   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:04.468917   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:03.774103   79927 cri.go:89] found id: ""
	I0806 08:38:03.774149   79927 logs.go:276] 0 containers: []
	W0806 08:38:03.774163   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:38:03.774170   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:38:03.774230   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:38:03.814051   79927 cri.go:89] found id: ""
	I0806 08:38:03.814075   79927 logs.go:276] 0 containers: []
	W0806 08:38:03.814083   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:38:03.814091   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:38:03.814102   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:38:03.854869   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:38:03.854898   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:38:03.908284   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:38:03.908318   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:38:03.923265   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:38:03.923303   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:38:04.001785   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:38:04.001806   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:38:04.001819   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:38:06.583809   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:38:06.596764   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:38:06.596827   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:38:06.636611   79927 cri.go:89] found id: ""
	I0806 08:38:06.636639   79927 logs.go:276] 0 containers: []
	W0806 08:38:06.636650   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:38:06.636657   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:38:06.636707   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:38:06.675392   79927 cri.go:89] found id: ""
	I0806 08:38:06.675419   79927 logs.go:276] 0 containers: []
	W0806 08:38:06.675430   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:38:06.675438   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:38:06.675492   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:38:06.711270   79927 cri.go:89] found id: ""
	I0806 08:38:06.711302   79927 logs.go:276] 0 containers: []
	W0806 08:38:06.711313   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:38:06.711320   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:38:06.711388   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:38:06.747252   79927 cri.go:89] found id: ""
	I0806 08:38:06.747280   79927 logs.go:276] 0 containers: []
	W0806 08:38:06.747291   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:38:06.747297   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:38:06.747360   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:38:06.781999   79927 cri.go:89] found id: ""
	I0806 08:38:06.782028   79927 logs.go:276] 0 containers: []
	W0806 08:38:06.782038   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:38:06.782043   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:38:06.782095   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:38:06.818810   79927 cri.go:89] found id: ""
	I0806 08:38:06.818847   79927 logs.go:276] 0 containers: []
	W0806 08:38:06.818860   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:38:06.818869   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:38:06.818935   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:38:06.857937   79927 cri.go:89] found id: ""
	I0806 08:38:06.857972   79927 logs.go:276] 0 containers: []
	W0806 08:38:06.857983   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:38:06.857989   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:38:06.858049   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:38:06.897920   79927 cri.go:89] found id: ""
	I0806 08:38:06.897949   79927 logs.go:276] 0 containers: []
	W0806 08:38:06.897957   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:38:06.897965   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:38:06.897979   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:38:06.912199   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:38:06.912228   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:38:06.983129   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:38:06.983157   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:38:06.983173   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:38:07.067783   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:38:07.067853   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:38:07.107756   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:38:07.107787   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:38:06.286856   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:08.287070   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:05.113664   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:07.611129   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:06.973447   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:09.468163   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:11.469075   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:09.657958   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:38:09.670726   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:38:09.670807   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:38:09.709184   79927 cri.go:89] found id: ""
	I0806 08:38:09.709220   79927 logs.go:276] 0 containers: []
	W0806 08:38:09.709232   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:38:09.709239   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:38:09.709299   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:38:09.747032   79927 cri.go:89] found id: ""
	I0806 08:38:09.747061   79927 logs.go:276] 0 containers: []
	W0806 08:38:09.747073   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:38:09.747080   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:38:09.747141   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:38:09.785134   79927 cri.go:89] found id: ""
	I0806 08:38:09.785170   79927 logs.go:276] 0 containers: []
	W0806 08:38:09.785180   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:38:09.785188   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:38:09.785248   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:38:09.825136   79927 cri.go:89] found id: ""
	I0806 08:38:09.825168   79927 logs.go:276] 0 containers: []
	W0806 08:38:09.825179   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:38:09.825186   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:38:09.825249   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:38:09.865292   79927 cri.go:89] found id: ""
	I0806 08:38:09.865327   79927 logs.go:276] 0 containers: []
	W0806 08:38:09.865336   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:38:09.865345   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:38:09.865415   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:38:09.911145   79927 cri.go:89] found id: ""
	I0806 08:38:09.911172   79927 logs.go:276] 0 containers: []
	W0806 08:38:09.911183   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:38:09.911190   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:38:09.911247   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:38:09.961259   79927 cri.go:89] found id: ""
	I0806 08:38:09.961289   79927 logs.go:276] 0 containers: []
	W0806 08:38:09.961299   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:38:09.961307   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:38:09.961364   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:38:10.009572   79927 cri.go:89] found id: ""
	I0806 08:38:10.009594   79927 logs.go:276] 0 containers: []
	W0806 08:38:10.009602   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:38:10.009610   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:38:10.009623   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:38:10.063733   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:38:10.063770   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:38:10.080004   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:38:10.080044   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:38:10.158792   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:38:10.158823   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:38:10.158840   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:38:10.242126   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:38:10.242162   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:38:12.781497   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:38:12.796715   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:38:12.796804   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:38:12.831223   79927 cri.go:89] found id: ""
	I0806 08:38:12.831254   79927 logs.go:276] 0 containers: []
	W0806 08:38:12.831265   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:38:12.831273   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:38:12.831332   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:38:12.872202   79927 cri.go:89] found id: ""
	I0806 08:38:12.872224   79927 logs.go:276] 0 containers: []
	W0806 08:38:12.872231   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:38:12.872236   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:38:12.872284   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:38:12.913457   79927 cri.go:89] found id: ""
	I0806 08:38:12.913486   79927 logs.go:276] 0 containers: []
	W0806 08:38:12.913497   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:38:12.913505   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:38:12.913563   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:38:12.950382   79927 cri.go:89] found id: ""
	I0806 08:38:12.950413   79927 logs.go:276] 0 containers: []
	W0806 08:38:12.950424   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:38:12.950432   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:38:12.950497   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:38:12.988956   79927 cri.go:89] found id: ""
	I0806 08:38:12.988984   79927 logs.go:276] 0 containers: []
	W0806 08:38:12.988992   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:38:12.988998   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:38:12.989054   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:38:13.025308   79927 cri.go:89] found id: ""
	I0806 08:38:13.025334   79927 logs.go:276] 0 containers: []
	W0806 08:38:13.025342   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:38:13.025348   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:38:13.025401   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:38:13.060156   79927 cri.go:89] found id: ""
	I0806 08:38:13.060184   79927 logs.go:276] 0 containers: []
	W0806 08:38:13.060196   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:38:13.060204   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:38:13.060268   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:38:13.096302   79927 cri.go:89] found id: ""
	I0806 08:38:13.096331   79927 logs.go:276] 0 containers: []
	W0806 08:38:13.096344   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:38:13.096360   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:38:13.096387   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:38:13.149567   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:38:13.149603   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:38:13.164680   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:38:13.164715   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:38:13.236695   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:38:13.236718   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:38:13.236733   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:38:13.323043   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:38:13.323081   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
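The block above, repeated every few seconds for process 79927, is a poll-and-diagnose loop: check for a kube-apiserver process with pgrep, enumerate CRI containers by name with "crictl ps -a --quiet --name=...", and, when nothing is found, fall back to node-level diagnostics (kubelet and CRI-O journals, dmesg, "kubectl describe nodes", and "crictl ps -a || docker ps -a" for container status). The "describe nodes" step fails with "connection refused" simply because no apiserver is listening on localhost:8443 yet. Below is a minimal sketch of that loop; it is an illustration only, not minikube's implementation, and it runs the commands locally where minikube drives them over SSH (ssh_runner).

    // Minimal sketch of the poll-and-diagnose pattern above. Illustration only:
    // minikube runs these commands over SSH on the node; here they run locally.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // run executes a shell command the same way the log's ssh_runner lines do.
    func run(cmd string) (string, error) {
    	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
    	return string(out), err
    }

    func main() {
    	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard"}
    	for {
    		// pgrep exits non-zero when no apiserver process exists yet.
    		if _, err := run("sudo pgrep -xnf kube-apiserver.*minikube.*"); err == nil {
    			fmt.Println("kube-apiserver process found")
    			return
    		}
    		for _, name := range components {
    			if out, _ := run("sudo crictl ps -a --quiet --name=" + name); out == "" {
    				fmt.Printf("no container found matching %q\n", name)
    			}
    		}
    		// Node-level diagnostics, mirroring the "Gathering logs for ..." lines.
    		run("sudo journalctl -u kubelet -n 400")
    		run("sudo journalctl -u crio -n 400")
    		run("sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
    		time.Sleep(3 * time.Second)
    	}
    }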
	I0806 08:38:10.288582   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:12.788556   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:09.611677   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:12.111364   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:14.112265   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:13.469940   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:15.969381   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:15.867128   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:38:15.885504   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:38:15.885579   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:38:15.929060   79927 cri.go:89] found id: ""
	I0806 08:38:15.929087   79927 logs.go:276] 0 containers: []
	W0806 08:38:15.929098   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:38:15.929106   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:38:15.929157   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:38:15.996919   79927 cri.go:89] found id: ""
	I0806 08:38:15.996946   79927 logs.go:276] 0 containers: []
	W0806 08:38:15.996954   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:38:15.996959   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:38:15.997013   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:38:16.033299   79927 cri.go:89] found id: ""
	I0806 08:38:16.033323   79927 logs.go:276] 0 containers: []
	W0806 08:38:16.033331   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:38:16.033338   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:38:16.033399   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:38:16.070754   79927 cri.go:89] found id: ""
	I0806 08:38:16.070781   79927 logs.go:276] 0 containers: []
	W0806 08:38:16.070789   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:38:16.070795   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:38:16.070853   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:38:16.107906   79927 cri.go:89] found id: ""
	I0806 08:38:16.107931   79927 logs.go:276] 0 containers: []
	W0806 08:38:16.107943   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:38:16.107951   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:38:16.108011   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:38:16.145814   79927 cri.go:89] found id: ""
	I0806 08:38:16.145844   79927 logs.go:276] 0 containers: []
	W0806 08:38:16.145853   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:38:16.145861   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:38:16.145921   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:38:16.183752   79927 cri.go:89] found id: ""
	I0806 08:38:16.183782   79927 logs.go:276] 0 containers: []
	W0806 08:38:16.183793   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:38:16.183828   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:38:16.183890   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:38:16.220973   79927 cri.go:89] found id: ""
	I0806 08:38:16.221000   79927 logs.go:276] 0 containers: []
	W0806 08:38:16.221010   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:38:16.221020   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:38:16.221034   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:38:16.274514   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:38:16.274555   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:38:16.291387   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:38:16.291417   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:38:16.365737   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:38:16.365769   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:38:16.365785   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:38:16.446578   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:38:16.446617   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:38:14.789440   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:17.288354   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:16.112334   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:18.612150   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:18.468700   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:20.469520   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:18.989406   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:38:19.003113   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:38:19.003174   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:38:19.038794   79927 cri.go:89] found id: ""
	I0806 08:38:19.038821   79927 logs.go:276] 0 containers: []
	W0806 08:38:19.038832   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:38:19.038839   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:38:19.038899   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:38:19.072521   79927 cri.go:89] found id: ""
	I0806 08:38:19.072546   79927 logs.go:276] 0 containers: []
	W0806 08:38:19.072554   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:38:19.072561   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:38:19.072606   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:38:19.105949   79927 cri.go:89] found id: ""
	I0806 08:38:19.105982   79927 logs.go:276] 0 containers: []
	W0806 08:38:19.105994   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:38:19.106002   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:38:19.106053   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:38:19.141277   79927 cri.go:89] found id: ""
	I0806 08:38:19.141303   79927 logs.go:276] 0 containers: []
	W0806 08:38:19.141311   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:38:19.141317   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:38:19.141373   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:38:19.179144   79927 cri.go:89] found id: ""
	I0806 08:38:19.179173   79927 logs.go:276] 0 containers: []
	W0806 08:38:19.179184   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:38:19.179191   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:38:19.179252   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:38:19.218970   79927 cri.go:89] found id: ""
	I0806 08:38:19.218998   79927 logs.go:276] 0 containers: []
	W0806 08:38:19.219010   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:38:19.219022   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:38:19.219084   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:38:19.257623   79927 cri.go:89] found id: ""
	I0806 08:38:19.257652   79927 logs.go:276] 0 containers: []
	W0806 08:38:19.257663   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:38:19.257671   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:38:19.257740   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:38:19.294222   79927 cri.go:89] found id: ""
	I0806 08:38:19.294250   79927 logs.go:276] 0 containers: []
	W0806 08:38:19.294258   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:38:19.294267   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:38:19.294279   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:38:19.374327   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:38:19.374352   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:38:19.374368   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:38:19.458570   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:38:19.458604   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:38:19.504329   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:38:19.504382   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:38:19.554199   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:38:19.554237   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:38:22.069574   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:38:22.082919   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:38:22.082981   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:38:22.115952   79927 cri.go:89] found id: ""
	I0806 08:38:22.115974   79927 logs.go:276] 0 containers: []
	W0806 08:38:22.115981   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:38:22.115987   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:38:22.116031   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:38:22.158322   79927 cri.go:89] found id: ""
	I0806 08:38:22.158349   79927 logs.go:276] 0 containers: []
	W0806 08:38:22.158359   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:38:22.158366   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:38:22.158425   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:38:22.203375   79927 cri.go:89] found id: ""
	I0806 08:38:22.203403   79927 logs.go:276] 0 containers: []
	W0806 08:38:22.203412   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:38:22.203418   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:38:22.203473   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:38:22.240808   79927 cri.go:89] found id: ""
	I0806 08:38:22.240837   79927 logs.go:276] 0 containers: []
	W0806 08:38:22.240849   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:38:22.240857   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:38:22.240913   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:38:22.277885   79927 cri.go:89] found id: ""
	I0806 08:38:22.277914   79927 logs.go:276] 0 containers: []
	W0806 08:38:22.277925   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:38:22.277932   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:38:22.277993   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:38:22.314990   79927 cri.go:89] found id: ""
	I0806 08:38:22.315020   79927 logs.go:276] 0 containers: []
	W0806 08:38:22.315031   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:38:22.315039   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:38:22.315098   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:38:22.352159   79927 cri.go:89] found id: ""
	I0806 08:38:22.352189   79927 logs.go:276] 0 containers: []
	W0806 08:38:22.352198   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:38:22.352204   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:38:22.352267   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:38:22.387349   79927 cri.go:89] found id: ""
	I0806 08:38:22.387382   79927 logs.go:276] 0 containers: []
	W0806 08:38:22.387393   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:38:22.387404   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:38:22.387416   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:38:22.425834   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:38:22.425871   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:38:22.479680   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:38:22.479712   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:38:22.495104   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:38:22.495140   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:38:22.567013   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:38:22.567030   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:38:22.567043   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:38:19.288896   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:21.291455   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:23.789282   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:21.111745   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:23.111513   79219 pod_ready.go:81] duration metric: took 4m0.005888372s for pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace to be "Ready" ...
	E0806 08:38:23.111545   79219 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0806 08:38:23.111555   79219 pod_ready.go:38] duration metric: took 4m6.006750463s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
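The three lines above close a separate wait for process 79219: pod_ready polls the metrics-server pod for a Ready condition under a 4-minute deadline, and when the pod never becomes Ready the wait ends with "context deadline exceeded" and the run moves on to waiting for the apiserver process. A hedged sketch of a deadline-bounded readiness wait of that shape, using client-go, is shown below; the pod name, kubeconfig path, and polling interval are illustrative, and this is not minikube's pod_ready implementation.

    // Sketch of a deadline-bounded pod-Ready wait (assumes client-go >= v0.27
    // for wait.PollUntilContextTimeout). Illustrative only.
    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func waitPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
    	return wait.PollUntilContextTimeout(context.Background(), 2*time.Second, timeout, true,
    		func(ctx context.Context) (bool, error) {
    			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
    			if err != nil {
    				return false, nil // transient API errors: keep polling until the deadline
    			}
    			for _, c := range pod.Status.Conditions {
    				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
    					return true, nil
    				}
    			}
    			return false, nil
    		})
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	// Pod name copied from the log purely as an example.
    	err = waitPodReady(cs, "kube-system", "metrics-server-569cc877fc-8xb9k", 4*time.Minute)
    	if err != nil {
    		fmt.Println("waitPodCondition:", err) // e.g. "context deadline exceeded"
    	}
    }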
	I0806 08:38:23.111579   79219 api_server.go:52] waiting for apiserver process to appear ...
	I0806 08:38:23.111610   79219 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:38:23.111669   79219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:38:23.182880   79219 cri.go:89] found id: "f7edb6bece3347d88a263663a1d6549165dafdcb67ad723494b937766cba6da6"
	I0806 08:38:23.182907   79219 cri.go:89] found id: ""
	I0806 08:38:23.182918   79219 logs.go:276] 1 containers: [f7edb6bece3347d88a263663a1d6549165dafdcb67ad723494b937766cba6da6]
	I0806 08:38:23.182974   79219 ssh_runner.go:195] Run: which crictl
	I0806 08:38:23.188039   79219 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:38:23.188103   79219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:38:23.235090   79219 cri.go:89] found id: "df2efdb62a4af10f89a1b37599d4258ac3b72660353795f8c2141f3ef70c1f65"
	I0806 08:38:23.235113   79219 cri.go:89] found id: ""
	I0806 08:38:23.235135   79219 logs.go:276] 1 containers: [df2efdb62a4af10f89a1b37599d4258ac3b72660353795f8c2141f3ef70c1f65]
	I0806 08:38:23.235195   79219 ssh_runner.go:195] Run: which crictl
	I0806 08:38:23.240260   79219 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:38:23.240335   79219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:38:23.280619   79219 cri.go:89] found id: "9e1c91bc3a920b6f7db4477b8219f13fa092c68ff53abe2390325a68dad253e5"
	I0806 08:38:23.280645   79219 cri.go:89] found id: ""
	I0806 08:38:23.280655   79219 logs.go:276] 1 containers: [9e1c91bc3a920b6f7db4477b8219f13fa092c68ff53abe2390325a68dad253e5]
	I0806 08:38:23.280715   79219 ssh_runner.go:195] Run: which crictl
	I0806 08:38:23.286155   79219 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:38:23.286234   79219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:38:23.329583   79219 cri.go:89] found id: "a25c5660a3a2b7226bf308b44307a3912efa7a9a73e79ec63f83133b80a44683"
	I0806 08:38:23.329609   79219 cri.go:89] found id: ""
	I0806 08:38:23.329618   79219 logs.go:276] 1 containers: [a25c5660a3a2b7226bf308b44307a3912efa7a9a73e79ec63f83133b80a44683]
	I0806 08:38:23.329678   79219 ssh_runner.go:195] Run: which crictl
	I0806 08:38:23.334342   79219 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:38:23.334407   79219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:38:23.372955   79219 cri.go:89] found id: "612fc6a8f2c1208097a6d6287c5fecb20026608a278af5d0d4efc9662e065eb9"
	I0806 08:38:23.372974   79219 cri.go:89] found id: ""
	I0806 08:38:23.372981   79219 logs.go:276] 1 containers: [612fc6a8f2c1208097a6d6287c5fecb20026608a278af5d0d4efc9662e065eb9]
	I0806 08:38:23.373037   79219 ssh_runner.go:195] Run: which crictl
	I0806 08:38:23.377637   79219 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:38:23.377692   79219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:38:23.416165   79219 cri.go:89] found id: "d8651a7ed6615e826b950eab003841c2ec872b48160f207e77dd32a684125a23"
	I0806 08:38:23.416188   79219 cri.go:89] found id: ""
	I0806 08:38:23.416197   79219 logs.go:276] 1 containers: [d8651a7ed6615e826b950eab003841c2ec872b48160f207e77dd32a684125a23]
	I0806 08:38:23.416248   79219 ssh_runner.go:195] Run: which crictl
	I0806 08:38:23.420970   79219 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:38:23.421020   79219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:38:23.459247   79219 cri.go:89] found id: ""
	I0806 08:38:23.459277   79219 logs.go:276] 0 containers: []
	W0806 08:38:23.459287   79219 logs.go:278] No container was found matching "kindnet"
	I0806 08:38:23.459295   79219 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0806 08:38:23.459354   79219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0806 08:38:23.503329   79219 cri.go:89] found id: "367eb6487187a37acafb1fa74364a2d9ec5d9241888ed2e871fb14f2af161975"
	I0806 08:38:23.503354   79219 cri.go:89] found id: "78abe2efa92d2d6c9171ed65bd4d941693be3207554f9318a7cefb91544eeb20"
	I0806 08:38:23.503360   79219 cri.go:89] found id: ""
	I0806 08:38:23.503369   79219 logs.go:276] 2 containers: [367eb6487187a37acafb1fa74364a2d9ec5d9241888ed2e871fb14f2af161975 78abe2efa92d2d6c9171ed65bd4d941693be3207554f9318a7cefb91544eeb20]
	I0806 08:38:23.503431   79219 ssh_runner.go:195] Run: which crictl
	I0806 08:38:23.508375   79219 ssh_runner.go:195] Run: which crictl
	I0806 08:38:23.512355   79219 logs.go:123] Gathering logs for kube-scheduler [a25c5660a3a2b7226bf308b44307a3912efa7a9a73e79ec63f83133b80a44683] ...
	I0806 08:38:23.512401   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a25c5660a3a2b7226bf308b44307a3912efa7a9a73e79ec63f83133b80a44683"
	I0806 08:38:23.548545   79219 logs.go:123] Gathering logs for kube-controller-manager [d8651a7ed6615e826b950eab003841c2ec872b48160f207e77dd32a684125a23] ...
	I0806 08:38:23.548574   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8651a7ed6615e826b950eab003841c2ec872b48160f207e77dd32a684125a23"
	I0806 08:38:23.607970   79219 logs.go:123] Gathering logs for storage-provisioner [78abe2efa92d2d6c9171ed65bd4d941693be3207554f9318a7cefb91544eeb20] ...
	I0806 08:38:23.608004   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 78abe2efa92d2d6c9171ed65bd4d941693be3207554f9318a7cefb91544eeb20"
	I0806 08:38:23.646553   79219 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:38:23.646582   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:38:24.180521   79219 logs.go:123] Gathering logs for container status ...
	I0806 08:38:24.180558   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:38:24.228240   79219 logs.go:123] Gathering logs for kubelet ...
	I0806 08:38:24.228273   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:38:24.285975   79219 logs.go:123] Gathering logs for dmesg ...
	I0806 08:38:24.286007   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:38:24.301117   79219 logs.go:123] Gathering logs for etcd [df2efdb62a4af10f89a1b37599d4258ac3b72660353795f8c2141f3ef70c1f65] ...
	I0806 08:38:24.301152   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 df2efdb62a4af10f89a1b37599d4258ac3b72660353795f8c2141f3ef70c1f65"
	I0806 08:38:22.968463   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:25.469138   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:25.146383   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:38:25.160013   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:38:25.160078   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:38:25.200981   79927 cri.go:89] found id: ""
	I0806 08:38:25.201006   79927 logs.go:276] 0 containers: []
	W0806 08:38:25.201018   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:38:25.201026   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:38:25.201091   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:38:25.239852   79927 cri.go:89] found id: ""
	I0806 08:38:25.239895   79927 logs.go:276] 0 containers: []
	W0806 08:38:25.239907   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:38:25.239914   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:38:25.239975   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:38:25.275521   79927 cri.go:89] found id: ""
	I0806 08:38:25.275549   79927 logs.go:276] 0 containers: []
	W0806 08:38:25.275561   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:38:25.275568   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:38:25.275630   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:38:25.311921   79927 cri.go:89] found id: ""
	I0806 08:38:25.311952   79927 logs.go:276] 0 containers: []
	W0806 08:38:25.311962   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:38:25.311969   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:38:25.312036   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:38:25.353047   79927 cri.go:89] found id: ""
	I0806 08:38:25.353081   79927 logs.go:276] 0 containers: []
	W0806 08:38:25.353093   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:38:25.353102   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:38:25.353177   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:38:25.389442   79927 cri.go:89] found id: ""
	I0806 08:38:25.389470   79927 logs.go:276] 0 containers: []
	W0806 08:38:25.389479   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:38:25.389485   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:38:25.389529   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:38:25.426435   79927 cri.go:89] found id: ""
	I0806 08:38:25.426464   79927 logs.go:276] 0 containers: []
	W0806 08:38:25.426474   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:38:25.426481   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:38:25.426541   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:38:25.460095   79927 cri.go:89] found id: ""
	I0806 08:38:25.460132   79927 logs.go:276] 0 containers: []
	W0806 08:38:25.460157   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:38:25.460169   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:38:25.460192   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:38:25.511363   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:38:25.511398   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:38:25.525953   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:38:25.525980   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:38:25.598690   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:38:25.598711   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:38:25.598723   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:38:25.674996   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:38:25.675034   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:38:28.216533   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:38:28.230080   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:38:28.230171   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:38:28.269239   79927 cri.go:89] found id: ""
	I0806 08:38:28.269274   79927 logs.go:276] 0 containers: []
	W0806 08:38:28.269288   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:38:28.269296   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:38:28.269361   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:38:28.311773   79927 cri.go:89] found id: ""
	I0806 08:38:28.311809   79927 logs.go:276] 0 containers: []
	W0806 08:38:28.311820   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:38:28.311828   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:38:28.311896   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:38:28.351492   79927 cri.go:89] found id: ""
	I0806 08:38:28.351522   79927 logs.go:276] 0 containers: []
	W0806 08:38:28.351530   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:38:28.351535   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:38:28.351581   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:38:28.389039   79927 cri.go:89] found id: ""
	I0806 08:38:28.389067   79927 logs.go:276] 0 containers: []
	W0806 08:38:28.389079   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:38:28.389086   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:38:28.389175   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:38:28.432249   79927 cri.go:89] found id: ""
	I0806 08:38:28.432277   79927 logs.go:276] 0 containers: []
	W0806 08:38:28.432287   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:38:28.432296   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:38:28.432354   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:38:28.478277   79927 cri.go:89] found id: ""
	I0806 08:38:28.478303   79927 logs.go:276] 0 containers: []
	W0806 08:38:28.478313   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:38:28.478321   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:38:28.478380   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:38:28.516167   79927 cri.go:89] found id: ""
	I0806 08:38:28.516199   79927 logs.go:276] 0 containers: []
	W0806 08:38:28.516211   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:38:28.516219   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:38:28.516286   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:38:28.555493   79927 cri.go:89] found id: ""
	I0806 08:38:28.555523   79927 logs.go:276] 0 containers: []
	W0806 08:38:28.555534   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:38:28.555545   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:38:28.555560   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:38:28.627101   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:38:28.627124   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:38:28.627147   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:38:28.716655   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:38:28.716709   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:38:28.755469   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:38:28.755510   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:38:26.286925   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:28.288654   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:24.355686   79219 logs.go:123] Gathering logs for coredns [9e1c91bc3a920b6f7db4477b8219f13fa092c68ff53abe2390325a68dad253e5] ...
	I0806 08:38:24.355729   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e1c91bc3a920b6f7db4477b8219f13fa092c68ff53abe2390325a68dad253e5"
	I0806 08:38:24.396350   79219 logs.go:123] Gathering logs for kube-proxy [612fc6a8f2c1208097a6d6287c5fecb20026608a278af5d0d4efc9662e065eb9] ...
	I0806 08:38:24.396402   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 612fc6a8f2c1208097a6d6287c5fecb20026608a278af5d0d4efc9662e065eb9"
	I0806 08:38:24.435066   79219 logs.go:123] Gathering logs for storage-provisioner [367eb6487187a37acafb1fa74364a2d9ec5d9241888ed2e871fb14f2af161975] ...
	I0806 08:38:24.435098   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 367eb6487187a37acafb1fa74364a2d9ec5d9241888ed2e871fb14f2af161975"
	I0806 08:38:24.474159   79219 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:38:24.474188   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 08:38:24.601081   79219 logs.go:123] Gathering logs for kube-apiserver [f7edb6bece3347d88a263663a1d6549165dafdcb67ad723494b937766cba6da6] ...
	I0806 08:38:24.601112   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f7edb6bece3347d88a263663a1d6549165dafdcb67ad723494b937766cba6da6"
	I0806 08:38:27.170209   79219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:38:27.187573   79219 api_server.go:72] duration metric: took 4m15.285938182s to wait for apiserver process to appear ...
	I0806 08:38:27.187597   79219 api_server.go:88] waiting for apiserver healthz status ...
	I0806 08:38:27.187626   79219 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:38:27.187673   79219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:38:27.224204   79219 cri.go:89] found id: "f7edb6bece3347d88a263663a1d6549165dafdcb67ad723494b937766cba6da6"
	I0806 08:38:27.224227   79219 cri.go:89] found id: ""
	I0806 08:38:27.224237   79219 logs.go:276] 1 containers: [f7edb6bece3347d88a263663a1d6549165dafdcb67ad723494b937766cba6da6]
	I0806 08:38:27.224284   79219 ssh_runner.go:195] Run: which crictl
	I0806 08:38:27.228883   79219 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:38:27.228957   79219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:38:27.275914   79219 cri.go:89] found id: "df2efdb62a4af10f89a1b37599d4258ac3b72660353795f8c2141f3ef70c1f65"
	I0806 08:38:27.275939   79219 cri.go:89] found id: ""
	I0806 08:38:27.275948   79219 logs.go:276] 1 containers: [df2efdb62a4af10f89a1b37599d4258ac3b72660353795f8c2141f3ef70c1f65]
	I0806 08:38:27.276009   79219 ssh_runner.go:195] Run: which crictl
	I0806 08:38:27.280979   79219 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:38:27.281037   79219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:38:27.326292   79219 cri.go:89] found id: "9e1c91bc3a920b6f7db4477b8219f13fa092c68ff53abe2390325a68dad253e5"
	I0806 08:38:27.326312   79219 cri.go:89] found id: ""
	I0806 08:38:27.326319   79219 logs.go:276] 1 containers: [9e1c91bc3a920b6f7db4477b8219f13fa092c68ff53abe2390325a68dad253e5]
	I0806 08:38:27.326365   79219 ssh_runner.go:195] Run: which crictl
	I0806 08:38:27.331249   79219 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:38:27.331326   79219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:38:27.374229   79219 cri.go:89] found id: "a25c5660a3a2b7226bf308b44307a3912efa7a9a73e79ec63f83133b80a44683"
	I0806 08:38:27.374250   79219 cri.go:89] found id: ""
	I0806 08:38:27.374257   79219 logs.go:276] 1 containers: [a25c5660a3a2b7226bf308b44307a3912efa7a9a73e79ec63f83133b80a44683]
	I0806 08:38:27.374300   79219 ssh_runner.go:195] Run: which crictl
	I0806 08:38:27.378875   79219 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:38:27.378957   79219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:38:27.416159   79219 cri.go:89] found id: "612fc6a8f2c1208097a6d6287c5fecb20026608a278af5d0d4efc9662e065eb9"
	I0806 08:38:27.416183   79219 cri.go:89] found id: ""
	I0806 08:38:27.416192   79219 logs.go:276] 1 containers: [612fc6a8f2c1208097a6d6287c5fecb20026608a278af5d0d4efc9662e065eb9]
	I0806 08:38:27.416243   79219 ssh_runner.go:195] Run: which crictl
	I0806 08:38:27.420463   79219 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:38:27.420531   79219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:38:27.461673   79219 cri.go:89] found id: "d8651a7ed6615e826b950eab003841c2ec872b48160f207e77dd32a684125a23"
	I0806 08:38:27.461700   79219 cri.go:89] found id: ""
	I0806 08:38:27.461710   79219 logs.go:276] 1 containers: [d8651a7ed6615e826b950eab003841c2ec872b48160f207e77dd32a684125a23]
	I0806 08:38:27.461762   79219 ssh_runner.go:195] Run: which crictl
	I0806 08:38:27.466404   79219 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:38:27.466474   79219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:38:27.507684   79219 cri.go:89] found id: ""
	I0806 08:38:27.507708   79219 logs.go:276] 0 containers: []
	W0806 08:38:27.507716   79219 logs.go:278] No container was found matching "kindnet"
	I0806 08:38:27.507722   79219 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0806 08:38:27.507787   79219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0806 08:38:27.547812   79219 cri.go:89] found id: "367eb6487187a37acafb1fa74364a2d9ec5d9241888ed2e871fb14f2af161975"
	I0806 08:38:27.547844   79219 cri.go:89] found id: "78abe2efa92d2d6c9171ed65bd4d941693be3207554f9318a7cefb91544eeb20"
	I0806 08:38:27.547850   79219 cri.go:89] found id: ""
	I0806 08:38:27.547859   79219 logs.go:276] 2 containers: [367eb6487187a37acafb1fa74364a2d9ec5d9241888ed2e871fb14f2af161975 78abe2efa92d2d6c9171ed65bd4d941693be3207554f9318a7cefb91544eeb20]
	I0806 08:38:27.547929   79219 ssh_runner.go:195] Run: which crictl
	I0806 08:38:27.552639   79219 ssh_runner.go:195] Run: which crictl
	I0806 08:38:27.556993   79219 logs.go:123] Gathering logs for kube-proxy [612fc6a8f2c1208097a6d6287c5fecb20026608a278af5d0d4efc9662e065eb9] ...
	I0806 08:38:27.557022   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 612fc6a8f2c1208097a6d6287c5fecb20026608a278af5d0d4efc9662e065eb9"
	I0806 08:38:27.599317   79219 logs.go:123] Gathering logs for storage-provisioner [78abe2efa92d2d6c9171ed65bd4d941693be3207554f9318a7cefb91544eeb20] ...
	I0806 08:38:27.599345   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 78abe2efa92d2d6c9171ed65bd4d941693be3207554f9318a7cefb91544eeb20"
	I0806 08:38:27.643812   79219 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:38:27.643845   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:38:28.119873   79219 logs.go:123] Gathering logs for container status ...
	I0806 08:38:28.119909   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:38:28.160698   79219 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:38:28.160744   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 08:38:28.275895   79219 logs.go:123] Gathering logs for coredns [9e1c91bc3a920b6f7db4477b8219f13fa092c68ff53abe2390325a68dad253e5] ...
	I0806 08:38:28.275927   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e1c91bc3a920b6f7db4477b8219f13fa092c68ff53abe2390325a68dad253e5"
	I0806 08:38:28.324525   79219 logs.go:123] Gathering logs for kube-scheduler [a25c5660a3a2b7226bf308b44307a3912efa7a9a73e79ec63f83133b80a44683] ...
	I0806 08:38:28.324559   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a25c5660a3a2b7226bf308b44307a3912efa7a9a73e79ec63f83133b80a44683"
	I0806 08:38:28.372860   79219 logs.go:123] Gathering logs for etcd [df2efdb62a4af10f89a1b37599d4258ac3b72660353795f8c2141f3ef70c1f65] ...
	I0806 08:38:28.372891   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 df2efdb62a4af10f89a1b37599d4258ac3b72660353795f8c2141f3ef70c1f65"
	I0806 08:38:28.438909   79219 logs.go:123] Gathering logs for kube-controller-manager [d8651a7ed6615e826b950eab003841c2ec872b48160f207e77dd32a684125a23] ...
	I0806 08:38:28.438943   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8651a7ed6615e826b950eab003841c2ec872b48160f207e77dd32a684125a23"
	I0806 08:38:28.514238   79219 logs.go:123] Gathering logs for storage-provisioner [367eb6487187a37acafb1fa74364a2d9ec5d9241888ed2e871fb14f2af161975] ...
	I0806 08:38:28.514279   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 367eb6487187a37acafb1fa74364a2d9ec5d9241888ed2e871fb14f2af161975"
	I0806 08:38:28.552420   79219 logs.go:123] Gathering logs for kubelet ...
	I0806 08:38:28.552451   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:38:28.605414   79219 logs.go:123] Gathering logs for dmesg ...
	I0806 08:38:28.605452   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:38:28.621364   79219 logs.go:123] Gathering logs for kube-apiserver [f7edb6bece3347d88a263663a1d6549165dafdcb67ad723494b937766cba6da6] ...
	I0806 08:38:28.621404   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f7edb6bece3347d88a263663a1d6549165dafdcb67ad723494b937766cba6da6"
	I0806 08:38:27.469521   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:29.967734   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:28.812203   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:38:28.812237   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:38:31.327103   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:38:31.340745   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:38:31.340826   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:38:31.382470   79927 cri.go:89] found id: ""
	I0806 08:38:31.382503   79927 logs.go:276] 0 containers: []
	W0806 08:38:31.382515   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:38:31.382523   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:38:31.382593   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:38:31.423319   79927 cri.go:89] found id: ""
	I0806 08:38:31.423343   79927 logs.go:276] 0 containers: []
	W0806 08:38:31.423353   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:38:31.423359   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:38:31.423423   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:38:31.463073   79927 cri.go:89] found id: ""
	I0806 08:38:31.463099   79927 logs.go:276] 0 containers: []
	W0806 08:38:31.463108   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:38:31.463115   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:38:31.463165   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:38:31.499169   79927 cri.go:89] found id: ""
	I0806 08:38:31.499209   79927 logs.go:276] 0 containers: []
	W0806 08:38:31.499220   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:38:31.499228   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:38:31.499284   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:38:31.535030   79927 cri.go:89] found id: ""
	I0806 08:38:31.535060   79927 logs.go:276] 0 containers: []
	W0806 08:38:31.535071   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:38:31.535079   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:38:31.535145   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:38:31.575872   79927 cri.go:89] found id: ""
	I0806 08:38:31.575894   79927 logs.go:276] 0 containers: []
	W0806 08:38:31.575902   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:38:31.575907   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:38:31.575952   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:38:31.614766   79927 cri.go:89] found id: ""
	I0806 08:38:31.614787   79927 logs.go:276] 0 containers: []
	W0806 08:38:31.614797   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:38:31.614804   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:38:31.614860   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:38:31.655929   79927 cri.go:89] found id: ""
	I0806 08:38:31.655954   79927 logs.go:276] 0 containers: []
	W0806 08:38:31.655964   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:38:31.655992   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:38:31.656009   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:38:31.728563   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:38:31.728611   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:38:31.745445   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:38:31.745479   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:38:31.827965   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:38:31.827998   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:38:31.828014   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:38:31.935051   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:38:31.935093   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:38:30.788021   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:32.788768   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:31.179830   79219 api_server.go:253] Checking apiserver healthz at https://192.168.39.190:8443/healthz ...
	I0806 08:38:31.184262   79219 api_server.go:279] https://192.168.39.190:8443/healthz returned 200:
	ok
	I0806 08:38:31.185245   79219 api_server.go:141] control plane version: v1.30.3
	I0806 08:38:31.185267   79219 api_server.go:131] duration metric: took 3.997664057s to wait for apiserver health ...
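Once the apiserver process exists, the run above switches from process checks to an HTTP health probe: it polls https://192.168.39.190:8443/healthz until it returns 200 with body "ok", then records the control-plane version. A minimal probe of that shape is sketched below; the real check authenticates against the cluster's CA and client certificates, whereas this illustration skips TLS verification.

    // Minimal healthz probe sketch. Illustration only: skips TLS verification,
    // where the real check trusts the cluster CA.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	url := "https://192.168.39.190:8443/healthz" // address taken from the log above
    	for {
    		resp, err := client.Get(url)
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				fmt.Printf("%s returned 200: %s\n", url, body) // expect "ok"
    				return
    			}
    		}
    		time.Sleep(2 * time.Second)
    	}
    }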
	I0806 08:38:31.185274   79219 system_pods.go:43] waiting for kube-system pods to appear ...
	I0806 08:38:31.185297   79219 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:38:31.185339   79219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:38:31.222960   79219 cri.go:89] found id: "f7edb6bece3347d88a263663a1d6549165dafdcb67ad723494b937766cba6da6"
	I0806 08:38:31.222986   79219 cri.go:89] found id: ""
	I0806 08:38:31.222993   79219 logs.go:276] 1 containers: [f7edb6bece3347d88a263663a1d6549165dafdcb67ad723494b937766cba6da6]
	I0806 08:38:31.223042   79219 ssh_runner.go:195] Run: which crictl
	I0806 08:38:31.227448   79219 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:38:31.227514   79219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:38:31.265326   79219 cri.go:89] found id: "df2efdb62a4af10f89a1b37599d4258ac3b72660353795f8c2141f3ef70c1f65"
	I0806 08:38:31.265349   79219 cri.go:89] found id: ""
	I0806 08:38:31.265356   79219 logs.go:276] 1 containers: [df2efdb62a4af10f89a1b37599d4258ac3b72660353795f8c2141f3ef70c1f65]
	I0806 08:38:31.265402   79219 ssh_runner.go:195] Run: which crictl
	I0806 08:38:31.269672   79219 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:38:31.269732   79219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:38:31.310202   79219 cri.go:89] found id: "9e1c91bc3a920b6f7db4477b8219f13fa092c68ff53abe2390325a68dad253e5"
	I0806 08:38:31.310228   79219 cri.go:89] found id: ""
	I0806 08:38:31.310237   79219 logs.go:276] 1 containers: [9e1c91bc3a920b6f7db4477b8219f13fa092c68ff53abe2390325a68dad253e5]
	I0806 08:38:31.310283   79219 ssh_runner.go:195] Run: which crictl
	I0806 08:38:31.314486   79219 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:38:31.314540   79219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:38:31.352612   79219 cri.go:89] found id: "a25c5660a3a2b7226bf308b44307a3912efa7a9a73e79ec63f83133b80a44683"
	I0806 08:38:31.352637   79219 cri.go:89] found id: ""
	I0806 08:38:31.352647   79219 logs.go:276] 1 containers: [a25c5660a3a2b7226bf308b44307a3912efa7a9a73e79ec63f83133b80a44683]
	I0806 08:38:31.352707   79219 ssh_runner.go:195] Run: which crictl
	I0806 08:38:31.358230   79219 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:38:31.358300   79219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:38:31.402435   79219 cri.go:89] found id: "612fc6a8f2c1208097a6d6287c5fecb20026608a278af5d0d4efc9662e065eb9"
	I0806 08:38:31.402462   79219 cri.go:89] found id: ""
	I0806 08:38:31.402472   79219 logs.go:276] 1 containers: [612fc6a8f2c1208097a6d6287c5fecb20026608a278af5d0d4efc9662e065eb9]
	I0806 08:38:31.402530   79219 ssh_runner.go:195] Run: which crictl
	I0806 08:38:31.407723   79219 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:38:31.407793   79219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:38:31.455484   79219 cri.go:89] found id: "d8651a7ed6615e826b950eab003841c2ec872b48160f207e77dd32a684125a23"
	I0806 08:38:31.455511   79219 cri.go:89] found id: ""
	I0806 08:38:31.455520   79219 logs.go:276] 1 containers: [d8651a7ed6615e826b950eab003841c2ec872b48160f207e77dd32a684125a23]
	I0806 08:38:31.455582   79219 ssh_runner.go:195] Run: which crictl
	I0806 08:38:31.461155   79219 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:38:31.461231   79219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:38:31.501317   79219 cri.go:89] found id: ""
	I0806 08:38:31.501342   79219 logs.go:276] 0 containers: []
	W0806 08:38:31.501352   79219 logs.go:278] No container was found matching "kindnet"
	I0806 08:38:31.501359   79219 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0806 08:38:31.501418   79219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0806 08:38:31.552712   79219 cri.go:89] found id: "367eb6487187a37acafb1fa74364a2d9ec5d9241888ed2e871fb14f2af161975"
	I0806 08:38:31.552742   79219 cri.go:89] found id: "78abe2efa92d2d6c9171ed65bd4d941693be3207554f9318a7cefb91544eeb20"
	I0806 08:38:31.552748   79219 cri.go:89] found id: ""
	I0806 08:38:31.552758   79219 logs.go:276] 2 containers: [367eb6487187a37acafb1fa74364a2d9ec5d9241888ed2e871fb14f2af161975 78abe2efa92d2d6c9171ed65bd4d941693be3207554f9318a7cefb91544eeb20]
	I0806 08:38:31.552820   79219 ssh_runner.go:195] Run: which crictl
	I0806 08:38:31.561505   79219 ssh_runner.go:195] Run: which crictl
	I0806 08:38:31.566613   79219 logs.go:123] Gathering logs for storage-provisioner [367eb6487187a37acafb1fa74364a2d9ec5d9241888ed2e871fb14f2af161975] ...
	I0806 08:38:31.566640   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 367eb6487187a37acafb1fa74364a2d9ec5d9241888ed2e871fb14f2af161975"
	I0806 08:38:31.613033   79219 logs.go:123] Gathering logs for storage-provisioner [78abe2efa92d2d6c9171ed65bd4d941693be3207554f9318a7cefb91544eeb20] ...
	I0806 08:38:31.613073   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 78abe2efa92d2d6c9171ed65bd4d941693be3207554f9318a7cefb91544eeb20"
	I0806 08:38:31.655353   79219 logs.go:123] Gathering logs for kube-scheduler [a25c5660a3a2b7226bf308b44307a3912efa7a9a73e79ec63f83133b80a44683] ...
	I0806 08:38:31.655389   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a25c5660a3a2b7226bf308b44307a3912efa7a9a73e79ec63f83133b80a44683"
	I0806 08:38:31.703934   79219 logs.go:123] Gathering logs for kube-controller-manager [d8651a7ed6615e826b950eab003841c2ec872b48160f207e77dd32a684125a23] ...
	I0806 08:38:31.703960   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8651a7ed6615e826b950eab003841c2ec872b48160f207e77dd32a684125a23"
	I0806 08:38:31.768880   79219 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:38:31.768916   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 08:38:31.896045   79219 logs.go:123] Gathering logs for kube-apiserver [f7edb6bece3347d88a263663a1d6549165dafdcb67ad723494b937766cba6da6] ...
	I0806 08:38:31.896079   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f7edb6bece3347d88a263663a1d6549165dafdcb67ad723494b937766cba6da6"
	I0806 08:38:31.966646   79219 logs.go:123] Gathering logs for etcd [df2efdb62a4af10f89a1b37599d4258ac3b72660353795f8c2141f3ef70c1f65] ...
	I0806 08:38:31.966682   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 df2efdb62a4af10f89a1b37599d4258ac3b72660353795f8c2141f3ef70c1f65"
	I0806 08:38:32.029471   79219 logs.go:123] Gathering logs for coredns [9e1c91bc3a920b6f7db4477b8219f13fa092c68ff53abe2390325a68dad253e5] ...
	I0806 08:38:32.029509   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e1c91bc3a920b6f7db4477b8219f13fa092c68ff53abe2390325a68dad253e5"
	I0806 08:38:32.071406   79219 logs.go:123] Gathering logs for kube-proxy [612fc6a8f2c1208097a6d6287c5fecb20026608a278af5d0d4efc9662e065eb9] ...
	I0806 08:38:32.071435   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 612fc6a8f2c1208097a6d6287c5fecb20026608a278af5d0d4efc9662e065eb9"
	I0806 08:38:32.122333   79219 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:38:32.122368   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:38:32.478494   79219 logs.go:123] Gathering logs for kubelet ...
	I0806 08:38:32.478540   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:38:32.530649   79219 logs.go:123] Gathering logs for dmesg ...
	I0806 08:38:32.530696   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:38:32.545866   79219 logs.go:123] Gathering logs for container status ...
	I0806 08:38:32.545895   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:38:35.101370   79219 system_pods.go:59] 8 kube-system pods found
	I0806 08:38:35.101397   79219 system_pods.go:61] "coredns-7db6d8ff4d-4xxwm" [eeb47dea-32e9-458f-9ad9-2af6d4e68cc0] Running
	I0806 08:38:35.101402   79219 system_pods.go:61] "etcd-embed-certs-324808" [1e0a82cd-6704-4da3-8992-15c68a7aaf70] Running
	I0806 08:38:35.101405   79219 system_pods.go:61] "kube-apiserver-embed-certs-324808" [a33c54d9-1fc5-4243-8fe9-d535c40908c3] Running
	I0806 08:38:35.101409   79219 system_pods.go:61] "kube-controller-manager-embed-certs-324808" [b1c519b8-9c96-4c87-90f3-cc277cfe7763] Running
	I0806 08:38:35.101412   79219 system_pods.go:61] "kube-proxy-8lbzv" [0c45e6c0-9b7f-4c48-bff5-38d4a5949bec] Running
	I0806 08:38:35.101415   79219 system_pods.go:61] "kube-scheduler-embed-certs-324808" [78c2827f-341a-4441-bf7e-eff947fc3ea8] Running
	I0806 08:38:35.101421   79219 system_pods.go:61] "metrics-server-569cc877fc-8xb9k" [7a5fb326-11a7-48fa-bcde-a22026a1dccb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0806 08:38:35.101424   79219 system_pods.go:61] "storage-provisioner" [15c7d89a-e994-4cca-931c-f646157a15e8] Running
	I0806 08:38:35.101431   79219 system_pods.go:74] duration metric: took 3.916151772s to wait for pod list to return data ...
	I0806 08:38:35.101438   79219 default_sa.go:34] waiting for default service account to be created ...
	I0806 08:38:35.103518   79219 default_sa.go:45] found service account: "default"
	I0806 08:38:35.103536   79219 default_sa.go:55] duration metric: took 2.092582ms for default service account to be created ...
	I0806 08:38:35.103546   79219 system_pods.go:116] waiting for k8s-apps to be running ...
	I0806 08:38:35.108383   79219 system_pods.go:86] 8 kube-system pods found
	I0806 08:38:35.108409   79219 system_pods.go:89] "coredns-7db6d8ff4d-4xxwm" [eeb47dea-32e9-458f-9ad9-2af6d4e68cc0] Running
	I0806 08:38:35.108417   79219 system_pods.go:89] "etcd-embed-certs-324808" [1e0a82cd-6704-4da3-8992-15c68a7aaf70] Running
	I0806 08:38:35.108423   79219 system_pods.go:89] "kube-apiserver-embed-certs-324808" [a33c54d9-1fc5-4243-8fe9-d535c40908c3] Running
	I0806 08:38:35.108427   79219 system_pods.go:89] "kube-controller-manager-embed-certs-324808" [b1c519b8-9c96-4c87-90f3-cc277cfe7763] Running
	I0806 08:38:35.108431   79219 system_pods.go:89] "kube-proxy-8lbzv" [0c45e6c0-9b7f-4c48-bff5-38d4a5949bec] Running
	I0806 08:38:35.108436   79219 system_pods.go:89] "kube-scheduler-embed-certs-324808" [78c2827f-341a-4441-bf7e-eff947fc3ea8] Running
	I0806 08:38:35.108442   79219 system_pods.go:89] "metrics-server-569cc877fc-8xb9k" [7a5fb326-11a7-48fa-bcde-a22026a1dccb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0806 08:38:35.108447   79219 system_pods.go:89] "storage-provisioner" [15c7d89a-e994-4cca-931c-f646157a15e8] Running
	I0806 08:38:35.108453   79219 system_pods.go:126] duration metric: took 4.901954ms to wait for k8s-apps to be running ...
	I0806 08:38:35.108463   79219 system_svc.go:44] waiting for kubelet service to be running ....
	I0806 08:38:35.108505   79219 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0806 08:38:35.126018   79219 system_svc.go:56] duration metric: took 17.549117ms WaitForService to wait for kubelet
	I0806 08:38:35.126045   79219 kubeadm.go:582] duration metric: took 4m23.224415251s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0806 08:38:35.126065   79219 node_conditions.go:102] verifying NodePressure condition ...
	I0806 08:38:35.129214   79219 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0806 08:38:35.129231   79219 node_conditions.go:123] node cpu capacity is 2
	I0806 08:38:35.129241   79219 node_conditions.go:105] duration metric: took 3.172893ms to run NodePressure ...
	I0806 08:38:35.129252   79219 start.go:241] waiting for startup goroutines ...
	I0806 08:38:35.129259   79219 start.go:246] waiting for cluster config update ...
	I0806 08:38:35.129273   79219 start.go:255] writing updated cluster config ...
	I0806 08:38:35.129540   79219 ssh_runner.go:195] Run: rm -f paused
	I0806 08:38:35.178651   79219 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0806 08:38:35.180921   79219 out.go:177] * Done! kubectl is now configured to use "embed-certs-324808" cluster and "default" namespace by default
	I0806 08:38:31.968170   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:34.468911   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:34.478890   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:38:34.493573   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:38:34.493650   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:38:34.529054   79927 cri.go:89] found id: ""
	I0806 08:38:34.529083   79927 logs.go:276] 0 containers: []
	W0806 08:38:34.529094   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:38:34.529101   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:38:34.529171   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:38:34.570214   79927 cri.go:89] found id: ""
	I0806 08:38:34.570245   79927 logs.go:276] 0 containers: []
	W0806 08:38:34.570256   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:38:34.570264   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:38:34.570325   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:38:34.607198   79927 cri.go:89] found id: ""
	I0806 08:38:34.607232   79927 logs.go:276] 0 containers: []
	W0806 08:38:34.607243   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:38:34.607250   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:38:34.607296   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:38:34.646238   79927 cri.go:89] found id: ""
	I0806 08:38:34.646266   79927 logs.go:276] 0 containers: []
	W0806 08:38:34.646278   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:38:34.646287   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:38:34.646354   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:38:34.682137   79927 cri.go:89] found id: ""
	I0806 08:38:34.682161   79927 logs.go:276] 0 containers: []
	W0806 08:38:34.682169   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:38:34.682175   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:38:34.682227   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:38:34.721211   79927 cri.go:89] found id: ""
	I0806 08:38:34.721238   79927 logs.go:276] 0 containers: []
	W0806 08:38:34.721246   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:38:34.721252   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:38:34.721306   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:38:34.759693   79927 cri.go:89] found id: ""
	I0806 08:38:34.759728   79927 logs.go:276] 0 containers: []
	W0806 08:38:34.759737   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:38:34.759743   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:38:34.759793   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:38:34.799224   79927 cri.go:89] found id: ""
	I0806 08:38:34.799246   79927 logs.go:276] 0 containers: []
	W0806 08:38:34.799254   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:38:34.799262   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:38:34.799273   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:38:34.852682   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:38:34.852727   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:38:34.866456   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:38:34.866494   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:38:34.933029   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:38:34.933055   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:38:34.933076   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:38:35.017215   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:38:35.017253   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:38:37.567662   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:38:37.582220   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:38:37.582284   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:38:37.622112   79927 cri.go:89] found id: ""
	I0806 08:38:37.622145   79927 logs.go:276] 0 containers: []
	W0806 08:38:37.622153   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:38:37.622164   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:38:37.622228   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:38:37.657767   79927 cri.go:89] found id: ""
	I0806 08:38:37.657791   79927 logs.go:276] 0 containers: []
	W0806 08:38:37.657798   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:38:37.657804   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:38:37.657849   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:38:37.691442   79927 cri.go:89] found id: ""
	I0806 08:38:37.691471   79927 logs.go:276] 0 containers: []
	W0806 08:38:37.691480   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:38:37.691485   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:38:37.691539   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:38:37.725810   79927 cri.go:89] found id: ""
	I0806 08:38:37.725837   79927 logs.go:276] 0 containers: []
	W0806 08:38:37.725859   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:38:37.725866   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:38:37.725922   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:38:37.764708   79927 cri.go:89] found id: ""
	I0806 08:38:37.764737   79927 logs.go:276] 0 containers: []
	W0806 08:38:37.764748   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:38:37.764755   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:38:37.764818   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:38:37.802915   79927 cri.go:89] found id: ""
	I0806 08:38:37.802939   79927 logs.go:276] 0 containers: []
	W0806 08:38:37.802947   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:38:37.802953   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:38:37.803008   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:38:37.838492   79927 cri.go:89] found id: ""
	I0806 08:38:37.838525   79927 logs.go:276] 0 containers: []
	W0806 08:38:37.838536   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:38:37.838544   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:38:37.838607   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:38:37.873937   79927 cri.go:89] found id: ""
	I0806 08:38:37.873969   79927 logs.go:276] 0 containers: []
	W0806 08:38:37.873979   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:38:37.873990   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:38:37.874005   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:38:37.912169   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:38:37.912193   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:38:37.968269   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:38:37.968305   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:38:37.983414   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:38:37.983440   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:38:38.055031   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:38:38.055049   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:38:38.055062   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:38:35.289081   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:37.789220   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:36.968687   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:39.468051   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:41.468488   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:40.635449   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:38:40.649962   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:38:40.650045   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:38:40.688975   79927 cri.go:89] found id: ""
	I0806 08:38:40.689006   79927 logs.go:276] 0 containers: []
	W0806 08:38:40.689017   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:38:40.689029   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:38:40.689093   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:38:40.726740   79927 cri.go:89] found id: ""
	I0806 08:38:40.726770   79927 logs.go:276] 0 containers: []
	W0806 08:38:40.726780   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:38:40.726788   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:38:40.726886   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:38:40.767402   79927 cri.go:89] found id: ""
	I0806 08:38:40.767432   79927 logs.go:276] 0 containers: []
	W0806 08:38:40.767445   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:38:40.767453   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:38:40.767510   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:38:40.809994   79927 cri.go:89] found id: ""
	I0806 08:38:40.810023   79927 logs.go:276] 0 containers: []
	W0806 08:38:40.810033   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:38:40.810042   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:38:40.810102   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:38:40.845624   79927 cri.go:89] found id: ""
	I0806 08:38:40.845653   79927 logs.go:276] 0 containers: []
	W0806 08:38:40.845662   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:38:40.845668   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:38:40.845715   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:38:40.883830   79927 cri.go:89] found id: ""
	I0806 08:38:40.883856   79927 logs.go:276] 0 containers: []
	W0806 08:38:40.883871   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:38:40.883879   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:38:40.883937   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:38:40.919930   79927 cri.go:89] found id: ""
	I0806 08:38:40.919958   79927 logs.go:276] 0 containers: []
	W0806 08:38:40.919966   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:38:40.919971   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:38:40.920018   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:38:40.954563   79927 cri.go:89] found id: ""
	I0806 08:38:40.954594   79927 logs.go:276] 0 containers: []
	W0806 08:38:40.954605   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:38:40.954616   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:38:40.954629   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:38:41.008780   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:38:41.008813   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:38:41.022292   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:38:41.022321   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:38:41.090241   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:38:41.090273   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:38:41.090294   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:38:41.171739   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:38:41.171779   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:38:43.712410   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:38:43.726676   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:38:43.726747   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:38:40.288192   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:42.289039   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:42.789083   79614 pod_ready.go:81] duration metric: took 4m0.00771712s for pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace to be "Ready" ...
	E0806 08:38:42.789107   79614 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0806 08:38:42.789117   79614 pod_ready.go:38] duration metric: took 4m5.553596655s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0806 08:38:42.789135   79614 api_server.go:52] waiting for apiserver process to appear ...
	I0806 08:38:42.789163   79614 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:38:42.789215   79614 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:38:42.837092   79614 cri.go:89] found id: "49843be03d16addb301ffd26e517770ff29b00f049efb524a2139b8b2153325c"
	I0806 08:38:42.837119   79614 cri.go:89] found id: ""
	I0806 08:38:42.837128   79614 logs.go:276] 1 containers: [49843be03d16addb301ffd26e517770ff29b00f049efb524a2139b8b2153325c]
	I0806 08:38:42.837188   79614 ssh_runner.go:195] Run: which crictl
	I0806 08:38:42.842270   79614 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:38:42.842334   79614 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:38:42.878018   79614 cri.go:89] found id: "b5fa6fe8f1db6b59f47ea637515efb2864e0408ce1df0e6851b3bda0e240f5d3"
	I0806 08:38:42.878044   79614 cri.go:89] found id: ""
	I0806 08:38:42.878053   79614 logs.go:276] 1 containers: [b5fa6fe8f1db6b59f47ea637515efb2864e0408ce1df0e6851b3bda0e240f5d3]
	I0806 08:38:42.878115   79614 ssh_runner.go:195] Run: which crictl
	I0806 08:38:42.882437   79614 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:38:42.882509   79614 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:38:42.925635   79614 cri.go:89] found id: "59f63eeae644f7c8944cdb7b1d05d9e49fe84eb28e9ccbad2d2dd79846d7eb52"
	I0806 08:38:42.925655   79614 cri.go:89] found id: ""
	I0806 08:38:42.925662   79614 logs.go:276] 1 containers: [59f63eeae644f7c8944cdb7b1d05d9e49fe84eb28e9ccbad2d2dd79846d7eb52]
	I0806 08:38:42.925704   79614 ssh_runner.go:195] Run: which crictl
	I0806 08:38:42.930525   79614 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:38:42.930583   79614 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:38:42.972716   79614 cri.go:89] found id: "e596abcad3dc13d27548701f515beebba051e04a9e20f8ce1d018731d880ff1c"
	I0806 08:38:42.972736   79614 cri.go:89] found id: ""
	I0806 08:38:42.972744   79614 logs.go:276] 1 containers: [e596abcad3dc13d27548701f515beebba051e04a9e20f8ce1d018731d880ff1c]
	I0806 08:38:42.972791   79614 ssh_runner.go:195] Run: which crictl
	I0806 08:38:42.977840   79614 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:38:42.977894   79614 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:38:43.017316   79614 cri.go:89] found id: "75990046864b4a4db9f8c02f44de4eefbb0cf45584880b053bce4ca0b677ce4e"
	I0806 08:38:43.017339   79614 cri.go:89] found id: ""
	I0806 08:38:43.017346   79614 logs.go:276] 1 containers: [75990046864b4a4db9f8c02f44de4eefbb0cf45584880b053bce4ca0b677ce4e]
	I0806 08:38:43.017396   79614 ssh_runner.go:195] Run: which crictl
	I0806 08:38:43.022341   79614 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:38:43.022416   79614 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:38:43.067113   79614 cri.go:89] found id: "05bbffa4c989e42bd020ebe6b943459ec1d7b31a6c5e5ba9a37056da15542658"
	I0806 08:38:43.067145   79614 cri.go:89] found id: ""
	I0806 08:38:43.067155   79614 logs.go:276] 1 containers: [05bbffa4c989e42bd020ebe6b943459ec1d7b31a6c5e5ba9a37056da15542658]
	I0806 08:38:43.067211   79614 ssh_runner.go:195] Run: which crictl
	I0806 08:38:43.071674   79614 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:38:43.071748   79614 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:38:43.114123   79614 cri.go:89] found id: ""
	I0806 08:38:43.114146   79614 logs.go:276] 0 containers: []
	W0806 08:38:43.114154   79614 logs.go:278] No container was found matching "kindnet"
	I0806 08:38:43.114159   79614 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0806 08:38:43.114211   79614 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0806 08:38:43.155927   79614 cri.go:89] found id: "e668e4a5fa59e279bc227b6110983cc4660b639d5b1e1291ae60d361f682d82e"
	I0806 08:38:43.155955   79614 cri.go:89] found id: "abd677ac30faba3fef9e9d562601ad0012d7f1b48852e08b8f2684efda6b0f2e"
	I0806 08:38:43.155961   79614 cri.go:89] found id: ""
	I0806 08:38:43.155969   79614 logs.go:276] 2 containers: [e668e4a5fa59e279bc227b6110983cc4660b639d5b1e1291ae60d361f682d82e abd677ac30faba3fef9e9d562601ad0012d7f1b48852e08b8f2684efda6b0f2e]
	I0806 08:38:43.156017   79614 ssh_runner.go:195] Run: which crictl
	I0806 08:38:43.165735   79614 ssh_runner.go:195] Run: which crictl
	I0806 08:38:43.171834   79614 logs.go:123] Gathering logs for kubelet ...
	I0806 08:38:43.171861   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:38:43.232588   79614 logs.go:123] Gathering logs for dmesg ...
	I0806 08:38:43.232628   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:38:43.248033   79614 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:38:43.248080   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 08:38:43.383729   79614 logs.go:123] Gathering logs for etcd [b5fa6fe8f1db6b59f47ea637515efb2864e0408ce1df0e6851b3bda0e240f5d3] ...
	I0806 08:38:43.383760   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b5fa6fe8f1db6b59f47ea637515efb2864e0408ce1df0e6851b3bda0e240f5d3"
	I0806 08:38:43.428082   79614 logs.go:123] Gathering logs for coredns [59f63eeae644f7c8944cdb7b1d05d9e49fe84eb28e9ccbad2d2dd79846d7eb52] ...
	I0806 08:38:43.428116   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 59f63eeae644f7c8944cdb7b1d05d9e49fe84eb28e9ccbad2d2dd79846d7eb52"
	I0806 08:38:43.467054   79614 logs.go:123] Gathering logs for storage-provisioner [e668e4a5fa59e279bc227b6110983cc4660b639d5b1e1291ae60d361f682d82e] ...
	I0806 08:38:43.467081   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e668e4a5fa59e279bc227b6110983cc4660b639d5b1e1291ae60d361f682d82e"
	I0806 08:38:43.502900   79614 logs.go:123] Gathering logs for storage-provisioner [abd677ac30faba3fef9e9d562601ad0012d7f1b48852e08b8f2684efda6b0f2e] ...
	I0806 08:38:43.502928   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 abd677ac30faba3fef9e9d562601ad0012d7f1b48852e08b8f2684efda6b0f2e"
	I0806 08:38:43.539338   79614 logs.go:123] Gathering logs for kube-apiserver [49843be03d16addb301ffd26e517770ff29b00f049efb524a2139b8b2153325c] ...
	I0806 08:38:43.539369   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 49843be03d16addb301ffd26e517770ff29b00f049efb524a2139b8b2153325c"
	I0806 08:38:43.585813   79614 logs.go:123] Gathering logs for kube-scheduler [e596abcad3dc13d27548701f515beebba051e04a9e20f8ce1d018731d880ff1c] ...
	I0806 08:38:43.585855   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e596abcad3dc13d27548701f515beebba051e04a9e20f8ce1d018731d880ff1c"
	I0806 08:38:43.627141   79614 logs.go:123] Gathering logs for kube-proxy [75990046864b4a4db9f8c02f44de4eefbb0cf45584880b053bce4ca0b677ce4e] ...
	I0806 08:38:43.627173   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 75990046864b4a4db9f8c02f44de4eefbb0cf45584880b053bce4ca0b677ce4e"
	I0806 08:38:43.663754   79614 logs.go:123] Gathering logs for kube-controller-manager [05bbffa4c989e42bd020ebe6b943459ec1d7b31a6c5e5ba9a37056da15542658] ...
	I0806 08:38:43.663783   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 05bbffa4c989e42bd020ebe6b943459ec1d7b31a6c5e5ba9a37056da15542658"
	I0806 08:38:43.717594   79614 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:38:43.717631   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:38:43.469134   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:45.967796   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:43.765343   79927 cri.go:89] found id: ""
	I0806 08:38:43.765379   79927 logs.go:276] 0 containers: []
	W0806 08:38:43.765390   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:38:43.765398   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:38:43.765457   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:38:43.803813   79927 cri.go:89] found id: ""
	I0806 08:38:43.803846   79927 logs.go:276] 0 containers: []
	W0806 08:38:43.803859   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:38:43.803868   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:38:43.803933   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:38:43.841307   79927 cri.go:89] found id: ""
	I0806 08:38:43.841339   79927 logs.go:276] 0 containers: []
	W0806 08:38:43.841349   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:38:43.841356   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:38:43.841418   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:38:43.876488   79927 cri.go:89] found id: ""
	I0806 08:38:43.876521   79927 logs.go:276] 0 containers: []
	W0806 08:38:43.876534   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:38:43.876544   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:38:43.876606   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:38:43.910972   79927 cri.go:89] found id: ""
	I0806 08:38:43.911000   79927 logs.go:276] 0 containers: []
	W0806 08:38:43.911010   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:38:43.911017   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:38:43.911077   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:38:43.945541   79927 cri.go:89] found id: ""
	I0806 08:38:43.945573   79927 logs.go:276] 0 containers: []
	W0806 08:38:43.945584   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:38:43.945592   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:38:43.945655   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:38:43.981366   79927 cri.go:89] found id: ""
	I0806 08:38:43.981399   79927 logs.go:276] 0 containers: []
	W0806 08:38:43.981408   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:38:43.981413   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:38:43.981463   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:38:44.015349   79927 cri.go:89] found id: ""
	I0806 08:38:44.015375   79927 logs.go:276] 0 containers: []
	W0806 08:38:44.015384   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:38:44.015394   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:38:44.015408   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:38:44.068089   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:38:44.068129   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:38:44.082496   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:38:44.082524   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:38:44.157765   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:38:44.157788   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:38:44.157805   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:38:44.242842   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:38:44.242885   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:38:46.789280   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:38:46.803490   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:38:46.803562   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:38:46.844164   79927 cri.go:89] found id: ""
	I0806 08:38:46.844192   79927 logs.go:276] 0 containers: []
	W0806 08:38:46.844200   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:38:46.844206   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:38:46.844247   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:38:46.880968   79927 cri.go:89] found id: ""
	I0806 08:38:46.881003   79927 logs.go:276] 0 containers: []
	W0806 08:38:46.881015   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:38:46.881022   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:38:46.881082   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:38:46.919291   79927 cri.go:89] found id: ""
	I0806 08:38:46.919311   79927 logs.go:276] 0 containers: []
	W0806 08:38:46.919319   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:38:46.919324   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:38:46.919369   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:38:46.960591   79927 cri.go:89] found id: ""
	I0806 08:38:46.960616   79927 logs.go:276] 0 containers: []
	W0806 08:38:46.960626   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:38:46.960632   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:38:46.960681   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:38:46.995913   79927 cri.go:89] found id: ""
	I0806 08:38:46.995943   79927 logs.go:276] 0 containers: []
	W0806 08:38:46.995954   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:38:46.995961   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:38:46.996045   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:38:47.035185   79927 cri.go:89] found id: ""
	I0806 08:38:47.035213   79927 logs.go:276] 0 containers: []
	W0806 08:38:47.035225   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:38:47.035236   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:38:47.035292   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:38:47.079605   79927 cri.go:89] found id: ""
	I0806 08:38:47.079634   79927 logs.go:276] 0 containers: []
	W0806 08:38:47.079646   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:38:47.079653   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:38:47.079701   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:38:47.118819   79927 cri.go:89] found id: ""
	I0806 08:38:47.118849   79927 logs.go:276] 0 containers: []
	W0806 08:38:47.118861   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:38:47.118873   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:38:47.118888   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:38:47.181357   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:38:47.181394   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:38:47.197676   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:38:47.197707   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:38:47.270304   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:38:47.270331   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:38:47.270348   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:38:47.354145   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:38:47.354176   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:38:44.291978   79614 logs.go:123] Gathering logs for container status ...
	I0806 08:38:44.292026   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:38:46.843357   79614 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:38:46.860793   79614 api_server.go:72] duration metric: took 4m14.882891742s to wait for apiserver process to appear ...
	I0806 08:38:46.860820   79614 api_server.go:88] waiting for apiserver healthz status ...
	I0806 08:38:46.860858   79614 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:38:46.860923   79614 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:38:46.909044   79614 cri.go:89] found id: "49843be03d16addb301ffd26e517770ff29b00f049efb524a2139b8b2153325c"
	I0806 08:38:46.909068   79614 cri.go:89] found id: ""
	I0806 08:38:46.909076   79614 logs.go:276] 1 containers: [49843be03d16addb301ffd26e517770ff29b00f049efb524a2139b8b2153325c]
	I0806 08:38:46.909134   79614 ssh_runner.go:195] Run: which crictl
	I0806 08:38:46.913443   79614 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:38:46.913522   79614 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:38:46.955246   79614 cri.go:89] found id: "b5fa6fe8f1db6b59f47ea637515efb2864e0408ce1df0e6851b3bda0e240f5d3"
	I0806 08:38:46.955273   79614 cri.go:89] found id: ""
	I0806 08:38:46.955281   79614 logs.go:276] 1 containers: [b5fa6fe8f1db6b59f47ea637515efb2864e0408ce1df0e6851b3bda0e240f5d3]
	I0806 08:38:46.955347   79614 ssh_runner.go:195] Run: which crictl
	I0806 08:38:46.959914   79614 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:38:46.959994   79614 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:38:47.001639   79614 cri.go:89] found id: "59f63eeae644f7c8944cdb7b1d05d9e49fe84eb28e9ccbad2d2dd79846d7eb52"
	I0806 08:38:47.001665   79614 cri.go:89] found id: ""
	I0806 08:38:47.001673   79614 logs.go:276] 1 containers: [59f63eeae644f7c8944cdb7b1d05d9e49fe84eb28e9ccbad2d2dd79846d7eb52]
	I0806 08:38:47.001717   79614 ssh_runner.go:195] Run: which crictl
	I0806 08:38:47.006260   79614 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:38:47.006344   79614 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:38:47.049226   79614 cri.go:89] found id: "e596abcad3dc13d27548701f515beebba051e04a9e20f8ce1d018731d880ff1c"
	I0806 08:38:47.049252   79614 cri.go:89] found id: ""
	I0806 08:38:47.049261   79614 logs.go:276] 1 containers: [e596abcad3dc13d27548701f515beebba051e04a9e20f8ce1d018731d880ff1c]
	I0806 08:38:47.049317   79614 ssh_runner.go:195] Run: which crictl
	I0806 08:38:47.053544   79614 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:38:47.053626   79614 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:38:47.095805   79614 cri.go:89] found id: "75990046864b4a4db9f8c02f44de4eefbb0cf45584880b053bce4ca0b677ce4e"
	I0806 08:38:47.095832   79614 cri.go:89] found id: ""
	I0806 08:38:47.095841   79614 logs.go:276] 1 containers: [75990046864b4a4db9f8c02f44de4eefbb0cf45584880b053bce4ca0b677ce4e]
	I0806 08:38:47.095895   79614 ssh_runner.go:195] Run: which crictl
	I0806 08:38:47.100272   79614 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:38:47.100329   79614 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:38:47.146108   79614 cri.go:89] found id: "05bbffa4c989e42bd020ebe6b943459ec1d7b31a6c5e5ba9a37056da15542658"
	I0806 08:38:47.146132   79614 cri.go:89] found id: ""
	I0806 08:38:47.146142   79614 logs.go:276] 1 containers: [05bbffa4c989e42bd020ebe6b943459ec1d7b31a6c5e5ba9a37056da15542658]
	I0806 08:38:47.146200   79614 ssh_runner.go:195] Run: which crictl
	I0806 08:38:47.150808   79614 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:38:47.150863   79614 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:38:47.194122   79614 cri.go:89] found id: ""
	I0806 08:38:47.194157   79614 logs.go:276] 0 containers: []
	W0806 08:38:47.194169   79614 logs.go:278] No container was found matching "kindnet"
	I0806 08:38:47.194177   79614 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0806 08:38:47.194241   79614 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0806 08:38:47.237872   79614 cri.go:89] found id: "e668e4a5fa59e279bc227b6110983cc4660b639d5b1e1291ae60d361f682d82e"
	I0806 08:38:47.237906   79614 cri.go:89] found id: "abd677ac30faba3fef9e9d562601ad0012d7f1b48852e08b8f2684efda6b0f2e"
	I0806 08:38:47.237915   79614 cri.go:89] found id: ""
	I0806 08:38:47.237925   79614 logs.go:276] 2 containers: [e668e4a5fa59e279bc227b6110983cc4660b639d5b1e1291ae60d361f682d82e abd677ac30faba3fef9e9d562601ad0012d7f1b48852e08b8f2684efda6b0f2e]
	I0806 08:38:47.237988   79614 ssh_runner.go:195] Run: which crictl
	I0806 08:38:47.242528   79614 ssh_runner.go:195] Run: which crictl
	I0806 08:38:47.247317   79614 logs.go:123] Gathering logs for kube-proxy [75990046864b4a4db9f8c02f44de4eefbb0cf45584880b053bce4ca0b677ce4e] ...
	I0806 08:38:47.247341   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 75990046864b4a4db9f8c02f44de4eefbb0cf45584880b053bce4ca0b677ce4e"
	I0806 08:38:47.289612   79614 logs.go:123] Gathering logs for kube-controller-manager [05bbffa4c989e42bd020ebe6b943459ec1d7b31a6c5e5ba9a37056da15542658] ...
	I0806 08:38:47.289644   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 05bbffa4c989e42bd020ebe6b943459ec1d7b31a6c5e5ba9a37056da15542658"
	I0806 08:38:47.345619   79614 logs.go:123] Gathering logs for storage-provisioner [e668e4a5fa59e279bc227b6110983cc4660b639d5b1e1291ae60d361f682d82e] ...
	I0806 08:38:47.345653   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e668e4a5fa59e279bc227b6110983cc4660b639d5b1e1291ae60d361f682d82e"
	I0806 08:38:47.401281   79614 logs.go:123] Gathering logs for storage-provisioner [abd677ac30faba3fef9e9d562601ad0012d7f1b48852e08b8f2684efda6b0f2e] ...
	I0806 08:38:47.401306   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 abd677ac30faba3fef9e9d562601ad0012d7f1b48852e08b8f2684efda6b0f2e"
	I0806 08:38:47.438335   79614 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:38:47.438364   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 08:38:47.537334   79614 logs.go:123] Gathering logs for etcd [b5fa6fe8f1db6b59f47ea637515efb2864e0408ce1df0e6851b3bda0e240f5d3] ...
	I0806 08:38:47.537360   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b5fa6fe8f1db6b59f47ea637515efb2864e0408ce1df0e6851b3bda0e240f5d3"
	I0806 08:38:47.592101   79614 logs.go:123] Gathering logs for coredns [59f63eeae644f7c8944cdb7b1d05d9e49fe84eb28e9ccbad2d2dd79846d7eb52] ...
	I0806 08:38:47.592132   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 59f63eeae644f7c8944cdb7b1d05d9e49fe84eb28e9ccbad2d2dd79846d7eb52"
	I0806 08:38:47.626714   79614 logs.go:123] Gathering logs for kube-scheduler [e596abcad3dc13d27548701f515beebba051e04a9e20f8ce1d018731d880ff1c] ...
	I0806 08:38:47.626738   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e596abcad3dc13d27548701f515beebba051e04a9e20f8ce1d018731d880ff1c"
	I0806 08:38:47.663232   79614 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:38:47.663262   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:38:48.133308   79614 logs.go:123] Gathering logs for kubelet ...
	I0806 08:38:48.133344   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:38:48.189669   79614 logs.go:123] Gathering logs for dmesg ...
	I0806 08:38:48.189709   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:38:48.208530   79614 logs.go:123] Gathering logs for kube-apiserver [49843be03d16addb301ffd26e517770ff29b00f049efb524a2139b8b2153325c] ...
	I0806 08:38:48.208557   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 49843be03d16addb301ffd26e517770ff29b00f049efb524a2139b8b2153325c"
	I0806 08:38:48.264499   79614 logs.go:123] Gathering logs for container status ...
	I0806 08:38:48.264535   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:38:49.901481   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:38:49.914862   79927 kubeadm.go:597] duration metric: took 4m3.80396481s to restartPrimaryControlPlane
	W0806 08:38:49.914926   79927 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0806 08:38:49.914952   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0806 08:38:50.394829   79927 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0806 08:38:50.409425   79927 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0806 08:38:50.419349   79927 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0806 08:38:50.429688   79927 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0806 08:38:50.429711   79927 kubeadm.go:157] found existing configuration files:
	
	I0806 08:38:50.429756   79927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0806 08:38:50.440055   79927 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0806 08:38:50.440126   79927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0806 08:38:50.450734   79927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0806 08:38:50.461860   79927 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0806 08:38:50.461943   79927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0806 08:38:50.473324   79927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0806 08:38:50.483381   79927 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0806 08:38:50.483440   79927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0806 08:38:50.493247   79927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0806 08:38:50.502800   79927 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0806 08:38:50.502861   79927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0806 08:38:50.512897   79927 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0806 08:38:50.585658   79927 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0806 08:38:50.585795   79927 kubeadm.go:310] [preflight] Running pre-flight checks
	I0806 08:38:50.740936   79927 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0806 08:38:50.741079   79927 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0806 08:38:50.741200   79927 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0806 08:38:50.938086   79927 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0806 08:38:48.468228   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:50.968702   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:50.940205   79927 out.go:204]   - Generating certificates and keys ...
	I0806 08:38:50.940315   79927 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0806 08:38:50.940428   79927 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0806 08:38:50.940543   79927 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0806 08:38:50.940621   79927 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0806 08:38:50.940723   79927 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0806 08:38:50.940800   79927 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0806 08:38:50.940881   79927 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0806 08:38:50.941150   79927 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0806 08:38:50.941954   79927 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0806 08:38:50.942765   79927 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0806 08:38:50.942910   79927 kubeadm.go:310] [certs] Using the existing "sa" key
	I0806 08:38:50.942993   79927 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0806 08:38:51.039558   79927 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0806 08:38:51.265243   79927 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0806 08:38:51.348501   79927 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0806 08:38:51.509558   79927 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0806 08:38:51.530725   79927 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0806 08:38:51.534108   79927 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0806 08:38:51.534297   79927 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0806 08:38:51.680149   79927 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0806 08:38:51.682195   79927 out.go:204]   - Booting up control plane ...
	I0806 08:38:51.682331   79927 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0806 08:38:51.693736   79927 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0806 08:38:51.695315   79927 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0806 08:38:51.697107   79927 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0806 08:38:51.700906   79927 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0806 08:38:50.822580   79614 api_server.go:253] Checking apiserver healthz at https://192.168.50.235:8444/healthz ...
	I0806 08:38:50.827121   79614 api_server.go:279] https://192.168.50.235:8444/healthz returned 200:
	ok
	I0806 08:38:50.828327   79614 api_server.go:141] control plane version: v1.30.3
	I0806 08:38:50.828355   79614 api_server.go:131] duration metric: took 3.967524972s to wait for apiserver health ...
	I0806 08:38:50.828381   79614 system_pods.go:43] waiting for kube-system pods to appear ...
	I0806 08:38:50.828406   79614 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:38:50.828460   79614 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:38:50.875518   79614 cri.go:89] found id: "49843be03d16addb301ffd26e517770ff29b00f049efb524a2139b8b2153325c"
	I0806 08:38:50.875545   79614 cri.go:89] found id: ""
	I0806 08:38:50.875555   79614 logs.go:276] 1 containers: [49843be03d16addb301ffd26e517770ff29b00f049efb524a2139b8b2153325c]
	I0806 08:38:50.875611   79614 ssh_runner.go:195] Run: which crictl
	I0806 08:38:50.881260   79614 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:38:50.881321   79614 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:38:50.923252   79614 cri.go:89] found id: "b5fa6fe8f1db6b59f47ea637515efb2864e0408ce1df0e6851b3bda0e240f5d3"
	I0806 08:38:50.923281   79614 cri.go:89] found id: ""
	I0806 08:38:50.923292   79614 logs.go:276] 1 containers: [b5fa6fe8f1db6b59f47ea637515efb2864e0408ce1df0e6851b3bda0e240f5d3]
	I0806 08:38:50.923350   79614 ssh_runner.go:195] Run: which crictl
	I0806 08:38:50.928115   79614 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:38:50.928176   79614 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:38:50.971427   79614 cri.go:89] found id: "59f63eeae644f7c8944cdb7b1d05d9e49fe84eb28e9ccbad2d2dd79846d7eb52"
	I0806 08:38:50.971453   79614 cri.go:89] found id: ""
	I0806 08:38:50.971463   79614 logs.go:276] 1 containers: [59f63eeae644f7c8944cdb7b1d05d9e49fe84eb28e9ccbad2d2dd79846d7eb52]
	I0806 08:38:50.971520   79614 ssh_runner.go:195] Run: which crictl
	I0806 08:38:50.975665   79614 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:38:50.975789   79614 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:38:51.014601   79614 cri.go:89] found id: "e596abcad3dc13d27548701f515beebba051e04a9e20f8ce1d018731d880ff1c"
	I0806 08:38:51.014633   79614 cri.go:89] found id: ""
	I0806 08:38:51.014644   79614 logs.go:276] 1 containers: [e596abcad3dc13d27548701f515beebba051e04a9e20f8ce1d018731d880ff1c]
	I0806 08:38:51.014722   79614 ssh_runner.go:195] Run: which crictl
	I0806 08:38:51.019197   79614 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:38:51.019266   79614 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:38:51.057063   79614 cri.go:89] found id: "75990046864b4a4db9f8c02f44de4eefbb0cf45584880b053bce4ca0b677ce4e"
	I0806 08:38:51.057089   79614 cri.go:89] found id: ""
	I0806 08:38:51.057098   79614 logs.go:276] 1 containers: [75990046864b4a4db9f8c02f44de4eefbb0cf45584880b053bce4ca0b677ce4e]
	I0806 08:38:51.057159   79614 ssh_runner.go:195] Run: which crictl
	I0806 08:38:51.061504   79614 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:38:51.061571   79614 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:38:51.106496   79614 cri.go:89] found id: "05bbffa4c989e42bd020ebe6b943459ec1d7b31a6c5e5ba9a37056da15542658"
	I0806 08:38:51.106523   79614 cri.go:89] found id: ""
	I0806 08:38:51.106532   79614 logs.go:276] 1 containers: [05bbffa4c989e42bd020ebe6b943459ec1d7b31a6c5e5ba9a37056da15542658]
	I0806 08:38:51.106590   79614 ssh_runner.go:195] Run: which crictl
	I0806 08:38:51.111040   79614 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:38:51.111109   79614 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:38:51.156042   79614 cri.go:89] found id: ""
	I0806 08:38:51.156072   79614 logs.go:276] 0 containers: []
	W0806 08:38:51.156083   79614 logs.go:278] No container was found matching "kindnet"
	I0806 08:38:51.156090   79614 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0806 08:38:51.156152   79614 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0806 08:38:51.208712   79614 cri.go:89] found id: "e668e4a5fa59e279bc227b6110983cc4660b639d5b1e1291ae60d361f682d82e"
	I0806 08:38:51.208733   79614 cri.go:89] found id: "abd677ac30faba3fef9e9d562601ad0012d7f1b48852e08b8f2684efda6b0f2e"
	I0806 08:38:51.208737   79614 cri.go:89] found id: ""
	I0806 08:38:51.208744   79614 logs.go:276] 2 containers: [e668e4a5fa59e279bc227b6110983cc4660b639d5b1e1291ae60d361f682d82e abd677ac30faba3fef9e9d562601ad0012d7f1b48852e08b8f2684efda6b0f2e]
	I0806 08:38:51.208796   79614 ssh_runner.go:195] Run: which crictl
	I0806 08:38:51.213508   79614 ssh_runner.go:195] Run: which crictl
	I0806 08:38:51.219082   79614 logs.go:123] Gathering logs for etcd [b5fa6fe8f1db6b59f47ea637515efb2864e0408ce1df0e6851b3bda0e240f5d3] ...
	I0806 08:38:51.219104   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b5fa6fe8f1db6b59f47ea637515efb2864e0408ce1df0e6851b3bda0e240f5d3"
	I0806 08:38:51.266352   79614 logs.go:123] Gathering logs for storage-provisioner [e668e4a5fa59e279bc227b6110983cc4660b639d5b1e1291ae60d361f682d82e] ...
	I0806 08:38:51.266380   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e668e4a5fa59e279bc227b6110983cc4660b639d5b1e1291ae60d361f682d82e"
	I0806 08:38:51.315128   79614 logs.go:123] Gathering logs for storage-provisioner [abd677ac30faba3fef9e9d562601ad0012d7f1b48852e08b8f2684efda6b0f2e] ...
	I0806 08:38:51.315168   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 abd677ac30faba3fef9e9d562601ad0012d7f1b48852e08b8f2684efda6b0f2e"
	I0806 08:38:51.357872   79614 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:38:51.357914   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:38:51.768288   79614 logs.go:123] Gathering logs for container status ...
	I0806 08:38:51.768328   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:38:51.825304   79614 logs.go:123] Gathering logs for kubelet ...
	I0806 08:38:51.825341   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:38:51.888532   79614 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:38:51.888567   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 08:38:51.998271   79614 logs.go:123] Gathering logs for coredns [59f63eeae644f7c8944cdb7b1d05d9e49fe84eb28e9ccbad2d2dd79846d7eb52] ...
	I0806 08:38:51.998307   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 59f63eeae644f7c8944cdb7b1d05d9e49fe84eb28e9ccbad2d2dd79846d7eb52"
	I0806 08:38:52.034549   79614 logs.go:123] Gathering logs for kube-scheduler [e596abcad3dc13d27548701f515beebba051e04a9e20f8ce1d018731d880ff1c] ...
	I0806 08:38:52.034574   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e596abcad3dc13d27548701f515beebba051e04a9e20f8ce1d018731d880ff1c"
	I0806 08:38:52.079289   79614 logs.go:123] Gathering logs for kube-proxy [75990046864b4a4db9f8c02f44de4eefbb0cf45584880b053bce4ca0b677ce4e] ...
	I0806 08:38:52.079317   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 75990046864b4a4db9f8c02f44de4eefbb0cf45584880b053bce4ca0b677ce4e"
	I0806 08:38:52.124134   79614 logs.go:123] Gathering logs for kube-controller-manager [05bbffa4c989e42bd020ebe6b943459ec1d7b31a6c5e5ba9a37056da15542658] ...
	I0806 08:38:52.124163   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 05bbffa4c989e42bd020ebe6b943459ec1d7b31a6c5e5ba9a37056da15542658"
	I0806 08:38:52.191877   79614 logs.go:123] Gathering logs for dmesg ...
	I0806 08:38:52.191914   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:38:52.210712   79614 logs.go:123] Gathering logs for kube-apiserver [49843be03d16addb301ffd26e517770ff29b00f049efb524a2139b8b2153325c] ...
	I0806 08:38:52.210740   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 49843be03d16addb301ffd26e517770ff29b00f049efb524a2139b8b2153325c"
	I0806 08:38:54.770330   79614 system_pods.go:59] 8 kube-system pods found
	I0806 08:38:54.770356   79614 system_pods.go:61] "coredns-7db6d8ff4d-gx8tt" [48f83ba8-a238-4baf-94af-88370fefcb02] Running
	I0806 08:38:54.770361   79614 system_pods.go:61] "etcd-default-k8s-diff-port-990751" [bf342ba4-1e11-40bf-80fd-02b402043ecf] Running
	I0806 08:38:54.770365   79614 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-990751" [11ee42c4-c9e9-41cf-ace2-deac0f30153a] Running
	I0806 08:38:54.770369   79614 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-990751" [f17fca30-0afc-4ed8-a8cc-cb64e69723f3] Running
	I0806 08:38:54.770373   79614 system_pods.go:61] "kube-proxy-lpf88" [866c10f0-2c44-4c03-b460-ff2194bd1ec6] Running
	I0806 08:38:54.770377   79614 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-990751" [2e591ea7-07bd-464f-bf0e-bc24bfcca016] Running
	I0806 08:38:54.770383   79614 system_pods.go:61] "metrics-server-569cc877fc-hnxd4" [f369ac70-49b7-448a-9852-89232fb66da9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0806 08:38:54.770388   79614 system_pods.go:61] "storage-provisioner" [0faff3fc-d34b-418e-972b-95a2e36b577a] Running
	I0806 08:38:54.770395   79614 system_pods.go:74] duration metric: took 3.942006529s to wait for pod list to return data ...
	I0806 08:38:54.770401   79614 default_sa.go:34] waiting for default service account to be created ...
	I0806 08:38:54.773935   79614 default_sa.go:45] found service account: "default"
	I0806 08:38:54.773956   79614 default_sa.go:55] duration metric: took 3.549307ms for default service account to be created ...
	I0806 08:38:54.773963   79614 system_pods.go:116] waiting for k8s-apps to be running ...
	I0806 08:38:54.779748   79614 system_pods.go:86] 8 kube-system pods found
	I0806 08:38:54.779773   79614 system_pods.go:89] "coredns-7db6d8ff4d-gx8tt" [48f83ba8-a238-4baf-94af-88370fefcb02] Running
	I0806 08:38:54.779780   79614 system_pods.go:89] "etcd-default-k8s-diff-port-990751" [bf342ba4-1e11-40bf-80fd-02b402043ecf] Running
	I0806 08:38:54.779787   79614 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-990751" [11ee42c4-c9e9-41cf-ace2-deac0f30153a] Running
	I0806 08:38:54.779794   79614 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-990751" [f17fca30-0afc-4ed8-a8cc-cb64e69723f3] Running
	I0806 08:38:54.779801   79614 system_pods.go:89] "kube-proxy-lpf88" [866c10f0-2c44-4c03-b460-ff2194bd1ec6] Running
	I0806 08:38:54.779807   79614 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-990751" [2e591ea7-07bd-464f-bf0e-bc24bfcca016] Running
	I0806 08:38:54.779818   79614 system_pods.go:89] "metrics-server-569cc877fc-hnxd4" [f369ac70-49b7-448a-9852-89232fb66da9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0806 08:38:54.779825   79614 system_pods.go:89] "storage-provisioner" [0faff3fc-d34b-418e-972b-95a2e36b577a] Running
	I0806 08:38:54.779834   79614 system_pods.go:126] duration metric: took 5.864748ms to wait for k8s-apps to be running ...
	I0806 08:38:54.779842   79614 system_svc.go:44] waiting for kubelet service to be running ....
	I0806 08:38:54.779898   79614 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0806 08:38:54.796325   79614 system_svc.go:56] duration metric: took 16.472895ms WaitForService to wait for kubelet
	I0806 08:38:54.796353   79614 kubeadm.go:582] duration metric: took 4m22.818456744s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0806 08:38:54.796397   79614 node_conditions.go:102] verifying NodePressure condition ...
	I0806 08:38:54.800536   79614 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0806 08:38:54.800564   79614 node_conditions.go:123] node cpu capacity is 2
	I0806 08:38:54.800577   79614 node_conditions.go:105] duration metric: took 4.173499ms to run NodePressure ...
	I0806 08:38:54.800591   79614 start.go:241] waiting for startup goroutines ...
	I0806 08:38:54.800600   79614 start.go:246] waiting for cluster config update ...
	I0806 08:38:54.800613   79614 start.go:255] writing updated cluster config ...
	I0806 08:38:54.800936   79614 ssh_runner.go:195] Run: rm -f paused
	I0806 08:38:54.850703   79614 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0806 08:38:54.852771   79614 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-990751" cluster and "default" namespace by default
	I0806 08:38:53.469241   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:55.968335   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:58.469143   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:39:00.470785   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:39:02.967983   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:39:05.468164   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:39:07.469643   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:39:09.971774   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:39:12.469590   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:39:14.968935   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:39:17.468202   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:39:19.968322   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:39:21.968455   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:39:24.467291   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:39:26.475005   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:39:28.968164   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:39:31.467876   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:39:31.701723   79927 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0806 08:39:31.702058   79927 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0806 08:39:31.702242   79927 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0806 08:39:33.468293   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:39:34.961921   78922 pod_ready.go:81] duration metric: took 4m0.000413299s for pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace to be "Ready" ...
	E0806 08:39:34.961944   78922 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace to be "Ready" (will not retry!)
	I0806 08:39:34.961967   78922 pod_ready.go:38] duration metric: took 4m12.510868918s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0806 08:39:34.962010   78922 kubeadm.go:597] duration metric: took 4m19.878821661s to restartPrimaryControlPlane
	W0806 08:39:34.962072   78922 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0806 08:39:34.962105   78922 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0806 08:39:36.702773   79927 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0806 08:39:36.703067   79927 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0806 08:39:46.703330   79927 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0806 08:39:46.703594   79927 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0806 08:40:01.383913   78922 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.421778806s)
	I0806 08:40:01.383994   78922 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0806 08:40:01.409726   78922 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0806 08:40:01.432284   78922 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0806 08:40:01.445162   78922 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0806 08:40:01.445187   78922 kubeadm.go:157] found existing configuration files:
	
	I0806 08:40:01.445242   78922 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0806 08:40:01.466287   78922 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0806 08:40:01.466344   78922 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0806 08:40:01.481355   78922 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0806 08:40:01.499933   78922 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0806 08:40:01.500003   78922 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0806 08:40:01.518094   78922 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0806 08:40:01.527345   78922 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0806 08:40:01.527408   78922 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0806 08:40:01.539328   78922 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0806 08:40:01.549752   78922 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0806 08:40:01.549863   78922 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0806 08:40:01.561309   78922 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0806 08:40:01.609660   78922 kubeadm.go:310] W0806 08:40:01.587147    2983 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0806 08:40:01.610476   78922 kubeadm.go:310] W0806 08:40:01.588016    2983 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0806 08:40:01.749335   78922 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0806 08:40:06.704889   79927 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0806 08:40:06.705136   79927 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0806 08:40:10.616781   78922 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0-rc.0
	I0806 08:40:10.616868   78922 kubeadm.go:310] [preflight] Running pre-flight checks
	I0806 08:40:10.616966   78922 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0806 08:40:10.617110   78922 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0806 08:40:10.617234   78922 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0806 08:40:10.617332   78922 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0806 08:40:10.618729   78922 out.go:204]   - Generating certificates and keys ...
	I0806 08:40:10.618805   78922 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0806 08:40:10.618886   78922 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0806 08:40:10.618961   78922 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0806 08:40:10.619025   78922 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0806 08:40:10.619127   78922 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0806 08:40:10.619200   78922 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0806 08:40:10.619289   78922 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0806 08:40:10.619373   78922 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0806 08:40:10.619485   78922 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0806 08:40:10.619596   78922 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0806 08:40:10.619644   78922 kubeadm.go:310] [certs] Using the existing "sa" key
	I0806 08:40:10.619721   78922 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0806 08:40:10.619787   78922 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0806 08:40:10.619871   78922 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0806 08:40:10.619916   78922 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0806 08:40:10.619968   78922 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0806 08:40:10.620028   78922 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0806 08:40:10.620097   78922 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0806 08:40:10.620153   78922 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0806 08:40:10.621538   78922 out.go:204]   - Booting up control plane ...
	I0806 08:40:10.621632   78922 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0806 08:40:10.621739   78922 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0806 08:40:10.621802   78922 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0806 08:40:10.621898   78922 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0806 08:40:10.621979   78922 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0806 08:40:10.622011   78922 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0806 08:40:10.622117   78922 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0806 08:40:10.622225   78922 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0806 08:40:10.622291   78922 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.639662ms
	I0806 08:40:10.622351   78922 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0806 08:40:10.622399   78922 kubeadm.go:310] [api-check] The API server is healthy after 5.503393689s
	I0806 08:40:10.622505   78922 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0806 08:40:10.622639   78922 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0806 08:40:10.622692   78922 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0806 08:40:10.622843   78922 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-703340 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0806 08:40:10.622895   78922 kubeadm.go:310] [bootstrap-token] Using token: 5r8726.myu0qo9tvx4sxjd3
	I0806 08:40:10.624254   78922 out.go:204]   - Configuring RBAC rules ...
	I0806 08:40:10.624381   78922 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0806 08:40:10.624463   78922 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0806 08:40:10.624613   78922 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0806 08:40:10.624813   78922 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0806 08:40:10.624990   78922 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0806 08:40:10.625114   78922 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0806 08:40:10.625253   78922 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0806 08:40:10.625312   78922 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0806 08:40:10.625378   78922 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0806 08:40:10.625387   78922 kubeadm.go:310] 
	I0806 08:40:10.625464   78922 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0806 08:40:10.625472   78922 kubeadm.go:310] 
	I0806 08:40:10.625580   78922 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0806 08:40:10.625595   78922 kubeadm.go:310] 
	I0806 08:40:10.625615   78922 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0806 08:40:10.625668   78922 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0806 08:40:10.625708   78922 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0806 08:40:10.625712   78922 kubeadm.go:310] 
	I0806 08:40:10.625822   78922 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0806 08:40:10.625846   78922 kubeadm.go:310] 
	I0806 08:40:10.625891   78922 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0806 08:40:10.625898   78922 kubeadm.go:310] 
	I0806 08:40:10.625940   78922 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0806 08:40:10.626007   78922 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0806 08:40:10.626068   78922 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0806 08:40:10.626074   78922 kubeadm.go:310] 
	I0806 08:40:10.626173   78922 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0806 08:40:10.626272   78922 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0806 08:40:10.626282   78922 kubeadm.go:310] 
	I0806 08:40:10.626381   78922 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 5r8726.myu0qo9tvx4sxjd3 \
	I0806 08:40:10.626499   78922 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d36e5c1dfe989c7d561c809d7c06e0fc13926ac8f6d664cc5d6865ea5f79931b \
	I0806 08:40:10.626529   78922 kubeadm.go:310] 	--control-plane 
	I0806 08:40:10.626538   78922 kubeadm.go:310] 
	I0806 08:40:10.626639   78922 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0806 08:40:10.626647   78922 kubeadm.go:310] 
	I0806 08:40:10.626745   78922 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 5r8726.myu0qo9tvx4sxjd3 \
	I0806 08:40:10.626893   78922 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d36e5c1dfe989c7d561c809d7c06e0fc13926ac8f6d664cc5d6865ea5f79931b 
	I0806 08:40:10.626920   78922 cni.go:84] Creating CNI manager for ""
	I0806 08:40:10.626929   78922 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0806 08:40:10.628682   78922 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0806 08:40:10.630096   78922 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0806 08:40:10.642492   78922 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0806 08:40:10.667673   78922 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0806 08:40:10.667763   78922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 08:40:10.667833   78922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-703340 minikube.k8s.io/updated_at=2024_08_06T08_40_10_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=e92cb06692f5ea1ba801d10d148e5e92e807f9c8 minikube.k8s.io/name=no-preload-703340 minikube.k8s.io/primary=true
	I0806 08:40:10.705177   78922 ops.go:34] apiserver oom_adj: -16
	I0806 08:40:10.901976   78922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 08:40:11.401990   78922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 08:40:11.902906   78922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 08:40:12.402461   78922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 08:40:12.902909   78922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 08:40:13.402018   78922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 08:40:13.902098   78922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 08:40:14.402869   78922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 08:40:14.902034   78922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 08:40:14.999571   78922 kubeadm.go:1113] duration metric: took 4.331874752s to wait for elevateKubeSystemPrivileges
	I0806 08:40:14.999615   78922 kubeadm.go:394] duration metric: took 4m59.975766448s to StartCluster
	I0806 08:40:14.999636   78922 settings.go:142] acquiring lock: {Name:mkb0bfbcae3b4205ef43487a9de84cfd8afd2e5e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 08:40:14.999722   78922 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19370-13057/kubeconfig
	I0806 08:40:15.001429   78922 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-13057/kubeconfig: {Name:mk5c066914acf8e36556b29792f75b98076ca34a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 08:40:15.001696   78922 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.126 Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0806 08:40:15.001753   78922 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0806 08:40:15.001864   78922 addons.go:69] Setting storage-provisioner=true in profile "no-preload-703340"
	I0806 08:40:15.001909   78922 addons.go:234] Setting addon storage-provisioner=true in "no-preload-703340"
	W0806 08:40:15.001921   78922 addons.go:243] addon storage-provisioner should already be in state true
	I0806 08:40:15.001919   78922 addons.go:69] Setting default-storageclass=true in profile "no-preload-703340"
	I0806 08:40:15.001945   78922 addons.go:69] Setting metrics-server=true in profile "no-preload-703340"
	I0806 08:40:15.001968   78922 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-703340"
	I0806 08:40:15.001984   78922 addons.go:234] Setting addon metrics-server=true in "no-preload-703340"
	W0806 08:40:15.001997   78922 addons.go:243] addon metrics-server should already be in state true
	I0806 08:40:15.001958   78922 host.go:66] Checking if "no-preload-703340" exists ...
	I0806 08:40:15.002030   78922 host.go:66] Checking if "no-preload-703340" exists ...
	I0806 08:40:15.001918   78922 config.go:182] Loaded profile config "no-preload-703340": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-rc.0
	I0806 08:40:15.002284   78922 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:40:15.002295   78922 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:40:15.002310   78922 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:40:15.002319   78922 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:40:15.002354   78922 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:40:15.002312   78922 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:40:15.003644   78922 out.go:177] * Verifying Kubernetes components...
	I0806 08:40:15.005157   78922 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 08:40:15.018061   78922 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35339
	I0806 08:40:15.018548   78922 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:40:15.018866   78922 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44375
	I0806 08:40:15.019201   78922 main.go:141] libmachine: Using API Version  1
	I0806 08:40:15.019226   78922 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:40:15.019270   78922 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:40:15.019582   78922 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:40:15.019743   78922 main.go:141] libmachine: Using API Version  1
	I0806 08:40:15.019748   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetState
	I0806 08:40:15.019765   78922 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:40:15.020089   78922 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:40:15.020617   78922 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:40:15.020664   78922 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:40:15.020815   78922 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32773
	I0806 08:40:15.021303   78922 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:40:15.021775   78922 main.go:141] libmachine: Using API Version  1
	I0806 08:40:15.021796   78922 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:40:15.022125   78922 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:40:15.022549   78922 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:40:15.022577   78922 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:40:15.023653   78922 addons.go:234] Setting addon default-storageclass=true in "no-preload-703340"
	W0806 08:40:15.023678   78922 addons.go:243] addon default-storageclass should already be in state true
	I0806 08:40:15.023708   78922 host.go:66] Checking if "no-preload-703340" exists ...
	I0806 08:40:15.024066   78922 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:40:15.024120   78922 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:40:15.036138   78922 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44447
	I0806 08:40:15.036586   78922 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:40:15.037132   78922 main.go:141] libmachine: Using API Version  1
	I0806 08:40:15.037153   78922 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:40:15.037609   78922 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:40:15.037921   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetState
	I0806 08:40:15.038640   78922 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42341
	I0806 08:40:15.038777   78922 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34793
	I0806 08:40:15.039003   78922 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:40:15.039098   78922 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:40:15.039473   78922 main.go:141] libmachine: Using API Version  1
	I0806 08:40:15.039489   78922 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:40:15.039607   78922 main.go:141] libmachine: Using API Version  1
	I0806 08:40:15.039623   78922 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:40:15.039820   78922 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:40:15.040041   78922 main.go:141] libmachine: (no-preload-703340) Calling .DriverName
	I0806 08:40:15.040181   78922 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:40:15.040414   78922 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:40:15.040456   78922 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:40:15.040477   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetState
	I0806 08:40:15.042137   78922 main.go:141] libmachine: (no-preload-703340) Calling .DriverName
	I0806 08:40:15.042223   78922 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0806 08:40:15.043522   78922 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0806 08:40:15.043585   78922 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0806 08:40:15.043601   78922 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0806 08:40:15.043619   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHHostname
	I0806 08:40:15.044812   78922 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0806 08:40:15.044829   78922 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0806 08:40:15.044854   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHHostname
	I0806 08:40:15.047376   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:40:15.047978   78922 main.go:141] libmachine: (no-preload-703340) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fa:d3", ip: ""} in network mk-no-preload-703340: {Iface:virbr4 ExpiryTime:2024-08-06 09:34:49 +0000 UTC Type:0 Mac:52:54:00:a1:fa:d3 Iaid: IPaddr:192.168.72.126 Prefix:24 Hostname:no-preload-703340 Clientid:01:52:54:00:a1:fa:d3}
	I0806 08:40:15.048001   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined IP address 192.168.72.126 and MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:40:15.048228   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:40:15.048269   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHPort
	I0806 08:40:15.048485   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHKeyPath
	I0806 08:40:15.048617   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHUsername
	I0806 08:40:15.048667   78922 main.go:141] libmachine: (no-preload-703340) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fa:d3", ip: ""} in network mk-no-preload-703340: {Iface:virbr4 ExpiryTime:2024-08-06 09:34:49 +0000 UTC Type:0 Mac:52:54:00:a1:fa:d3 Iaid: IPaddr:192.168.72.126 Prefix:24 Hostname:no-preload-703340 Clientid:01:52:54:00:a1:fa:d3}
	I0806 08:40:15.048688   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined IP address 192.168.72.126 and MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:40:15.048800   78922 sshutil.go:53] new ssh client: &{IP:192.168.72.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/no-preload-703340/id_rsa Username:docker}
	I0806 08:40:15.049107   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHPort
	I0806 08:40:15.049267   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHKeyPath
	I0806 08:40:15.049403   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHUsername
	I0806 08:40:15.049518   78922 sshutil.go:53] new ssh client: &{IP:192.168.72.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/no-preload-703340/id_rsa Username:docker}
	I0806 08:40:15.059440   78922 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33377
	I0806 08:40:15.059910   78922 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:40:15.060423   78922 main.go:141] libmachine: Using API Version  1
	I0806 08:40:15.060441   78922 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:40:15.060741   78922 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:40:15.060922   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetState
	I0806 08:40:15.062303   78922 main.go:141] libmachine: (no-preload-703340) Calling .DriverName
	I0806 08:40:15.062491   78922 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0806 08:40:15.062503   78922 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0806 08:40:15.062516   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHHostname
	I0806 08:40:15.065588   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:40:15.066008   78922 main.go:141] libmachine: (no-preload-703340) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fa:d3", ip: ""} in network mk-no-preload-703340: {Iface:virbr4 ExpiryTime:2024-08-06 09:34:49 +0000 UTC Type:0 Mac:52:54:00:a1:fa:d3 Iaid: IPaddr:192.168.72.126 Prefix:24 Hostname:no-preload-703340 Clientid:01:52:54:00:a1:fa:d3}
	I0806 08:40:15.066024   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined IP address 192.168.72.126 and MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:40:15.066147   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHPort
	I0806 08:40:15.066302   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHKeyPath
	I0806 08:40:15.066407   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHUsername
	I0806 08:40:15.066500   78922 sshutil.go:53] new ssh client: &{IP:192.168.72.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/no-preload-703340/id_rsa Username:docker}
	I0806 08:40:15.209763   78922 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0806 08:40:15.231626   78922 node_ready.go:35] waiting up to 6m0s for node "no-preload-703340" to be "Ready" ...
	I0806 08:40:15.239924   78922 node_ready.go:49] node "no-preload-703340" has status "Ready":"True"
	I0806 08:40:15.239952   78922 node_ready.go:38] duration metric: took 8.292674ms for node "no-preload-703340" to be "Ready" ...
	I0806 08:40:15.239964   78922 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0806 08:40:15.247043   78922 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6f6b679f8f-9dt4q" in "kube-system" namespace to be "Ready" ...
	I0806 08:40:15.320295   78922 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0806 08:40:15.343811   78922 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0806 08:40:15.343838   78922 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0806 08:40:15.367466   78922 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0806 08:40:15.392170   78922 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0806 08:40:15.392200   78922 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0806 08:40:15.440711   78922 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0806 08:40:15.440741   78922 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0806 08:40:15.483982   78922 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0806 08:40:16.105714   78922 main.go:141] libmachine: Making call to close driver server
	I0806 08:40:16.105739   78922 main.go:141] libmachine: (no-preload-703340) Calling .Close
	I0806 08:40:16.105768   78922 main.go:141] libmachine: Making call to close driver server
	I0806 08:40:16.105789   78922 main.go:141] libmachine: (no-preload-703340) Calling .Close
	I0806 08:40:16.106051   78922 main.go:141] libmachine: (no-preload-703340) DBG | Closing plugin on server side
	I0806 08:40:16.106086   78922 main.go:141] libmachine: (no-preload-703340) DBG | Closing plugin on server side
	I0806 08:40:16.106097   78922 main.go:141] libmachine: Successfully made call to close driver server
	I0806 08:40:16.106113   78922 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 08:40:16.106133   78922 main.go:141] libmachine: Making call to close driver server
	I0806 08:40:16.106144   78922 main.go:141] libmachine: (no-preload-703340) Calling .Close
	I0806 08:40:16.106176   78922 main.go:141] libmachine: Successfully made call to close driver server
	I0806 08:40:16.106254   78922 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 08:40:16.106280   78922 main.go:141] libmachine: Making call to close driver server
	I0806 08:40:16.106287   78922 main.go:141] libmachine: (no-preload-703340) Calling .Close
	I0806 08:40:16.107761   78922 main.go:141] libmachine: Successfully made call to close driver server
	I0806 08:40:16.107779   78922 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 08:40:16.107767   78922 main.go:141] libmachine: (no-preload-703340) DBG | Closing plugin on server side
	I0806 08:40:16.107805   78922 main.go:141] libmachine: Successfully made call to close driver server
	I0806 08:40:16.107820   78922 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 08:40:16.107874   78922 main.go:141] libmachine: (no-preload-703340) DBG | Closing plugin on server side
	I0806 08:40:16.133295   78922 main.go:141] libmachine: Making call to close driver server
	I0806 08:40:16.133318   78922 main.go:141] libmachine: (no-preload-703340) Calling .Close
	I0806 08:40:16.133642   78922 main.go:141] libmachine: Successfully made call to close driver server
	I0806 08:40:16.133660   78922 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 08:40:16.133680   78922 main.go:141] libmachine: (no-preload-703340) DBG | Closing plugin on server side
	I0806 08:40:16.628763   78922 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.144732374s)
	I0806 08:40:16.628864   78922 main.go:141] libmachine: Making call to close driver server
	I0806 08:40:16.628891   78922 main.go:141] libmachine: (no-preload-703340) Calling .Close
	I0806 08:40:16.629348   78922 main.go:141] libmachine: Successfully made call to close driver server
	I0806 08:40:16.629375   78922 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 08:40:16.629385   78922 main.go:141] libmachine: Making call to close driver server
	I0806 08:40:16.629396   78922 main.go:141] libmachine: (no-preload-703340) Calling .Close
	I0806 08:40:16.629406   78922 main.go:141] libmachine: (no-preload-703340) DBG | Closing plugin on server side
	I0806 08:40:16.629673   78922 main.go:141] libmachine: Successfully made call to close driver server
	I0806 08:40:16.629690   78922 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 08:40:16.629702   78922 addons.go:475] Verifying addon metrics-server=true in "no-preload-703340"
	I0806 08:40:16.631683   78922 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0806 08:40:16.632723   78922 addons.go:510] duration metric: took 1.630970284s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0806 08:40:17.264829   78922 pod_ready.go:102] pod "coredns-6f6b679f8f-9dt4q" in "kube-system" namespace has status "Ready":"False"
	I0806 08:40:19.754652   78922 pod_ready.go:102] pod "coredns-6f6b679f8f-9dt4q" in "kube-system" namespace has status "Ready":"False"
	I0806 08:40:21.753116   78922 pod_ready.go:92] pod "coredns-6f6b679f8f-9dt4q" in "kube-system" namespace has status "Ready":"True"
	I0806 08:40:21.753139   78922 pod_ready.go:81] duration metric: took 6.506065964s for pod "coredns-6f6b679f8f-9dt4q" in "kube-system" namespace to be "Ready" ...
	I0806 08:40:21.753149   78922 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6f6b679f8f-mpfwj" in "kube-system" namespace to be "Ready" ...
	I0806 08:40:21.757521   78922 pod_ready.go:92] pod "coredns-6f6b679f8f-mpfwj" in "kube-system" namespace has status "Ready":"True"
	I0806 08:40:21.757542   78922 pod_ready.go:81] duration metric: took 4.3867ms for pod "coredns-6f6b679f8f-mpfwj" in "kube-system" namespace to be "Ready" ...
	I0806 08:40:21.757553   78922 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-703340" in "kube-system" namespace to be "Ready" ...
	I0806 08:40:21.762161   78922 pod_ready.go:92] pod "etcd-no-preload-703340" in "kube-system" namespace has status "Ready":"True"
	I0806 08:40:21.762179   78922 pod_ready.go:81] duration metric: took 4.619377ms for pod "etcd-no-preload-703340" in "kube-system" namespace to be "Ready" ...
	I0806 08:40:21.762188   78922 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-703340" in "kube-system" namespace to be "Ready" ...
	I0806 08:40:21.766394   78922 pod_ready.go:92] pod "kube-apiserver-no-preload-703340" in "kube-system" namespace has status "Ready":"True"
	I0806 08:40:21.766411   78922 pod_ready.go:81] duration metric: took 4.216869ms for pod "kube-apiserver-no-preload-703340" in "kube-system" namespace to be "Ready" ...
	I0806 08:40:21.766421   78922 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-703340" in "kube-system" namespace to be "Ready" ...
	I0806 08:40:21.770569   78922 pod_ready.go:92] pod "kube-controller-manager-no-preload-703340" in "kube-system" namespace has status "Ready":"True"
	I0806 08:40:21.770592   78922 pod_ready.go:81] duration metric: took 4.160997ms for pod "kube-controller-manager-no-preload-703340" in "kube-system" namespace to be "Ready" ...
	I0806 08:40:21.770600   78922 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zsv9x" in "kube-system" namespace to be "Ready" ...
	I0806 08:40:22.151795   78922 pod_ready.go:92] pod "kube-proxy-zsv9x" in "kube-system" namespace has status "Ready":"True"
	I0806 08:40:22.151819   78922 pod_ready.go:81] duration metric: took 381.212951ms for pod "kube-proxy-zsv9x" in "kube-system" namespace to be "Ready" ...
	I0806 08:40:22.151829   78922 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-703340" in "kube-system" namespace to be "Ready" ...
	I0806 08:40:22.550798   78922 pod_ready.go:92] pod "kube-scheduler-no-preload-703340" in "kube-system" namespace has status "Ready":"True"
	I0806 08:40:22.550821   78922 pod_ready.go:81] duration metric: took 398.985533ms for pod "kube-scheduler-no-preload-703340" in "kube-system" namespace to be "Ready" ...
	I0806 08:40:22.550830   78922 pod_ready.go:38] duration metric: took 7.310852309s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0806 08:40:22.550842   78922 api_server.go:52] waiting for apiserver process to appear ...
	I0806 08:40:22.550888   78922 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:40:22.567410   78922 api_server.go:72] duration metric: took 7.565683505s to wait for apiserver process to appear ...
	I0806 08:40:22.567441   78922 api_server.go:88] waiting for apiserver healthz status ...
	I0806 08:40:22.567464   78922 api_server.go:253] Checking apiserver healthz at https://192.168.72.126:8443/healthz ...
	I0806 08:40:22.572514   78922 api_server.go:279] https://192.168.72.126:8443/healthz returned 200:
	ok
	I0806 08:40:22.573572   78922 api_server.go:141] control plane version: v1.31.0-rc.0
	I0806 08:40:22.573596   78922 api_server.go:131] duration metric: took 6.147703ms to wait for apiserver health ...
	I0806 08:40:22.573606   78922 system_pods.go:43] waiting for kube-system pods to appear ...
	I0806 08:40:22.754910   78922 system_pods.go:59] 9 kube-system pods found
	I0806 08:40:22.754937   78922 system_pods.go:61] "coredns-6f6b679f8f-9dt4q" [a62f9c28-2e06-4072-b459-3a879cba65d6] Running
	I0806 08:40:22.754942   78922 system_pods.go:61] "coredns-6f6b679f8f-mpfwj" [9da40ee8-73c2-4479-a2aa-32a2dce7e21d] Running
	I0806 08:40:22.754945   78922 system_pods.go:61] "etcd-no-preload-703340" [2b89a18a-69f6-429b-8616-b84bf9e666b4] Running
	I0806 08:40:22.754950   78922 system_pods.go:61] "kube-apiserver-no-preload-703340" [8b863a60-03ad-4d18-855a-12d29dc63612] Running
	I0806 08:40:22.754953   78922 system_pods.go:61] "kube-controller-manager-no-preload-703340" [8490ec19-da74-4c39-9a69-17918506ea10] Running
	I0806 08:40:22.754956   78922 system_pods.go:61] "kube-proxy-zsv9x" [2859a7f2-f880-4be4-806e-163eb9caf535] Running
	I0806 08:40:22.754959   78922 system_pods.go:61] "kube-scheduler-no-preload-703340" [e003ccd3-48e7-4e4e-9f05-166003935575] Running
	I0806 08:40:22.754964   78922 system_pods.go:61] "metrics-server-6867b74b74-tqnjm" [4ab2bdd5-bd70-445a-84ac-6a45bdaf06a7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0806 08:40:22.754972   78922 system_pods.go:61] "storage-provisioner" [5497ddbc-8ed5-4b42-80e8-81aa7c1b8fa7] Running
	I0806 08:40:22.754980   78922 system_pods.go:74] duration metric: took 181.367063ms to wait for pod list to return data ...
	I0806 08:40:22.754990   78922 default_sa.go:34] waiting for default service account to be created ...
	I0806 08:40:22.951215   78922 default_sa.go:45] found service account: "default"
	I0806 08:40:22.951242   78922 default_sa.go:55] duration metric: took 196.244443ms for default service account to be created ...
	I0806 08:40:22.951250   78922 system_pods.go:116] waiting for k8s-apps to be running ...
	I0806 08:40:23.154822   78922 system_pods.go:86] 9 kube-system pods found
	I0806 08:40:23.154852   78922 system_pods.go:89] "coredns-6f6b679f8f-9dt4q" [a62f9c28-2e06-4072-b459-3a879cba65d6] Running
	I0806 08:40:23.154858   78922 system_pods.go:89] "coredns-6f6b679f8f-mpfwj" [9da40ee8-73c2-4479-a2aa-32a2dce7e21d] Running
	I0806 08:40:23.154862   78922 system_pods.go:89] "etcd-no-preload-703340" [2b89a18a-69f6-429b-8616-b84bf9e666b4] Running
	I0806 08:40:23.154865   78922 system_pods.go:89] "kube-apiserver-no-preload-703340" [8b863a60-03ad-4d18-855a-12d29dc63612] Running
	I0806 08:40:23.154870   78922 system_pods.go:89] "kube-controller-manager-no-preload-703340" [8490ec19-da74-4c39-9a69-17918506ea10] Running
	I0806 08:40:23.154875   78922 system_pods.go:89] "kube-proxy-zsv9x" [2859a7f2-f880-4be4-806e-163eb9caf535] Running
	I0806 08:40:23.154879   78922 system_pods.go:89] "kube-scheduler-no-preload-703340" [e003ccd3-48e7-4e4e-9f05-166003935575] Running
	I0806 08:40:23.154886   78922 system_pods.go:89] "metrics-server-6867b74b74-tqnjm" [4ab2bdd5-bd70-445a-84ac-6a45bdaf06a7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0806 08:40:23.154890   78922 system_pods.go:89] "storage-provisioner" [5497ddbc-8ed5-4b42-80e8-81aa7c1b8fa7] Running
	I0806 08:40:23.154898   78922 system_pods.go:126] duration metric: took 203.642884ms to wait for k8s-apps to be running ...
	I0806 08:40:23.154905   78922 system_svc.go:44] waiting for kubelet service to be running ....
	I0806 08:40:23.154948   78922 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0806 08:40:23.170577   78922 system_svc.go:56] duration metric: took 15.661926ms WaitForService to wait for kubelet
	I0806 08:40:23.170609   78922 kubeadm.go:582] duration metric: took 8.168885934s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0806 08:40:23.170631   78922 node_conditions.go:102] verifying NodePressure condition ...
	I0806 08:40:23.352727   78922 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0806 08:40:23.352753   78922 node_conditions.go:123] node cpu capacity is 2
	I0806 08:40:23.352763   78922 node_conditions.go:105] duration metric: took 182.12736ms to run NodePressure ...
	I0806 08:40:23.352774   78922 start.go:241] waiting for startup goroutines ...
	I0806 08:40:23.352780   78922 start.go:246] waiting for cluster config update ...
	I0806 08:40:23.352790   78922 start.go:255] writing updated cluster config ...
	I0806 08:40:23.353072   78922 ssh_runner.go:195] Run: rm -f paused
	I0806 08:40:23.401609   78922 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-rc.0 (minor skew: 1)
	I0806 08:40:23.403639   78922 out.go:177] * Done! kubectl is now configured to use "no-preload-703340" cluster and "default" namespace by default
	I0806 08:40:46.706828   79927 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0806 08:40:46.707130   79927 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0806 08:40:46.707156   79927 kubeadm.go:310] 
	I0806 08:40:46.707218   79927 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0806 08:40:46.707278   79927 kubeadm.go:310] 		timed out waiting for the condition
	I0806 08:40:46.707304   79927 kubeadm.go:310] 
	I0806 08:40:46.707381   79927 kubeadm.go:310] 	This error is likely caused by:
	I0806 08:40:46.707439   79927 kubeadm.go:310] 		- The kubelet is not running
	I0806 08:40:46.707569   79927 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0806 08:40:46.707580   79927 kubeadm.go:310] 
	I0806 08:40:46.707705   79927 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0806 08:40:46.707756   79927 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0806 08:40:46.707805   79927 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0806 08:40:46.707814   79927 kubeadm.go:310] 
	I0806 08:40:46.707974   79927 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0806 08:40:46.708118   79927 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0806 08:40:46.708130   79927 kubeadm.go:310] 
	I0806 08:40:46.708297   79927 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0806 08:40:46.708448   79927 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0806 08:40:46.708577   79927 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0806 08:40:46.708673   79927 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0806 08:40:46.708684   79927 kubeadm.go:310] 
	I0806 08:40:46.709441   79927 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0806 08:40:46.709593   79927 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0806 08:40:46.709791   79927 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0806 08:40:46.709862   79927 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0806 08:40:46.709928   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0806 08:40:47.169443   79927 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0806 08:40:47.187784   79927 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0806 08:40:47.199239   79927 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0806 08:40:47.199267   79927 kubeadm.go:157] found existing configuration files:
	
	I0806 08:40:47.199325   79927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0806 08:40:47.209294   79927 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0806 08:40:47.209361   79927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0806 08:40:47.219483   79927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0806 08:40:47.229674   79927 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0806 08:40:47.229732   79927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0806 08:40:47.240729   79927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0806 08:40:47.250964   79927 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0806 08:40:47.251018   79927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0806 08:40:47.261322   79927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0806 08:40:47.271411   79927 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0806 08:40:47.271502   79927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0806 08:40:47.281986   79927 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0806 08:40:47.514409   79927 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0806 08:42:43.760446   79927 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0806 08:42:43.760586   79927 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0806 08:42:43.762233   79927 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0806 08:42:43.762301   79927 kubeadm.go:310] [preflight] Running pre-flight checks
	I0806 08:42:43.762391   79927 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0806 08:42:43.762504   79927 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0806 08:42:43.762602   79927 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0806 08:42:43.762653   79927 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0806 08:42:43.764435   79927 out.go:204]   - Generating certificates and keys ...
	I0806 08:42:43.764523   79927 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0806 08:42:43.764587   79927 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0806 08:42:43.764684   79927 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0806 08:42:43.764761   79927 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0806 08:42:43.764833   79927 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0806 08:42:43.764905   79927 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0806 08:42:43.765002   79927 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0806 08:42:43.765100   79927 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0806 08:42:43.765202   79927 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0806 08:42:43.765338   79927 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0806 08:42:43.765400   79927 kubeadm.go:310] [certs] Using the existing "sa" key
	I0806 08:42:43.765516   79927 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0806 08:42:43.765601   79927 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0806 08:42:43.765680   79927 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0806 08:42:43.765774   79927 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0806 08:42:43.765854   79927 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0806 08:42:43.766002   79927 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0806 08:42:43.766133   79927 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0806 08:42:43.766194   79927 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0806 08:42:43.766273   79927 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0806 08:42:43.767861   79927 out.go:204]   - Booting up control plane ...
	I0806 08:42:43.767957   79927 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0806 08:42:43.768051   79927 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0806 08:42:43.768136   79927 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0806 08:42:43.768232   79927 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0806 08:42:43.768487   79927 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0806 08:42:43.768559   79927 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0806 08:42:43.768635   79927 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0806 08:42:43.768862   79927 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0806 08:42:43.768945   79927 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0806 08:42:43.769124   79927 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0806 08:42:43.769195   79927 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0806 08:42:43.769379   79927 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0806 08:42:43.769463   79927 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0806 08:42:43.769721   79927 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0806 08:42:43.769823   79927 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0806 08:42:43.770089   79927 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0806 08:42:43.770102   79927 kubeadm.go:310] 
	I0806 08:42:43.770161   79927 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0806 08:42:43.770223   79927 kubeadm.go:310] 		timed out waiting for the condition
	I0806 08:42:43.770234   79927 kubeadm.go:310] 
	I0806 08:42:43.770274   79927 kubeadm.go:310] 	This error is likely caused by:
	I0806 08:42:43.770303   79927 kubeadm.go:310] 		- The kubelet is not running
	I0806 08:42:43.770389   79927 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0806 08:42:43.770399   79927 kubeadm.go:310] 
	I0806 08:42:43.770487   79927 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0806 08:42:43.770519   79927 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0806 08:42:43.770547   79927 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0806 08:42:43.770554   79927 kubeadm.go:310] 
	I0806 08:42:43.770657   79927 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0806 08:42:43.770756   79927 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0806 08:42:43.770769   79927 kubeadm.go:310] 
	I0806 08:42:43.770932   79927 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0806 08:42:43.771079   79927 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0806 08:42:43.771178   79927 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0806 08:42:43.771266   79927 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0806 08:42:43.771315   79927 kubeadm.go:310] 
	I0806 08:42:43.771336   79927 kubeadm.go:394] duration metric: took 7m57.719144545s to StartCluster
	I0806 08:42:43.771381   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:42:43.771451   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:42:43.817981   79927 cri.go:89] found id: ""
	I0806 08:42:43.818010   79927 logs.go:276] 0 containers: []
	W0806 08:42:43.818021   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:42:43.818028   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:42:43.818087   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:42:43.855481   79927 cri.go:89] found id: ""
	I0806 08:42:43.855506   79927 logs.go:276] 0 containers: []
	W0806 08:42:43.855517   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:42:43.855524   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:42:43.855584   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:42:43.893110   79927 cri.go:89] found id: ""
	I0806 08:42:43.893141   79927 logs.go:276] 0 containers: []
	W0806 08:42:43.893152   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:42:43.893172   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:42:43.893255   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:42:43.929785   79927 cri.go:89] found id: ""
	I0806 08:42:43.929844   79927 logs.go:276] 0 containers: []
	W0806 08:42:43.929853   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:42:43.929858   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:42:43.929921   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:42:43.970501   79927 cri.go:89] found id: ""
	I0806 08:42:43.970533   79927 logs.go:276] 0 containers: []
	W0806 08:42:43.970541   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:42:43.970546   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:42:43.970590   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:42:44.007344   79927 cri.go:89] found id: ""
	I0806 08:42:44.007373   79927 logs.go:276] 0 containers: []
	W0806 08:42:44.007382   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:42:44.007388   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:42:44.007434   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:42:44.043724   79927 cri.go:89] found id: ""
	I0806 08:42:44.043747   79927 logs.go:276] 0 containers: []
	W0806 08:42:44.043755   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:42:44.043761   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:42:44.043810   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:42:44.084642   79927 cri.go:89] found id: ""
	I0806 08:42:44.084682   79927 logs.go:276] 0 containers: []
	W0806 08:42:44.084694   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:42:44.084709   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:42:44.084724   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:42:44.157081   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:42:44.157125   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:42:44.175703   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:42:44.175746   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:42:44.275665   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:42:44.275699   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:42:44.275717   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:42:44.385031   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:42:44.385076   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0806 08:42:44.425573   79927 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0806 08:42:44.425616   79927 out.go:239] * 
	W0806 08:42:44.425683   79927 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0806 08:42:44.425713   79927 out.go:239] * 
	W0806 08:42:44.426633   79927 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0806 08:42:44.429961   79927 out.go:177] 
	W0806 08:42:44.431352   79927 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0806 08:42:44.431398   79927 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0806 08:42:44.431414   79927 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0806 08:42:44.432963   79927 out.go:177] 
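	The K8S_KUBELET_NOT_RUNNING failure above, together with the suggestion to pass --extra-config=kubelet.cgroup-driver=systemd, points at a cgroup-driver mismatch between the kubelet and CRI-O. A minimal sketch of the suggested retry is shown below; the profile name is a placeholder, and the kvm2/crio/v1.20.0 flags are assumptions taken from this test configuration rather than a verified reproduction:
	
	    minikube start -p <profile> \
	      --driver=kvm2 \
	      --container-runtime=crio \
	      --kubernetes-version=v1.20.0 \
	      --extra-config=kubelet.cgroup-driver=systemd
	
	If the kubelet still fails to come up, 'journalctl -xeu kubelet' on the node (reachable via 'minikube ssh -p <profile>') is the next place to look, as the error text recommends.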
	
	
	==> CRI-O <==
	Aug 06 08:47:56 default-k8s-diff-port-990751 crio[719]: time="2024-08-06 08:47:56.948158219Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722934076948128312,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fe45f895-bbf0-4e5d-9b80-49e3aa05cc19 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 06 08:47:56 default-k8s-diff-port-990751 crio[719]: time="2024-08-06 08:47:56.948915422Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=90377704-5461-420c-adb9-6165746f633f name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 08:47:56 default-k8s-diff-port-990751 crio[719]: time="2024-08-06 08:47:56.949012128Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=90377704-5461-420c-adb9-6165746f633f name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 08:47:56 default-k8s-diff-port-990751 crio[719]: time="2024-08-06 08:47:56.949371526Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e668e4a5fa59e279bc227b6110983cc4660b639d5b1e1291ae60d361f682d82e,PodSandboxId:212629317ec9d4a02502b705c85c5e3a1d233f294ffdb7c78c6f03f5de7c4e0a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722933298482056515,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0faff3fc-d34b-418e-972b-95a2e36b577a,},Annotations:map[string]string{io.kubernetes.container.hash: 99df2b1d,io.kubernetes.container.restartCount: 3,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e66102b7ad4d765159de00c22e2ff0fc7466749da0042d42272d5d8590c8ee08,PodSandboxId:8676b9ccd33520640da1604c11f8e1f022def345d5accba2a99d0555ba1ac7b4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722933278325984668,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0c9c181e-0ac8-4463-bcdd-90536fa7ace0,},Annotations:map[string]string{io.kubernetes.container.hash: fb110dd9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59f63eeae644f7c8944cdb7b1d05d9e49fe84eb28e9ccbad2d2dd79846d7eb52,PodSandboxId:671bc248779b3dde9d65bca463606ec9f1e93428faa567ffc2a9d5a7f70c4ecd,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722933275327872058,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gx8tt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48f83ba8-a238-4baf-94af-88370fefcb02,},Annotations:map[string]string{io.kubernetes.container.hash: a7046b57,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abd677ac30faba3fef9e9d562601ad0012d7f1b48852e08b8f2684efda6b0f2e,PodSandboxId:212629317ec9d4a02502b705c85c5e3a1d233f294ffdb7c78c6f03f5de7c4e0a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722933267621029914,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 0faff3fc-d34b-418e-972b-95a2e36b577a,},Annotations:map[string]string{io.kubernetes.container.hash: 99df2b1d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75990046864b4a4db9f8c02f44de4eefbb0cf45584880b053bce4ca0b677ce4e,PodSandboxId:62877dd46aaa63956643b3d0f0d405fa45b34e720ba8734b0ead0a9e3577eecf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722933267588436904,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lpf88,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 866c10f0-2c44-4c03-b460
-ff2194bd1ec6,},Annotations:map[string]string{io.kubernetes.container.hash: 6ffde570,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5fa6fe8f1db6b59f47ea637515efb2864e0408ce1df0e6851b3bda0e240f5d3,PodSandboxId:7d737c16e4d6b6dfb159c9481525da24c5a35d2d49fc7649c2c2b47bccf13eeb,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722933263042998803,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-990751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be1af3bf6b8b56d20fa135ae58600b46,},Annotations:map[
string]string{io.kubernetes.container.hash: 42e17a12,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e596abcad3dc13d27548701f515beebba051e04a9e20f8ce1d018731d880ff1c,PodSandboxId:42f4b383c94b31dc78e49648f4bac2952ad70f8984c43ef9702d42fd0a1d6b84,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722933263050209712,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-990751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79d986bf1c9915f03028df54a8aaf049,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05bbffa4c989e42bd020ebe6b943459ec1d7b31a6c5e5ba9a37056da15542658,PodSandboxId:4502d8ea19cd903a4f9bd6f7e9e51168b714fcf287bf94b261c9ab68f1f2f3ce,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722933263062442813,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-990751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7799dd4e95fe53745fccce3bf5a
3815,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49843be03d16addb301ffd26e517770ff29b00f049efb524a2139b8b2153325c,PodSandboxId:17201685578b67add3362633d77a2d263ea9700b8d936ca483f69667ff24a7e8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722933263075049639,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-990751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eeaec8565e0ec402ba61f03e2c1209
84,},Annotations:map[string]string{io.kubernetes.container.hash: e9a9675d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=90377704-5461-420c-adb9-6165746f633f name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 08:47:56 default-k8s-diff-port-990751 crio[719]: time="2024-08-06 08:47:56.993891918Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=71f014e8-3c01-435d-beaf-fa72111fd4e8 name=/runtime.v1.RuntimeService/Version
	Aug 06 08:47:56 default-k8s-diff-port-990751 crio[719]: time="2024-08-06 08:47:56.993967352Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=71f014e8-3c01-435d-beaf-fa72111fd4e8 name=/runtime.v1.RuntimeService/Version
	Aug 06 08:47:56 default-k8s-diff-port-990751 crio[719]: time="2024-08-06 08:47:56.994825798Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2915e6b0-04a2-4986-9a65-c5f3817bd27d name=/runtime.v1.ImageService/ImageFsInfo
	Aug 06 08:47:56 default-k8s-diff-port-990751 crio[719]: time="2024-08-06 08:47:56.995214535Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722934076995194435,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2915e6b0-04a2-4986-9a65-c5f3817bd27d name=/runtime.v1.ImageService/ImageFsInfo
	Aug 06 08:47:56 default-k8s-diff-port-990751 crio[719]: time="2024-08-06 08:47:56.995842573Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=29056557-7315-4c79-bb7d-3a3166b857a1 name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 08:47:56 default-k8s-diff-port-990751 crio[719]: time="2024-08-06 08:47:56.995896256Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=29056557-7315-4c79-bb7d-3a3166b857a1 name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 08:47:56 default-k8s-diff-port-990751 crio[719]: time="2024-08-06 08:47:56.996086221Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e668e4a5fa59e279bc227b6110983cc4660b639d5b1e1291ae60d361f682d82e,PodSandboxId:212629317ec9d4a02502b705c85c5e3a1d233f294ffdb7c78c6f03f5de7c4e0a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722933298482056515,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0faff3fc-d34b-418e-972b-95a2e36b577a,},Annotations:map[string]string{io.kubernetes.container.hash: 99df2b1d,io.kubernetes.container.restartCount: 3,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e66102b7ad4d765159de00c22e2ff0fc7466749da0042d42272d5d8590c8ee08,PodSandboxId:8676b9ccd33520640da1604c11f8e1f022def345d5accba2a99d0555ba1ac7b4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722933278325984668,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0c9c181e-0ac8-4463-bcdd-90536fa7ace0,},Annotations:map[string]string{io.kubernetes.container.hash: fb110dd9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59f63eeae644f7c8944cdb7b1d05d9e49fe84eb28e9ccbad2d2dd79846d7eb52,PodSandboxId:671bc248779b3dde9d65bca463606ec9f1e93428faa567ffc2a9d5a7f70c4ecd,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722933275327872058,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gx8tt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48f83ba8-a238-4baf-94af-88370fefcb02,},Annotations:map[string]string{io.kubernetes.container.hash: a7046b57,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abd677ac30faba3fef9e9d562601ad0012d7f1b48852e08b8f2684efda6b0f2e,PodSandboxId:212629317ec9d4a02502b705c85c5e3a1d233f294ffdb7c78c6f03f5de7c4e0a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722933267621029914,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 0faff3fc-d34b-418e-972b-95a2e36b577a,},Annotations:map[string]string{io.kubernetes.container.hash: 99df2b1d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75990046864b4a4db9f8c02f44de4eefbb0cf45584880b053bce4ca0b677ce4e,PodSandboxId:62877dd46aaa63956643b3d0f0d405fa45b34e720ba8734b0ead0a9e3577eecf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722933267588436904,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lpf88,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 866c10f0-2c44-4c03-b460
-ff2194bd1ec6,},Annotations:map[string]string{io.kubernetes.container.hash: 6ffde570,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5fa6fe8f1db6b59f47ea637515efb2864e0408ce1df0e6851b3bda0e240f5d3,PodSandboxId:7d737c16e4d6b6dfb159c9481525da24c5a35d2d49fc7649c2c2b47bccf13eeb,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722933263042998803,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-990751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be1af3bf6b8b56d20fa135ae58600b46,},Annotations:map[
string]string{io.kubernetes.container.hash: 42e17a12,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e596abcad3dc13d27548701f515beebba051e04a9e20f8ce1d018731d880ff1c,PodSandboxId:42f4b383c94b31dc78e49648f4bac2952ad70f8984c43ef9702d42fd0a1d6b84,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722933263050209712,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-990751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79d986bf1c9915f03028df54a8aaf049,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05bbffa4c989e42bd020ebe6b943459ec1d7b31a6c5e5ba9a37056da15542658,PodSandboxId:4502d8ea19cd903a4f9bd6f7e9e51168b714fcf287bf94b261c9ab68f1f2f3ce,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722933263062442813,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-990751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7799dd4e95fe53745fccce3bf5a
3815,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49843be03d16addb301ffd26e517770ff29b00f049efb524a2139b8b2153325c,PodSandboxId:17201685578b67add3362633d77a2d263ea9700b8d936ca483f69667ff24a7e8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722933263075049639,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-990751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eeaec8565e0ec402ba61f03e2c1209
84,},Annotations:map[string]string{io.kubernetes.container.hash: e9a9675d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=29056557-7315-4c79-bb7d-3a3166b857a1 name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 08:47:57 default-k8s-diff-port-990751 crio[719]: time="2024-08-06 08:47:57.035668277Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9f8ce451-9053-4e04-b2d6-4ef2b845be0c name=/runtime.v1.RuntimeService/Version
	Aug 06 08:47:57 default-k8s-diff-port-990751 crio[719]: time="2024-08-06 08:47:57.035739547Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9f8ce451-9053-4e04-b2d6-4ef2b845be0c name=/runtime.v1.RuntimeService/Version
	Aug 06 08:47:57 default-k8s-diff-port-990751 crio[719]: time="2024-08-06 08:47:57.036925489Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1a00c97f-7161-4a50-a00d-bb027e7d3c04 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 06 08:47:57 default-k8s-diff-port-990751 crio[719]: time="2024-08-06 08:47:57.037363767Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722934077037291763,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1a00c97f-7161-4a50-a00d-bb027e7d3c04 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 06 08:47:57 default-k8s-diff-port-990751 crio[719]: time="2024-08-06 08:47:57.037894234Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=aaaefd0e-b404-4c02-9753-1437bd46b4ea name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 08:47:57 default-k8s-diff-port-990751 crio[719]: time="2024-08-06 08:47:57.037945617Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=aaaefd0e-b404-4c02-9753-1437bd46b4ea name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 08:47:57 default-k8s-diff-port-990751 crio[719]: time="2024-08-06 08:47:57.038138633Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e668e4a5fa59e279bc227b6110983cc4660b639d5b1e1291ae60d361f682d82e,PodSandboxId:212629317ec9d4a02502b705c85c5e3a1d233f294ffdb7c78c6f03f5de7c4e0a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722933298482056515,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0faff3fc-d34b-418e-972b-95a2e36b577a,},Annotations:map[string]string{io.kubernetes.container.hash: 99df2b1d,io.kubernetes.container.restartCount: 3,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e66102b7ad4d765159de00c22e2ff0fc7466749da0042d42272d5d8590c8ee08,PodSandboxId:8676b9ccd33520640da1604c11f8e1f022def345d5accba2a99d0555ba1ac7b4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722933278325984668,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0c9c181e-0ac8-4463-bcdd-90536fa7ace0,},Annotations:map[string]string{io.kubernetes.container.hash: fb110dd9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59f63eeae644f7c8944cdb7b1d05d9e49fe84eb28e9ccbad2d2dd79846d7eb52,PodSandboxId:671bc248779b3dde9d65bca463606ec9f1e93428faa567ffc2a9d5a7f70c4ecd,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722933275327872058,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gx8tt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48f83ba8-a238-4baf-94af-88370fefcb02,},Annotations:map[string]string{io.kubernetes.container.hash: a7046b57,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abd677ac30faba3fef9e9d562601ad0012d7f1b48852e08b8f2684efda6b0f2e,PodSandboxId:212629317ec9d4a02502b705c85c5e3a1d233f294ffdb7c78c6f03f5de7c4e0a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722933267621029914,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 0faff3fc-d34b-418e-972b-95a2e36b577a,},Annotations:map[string]string{io.kubernetes.container.hash: 99df2b1d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75990046864b4a4db9f8c02f44de4eefbb0cf45584880b053bce4ca0b677ce4e,PodSandboxId:62877dd46aaa63956643b3d0f0d405fa45b34e720ba8734b0ead0a9e3577eecf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722933267588436904,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lpf88,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 866c10f0-2c44-4c03-b460
-ff2194bd1ec6,},Annotations:map[string]string{io.kubernetes.container.hash: 6ffde570,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5fa6fe8f1db6b59f47ea637515efb2864e0408ce1df0e6851b3bda0e240f5d3,PodSandboxId:7d737c16e4d6b6dfb159c9481525da24c5a35d2d49fc7649c2c2b47bccf13eeb,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722933263042998803,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-990751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be1af3bf6b8b56d20fa135ae58600b46,},Annotations:map[
string]string{io.kubernetes.container.hash: 42e17a12,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e596abcad3dc13d27548701f515beebba051e04a9e20f8ce1d018731d880ff1c,PodSandboxId:42f4b383c94b31dc78e49648f4bac2952ad70f8984c43ef9702d42fd0a1d6b84,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722933263050209712,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-990751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79d986bf1c9915f03028df54a8aaf049,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05bbffa4c989e42bd020ebe6b943459ec1d7b31a6c5e5ba9a37056da15542658,PodSandboxId:4502d8ea19cd903a4f9bd6f7e9e51168b714fcf287bf94b261c9ab68f1f2f3ce,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722933263062442813,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-990751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7799dd4e95fe53745fccce3bf5a
3815,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49843be03d16addb301ffd26e517770ff29b00f049efb524a2139b8b2153325c,PodSandboxId:17201685578b67add3362633d77a2d263ea9700b8d936ca483f69667ff24a7e8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722933263075049639,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-990751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eeaec8565e0ec402ba61f03e2c1209
84,},Annotations:map[string]string{io.kubernetes.container.hash: e9a9675d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=aaaefd0e-b404-4c02-9753-1437bd46b4ea name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 08:47:57 default-k8s-diff-port-990751 crio[719]: time="2024-08-06 08:47:57.073234246Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=29a27982-e5e7-47e3-b246-5cf0c1ad5440 name=/runtime.v1.RuntimeService/Version
	Aug 06 08:47:57 default-k8s-diff-port-990751 crio[719]: time="2024-08-06 08:47:57.073380296Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=29a27982-e5e7-47e3-b246-5cf0c1ad5440 name=/runtime.v1.RuntimeService/Version
	Aug 06 08:47:57 default-k8s-diff-port-990751 crio[719]: time="2024-08-06 08:47:57.074797232Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a66d1522-e533-4348-ac01-080d45a1d760 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 06 08:47:57 default-k8s-diff-port-990751 crio[719]: time="2024-08-06 08:47:57.075225915Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722934077075203574,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a66d1522-e533-4348-ac01-080d45a1d760 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 06 08:47:57 default-k8s-diff-port-990751 crio[719]: time="2024-08-06 08:47:57.075749466Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d2ef4e99-d41c-439e-86ad-ea666f2aa98e name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 08:47:57 default-k8s-diff-port-990751 crio[719]: time="2024-08-06 08:47:57.075808197Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d2ef4e99-d41c-439e-86ad-ea666f2aa98e name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 08:47:57 default-k8s-diff-port-990751 crio[719]: time="2024-08-06 08:47:57.076000674Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e668e4a5fa59e279bc227b6110983cc4660b639d5b1e1291ae60d361f682d82e,PodSandboxId:212629317ec9d4a02502b705c85c5e3a1d233f294ffdb7c78c6f03f5de7c4e0a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722933298482056515,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0faff3fc-d34b-418e-972b-95a2e36b577a,},Annotations:map[string]string{io.kubernetes.container.hash: 99df2b1d,io.kubernetes.container.restartCount: 3,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e66102b7ad4d765159de00c22e2ff0fc7466749da0042d42272d5d8590c8ee08,PodSandboxId:8676b9ccd33520640da1604c11f8e1f022def345d5accba2a99d0555ba1ac7b4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722933278325984668,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0c9c181e-0ac8-4463-bcdd-90536fa7ace0,},Annotations:map[string]string{io.kubernetes.container.hash: fb110dd9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59f63eeae644f7c8944cdb7b1d05d9e49fe84eb28e9ccbad2d2dd79846d7eb52,PodSandboxId:671bc248779b3dde9d65bca463606ec9f1e93428faa567ffc2a9d5a7f70c4ecd,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722933275327872058,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gx8tt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48f83ba8-a238-4baf-94af-88370fefcb02,},Annotations:map[string]string{io.kubernetes.container.hash: a7046b57,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abd677ac30faba3fef9e9d562601ad0012d7f1b48852e08b8f2684efda6b0f2e,PodSandboxId:212629317ec9d4a02502b705c85c5e3a1d233f294ffdb7c78c6f03f5de7c4e0a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722933267621029914,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 0faff3fc-d34b-418e-972b-95a2e36b577a,},Annotations:map[string]string{io.kubernetes.container.hash: 99df2b1d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75990046864b4a4db9f8c02f44de4eefbb0cf45584880b053bce4ca0b677ce4e,PodSandboxId:62877dd46aaa63956643b3d0f0d405fa45b34e720ba8734b0ead0a9e3577eecf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722933267588436904,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lpf88,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 866c10f0-2c44-4c03-b460
-ff2194bd1ec6,},Annotations:map[string]string{io.kubernetes.container.hash: 6ffde570,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5fa6fe8f1db6b59f47ea637515efb2864e0408ce1df0e6851b3bda0e240f5d3,PodSandboxId:7d737c16e4d6b6dfb159c9481525da24c5a35d2d49fc7649c2c2b47bccf13eeb,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722933263042998803,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-990751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be1af3bf6b8b56d20fa135ae58600b46,},Annotations:map[
string]string{io.kubernetes.container.hash: 42e17a12,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e596abcad3dc13d27548701f515beebba051e04a9e20f8ce1d018731d880ff1c,PodSandboxId:42f4b383c94b31dc78e49648f4bac2952ad70f8984c43ef9702d42fd0a1d6b84,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722933263050209712,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-990751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79d986bf1c9915f03028df54a8aaf049,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05bbffa4c989e42bd020ebe6b943459ec1d7b31a6c5e5ba9a37056da15542658,PodSandboxId:4502d8ea19cd903a4f9bd6f7e9e51168b714fcf287bf94b261c9ab68f1f2f3ce,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722933263062442813,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-990751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7799dd4e95fe53745fccce3bf5a
3815,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49843be03d16addb301ffd26e517770ff29b00f049efb524a2139b8b2153325c,PodSandboxId:17201685578b67add3362633d77a2d263ea9700b8d936ca483f69667ff24a7e8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722933263075049639,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-990751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eeaec8565e0ec402ba61f03e2c1209
84,},Annotations:map[string]string{io.kubernetes.container.hash: e9a9675d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d2ef4e99-d41c-439e-86ad-ea666f2aa98e name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	e668e4a5fa59e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 minutes ago      Running             storage-provisioner       3                   212629317ec9d       storage-provisioner
	e66102b7ad4d7       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   8676b9ccd3352       busybox
	59f63eeae644f       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago      Running             coredns                   1                   671bc248779b3       coredns-7db6d8ff4d-gx8tt
	abd677ac30fab       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       2                   212629317ec9d       storage-provisioner
	75990046864b4       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      13 minutes ago      Running             kube-proxy                1                   62877dd46aaa6       kube-proxy-lpf88
	49843be03d16a       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      13 minutes ago      Running             kube-apiserver            1                   17201685578b6       kube-apiserver-default-k8s-diff-port-990751
	05bbffa4c989e       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      13 minutes ago      Running             kube-controller-manager   1                   4502d8ea19cd9       kube-controller-manager-default-k8s-diff-port-990751
	e596abcad3dc1       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      13 minutes ago      Running             kube-scheduler            1                   42f4b383c94b3       kube-scheduler-default-k8s-diff-port-990751
	b5fa6fe8f1db6       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      13 minutes ago      Running             etcd                      1                   7d737c16e4d6b       etcd-default-k8s-diff-port-990751
	
	
	==> coredns [59f63eeae644f7c8944cdb7b1d05d9e49fe84eb28e9ccbad2d2dd79846d7eb52] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:41340 - 63584 "HINFO IN 3658871603673032814.1514312603822694222. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.00986291s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-990751
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-990751
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e92cb06692f5ea1ba801d10d148e5e92e807f9c8
	                    minikube.k8s.io/name=default-k8s-diff-port-990751
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_06T08_26_39_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 06 Aug 2024 08:26:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-990751
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 06 Aug 2024 08:47:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 06 Aug 2024 08:45:09 +0000   Tue, 06 Aug 2024 08:26:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 06 Aug 2024 08:45:09 +0000   Tue, 06 Aug 2024 08:26:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 06 Aug 2024 08:45:09 +0000   Tue, 06 Aug 2024 08:26:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 06 Aug 2024 08:45:09 +0000   Tue, 06 Aug 2024 08:34:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.235
	  Hostname:    default-k8s-diff-port-990751
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 7d6f63ee0ed64917abb3517943badd0c
	  System UUID:                7d6f63ee-0ed6-4917-abb3-517943badd0c
	  Boot ID:                    8f50a54c-584e-478b-8f59-052ccfb01920
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 coredns-7db6d8ff4d-gx8tt                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	  kube-system                 etcd-default-k8s-diff-port-990751                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         21m
	  kube-system                 kube-apiserver-default-k8s-diff-port-990751             250m (12%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-990751    200m (10%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-proxy-lpf88                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-default-k8s-diff-port-990751             100m (5%)     0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 metrics-server-569cc877fc-hnxd4                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         20m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 21m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  Starting                 21m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     21m                kubelet          Node default-k8s-diff-port-990751 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  21m                kubelet          Node default-k8s-diff-port-990751 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m                kubelet          Node default-k8s-diff-port-990751 status is now: NodeHasNoDiskPressure
	  Normal  NodeReady                21m                kubelet          Node default-k8s-diff-port-990751 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           21m                node-controller  Node default-k8s-diff-port-990751 event: Registered Node default-k8s-diff-port-990751 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node default-k8s-diff-port-990751 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node default-k8s-diff-port-990751 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node default-k8s-diff-port-990751 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node default-k8s-diff-port-990751 event: Registered Node default-k8s-diff-port-990751 in Controller
	
	
	==> dmesg <==
	[Aug 6 08:33] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051078] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040369] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Aug 6 08:34] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.525821] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.622945] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.370777] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +0.063429] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.061661] systemd-fstab-generator[646]: Ignoring "noauto" option for root device
	[  +0.179950] systemd-fstab-generator[660]: Ignoring "noauto" option for root device
	[  +0.139215] systemd-fstab-generator[672]: Ignoring "noauto" option for root device
	[  +0.308339] systemd-fstab-generator[701]: Ignoring "noauto" option for root device
	[  +4.693999] systemd-fstab-generator[802]: Ignoring "noauto" option for root device
	[  +0.061819] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.646385] systemd-fstab-generator[925]: Ignoring "noauto" option for root device
	[  +5.608618] kauditd_printk_skb: 97 callbacks suppressed
	[  +4.540987] systemd-fstab-generator[1538]: Ignoring "noauto" option for root device
	[  +1.171008] kauditd_printk_skb: 62 callbacks suppressed
	[  +5.142838] kauditd_printk_skb: 38 callbacks suppressed
	
	
	==> etcd [b5fa6fe8f1db6b59f47ea637515efb2864e0408ce1df0e6851b3bda0e240f5d3] <==
	{"level":"info","ts":"2024-08-06T08:34:23.843576Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"580d8333e4e46975","local-member-id":"7ca23f5629343562","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-06T08:34:23.843622Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-06T08:34:23.85717Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-06T08:34:23.863595Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"7ca23f5629343562","initial-advertise-peer-urls":["https://192.168.50.235:2380"],"listen-peer-urls":["https://192.168.50.235:2380"],"advertise-client-urls":["https://192.168.50.235:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.235:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-06T08:34:23.865341Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-06T08:34:23.865612Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.50.235:2380"}
	{"level":"info","ts":"2024-08-06T08:34:23.872376Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.50.235:2380"}
	{"level":"info","ts":"2024-08-06T08:34:25.229397Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7ca23f5629343562 is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-06T08:34:25.229512Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7ca23f5629343562 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-06T08:34:25.229574Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7ca23f5629343562 received MsgPreVoteResp from 7ca23f5629343562 at term 2"}
	{"level":"info","ts":"2024-08-06T08:34:25.229618Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7ca23f5629343562 became candidate at term 3"}
	{"level":"info","ts":"2024-08-06T08:34:25.229642Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7ca23f5629343562 received MsgVoteResp from 7ca23f5629343562 at term 3"}
	{"level":"info","ts":"2024-08-06T08:34:25.22967Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7ca23f5629343562 became leader at term 3"}
	{"level":"info","ts":"2024-08-06T08:34:25.229698Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7ca23f5629343562 elected leader 7ca23f5629343562 at term 3"}
	{"level":"info","ts":"2024-08-06T08:34:25.242363Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-06T08:34:25.242772Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"7ca23f5629343562","local-member-attributes":"{Name:default-k8s-diff-port-990751 ClientURLs:[https://192.168.50.235:2379]}","request-path":"/0/members/7ca23f5629343562/attributes","cluster-id":"580d8333e4e46975","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-06T08:34:25.246793Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-06T08:34:25.247452Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-06T08:34:25.247502Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-06T08:34:25.247875Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.235:2379"}
	{"level":"info","ts":"2024-08-06T08:34:25.25009Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2024-08-06T08:34:45.64153Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"252.404601ms","expected-duration":"100ms","prefix":"","request":"header:<ID:3846796627666629277 > lease_revoke:<id:35629126d33d7183>","response":"size:27"}
	{"level":"info","ts":"2024-08-06T08:44:25.296197Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":838}
	{"level":"info","ts":"2024-08-06T08:44:25.307152Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":838,"took":"10.563531ms","hash":1724817416,"current-db-size-bytes":2740224,"current-db-size":"2.7 MB","current-db-size-in-use-bytes":2740224,"current-db-size-in-use":"2.7 MB"}
	{"level":"info","ts":"2024-08-06T08:44:25.307221Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1724817416,"revision":838,"compact-revision":-1}
	
	
	==> kernel <==
	 08:47:57 up 13 min,  0 users,  load average: 0.19, 0.19, 0.17
	Linux default-k8s-diff-port-990751 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [49843be03d16addb301ffd26e517770ff29b00f049efb524a2139b8b2153325c] <==
	I0806 08:42:27.728785       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0806 08:44:26.730658       1 handler_proxy.go:93] no RequestInfo found in the context
	E0806 08:44:26.730990       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0806 08:44:27.731827       1 handler_proxy.go:93] no RequestInfo found in the context
	E0806 08:44:27.731882       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0806 08:44:27.731891       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0806 08:44:27.731992       1 handler_proxy.go:93] no RequestInfo found in the context
	E0806 08:44:27.732089       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0806 08:44:27.733090       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0806 08:45:27.732493       1 handler_proxy.go:93] no RequestInfo found in the context
	E0806 08:45:27.732575       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0806 08:45:27.732584       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0806 08:45:27.733554       1 handler_proxy.go:93] no RequestInfo found in the context
	E0806 08:45:27.733619       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0806 08:45:27.733627       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0806 08:47:27.733181       1 handler_proxy.go:93] no RequestInfo found in the context
	E0806 08:47:27.733624       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0806 08:47:27.733669       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0806 08:47:27.733800       1 handler_proxy.go:93] no RequestInfo found in the context
	E0806 08:47:27.733875       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0806 08:47:27.735414       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [05bbffa4c989e42bd020ebe6b943459ec1d7b31a6c5e5ba9a37056da15542658] <==
	I0806 08:42:12.672433       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0806 08:42:42.201461       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0806 08:42:42.680238       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0806 08:43:12.206872       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0806 08:43:12.688121       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0806 08:43:42.212841       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0806 08:43:42.696816       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0806 08:44:12.218662       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0806 08:44:12.705749       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0806 08:44:42.224780       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0806 08:44:42.715594       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0806 08:45:12.229942       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0806 08:45:12.723402       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0806 08:45:22.237546       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="443.237µs"
	I0806 08:45:37.240988       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="287.923µs"
	E0806 08:45:42.234917       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0806 08:45:42.731284       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0806 08:46:12.241788       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0806 08:46:12.738958       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0806 08:46:42.247141       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0806 08:46:42.746787       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0806 08:47:12.252101       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0806 08:47:12.755904       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0806 08:47:42.257857       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0806 08:47:42.766572       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [75990046864b4a4db9f8c02f44de4eefbb0cf45584880b053bce4ca0b677ce4e] <==
	I0806 08:34:27.955050       1 server_linux.go:69] "Using iptables proxy"
	I0806 08:34:27.983894       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.50.235"]
	I0806 08:34:28.024042       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0806 08:34:28.024128       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0806 08:34:28.024159       1 server_linux.go:165] "Using iptables Proxier"
	I0806 08:34:28.026803       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0806 08:34:28.027090       1 server.go:872] "Version info" version="v1.30.3"
	I0806 08:34:28.027142       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0806 08:34:28.028551       1 config.go:192] "Starting service config controller"
	I0806 08:34:28.028601       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0806 08:34:28.028642       1 config.go:101] "Starting endpoint slice config controller"
	I0806 08:34:28.028659       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0806 08:34:28.030565       1 config.go:319] "Starting node config controller"
	I0806 08:34:28.031391       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0806 08:34:28.129366       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0806 08:34:28.129507       1 shared_informer.go:320] Caches are synced for service config
	I0806 08:34:28.131503       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [e596abcad3dc13d27548701f515beebba051e04a9e20f8ce1d018731d880ff1c] <==
	I0806 08:34:24.829918       1 serving.go:380] Generated self-signed cert in-memory
	W0806 08:34:26.719992       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0806 08:34:26.720129       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0806 08:34:26.720158       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0806 08:34:26.720235       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0806 08:34:26.764539       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0806 08:34:26.764635       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0806 08:34:26.767052       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0806 08:34:26.767487       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0806 08:34:26.767691       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0806 08:34:26.767733       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0806 08:34:26.868862       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 06 08:45:22 default-k8s-diff-port-990751 kubelet[932]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 06 08:45:22 default-k8s-diff-port-990751 kubelet[932]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 06 08:45:22 default-k8s-diff-port-990751 kubelet[932]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 06 08:45:22 default-k8s-diff-port-990751 kubelet[932]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 06 08:45:37 default-k8s-diff-port-990751 kubelet[932]: E0806 08:45:37.219476     932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-hnxd4" podUID="f369ac70-49b7-448a-9852-89232fb66da9"
	Aug 06 08:45:52 default-k8s-diff-port-990751 kubelet[932]: E0806 08:45:52.223578     932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-hnxd4" podUID="f369ac70-49b7-448a-9852-89232fb66da9"
	Aug 06 08:46:03 default-k8s-diff-port-990751 kubelet[932]: E0806 08:46:03.218721     932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-hnxd4" podUID="f369ac70-49b7-448a-9852-89232fb66da9"
	Aug 06 08:46:15 default-k8s-diff-port-990751 kubelet[932]: E0806 08:46:15.221581     932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-hnxd4" podUID="f369ac70-49b7-448a-9852-89232fb66da9"
	Aug 06 08:46:22 default-k8s-diff-port-990751 kubelet[932]: E0806 08:46:22.247000     932 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 06 08:46:22 default-k8s-diff-port-990751 kubelet[932]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 06 08:46:22 default-k8s-diff-port-990751 kubelet[932]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 06 08:46:22 default-k8s-diff-port-990751 kubelet[932]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 06 08:46:22 default-k8s-diff-port-990751 kubelet[932]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 06 08:46:27 default-k8s-diff-port-990751 kubelet[932]: E0806 08:46:27.218192     932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-hnxd4" podUID="f369ac70-49b7-448a-9852-89232fb66da9"
	Aug 06 08:46:39 default-k8s-diff-port-990751 kubelet[932]: E0806 08:46:39.219681     932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-hnxd4" podUID="f369ac70-49b7-448a-9852-89232fb66da9"
	Aug 06 08:46:54 default-k8s-diff-port-990751 kubelet[932]: E0806 08:46:54.218854     932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-hnxd4" podUID="f369ac70-49b7-448a-9852-89232fb66da9"
	Aug 06 08:47:06 default-k8s-diff-port-990751 kubelet[932]: E0806 08:47:06.220136     932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-hnxd4" podUID="f369ac70-49b7-448a-9852-89232fb66da9"
	Aug 06 08:47:18 default-k8s-diff-port-990751 kubelet[932]: E0806 08:47:18.219108     932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-hnxd4" podUID="f369ac70-49b7-448a-9852-89232fb66da9"
	Aug 06 08:47:22 default-k8s-diff-port-990751 kubelet[932]: E0806 08:47:22.250076     932 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 06 08:47:22 default-k8s-diff-port-990751 kubelet[932]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 06 08:47:22 default-k8s-diff-port-990751 kubelet[932]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 06 08:47:22 default-k8s-diff-port-990751 kubelet[932]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 06 08:47:22 default-k8s-diff-port-990751 kubelet[932]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 06 08:47:29 default-k8s-diff-port-990751 kubelet[932]: E0806 08:47:29.218668     932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-hnxd4" podUID="f369ac70-49b7-448a-9852-89232fb66da9"
	Aug 06 08:47:44 default-k8s-diff-port-990751 kubelet[932]: E0806 08:47:44.221228     932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-hnxd4" podUID="f369ac70-49b7-448a-9852-89232fb66da9"
	
	
	==> storage-provisioner [abd677ac30faba3fef9e9d562601ad0012d7f1b48852e08b8f2684efda6b0f2e] <==
	I0806 08:34:27.931138       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0806 08:34:57.935077       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [e668e4a5fa59e279bc227b6110983cc4660b639d5b1e1291ae60d361f682d82e] <==
	I0806 08:34:58.590797       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0806 08:34:58.601682       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0806 08:34:58.601901       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0806 08:35:16.005463       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0806 08:35:16.005822       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-990751_10440c7b-16ec-4542-8107-c69e1ad11e0e!
	I0806 08:35:16.006236       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"5ad1e3bd-e618-4071-ba71-ee7d7821c5e6", APIVersion:"v1", ResourceVersion:"621", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-990751_10440c7b-16ec-4542-8107-c69e1ad11e0e became leader
	I0806 08:35:16.107221       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-990751_10440c7b-16ec-4542-8107-c69e1ad11e0e!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-990751 -n default-k8s-diff-port-990751
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-990751 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-hnxd4
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-990751 describe pod metrics-server-569cc877fc-hnxd4
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-990751 describe pod metrics-server-569cc877fc-hnxd4: exit status 1 (64.181865ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-hnxd4" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-990751 describe pod metrics-server-569cc877fc-hnxd4: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.44s)
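Note on the post-mortem above: the field-selector query listed metrics-server-569cc877fc-hnxd4 as non-running, but the follow-up describe returned NotFound, so the pod apparently no longer existed by the time it was described. A hedged manual alternative, assuming minikube's metrics-server addon carries its usual k8s-app=metrics-server label (the label itself is not shown in this log), is to query by label rather than by pod name:

  kubectl --context default-k8s-diff-port-990751 -n kube-system get pods -l k8s-app=metrics-server -o wide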

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.45s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0806 08:40:57.501111   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/kindnet-405163/client.crt: no such file or directory
E0806 08:41:21.824976   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/functional-811691/client.crt: no such file or directory
E0806 08:41:43.641902   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/auto-405163/client.crt: no such file or directory
E0806 08:42:16.255287   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/addons-505457/client.crt: no such file or directory
E0806 08:42:20.546403   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/kindnet-405163/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-703340 -n no-preload-703340
start_stop_delete_test.go:274: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-08-06 08:49:23.928530295 +0000 UTC m=+6287.153340726
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
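For reference, a rough manual equivalent of the selector-based wait this test performs, written as a single kubectl command (the context name, namespace, and label come from the log above; the readiness condition and 9m timeout approximate, rather than reproduce, the harness logic):

  kubectl --context no-preload-703340 -n kubernetes-dashboard wait pod -l k8s-app=kubernetes-dashboard --for=condition=Ready --timeout=9m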
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-703340 -n no-preload-703340
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-703340 logs -n 25
E0806 08:49:24.872619   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/functional-811691/client.crt: no such file or directory
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-703340 logs -n 25: (2.241586881s)
E0806 08:49:26.450756   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/enable-default-cni-405163/client.crt: no such file or directory
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p flannel-405163 sudo cat                             | flannel-405163               | jenkins | v1.33.1 | 06 Aug 24 08:25 UTC | 06 Aug 24 08:25 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p flannel-405163 sudo                                 | flannel-405163               | jenkins | v1.33.1 | 06 Aug 24 08:25 UTC | 06 Aug 24 08:25 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p flannel-405163 sudo                                 | flannel-405163               | jenkins | v1.33.1 | 06 Aug 24 08:25 UTC | 06 Aug 24 08:25 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p flannel-405163 sudo                                 | flannel-405163               | jenkins | v1.33.1 | 06 Aug 24 08:25 UTC | 06 Aug 24 08:25 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p flannel-405163 sudo find                            | flannel-405163               | jenkins | v1.33.1 | 06 Aug 24 08:25 UTC | 06 Aug 24 08:25 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p flannel-405163 sudo crio                            | flannel-405163               | jenkins | v1.33.1 | 06 Aug 24 08:25 UTC | 06 Aug 24 08:25 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p flannel-405163                                      | flannel-405163               | jenkins | v1.33.1 | 06 Aug 24 08:25 UTC | 06 Aug 24 08:25 UTC |
	| delete  | -p                                                     | disable-driver-mounts-607762 | jenkins | v1.33.1 | 06 Aug 24 08:25 UTC | 06 Aug 24 08:25 UTC |
	|         | disable-driver-mounts-607762                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-990751 | jenkins | v1.33.1 | 06 Aug 24 08:25 UTC | 06 Aug 24 08:27 UTC |
	|         | default-k8s-diff-port-990751                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-703340             | no-preload-703340            | jenkins | v1.33.1 | 06 Aug 24 08:26 UTC | 06 Aug 24 08:26 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-703340                                   | no-preload-703340            | jenkins | v1.33.1 | 06 Aug 24 08:26 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-324808            | embed-certs-324808           | jenkins | v1.33.1 | 06 Aug 24 08:26 UTC | 06 Aug 24 08:26 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-324808                                  | embed-certs-324808           | jenkins | v1.33.1 | 06 Aug 24 08:26 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-990751  | default-k8s-diff-port-990751 | jenkins | v1.33.1 | 06 Aug 24 08:27 UTC | 06 Aug 24 08:27 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-990751 | jenkins | v1.33.1 | 06 Aug 24 08:27 UTC |                     |
	|         | default-k8s-diff-port-990751                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-703340                  | no-preload-703340            | jenkins | v1.33.1 | 06 Aug 24 08:28 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-703340                                   | no-preload-703340            | jenkins | v1.33.1 | 06 Aug 24 08:28 UTC | 06 Aug 24 08:40 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0                      |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-409903        | old-k8s-version-409903       | jenkins | v1.33.1 | 06 Aug 24 08:28 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-324808                 | embed-certs-324808           | jenkins | v1.33.1 | 06 Aug 24 08:29 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-324808                                  | embed-certs-324808           | jenkins | v1.33.1 | 06 Aug 24 08:29 UTC | 06 Aug 24 08:38 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-990751       | default-k8s-diff-port-990751 | jenkins | v1.33.1 | 06 Aug 24 08:30 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-990751 | jenkins | v1.33.1 | 06 Aug 24 08:30 UTC | 06 Aug 24 08:38 UTC |
	|         | default-k8s-diff-port-990751                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-409903                              | old-k8s-version-409903       | jenkins | v1.33.1 | 06 Aug 24 08:30 UTC | 06 Aug 24 08:30 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-409903             | old-k8s-version-409903       | jenkins | v1.33.1 | 06 Aug 24 08:30 UTC | 06 Aug 24 08:30 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-409903                              | old-k8s-version-409903       | jenkins | v1.33.1 | 06 Aug 24 08:30 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/06 08:30:58
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0806 08:30:58.757517   79927 out.go:291] Setting OutFile to fd 1 ...
	I0806 08:30:58.757772   79927 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 08:30:58.757780   79927 out.go:304] Setting ErrFile to fd 2...
	I0806 08:30:58.757784   79927 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 08:30:58.757992   79927 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19370-13057/.minikube/bin
	I0806 08:30:58.758527   79927 out.go:298] Setting JSON to false
	I0806 08:30:58.759405   79927 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":8005,"bootTime":1722925054,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0806 08:30:58.759463   79927 start.go:139] virtualization: kvm guest
	I0806 08:30:58.762200   79927 out.go:177] * [old-k8s-version-409903] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0806 08:30:58.763461   79927 out.go:177]   - MINIKUBE_LOCATION=19370
	I0806 08:30:58.763514   79927 notify.go:220] Checking for updates...
	I0806 08:30:58.766017   79927 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0806 08:30:58.767180   79927 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19370-13057/kubeconfig
	I0806 08:30:58.768270   79927 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19370-13057/.minikube
	I0806 08:30:58.769303   79927 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0806 08:30:58.770385   79927 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0806 08:30:58.771931   79927 config.go:182] Loaded profile config "old-k8s-version-409903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0806 08:30:58.772319   79927 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:30:58.772407   79927 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:30:58.787845   79927 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39563
	I0806 08:30:58.788270   79927 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:30:58.788863   79927 main.go:141] libmachine: Using API Version  1
	I0806 08:30:58.788879   79927 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:30:58.789169   79927 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:30:58.789314   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .DriverName
	I0806 08:30:58.791190   79927 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0806 08:30:58.792487   79927 driver.go:392] Setting default libvirt URI to qemu:///system
	I0806 08:30:58.792776   79927 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:30:58.792811   79927 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:30:58.807347   79927 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33093
	I0806 08:30:58.807788   79927 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:30:58.808260   79927 main.go:141] libmachine: Using API Version  1
	I0806 08:30:58.808276   79927 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:30:58.808633   79927 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:30:58.808849   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .DriverName
	I0806 08:30:58.845305   79927 out.go:177] * Using the kvm2 driver based on existing profile
	I0806 08:30:58.846560   79927 start.go:297] selected driver: kvm2
	I0806 08:30:58.846580   79927 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-409903 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-409903 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.158 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 08:30:58.846684   79927 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0806 08:30:58.847335   79927 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 08:30:58.847395   79927 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19370-13057/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0806 08:30:58.862013   79927 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0806 08:30:58.862398   79927 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0806 08:30:58.862431   79927 cni.go:84] Creating CNI manager for ""
	I0806 08:30:58.862441   79927 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0806 08:30:58.862493   79927 start.go:340] cluster config:
	{Name:old-k8s-version-409903 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-409903 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.158 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 08:30:58.862609   79927 iso.go:125] acquiring lock: {Name:mke58d71de28babe305c5eb0c31c274e9713bb3f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 08:30:58.864484   79927 out.go:177] * Starting "old-k8s-version-409903" primary control-plane node in "old-k8s-version-409903" cluster
	I0806 08:30:58.865763   79927 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0806 08:30:58.865796   79927 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19370-13057/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0806 08:30:58.865803   79927 cache.go:56] Caching tarball of preloaded images
	I0806 08:30:58.865912   79927 preload.go:172] Found /home/jenkins/minikube-integration/19370-13057/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0806 08:30:58.865928   79927 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0806 08:30:58.866013   79927 profile.go:143] Saving config to /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/old-k8s-version-409903/config.json ...
	I0806 08:30:58.866228   79927 start.go:360] acquireMachinesLock for old-k8s-version-409903: {Name:mkc602ac9c8cc3360dcc0fcb8104303837aaab61 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0806 08:31:01.700664   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:31:04.772608   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:31:10.852634   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:31:13.924605   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:31:20.004661   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:31:23.076602   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:31:29.156621   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:31:32.228616   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:31:38.308684   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:31:41.380622   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:31:47.460632   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:31:50.532638   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:31:56.612610   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:31:59.684696   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:32:05.764642   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:32:08.836560   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:32:14.916653   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:32:17.988720   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:32:24.068597   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:32:27.140686   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:32:33.220630   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:32:36.292609   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:32:42.372654   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:32:45.444619   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:32:51.524637   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:32:54.596677   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:33:00.676664   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:33:03.748647   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:33:09.828643   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:33:12.900674   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:33:18.980642   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:33:22.052616   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:33:28.132639   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:33:31.204678   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:33:34.209491   79219 start.go:364] duration metric: took 4m19.788239383s to acquireMachinesLock for "embed-certs-324808"
	I0806 08:33:34.209554   79219 start.go:96] Skipping create...Using existing machine configuration
	I0806 08:33:34.209563   79219 fix.go:54] fixHost starting: 
	I0806 08:33:34.209914   79219 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:33:34.209945   79219 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:33:34.225180   79219 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44037
	I0806 08:33:34.225607   79219 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:33:34.226014   79219 main.go:141] libmachine: Using API Version  1
	I0806 08:33:34.226033   79219 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:33:34.226323   79219 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:33:34.226522   79219 main.go:141] libmachine: (embed-certs-324808) Calling .DriverName
	I0806 08:33:34.226684   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetState
	I0806 08:33:34.228172   79219 fix.go:112] recreateIfNeeded on embed-certs-324808: state=Stopped err=<nil>
	I0806 08:33:34.228200   79219 main.go:141] libmachine: (embed-certs-324808) Calling .DriverName
	W0806 08:33:34.228355   79219 fix.go:138] unexpected machine state, will restart: <nil>
	I0806 08:33:34.231261   79219 out.go:177] * Restarting existing kvm2 VM for "embed-certs-324808" ...
	I0806 08:33:34.232543   79219 main.go:141] libmachine: (embed-certs-324808) Calling .Start
	I0806 08:33:34.232728   79219 main.go:141] libmachine: (embed-certs-324808) Ensuring networks are active...
	I0806 08:33:34.233475   79219 main.go:141] libmachine: (embed-certs-324808) Ensuring network default is active
	I0806 08:33:34.233819   79219 main.go:141] libmachine: (embed-certs-324808) Ensuring network mk-embed-certs-324808 is active
	I0806 08:33:34.234291   79219 main.go:141] libmachine: (embed-certs-324808) Getting domain xml...
	I0806 08:33:34.235129   79219 main.go:141] libmachine: (embed-certs-324808) Creating domain...
	I0806 08:33:34.206591   78922 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0806 08:33:34.206636   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetMachineName
	I0806 08:33:34.206950   78922 buildroot.go:166] provisioning hostname "no-preload-703340"
	I0806 08:33:34.206975   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetMachineName
	I0806 08:33:34.207187   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHHostname
	I0806 08:33:34.209379   78922 machine.go:97] duration metric: took 4m37.416826471s to provisionDockerMachine
	I0806 08:33:34.209416   78922 fix.go:56] duration metric: took 4m37.443014145s for fixHost
	I0806 08:33:34.209424   78922 start.go:83] releasing machines lock for "no-preload-703340", held for 4m37.443037097s
	W0806 08:33:34.209442   78922 start.go:714] error starting host: provision: host is not running
	W0806 08:33:34.209538   78922 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0806 08:33:34.209545   78922 start.go:729] Will try again in 5 seconds ...
	I0806 08:33:35.434516   79219 main.go:141] libmachine: (embed-certs-324808) Waiting to get IP...
	I0806 08:33:35.435546   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:35.435973   79219 main.go:141] libmachine: (embed-certs-324808) DBG | unable to find current IP address of domain embed-certs-324808 in network mk-embed-certs-324808
	I0806 08:33:35.436045   79219 main.go:141] libmachine: (embed-certs-324808) DBG | I0806 08:33:35.435966   80472 retry.go:31] will retry after 190.247547ms: waiting for machine to come up
	I0806 08:33:35.627362   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:35.627768   79219 main.go:141] libmachine: (embed-certs-324808) DBG | unable to find current IP address of domain embed-certs-324808 in network mk-embed-certs-324808
	I0806 08:33:35.627803   79219 main.go:141] libmachine: (embed-certs-324808) DBG | I0806 08:33:35.627742   80472 retry.go:31] will retry after 306.927129ms: waiting for machine to come up
	I0806 08:33:35.936242   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:35.936714   79219 main.go:141] libmachine: (embed-certs-324808) DBG | unable to find current IP address of domain embed-certs-324808 in network mk-embed-certs-324808
	I0806 08:33:35.936738   79219 main.go:141] libmachine: (embed-certs-324808) DBG | I0806 08:33:35.936661   80472 retry.go:31] will retry after 363.399025ms: waiting for machine to come up
	I0806 08:33:36.301454   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:36.301934   79219 main.go:141] libmachine: (embed-certs-324808) DBG | unable to find current IP address of domain embed-certs-324808 in network mk-embed-certs-324808
	I0806 08:33:36.301967   79219 main.go:141] libmachine: (embed-certs-324808) DBG | I0806 08:33:36.301879   80472 retry.go:31] will retry after 495.07719ms: waiting for machine to come up
	I0806 08:33:36.798489   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:36.798930   79219 main.go:141] libmachine: (embed-certs-324808) DBG | unable to find current IP address of domain embed-certs-324808 in network mk-embed-certs-324808
	I0806 08:33:36.798958   79219 main.go:141] libmachine: (embed-certs-324808) DBG | I0806 08:33:36.798870   80472 retry.go:31] will retry after 755.286107ms: waiting for machine to come up
	I0806 08:33:37.555799   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:37.556155   79219 main.go:141] libmachine: (embed-certs-324808) DBG | unable to find current IP address of domain embed-certs-324808 in network mk-embed-certs-324808
	I0806 08:33:37.556187   79219 main.go:141] libmachine: (embed-certs-324808) DBG | I0806 08:33:37.556094   80472 retry.go:31] will retry after 600.530399ms: waiting for machine to come up
	I0806 08:33:38.157817   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:38.158303   79219 main.go:141] libmachine: (embed-certs-324808) DBG | unable to find current IP address of domain embed-certs-324808 in network mk-embed-certs-324808
	I0806 08:33:38.158334   79219 main.go:141] libmachine: (embed-certs-324808) DBG | I0806 08:33:38.158241   80472 retry.go:31] will retry after 943.262354ms: waiting for machine to come up
	I0806 08:33:39.103226   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:39.103615   79219 main.go:141] libmachine: (embed-certs-324808) DBG | unable to find current IP address of domain embed-certs-324808 in network mk-embed-certs-324808
	I0806 08:33:39.103645   79219 main.go:141] libmachine: (embed-certs-324808) DBG | I0806 08:33:39.103563   80472 retry.go:31] will retry after 1.100862399s: waiting for machine to come up
	I0806 08:33:39.211204   78922 start.go:360] acquireMachinesLock for no-preload-703340: {Name:mkc602ac9c8cc3360dcc0fcb8104303837aaab61 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0806 08:33:40.205796   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:40.206130   79219 main.go:141] libmachine: (embed-certs-324808) DBG | unable to find current IP address of domain embed-certs-324808 in network mk-embed-certs-324808
	I0806 08:33:40.206160   79219 main.go:141] libmachine: (embed-certs-324808) DBG | I0806 08:33:40.206077   80472 retry.go:31] will retry after 1.405550913s: waiting for machine to come up
	I0806 08:33:41.613629   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:41.614060   79219 main.go:141] libmachine: (embed-certs-324808) DBG | unable to find current IP address of domain embed-certs-324808 in network mk-embed-certs-324808
	I0806 08:33:41.614092   79219 main.go:141] libmachine: (embed-certs-324808) DBG | I0806 08:33:41.614001   80472 retry.go:31] will retry after 2.190867783s: waiting for machine to come up
	I0806 08:33:43.807417   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:43.808017   79219 main.go:141] libmachine: (embed-certs-324808) DBG | unable to find current IP address of domain embed-certs-324808 in network mk-embed-certs-324808
	I0806 08:33:43.808049   79219 main.go:141] libmachine: (embed-certs-324808) DBG | I0806 08:33:43.807958   80472 retry.go:31] will retry after 2.233558066s: waiting for machine to come up
	I0806 08:33:46.043427   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:46.043803   79219 main.go:141] libmachine: (embed-certs-324808) DBG | unable to find current IP address of domain embed-certs-324808 in network mk-embed-certs-324808
	I0806 08:33:46.043830   79219 main.go:141] libmachine: (embed-certs-324808) DBG | I0806 08:33:46.043754   80472 retry.go:31] will retry after 3.171710196s: waiting for machine to come up
	I0806 08:33:49.218967   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:49.219319   79219 main.go:141] libmachine: (embed-certs-324808) DBG | unable to find current IP address of domain embed-certs-324808 in network mk-embed-certs-324808
	I0806 08:33:49.219349   79219 main.go:141] libmachine: (embed-certs-324808) DBG | I0806 08:33:49.219266   80472 retry.go:31] will retry after 4.141473633s: waiting for machine to come up
	I0806 08:33:53.361887   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:53.362300   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has current primary IP address 192.168.39.190 and MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:53.362315   79219 main.go:141] libmachine: (embed-certs-324808) Found IP for machine: 192.168.39.190
	I0806 08:33:53.362324   79219 main.go:141] libmachine: (embed-certs-324808) Reserving static IP address...
	I0806 08:33:53.362762   79219 main.go:141] libmachine: (embed-certs-324808) Reserved static IP address: 192.168.39.190
	I0806 08:33:53.362812   79219 main.go:141] libmachine: (embed-certs-324808) DBG | found host DHCP lease matching {name: "embed-certs-324808", mac: "52:54:00:b9:df:8c", ip: "192.168.39.190"} in network mk-embed-certs-324808: {Iface:virbr2 ExpiryTime:2024-08-06 09:33:45 +0000 UTC Type:0 Mac:52:54:00:b9:df:8c Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-324808 Clientid:01:52:54:00:b9:df:8c}
	I0806 08:33:53.362826   79219 main.go:141] libmachine: (embed-certs-324808) Waiting for SSH to be available...
	I0806 08:33:53.362858   79219 main.go:141] libmachine: (embed-certs-324808) DBG | skip adding static IP to network mk-embed-certs-324808 - found existing host DHCP lease matching {name: "embed-certs-324808", mac: "52:54:00:b9:df:8c", ip: "192.168.39.190"}
	I0806 08:33:53.362871   79219 main.go:141] libmachine: (embed-certs-324808) DBG | Getting to WaitForSSH function...
	I0806 08:33:53.364761   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:53.365170   79219 main.go:141] libmachine: (embed-certs-324808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:df:8c", ip: ""} in network mk-embed-certs-324808: {Iface:virbr2 ExpiryTime:2024-08-06 09:33:45 +0000 UTC Type:0 Mac:52:54:00:b9:df:8c Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-324808 Clientid:01:52:54:00:b9:df:8c}
	I0806 08:33:53.365204   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined IP address 192.168.39.190 and MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:53.365301   79219 main.go:141] libmachine: (embed-certs-324808) DBG | Using SSH client type: external
	I0806 08:33:53.365333   79219 main.go:141] libmachine: (embed-certs-324808) DBG | Using SSH private key: /home/jenkins/minikube-integration/19370-13057/.minikube/machines/embed-certs-324808/id_rsa (-rw-------)
	I0806 08:33:53.365366   79219 main.go:141] libmachine: (embed-certs-324808) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.190 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19370-13057/.minikube/machines/embed-certs-324808/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0806 08:33:53.365378   79219 main.go:141] libmachine: (embed-certs-324808) DBG | About to run SSH command:
	I0806 08:33:53.365386   79219 main.go:141] libmachine: (embed-certs-324808) DBG | exit 0
	I0806 08:33:53.492385   79219 main.go:141] libmachine: (embed-certs-324808) DBG | SSH cmd err, output: <nil>: 
	I0806 08:33:53.492739   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetConfigRaw
	I0806 08:33:53.493342   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetIP
	I0806 08:33:53.495371   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:53.495663   79219 main.go:141] libmachine: (embed-certs-324808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:df:8c", ip: ""} in network mk-embed-certs-324808: {Iface:virbr2 ExpiryTime:2024-08-06 09:33:45 +0000 UTC Type:0 Mac:52:54:00:b9:df:8c Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-324808 Clientid:01:52:54:00:b9:df:8c}
	I0806 08:33:53.495693   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined IP address 192.168.39.190 and MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:53.495982   79219 profile.go:143] Saving config to /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/embed-certs-324808/config.json ...
	I0806 08:33:53.496215   79219 machine.go:94] provisionDockerMachine start ...
	I0806 08:33:53.496238   79219 main.go:141] libmachine: (embed-certs-324808) Calling .DriverName
	I0806 08:33:53.496460   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHHostname
	I0806 08:33:53.498625   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:53.498924   79219 main.go:141] libmachine: (embed-certs-324808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:df:8c", ip: ""} in network mk-embed-certs-324808: {Iface:virbr2 ExpiryTime:2024-08-06 09:33:45 +0000 UTC Type:0 Mac:52:54:00:b9:df:8c Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-324808 Clientid:01:52:54:00:b9:df:8c}
	I0806 08:33:53.498945   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined IP address 192.168.39.190 and MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:53.499096   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHPort
	I0806 08:33:53.499265   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHKeyPath
	I0806 08:33:53.499394   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHKeyPath
	I0806 08:33:53.499593   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHUsername
	I0806 08:33:53.499815   79219 main.go:141] libmachine: Using SSH client type: native
	I0806 08:33:53.500050   79219 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.190 22 <nil> <nil>}
	I0806 08:33:53.500063   79219 main.go:141] libmachine: About to run SSH command:
	hostname
	I0806 08:33:53.612698   79219 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0806 08:33:53.612725   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetMachineName
	I0806 08:33:53.612979   79219 buildroot.go:166] provisioning hostname "embed-certs-324808"
	I0806 08:33:53.613002   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetMachineName
	I0806 08:33:53.613164   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHHostname
	I0806 08:33:53.615731   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:53.616123   79219 main.go:141] libmachine: (embed-certs-324808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:df:8c", ip: ""} in network mk-embed-certs-324808: {Iface:virbr2 ExpiryTime:2024-08-06 09:33:45 +0000 UTC Type:0 Mac:52:54:00:b9:df:8c Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-324808 Clientid:01:52:54:00:b9:df:8c}
	I0806 08:33:53.616157   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined IP address 192.168.39.190 and MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:53.616316   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHPort
	I0806 08:33:53.616530   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHKeyPath
	I0806 08:33:53.616676   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHKeyPath
	I0806 08:33:53.616810   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHUsername
	I0806 08:33:53.616961   79219 main.go:141] libmachine: Using SSH client type: native
	I0806 08:33:53.617124   79219 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.190 22 <nil> <nil>}
	I0806 08:33:53.617138   79219 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-324808 && echo "embed-certs-324808" | sudo tee /etc/hostname
	I0806 08:33:53.742677   79219 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-324808
	
	I0806 08:33:53.742722   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHHostname
	I0806 08:33:53.745670   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:53.745984   79219 main.go:141] libmachine: (embed-certs-324808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:df:8c", ip: ""} in network mk-embed-certs-324808: {Iface:virbr2 ExpiryTime:2024-08-06 09:33:45 +0000 UTC Type:0 Mac:52:54:00:b9:df:8c Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-324808 Clientid:01:52:54:00:b9:df:8c}
	I0806 08:33:53.746015   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined IP address 192.168.39.190 and MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:53.746195   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHPort
	I0806 08:33:53.746380   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHKeyPath
	I0806 08:33:53.746545   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHKeyPath
	I0806 08:33:53.746676   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHUsername
	I0806 08:33:53.746867   79219 main.go:141] libmachine: Using SSH client type: native
	I0806 08:33:53.747033   79219 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.190 22 <nil> <nil>}
	I0806 08:33:53.747048   79219 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-324808' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-324808/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-324808' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0806 08:33:53.869596   79219 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0806 08:33:53.869629   79219 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19370-13057/.minikube CaCertPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19370-13057/.minikube}
	I0806 08:33:53.869652   79219 buildroot.go:174] setting up certificates
	I0806 08:33:53.869663   79219 provision.go:84] configureAuth start
	I0806 08:33:53.869680   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetMachineName
	I0806 08:33:53.869953   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetIP
	I0806 08:33:53.872701   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:53.873071   79219 main.go:141] libmachine: (embed-certs-324808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:df:8c", ip: ""} in network mk-embed-certs-324808: {Iface:virbr2 ExpiryTime:2024-08-06 09:33:45 +0000 UTC Type:0 Mac:52:54:00:b9:df:8c Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-324808 Clientid:01:52:54:00:b9:df:8c}
	I0806 08:33:53.873105   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined IP address 192.168.39.190 and MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:53.873246   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHHostname
	I0806 08:33:53.875521   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:53.875928   79219 main.go:141] libmachine: (embed-certs-324808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:df:8c", ip: ""} in network mk-embed-certs-324808: {Iface:virbr2 ExpiryTime:2024-08-06 09:33:45 +0000 UTC Type:0 Mac:52:54:00:b9:df:8c Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-324808 Clientid:01:52:54:00:b9:df:8c}
	I0806 08:33:53.875959   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined IP address 192.168.39.190 and MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:53.876009   79219 provision.go:143] copyHostCerts
	I0806 08:33:53.876077   79219 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-13057/.minikube/cert.pem, removing ...
	I0806 08:33:53.876089   79219 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-13057/.minikube/cert.pem
	I0806 08:33:53.876170   79219 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19370-13057/.minikube/cert.pem (1123 bytes)
	I0806 08:33:53.876287   79219 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-13057/.minikube/key.pem, removing ...
	I0806 08:33:53.876297   79219 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-13057/.minikube/key.pem
	I0806 08:33:53.876337   79219 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19370-13057/.minikube/key.pem (1675 bytes)
	I0806 08:33:53.876450   79219 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-13057/.minikube/ca.pem, removing ...
	I0806 08:33:53.876460   79219 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-13057/.minikube/ca.pem
	I0806 08:33:53.876499   79219 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19370-13057/.minikube/ca.pem (1078 bytes)
	I0806 08:33:53.876578   79219 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19370-13057/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca-key.pem org=jenkins.embed-certs-324808 san=[127.0.0.1 192.168.39.190 embed-certs-324808 localhost minikube]
	I0806 08:33:53.961275   79219 provision.go:177] copyRemoteCerts
	I0806 08:33:53.961344   79219 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0806 08:33:53.961375   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHHostname
	I0806 08:33:53.964004   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:53.964356   79219 main.go:141] libmachine: (embed-certs-324808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:df:8c", ip: ""} in network mk-embed-certs-324808: {Iface:virbr2 ExpiryTime:2024-08-06 09:33:45 +0000 UTC Type:0 Mac:52:54:00:b9:df:8c Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-324808 Clientid:01:52:54:00:b9:df:8c}
	I0806 08:33:53.964404   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined IP address 192.168.39.190 and MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:53.964596   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHPort
	I0806 08:33:53.964829   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHKeyPath
	I0806 08:33:53.964983   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHUsername
	I0806 08:33:53.965115   79219 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/embed-certs-324808/id_rsa Username:docker}
	I0806 08:33:54.050335   79219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0806 08:33:54.074977   79219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0806 08:33:54.100201   79219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0806 08:33:54.123356   79219 provision.go:87] duration metric: took 253.6775ms to configureAuth
	I0806 08:33:54.123386   79219 buildroot.go:189] setting minikube options for container-runtime
	I0806 08:33:54.123556   79219 config.go:182] Loaded profile config "embed-certs-324808": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0806 08:33:54.123634   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHHostname
	I0806 08:33:54.126279   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:54.126721   79219 main.go:141] libmachine: (embed-certs-324808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:df:8c", ip: ""} in network mk-embed-certs-324808: {Iface:virbr2 ExpiryTime:2024-08-06 09:33:45 +0000 UTC Type:0 Mac:52:54:00:b9:df:8c Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-324808 Clientid:01:52:54:00:b9:df:8c}
	I0806 08:33:54.126746   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined IP address 192.168.39.190 and MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:54.126941   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHPort
	I0806 08:33:54.127143   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHKeyPath
	I0806 08:33:54.127309   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHKeyPath
	I0806 08:33:54.127444   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHUsername
	I0806 08:33:54.127611   79219 main.go:141] libmachine: Using SSH client type: native
	I0806 08:33:54.127785   79219 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.190 22 <nil> <nil>}
	I0806 08:33:54.127800   79219 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0806 08:33:54.653395   79614 start.go:364] duration metric: took 3m35.719137455s to acquireMachinesLock for "default-k8s-diff-port-990751"
	I0806 08:33:54.653444   79614 start.go:96] Skipping create...Using existing machine configuration
	I0806 08:33:54.653452   79614 fix.go:54] fixHost starting: 
	I0806 08:33:54.653812   79614 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:33:54.653844   79614 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:33:54.673686   79614 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35867
	I0806 08:33:54.674123   79614 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:33:54.674581   79614 main.go:141] libmachine: Using API Version  1
	I0806 08:33:54.674604   79614 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:33:54.674950   79614 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:33:54.675108   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .DriverName
	I0806 08:33:54.675246   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetState
	I0806 08:33:54.676908   79614 fix.go:112] recreateIfNeeded on default-k8s-diff-port-990751: state=Stopped err=<nil>
	I0806 08:33:54.676939   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .DriverName
	W0806 08:33:54.677097   79614 fix.go:138] unexpected machine state, will restart: <nil>
	I0806 08:33:54.678912   79614 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-990751" ...
	I0806 08:33:54.403758   79219 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0806 08:33:54.403785   79219 machine.go:97] duration metric: took 907.554493ms to provisionDockerMachine
	I0806 08:33:54.403799   79219 start.go:293] postStartSetup for "embed-certs-324808" (driver="kvm2")
	I0806 08:33:54.403814   79219 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0806 08:33:54.403854   79219 main.go:141] libmachine: (embed-certs-324808) Calling .DriverName
	I0806 08:33:54.404139   79219 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0806 08:33:54.404168   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHHostname
	I0806 08:33:54.406768   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:54.407114   79219 main.go:141] libmachine: (embed-certs-324808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:df:8c", ip: ""} in network mk-embed-certs-324808: {Iface:virbr2 ExpiryTime:2024-08-06 09:33:45 +0000 UTC Type:0 Mac:52:54:00:b9:df:8c Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-324808 Clientid:01:52:54:00:b9:df:8c}
	I0806 08:33:54.407142   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined IP address 192.168.39.190 and MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:54.407328   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHPort
	I0806 08:33:54.407510   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHKeyPath
	I0806 08:33:54.407647   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHUsername
	I0806 08:33:54.407765   79219 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/embed-certs-324808/id_rsa Username:docker}
	I0806 08:33:54.495482   79219 ssh_runner.go:195] Run: cat /etc/os-release
	I0806 08:33:54.499691   79219 info.go:137] Remote host: Buildroot 2023.02.9
	I0806 08:33:54.499725   79219 filesync.go:126] Scanning /home/jenkins/minikube-integration/19370-13057/.minikube/addons for local assets ...
	I0806 08:33:54.499915   79219 filesync.go:126] Scanning /home/jenkins/minikube-integration/19370-13057/.minikube/files for local assets ...
	I0806 08:33:54.500104   79219 filesync.go:149] local asset: /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem -> 202462.pem in /etc/ssl/certs
	I0806 08:33:54.500243   79219 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0806 08:33:54.509690   79219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem --> /etc/ssl/certs/202462.pem (1708 bytes)
	I0806 08:33:54.534233   79219 start.go:296] duration metric: took 130.420444ms for postStartSetup
	I0806 08:33:54.534282   79219 fix.go:56] duration metric: took 20.324719602s for fixHost
	I0806 08:33:54.534307   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHHostname
	I0806 08:33:54.536871   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:54.537202   79219 main.go:141] libmachine: (embed-certs-324808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:df:8c", ip: ""} in network mk-embed-certs-324808: {Iface:virbr2 ExpiryTime:2024-08-06 09:33:45 +0000 UTC Type:0 Mac:52:54:00:b9:df:8c Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-324808 Clientid:01:52:54:00:b9:df:8c}
	I0806 08:33:54.537234   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined IP address 192.168.39.190 and MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:54.537393   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHPort
	I0806 08:33:54.537562   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHKeyPath
	I0806 08:33:54.537746   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHKeyPath
	I0806 08:33:54.537911   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHUsername
	I0806 08:33:54.538105   79219 main.go:141] libmachine: Using SSH client type: native
	I0806 08:33:54.538285   79219 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.190 22 <nil> <nil>}
	I0806 08:33:54.538295   79219 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0806 08:33:54.653264   79219 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722933234.630593565
	
	I0806 08:33:54.653288   79219 fix.go:216] guest clock: 1722933234.630593565
	I0806 08:33:54.653298   79219 fix.go:229] Guest: 2024-08-06 08:33:54.630593565 +0000 UTC Remote: 2024-08-06 08:33:54.534287484 +0000 UTC m=+280.251105524 (delta=96.306081ms)
	I0806 08:33:54.653327   79219 fix.go:200] guest clock delta is within tolerance: 96.306081ms
	I0806 08:33:54.653337   79219 start.go:83] releasing machines lock for "embed-certs-324808", held for 20.443802529s
	I0806 08:33:54.653361   79219 main.go:141] libmachine: (embed-certs-324808) Calling .DriverName
	I0806 08:33:54.653640   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetIP
	I0806 08:33:54.656332   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:54.656741   79219 main.go:141] libmachine: (embed-certs-324808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:df:8c", ip: ""} in network mk-embed-certs-324808: {Iface:virbr2 ExpiryTime:2024-08-06 09:33:45 +0000 UTC Type:0 Mac:52:54:00:b9:df:8c Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-324808 Clientid:01:52:54:00:b9:df:8c}
	I0806 08:33:54.656766   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined IP address 192.168.39.190 and MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:54.656933   79219 main.go:141] libmachine: (embed-certs-324808) Calling .DriverName
	I0806 08:33:54.657443   79219 main.go:141] libmachine: (embed-certs-324808) Calling .DriverName
	I0806 08:33:54.657623   79219 main.go:141] libmachine: (embed-certs-324808) Calling .DriverName
	I0806 08:33:54.657705   79219 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0806 08:33:54.657748   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHHostname
	I0806 08:33:54.657857   79219 ssh_runner.go:195] Run: cat /version.json
	I0806 08:33:54.657877   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHHostname
	I0806 08:33:54.660302   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:54.660482   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:54.660694   79219 main.go:141] libmachine: (embed-certs-324808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:df:8c", ip: ""} in network mk-embed-certs-324808: {Iface:virbr2 ExpiryTime:2024-08-06 09:33:45 +0000 UTC Type:0 Mac:52:54:00:b9:df:8c Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-324808 Clientid:01:52:54:00:b9:df:8c}
	I0806 08:33:54.660729   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined IP address 192.168.39.190 and MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:54.660823   79219 main.go:141] libmachine: (embed-certs-324808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:df:8c", ip: ""} in network mk-embed-certs-324808: {Iface:virbr2 ExpiryTime:2024-08-06 09:33:45 +0000 UTC Type:0 Mac:52:54:00:b9:df:8c Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-324808 Clientid:01:52:54:00:b9:df:8c}
	I0806 08:33:54.660831   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHPort
	I0806 08:33:54.660844   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined IP address 192.168.39.190 and MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:54.661011   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHPort
	I0806 08:33:54.661094   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHKeyPath
	I0806 08:33:54.661165   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHKeyPath
	I0806 08:33:54.661223   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHUsername
	I0806 08:33:54.661280   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHUsername
	I0806 08:33:54.661338   79219 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/embed-certs-324808/id_rsa Username:docker}
	I0806 08:33:54.661386   79219 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/embed-certs-324808/id_rsa Username:docker}
	I0806 08:33:54.770935   79219 ssh_runner.go:195] Run: systemctl --version
	I0806 08:33:54.777166   79219 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0806 08:33:54.920252   79219 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0806 08:33:54.927740   79219 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0806 08:33:54.927829   79219 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0806 08:33:54.949691   79219 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0806 08:33:54.949720   79219 start.go:495] detecting cgroup driver to use...
	I0806 08:33:54.949794   79219 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0806 08:33:54.967575   79219 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0806 08:33:54.982019   79219 docker.go:217] disabling cri-docker service (if available) ...
	I0806 08:33:54.982094   79219 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0806 08:33:55.001629   79219 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0806 08:33:55.018332   79219 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0806 08:33:55.139207   79219 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0806 08:33:55.302225   79219 docker.go:233] disabling docker service ...
	I0806 08:33:55.302300   79219 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0806 08:33:55.321470   79219 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0806 08:33:55.334925   79219 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0806 08:33:55.456388   79219 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0806 08:33:55.579725   79219 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0806 08:33:55.594362   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0806 08:33:55.613084   79219 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0806 08:33:55.613165   79219 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:33:55.623414   79219 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0806 08:33:55.623487   79219 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:33:55.633837   79219 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:33:55.645893   79219 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:33:55.658623   79219 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0806 08:33:55.670016   79219 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:33:55.681227   79219 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:33:55.700047   79219 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:33:55.711409   79219 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0806 08:33:55.721076   79219 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0806 08:33:55.721138   79219 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0806 08:33:55.734470   79219 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0806 08:33:55.750919   79219 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 08:33:55.880306   79219 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0806 08:33:56.031323   79219 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0806 08:33:56.031401   79219 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0806 08:33:56.036456   79219 start.go:563] Will wait 60s for crictl version
	I0806 08:33:56.036522   79219 ssh_runner.go:195] Run: which crictl
	I0806 08:33:56.040542   79219 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0806 08:33:56.081166   79219 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0806 08:33:56.081258   79219 ssh_runner.go:195] Run: crio --version
	I0806 08:33:56.109963   79219 ssh_runner.go:195] Run: crio --version
	I0806 08:33:56.141118   79219 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0806 08:33:54.680267   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .Start
	I0806 08:33:54.680448   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Ensuring networks are active...
	I0806 08:33:54.681411   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Ensuring network default is active
	I0806 08:33:54.681778   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Ensuring network mk-default-k8s-diff-port-990751 is active
	I0806 08:33:54.682205   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Getting domain xml...
	I0806 08:33:54.683028   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Creating domain...
	I0806 08:33:55.964313   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Waiting to get IP...
	I0806 08:33:55.965483   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:33:55.965965   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | unable to find current IP address of domain default-k8s-diff-port-990751 in network mk-default-k8s-diff-port-990751
	I0806 08:33:55.966053   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | I0806 08:33:55.965953   80608 retry.go:31] will retry after 294.177981ms: waiting for machine to come up
	I0806 08:33:56.261404   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:33:56.262032   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | unable to find current IP address of domain default-k8s-diff-port-990751 in network mk-default-k8s-diff-port-990751
	I0806 08:33:56.262059   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | I0806 08:33:56.261969   80608 retry.go:31] will retry after 375.14393ms: waiting for machine to come up
	I0806 08:33:56.638714   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:33:56.639150   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | unable to find current IP address of domain default-k8s-diff-port-990751 in network mk-default-k8s-diff-port-990751
	I0806 08:33:56.639178   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | I0806 08:33:56.639118   80608 retry.go:31] will retry after 439.582122ms: waiting for machine to come up
	I0806 08:33:57.080785   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:33:57.081335   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | unable to find current IP address of domain default-k8s-diff-port-990751 in network mk-default-k8s-diff-port-990751
	I0806 08:33:57.081373   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | I0806 08:33:57.081299   80608 retry.go:31] will retry after 523.875054ms: waiting for machine to come up
	I0806 08:33:57.606731   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:33:57.607220   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | unable to find current IP address of domain default-k8s-diff-port-990751 in network mk-default-k8s-diff-port-990751
	I0806 08:33:57.607250   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | I0806 08:33:57.607178   80608 retry.go:31] will retry after 727.363483ms: waiting for machine to come up
	I0806 08:33:58.336352   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:33:58.336852   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | unable to find current IP address of domain default-k8s-diff-port-990751 in network mk-default-k8s-diff-port-990751
	I0806 08:33:58.336885   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | I0806 08:33:58.336782   80608 retry.go:31] will retry after 662.304027ms: waiting for machine to come up
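
The wait loop above polls libvirt for the domain's DHCP lease, retrying with a growing delay each time the IP is still missing. A rough sketch of such a retry helper, assuming a simple ~1.5x backoff (retryWithBackoff is an illustrative name, not minikube's retry package):

// retryWithBackoff retries fn with a growing delay, in the spirit of
// the "will retry after ..." lines above. Illustrative helper only.
package main

import (
	"errors"
	"fmt"
	"time"
)

func retryWithBackoff(attempts int, initial time.Duration, fn func() error) error {
	delay := initial
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		time.Sleep(delay)
		delay = delay * 3 / 2 // grow ~1.5x per attempt
	}
	return err
}

func main() {
	err := retryWithBackoff(5, 300*time.Millisecond, func() error {
		return errors.New("unable to find current IP address") // placeholder check
	})
	fmt.Println(err)
}
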
	I0806 08:33:56.142523   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetIP
	I0806 08:33:56.145641   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:56.145997   79219 main.go:141] libmachine: (embed-certs-324808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:df:8c", ip: ""} in network mk-embed-certs-324808: {Iface:virbr2 ExpiryTime:2024-08-06 09:33:45 +0000 UTC Type:0 Mac:52:54:00:b9:df:8c Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-324808 Clientid:01:52:54:00:b9:df:8c}
	I0806 08:33:56.146023   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined IP address 192.168.39.190 and MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:56.146239   79219 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0806 08:33:56.150763   79219 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0806 08:33:56.165617   79219 kubeadm.go:883] updating cluster {Name:embed-certs-324808 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.3 ClusterName:embed-certs-324808 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.190 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0806 08:33:56.165775   79219 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0806 08:33:56.165837   79219 ssh_runner.go:195] Run: sudo crictl images --output json
	I0806 08:33:56.205302   79219 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0806 08:33:56.205393   79219 ssh_runner.go:195] Run: which lz4
	I0806 08:33:56.209791   79219 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0806 08:33:56.214338   79219 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0806 08:33:56.214369   79219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0806 08:33:57.735973   79219 crio.go:462] duration metric: took 1.526221921s to copy over tarball
	I0806 08:33:57.736049   79219 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0806 08:33:59.000478   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:33:59.000905   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | unable to find current IP address of domain default-k8s-diff-port-990751 in network mk-default-k8s-diff-port-990751
	I0806 08:33:59.000971   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | I0806 08:33:59.000829   80608 retry.go:31] will retry after 1.110230284s: waiting for machine to come up
	I0806 08:34:00.112658   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:00.113097   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | unable to find current IP address of domain default-k8s-diff-port-990751 in network mk-default-k8s-diff-port-990751
	I0806 08:34:00.113124   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | I0806 08:34:00.113053   80608 retry.go:31] will retry after 1.081213279s: waiting for machine to come up
	I0806 08:34:01.195646   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:01.196062   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | unable to find current IP address of domain default-k8s-diff-port-990751 in network mk-default-k8s-diff-port-990751
	I0806 08:34:01.196097   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | I0806 08:34:01.196014   80608 retry.go:31] will retry after 1.334553019s: waiting for machine to come up
	I0806 08:34:02.532009   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:02.532492   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | unable to find current IP address of domain default-k8s-diff-port-990751 in network mk-default-k8s-diff-port-990751
	I0806 08:34:02.532520   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | I0806 08:34:02.532449   80608 retry.go:31] will retry after 1.42743583s: waiting for machine to come up
	I0806 08:33:59.991340   79219 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.255258522s)
	I0806 08:33:59.991368   79219 crio.go:469] duration metric: took 2.255367027s to extract the tarball
	I0806 08:33:59.991378   79219 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0806 08:34:00.030261   79219 ssh_runner.go:195] Run: sudo crictl images --output json
	I0806 08:34:00.075217   79219 crio.go:514] all images are preloaded for cri-o runtime.
	I0806 08:34:00.075243   79219 cache_images.go:84] Images are preloaded, skipping loading
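
The preload path above first stats /preloaded.tar.lz4, copies the cached image tarball over, and unpacks it with tar's lz4 filter into /var before re-checking crictl images. A sketch of the extraction step using the tar flags from the log (extractPreload is an illustrative helper; minikube runs this command through its ssh_runner over SSH):

// extractPreload runs the tar invocation shown in the log,
// unpacking an lz4-compressed image preload into /var. Sketch only.
package main

import (
	"log"
	"os/exec"
)

func extractPreload(tarball string) error {
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	out, err := cmd.CombinedOutput()
	if err != nil {
		log.Printf("tar output: %s", out)
	}
	return err
}

func main() {
	if err := extractPreload("/preloaded.tar.lz4"); err != nil {
		log.Fatal(err)
	}
}
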
	I0806 08:34:00.075253   79219 kubeadm.go:934] updating node { 192.168.39.190 8443 v1.30.3 crio true true} ...
	I0806 08:34:00.075396   79219 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-324808 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.190
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:embed-certs-324808 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0806 08:34:00.075477   79219 ssh_runner.go:195] Run: crio config
	I0806 08:34:00.124134   79219 cni.go:84] Creating CNI manager for ""
	I0806 08:34:00.124160   79219 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0806 08:34:00.124174   79219 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0806 08:34:00.124195   79219 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.190 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-324808 NodeName:embed-certs-324808 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.190"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.190 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0806 08:34:00.124348   79219 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.190
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-324808"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.190
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.190"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
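
The generated kubeadm config above parameterizes only a few fields per profile, chiefly the node name, node IP, and API server SANs. A small, assumed text/template sketch showing how the nodeRegistration fragment could be rendered with those values (this is not the template minikube actually ships):

// renderNodeRegistration fills the node name and IP into the
// InitConfiguration fragment shown above. Illustrative template only.
package main

import (
	"os"
	"text/template"
)

const nodeReg = `nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.Name}}"
  kubeletExtraArgs:
    node-ip: {{.IP}}
  taints: []
`

func main() {
	t := template.Must(template.New("nodeReg").Parse(nodeReg))
	_ = t.Execute(os.Stdout, struct{ Name, IP string }{"embed-certs-324808", "192.168.39.190"})
}
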
	I0806 08:34:00.124446   79219 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0806 08:34:00.134782   79219 binaries.go:44] Found k8s binaries, skipping transfer
	I0806 08:34:00.134874   79219 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0806 08:34:00.144892   79219 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0806 08:34:00.164614   79219 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0806 08:34:00.183911   79219 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0806 08:34:00.203821   79219 ssh_runner.go:195] Run: grep 192.168.39.190	control-plane.minikube.internal$ /etc/hosts
	I0806 08:34:00.207980   79219 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.190	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0806 08:34:00.220566   79219 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 08:34:00.339428   79219 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0806 08:34:00.357441   79219 certs.go:68] Setting up /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/embed-certs-324808 for IP: 192.168.39.190
	I0806 08:34:00.357470   79219 certs.go:194] generating shared ca certs ...
	I0806 08:34:00.357486   79219 certs.go:226] acquiring lock for ca certs: {Name:mkf5c5aed91f4108054d9f2c742cad66c86afbed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 08:34:00.357648   79219 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19370-13057/.minikube/ca.key
	I0806 08:34:00.357694   79219 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.key
	I0806 08:34:00.357702   79219 certs.go:256] generating profile certs ...
	I0806 08:34:00.357788   79219 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/embed-certs-324808/client.key
	I0806 08:34:00.357870   79219 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/embed-certs-324808/apiserver.key.078f53b0
	I0806 08:34:00.357907   79219 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/embed-certs-324808/proxy-client.key
	I0806 08:34:00.358033   79219 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/20246.pem (1338 bytes)
	W0806 08:34:00.358070   79219 certs.go:480] ignoring /home/jenkins/minikube-integration/19370-13057/.minikube/certs/20246_empty.pem, impossibly tiny 0 bytes
	I0806 08:34:00.358085   79219 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca-key.pem (1675 bytes)
	I0806 08:34:00.358122   79219 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem (1078 bytes)
	I0806 08:34:00.358151   79219 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/cert.pem (1123 bytes)
	I0806 08:34:00.358174   79219 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/key.pem (1675 bytes)
	I0806 08:34:00.358231   79219 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem (1708 bytes)
	I0806 08:34:00.359196   79219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0806 08:34:00.397544   79219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0806 08:34:00.436824   79219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0806 08:34:00.478633   79219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0806 08:34:00.525641   79219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/embed-certs-324808/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0806 08:34:00.571003   79219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/embed-certs-324808/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0806 08:34:00.596423   79219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/embed-certs-324808/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0806 08:34:00.621129   79219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/embed-certs-324808/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0806 08:34:00.646297   79219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem --> /usr/share/ca-certificates/202462.pem (1708 bytes)
	I0806 08:34:00.672396   79219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0806 08:34:00.698468   79219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/certs/20246.pem --> /usr/share/ca-certificates/20246.pem (1338 bytes)
	I0806 08:34:00.724115   79219 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0806 08:34:00.742200   79219 ssh_runner.go:195] Run: openssl version
	I0806 08:34:00.748418   79219 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202462.pem && ln -fs /usr/share/ca-certificates/202462.pem /etc/ssl/certs/202462.pem"
	I0806 08:34:00.760234   79219 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202462.pem
	I0806 08:34:00.764947   79219 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  6 07:17 /usr/share/ca-certificates/202462.pem
	I0806 08:34:00.765013   79219 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202462.pem
	I0806 08:34:00.771065   79219 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202462.pem /etc/ssl/certs/3ec20f2e.0"
	I0806 08:34:00.782995   79219 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0806 08:34:00.795092   79219 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0806 08:34:00.800011   79219 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  6 07:07 /usr/share/ca-certificates/minikubeCA.pem
	I0806 08:34:00.800079   79219 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0806 08:34:00.806038   79219 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0806 08:34:00.817660   79219 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20246.pem && ln -fs /usr/share/ca-certificates/20246.pem /etc/ssl/certs/20246.pem"
	I0806 08:34:00.829045   79219 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20246.pem
	I0806 08:34:00.833640   79219 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  6 07:17 /usr/share/ca-certificates/20246.pem
	I0806 08:34:00.833704   79219 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20246.pem
	I0806 08:34:00.839618   79219 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20246.pem /etc/ssl/certs/51391683.0"
	I0806 08:34:00.850938   79219 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0806 08:34:00.855577   79219 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0806 08:34:00.861960   79219 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0806 08:34:00.868626   79219 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0806 08:34:00.875291   79219 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0806 08:34:00.881553   79219 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0806 08:34:00.887650   79219 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
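
The openssl -checkend 86400 runs above verify that each control-plane certificate remains valid for at least another day before being reused. An equivalent check in Go with crypto/x509 (the path and the expiresWithin helper name are illustrative):

// expiresWithin reports whether a PEM certificate expires within d,
// the same check `openssl x509 -noout -checkend 86400` performs above.
// Sketch only.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}
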
	I0806 08:34:00.894134   79219 kubeadm.go:392] StartCluster: {Name:embed-certs-324808 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.3 ClusterName:embed-certs-324808 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.190 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 08:34:00.894223   79219 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0806 08:34:00.894272   79219 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0806 08:34:00.938980   79219 cri.go:89] found id: ""
	I0806 08:34:00.939046   79219 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0806 08:34:00.951339   79219 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0806 08:34:00.951369   79219 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0806 08:34:00.951430   79219 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0806 08:34:00.962789   79219 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0806 08:34:00.964165   79219 kubeconfig.go:125] found "embed-certs-324808" server: "https://192.168.39.190:8443"
	I0806 08:34:00.967221   79219 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0806 08:34:00.977500   79219 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.190
	I0806 08:34:00.977535   79219 kubeadm.go:1160] stopping kube-system containers ...
	I0806 08:34:00.977547   79219 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0806 08:34:00.977603   79219 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0806 08:34:01.017218   79219 cri.go:89] found id: ""
	I0806 08:34:01.017315   79219 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0806 08:34:01.035305   79219 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0806 08:34:01.046210   79219 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0806 08:34:01.046233   79219 kubeadm.go:157] found existing configuration files:
	
	I0806 08:34:01.046290   79219 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0806 08:34:01.056904   79219 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0806 08:34:01.056974   79219 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0806 08:34:01.066640   79219 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0806 08:34:01.076042   79219 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0806 08:34:01.076114   79219 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0806 08:34:01.085735   79219 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0806 08:34:01.094602   79219 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0806 08:34:01.094661   79219 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0806 08:34:01.104160   79219 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0806 08:34:01.113176   79219 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0806 08:34:01.113249   79219 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0806 08:34:01.122679   79219 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0806 08:34:01.133032   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0806 08:34:01.259256   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0806 08:34:02.252011   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0806 08:34:02.481629   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0806 08:34:02.570215   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0806 08:34:02.710597   79219 api_server.go:52] waiting for apiserver process to appear ...
	I0806 08:34:02.710700   79219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:03.210803   79219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:03.711111   79219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:03.738971   79219 api_server.go:72] duration metric: took 1.028375323s to wait for apiserver process to appear ...
	I0806 08:34:03.739003   79219 api_server.go:88] waiting for apiserver healthz status ...
	I0806 08:34:03.739025   79219 api_server.go:253] Checking apiserver healthz at https://192.168.39.190:8443/healthz ...
	I0806 08:34:03.962232   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:03.962678   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | unable to find current IP address of domain default-k8s-diff-port-990751 in network mk-default-k8s-diff-port-990751
	I0806 08:34:03.962709   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | I0806 08:34:03.962638   80608 retry.go:31] will retry after 2.113116057s: waiting for machine to come up
	I0806 08:34:06.078010   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:06.078537   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | unable to find current IP address of domain default-k8s-diff-port-990751 in network mk-default-k8s-diff-port-990751
	I0806 08:34:06.078570   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | I0806 08:34:06.078503   80608 retry.go:31] will retry after 2.580076274s: waiting for machine to come up
	I0806 08:34:08.662313   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:08.662651   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | unable to find current IP address of domain default-k8s-diff-port-990751 in network mk-default-k8s-diff-port-990751
	I0806 08:34:08.662673   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | I0806 08:34:08.662609   80608 retry.go:31] will retry after 4.365982677s: waiting for machine to come up
	I0806 08:34:06.791233   79219 api_server.go:279] https://192.168.39.190:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0806 08:34:06.791272   79219 api_server.go:103] status: https://192.168.39.190:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0806 08:34:06.791311   79219 api_server.go:253] Checking apiserver healthz at https://192.168.39.190:8443/healthz ...
	I0806 08:34:06.804038   79219 api_server.go:279] https://192.168.39.190:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0806 08:34:06.804074   79219 api_server.go:103] status: https://192.168.39.190:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0806 08:34:07.239192   79219 api_server.go:253] Checking apiserver healthz at https://192.168.39.190:8443/healthz ...
	I0806 08:34:07.244973   79219 api_server.go:279] https://192.168.39.190:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0806 08:34:07.245005   79219 api_server.go:103] status: https://192.168.39.190:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0806 08:34:07.739556   79219 api_server.go:253] Checking apiserver healthz at https://192.168.39.190:8443/healthz ...
	I0806 08:34:07.754354   79219 api_server.go:279] https://192.168.39.190:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0806 08:34:07.754382   79219 api_server.go:103] status: https://192.168.39.190:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0806 08:34:08.240114   79219 api_server.go:253] Checking apiserver healthz at https://192.168.39.190:8443/healthz ...
	I0806 08:34:08.244799   79219 api_server.go:279] https://192.168.39.190:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0806 08:34:08.244829   79219 api_server.go:103] status: https://192.168.39.190:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0806 08:34:08.739407   79219 api_server.go:253] Checking apiserver healthz at https://192.168.39.190:8443/healthz ...
	I0806 08:34:08.743770   79219 api_server.go:279] https://192.168.39.190:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0806 08:34:08.743804   79219 api_server.go:103] status: https://192.168.39.190:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0806 08:34:09.239855   79219 api_server.go:253] Checking apiserver healthz at https://192.168.39.190:8443/healthz ...
	I0806 08:34:09.244078   79219 api_server.go:279] https://192.168.39.190:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0806 08:34:09.244135   79219 api_server.go:103] status: https://192.168.39.190:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0806 08:34:09.739859   79219 api_server.go:253] Checking apiserver healthz at https://192.168.39.190:8443/healthz ...
	I0806 08:34:09.744236   79219 api_server.go:279] https://192.168.39.190:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0806 08:34:09.744272   79219 api_server.go:103] status: https://192.168.39.190:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0806 08:34:10.240036   79219 api_server.go:253] Checking apiserver healthz at https://192.168.39.190:8443/healthz ...
	I0806 08:34:10.244217   79219 api_server.go:279] https://192.168.39.190:8443/healthz returned 200:
	ok
	I0806 08:34:10.250333   79219 api_server.go:141] control plane version: v1.30.3
	I0806 08:34:10.250361   79219 api_server.go:131] duration metric: took 6.511350709s to wait for apiserver health ...
	I0806 08:34:10.250370   79219 cni.go:84] Creating CNI manager for ""
	I0806 08:34:10.250376   79219 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0806 08:34:10.252472   79219 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0806 08:34:10.253779   79219 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0806 08:34:10.264772   79219 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0806 08:34:10.282904   79219 system_pods.go:43] waiting for kube-system pods to appear ...
	I0806 08:34:10.292343   79219 system_pods.go:59] 8 kube-system pods found
	I0806 08:34:10.292397   79219 system_pods.go:61] "coredns-7db6d8ff4d-4xxwm" [eeb47dea-32e9-458f-9ad9-2af6d4e68cc0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0806 08:34:10.292408   79219 system_pods.go:61] "etcd-embed-certs-324808" [1e0a82cd-6704-4da3-8992-15c68a7aaf70] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0806 08:34:10.292419   79219 system_pods.go:61] "kube-apiserver-embed-certs-324808" [a33c54d9-1fc5-4243-8fe9-d535c40908c3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0806 08:34:10.292426   79219 system_pods.go:61] "kube-controller-manager-embed-certs-324808" [b1c519b8-9c96-4c87-90f3-cc277cfe7763] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0806 08:34:10.292432   79219 system_pods.go:61] "kube-proxy-8lbzv" [0c45e6c0-9b7f-4c48-bff5-38d4a5949bec] Running
	I0806 08:34:10.292438   79219 system_pods.go:61] "kube-scheduler-embed-certs-324808" [78c2827f-341a-4441-bf7e-eff947fc3ea8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0806 08:34:10.292445   79219 system_pods.go:61] "metrics-server-569cc877fc-8xb9k" [7a5fb326-11a7-48fa-bcde-a22026a1dccb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0806 08:34:10.292452   79219 system_pods.go:61] "storage-provisioner" [15c7d89a-e994-4cca-931c-f646157a15e8] Running
	I0806 08:34:10.292465   79219 system_pods.go:74] duration metric: took 9.536183ms to wait for pod list to return data ...
	I0806 08:34:10.292474   79219 node_conditions.go:102] verifying NodePressure condition ...
	I0806 08:34:10.295584   79219 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0806 08:34:10.295616   79219 node_conditions.go:123] node cpu capacity is 2
	I0806 08:34:10.295632   79219 node_conditions.go:105] duration metric: took 3.152403ms to run NodePressure ...
	I0806 08:34:10.295654   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0806 08:34:10.566724   79219 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0806 08:34:10.571777   79219 kubeadm.go:739] kubelet initialised
	I0806 08:34:10.571801   79219 kubeadm.go:740] duration metric: took 5.042307ms waiting for restarted kubelet to initialise ...
	I0806 08:34:10.571812   79219 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0806 08:34:10.576738   79219 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-4xxwm" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:10.581943   79219 pod_ready.go:97] node "embed-certs-324808" hosting pod "coredns-7db6d8ff4d-4xxwm" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-324808" has status "Ready":"False"
	I0806 08:34:10.581976   79219 pod_ready.go:81] duration metric: took 5.207743ms for pod "coredns-7db6d8ff4d-4xxwm" in "kube-system" namespace to be "Ready" ...
	E0806 08:34:10.581989   79219 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-324808" hosting pod "coredns-7db6d8ff4d-4xxwm" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-324808" has status "Ready":"False"
	I0806 08:34:10.581997   79219 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-324808" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:10.586983   79219 pod_ready.go:97] node "embed-certs-324808" hosting pod "etcd-embed-certs-324808" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-324808" has status "Ready":"False"
	I0806 08:34:10.587007   79219 pod_ready.go:81] duration metric: took 5.000306ms for pod "etcd-embed-certs-324808" in "kube-system" namespace to be "Ready" ...
	E0806 08:34:10.587016   79219 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-324808" hosting pod "etcd-embed-certs-324808" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-324808" has status "Ready":"False"
	I0806 08:34:10.587022   79219 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-324808" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:10.592042   79219 pod_ready.go:97] node "embed-certs-324808" hosting pod "kube-apiserver-embed-certs-324808" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-324808" has status "Ready":"False"
	I0806 08:34:10.592066   79219 pod_ready.go:81] duration metric: took 5.035031ms for pod "kube-apiserver-embed-certs-324808" in "kube-system" namespace to be "Ready" ...
	E0806 08:34:10.592074   79219 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-324808" hosting pod "kube-apiserver-embed-certs-324808" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-324808" has status "Ready":"False"
	I0806 08:34:10.592080   79219 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-324808" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:10.687156   79219 pod_ready.go:97] node "embed-certs-324808" hosting pod "kube-controller-manager-embed-certs-324808" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-324808" has status "Ready":"False"
	I0806 08:34:10.687193   79219 pod_ready.go:81] duration metric: took 95.100916ms for pod "kube-controller-manager-embed-certs-324808" in "kube-system" namespace to be "Ready" ...
	E0806 08:34:10.687207   79219 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-324808" hosting pod "kube-controller-manager-embed-certs-324808" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-324808" has status "Ready":"False"
	I0806 08:34:10.687217   79219 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-8lbzv" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:11.086180   79219 pod_ready.go:97] node "embed-certs-324808" hosting pod "kube-proxy-8lbzv" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-324808" has status "Ready":"False"
	I0806 08:34:11.086213   79219 pod_ready.go:81] duration metric: took 398.984678ms for pod "kube-proxy-8lbzv" in "kube-system" namespace to be "Ready" ...
	E0806 08:34:11.086224   79219 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-324808" hosting pod "kube-proxy-8lbzv" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-324808" has status "Ready":"False"
	I0806 08:34:11.086230   79219 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-324808" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:11.486971   79219 pod_ready.go:97] node "embed-certs-324808" hosting pod "kube-scheduler-embed-certs-324808" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-324808" has status "Ready":"False"
	I0806 08:34:11.486995   79219 pod_ready.go:81] duration metric: took 400.759449ms for pod "kube-scheduler-embed-certs-324808" in "kube-system" namespace to be "Ready" ...
	E0806 08:34:11.487005   79219 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-324808" hosting pod "kube-scheduler-embed-certs-324808" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-324808" has status "Ready":"False"
	I0806 08:34:11.487011   79219 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:11.887030   79219 pod_ready.go:97] node "embed-certs-324808" hosting pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-324808" has status "Ready":"False"
	I0806 08:34:11.887052   79219 pod_ready.go:81] duration metric: took 400.033763ms for pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace to be "Ready" ...
	E0806 08:34:11.887061   79219 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-324808" hosting pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-324808" has status "Ready":"False"
	I0806 08:34:11.887068   79219 pod_ready.go:38] duration metric: took 1.315245982s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0806 08:34:11.887084   79219 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0806 08:34:11.899633   79219 ops.go:34] apiserver oom_adj: -16
	I0806 08:34:11.899649   79219 kubeadm.go:597] duration metric: took 10.948272769s to restartPrimaryControlPlane
	I0806 08:34:11.899657   79219 kubeadm.go:394] duration metric: took 11.005530169s to StartCluster
	I0806 08:34:11.899676   79219 settings.go:142] acquiring lock: {Name:mkb0bfbcae3b4205ef43487a9de84cfd8afd2e5e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 08:34:11.899745   79219 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19370-13057/kubeconfig
	I0806 08:34:11.901341   79219 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-13057/kubeconfig: {Name:mk5c066914acf8e36556b29792f75b98076ca34a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 08:34:11.901603   79219 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.190 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0806 08:34:11.901676   79219 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0806 08:34:11.901753   79219 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-324808"
	I0806 08:34:11.901785   79219 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-324808"
	W0806 08:34:11.901796   79219 addons.go:243] addon storage-provisioner should already be in state true
	I0806 08:34:11.901788   79219 addons.go:69] Setting default-storageclass=true in profile "embed-certs-324808"
	I0806 08:34:11.901834   79219 host.go:66] Checking if "embed-certs-324808" exists ...
	I0806 08:34:11.901851   79219 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-324808"
	I0806 08:34:11.901854   79219 config.go:182] Loaded profile config "embed-certs-324808": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0806 08:34:11.901842   79219 addons.go:69] Setting metrics-server=true in profile "embed-certs-324808"
	I0806 08:34:11.901921   79219 addons.go:234] Setting addon metrics-server=true in "embed-certs-324808"
	W0806 08:34:11.901940   79219 addons.go:243] addon metrics-server should already be in state true
	I0806 08:34:11.901998   79219 host.go:66] Checking if "embed-certs-324808" exists ...
	I0806 08:34:11.902216   79219 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:34:11.902252   79219 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:34:11.902280   79219 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:34:11.902308   79219 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:34:11.902383   79219 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:34:11.902411   79219 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:34:11.904343   79219 out.go:177] * Verifying Kubernetes components...
	I0806 08:34:11.905934   79219 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 08:34:11.917271   79219 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32971
	I0806 08:34:11.917375   79219 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38975
	I0806 08:34:11.917833   79219 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:34:11.917874   79219 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:34:11.917981   79219 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33569
	I0806 08:34:11.918394   79219 main.go:141] libmachine: Using API Version  1
	I0806 08:34:11.918416   79219 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:34:11.918453   79219 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:34:11.918530   79219 main.go:141] libmachine: Using API Version  1
	I0806 08:34:11.918552   79219 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:34:11.918761   79219 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:34:11.918819   79219 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:34:11.918913   79219 main.go:141] libmachine: Using API Version  1
	I0806 08:34:11.918939   79219 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:34:11.918965   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetState
	I0806 08:34:11.919280   79219 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:34:11.919322   79219 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:34:11.919367   79219 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:34:11.919873   79219 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:34:11.919901   79219 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:34:11.921815   79219 addons.go:234] Setting addon default-storageclass=true in "embed-certs-324808"
	W0806 08:34:11.921834   79219 addons.go:243] addon default-storageclass should already be in state true
	I0806 08:34:11.921856   79219 host.go:66] Checking if "embed-certs-324808" exists ...
	I0806 08:34:11.922132   79219 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:34:11.922168   79219 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:34:11.936191   79219 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40029
	I0806 08:34:11.936678   79219 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:34:11.937209   79219 main.go:141] libmachine: Using API Version  1
	I0806 08:34:11.937234   79219 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:34:11.940428   79219 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35087
	I0806 08:34:11.940508   79219 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38421
	I0806 08:34:11.940845   79219 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:34:11.941345   79219 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:34:11.941346   79219 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:34:11.941707   79219 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:34:11.941752   79219 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:34:11.941831   79219 main.go:141] libmachine: Using API Version  1
	I0806 08:34:11.941872   79219 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:34:11.941904   79219 main.go:141] libmachine: Using API Version  1
	I0806 08:34:11.941917   79219 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:34:11.942273   79219 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:34:11.942359   79219 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:34:11.942514   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetState
	I0806 08:34:11.942532   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetState
	I0806 08:34:11.944317   79219 main.go:141] libmachine: (embed-certs-324808) Calling .DriverName
	I0806 08:34:11.944503   79219 main.go:141] libmachine: (embed-certs-324808) Calling .DriverName
	I0806 08:34:11.946511   79219 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0806 08:34:11.946525   79219 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0806 08:34:11.947929   79219 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0806 08:34:11.947948   79219 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0806 08:34:11.947976   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHHostname
	I0806 08:34:11.948044   79219 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0806 08:34:11.948066   79219 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0806 08:34:11.948085   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHHostname
	I0806 08:34:11.951758   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:34:11.951988   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:34:11.952181   79219 main.go:141] libmachine: (embed-certs-324808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:df:8c", ip: ""} in network mk-embed-certs-324808: {Iface:virbr2 ExpiryTime:2024-08-06 09:33:45 +0000 UTC Type:0 Mac:52:54:00:b9:df:8c Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-324808 Clientid:01:52:54:00:b9:df:8c}
	I0806 08:34:11.952205   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined IP address 192.168.39.190 and MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:34:11.952530   79219 main.go:141] libmachine: (embed-certs-324808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:df:8c", ip: ""} in network mk-embed-certs-324808: {Iface:virbr2 ExpiryTime:2024-08-06 09:33:45 +0000 UTC Type:0 Mac:52:54:00:b9:df:8c Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-324808 Clientid:01:52:54:00:b9:df:8c}
	I0806 08:34:11.952565   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHPort
	I0806 08:34:11.952566   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined IP address 192.168.39.190 and MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:34:11.952693   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHPort
	I0806 08:34:11.952819   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHKeyPath
	I0806 08:34:11.952858   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHKeyPath
	I0806 08:34:11.952970   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHUsername
	I0806 08:34:11.952974   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHUsername
	I0806 08:34:11.953143   79219 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/embed-certs-324808/id_rsa Username:docker}
	I0806 08:34:11.953207   79219 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/embed-certs-324808/id_rsa Username:docker}
	I0806 08:34:11.959949   79219 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35275
	I0806 08:34:11.960353   79219 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:34:11.960900   79219 main.go:141] libmachine: Using API Version  1
	I0806 08:34:11.960929   79219 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:34:11.961241   79219 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:34:11.961451   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetState
	I0806 08:34:11.963031   79219 main.go:141] libmachine: (embed-certs-324808) Calling .DriverName
	I0806 08:34:11.963333   79219 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0806 08:34:11.963349   79219 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0806 08:34:11.963366   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHHostname
	I0806 08:34:11.966079   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:34:11.966448   79219 main.go:141] libmachine: (embed-certs-324808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:df:8c", ip: ""} in network mk-embed-certs-324808: {Iface:virbr2 ExpiryTime:2024-08-06 09:33:45 +0000 UTC Type:0 Mac:52:54:00:b9:df:8c Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-324808 Clientid:01:52:54:00:b9:df:8c}
	I0806 08:34:11.966481   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined IP address 192.168.39.190 and MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:34:11.966559   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHPort
	I0806 08:34:11.966737   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHKeyPath
	I0806 08:34:11.966881   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHUsername
	I0806 08:34:11.967013   79219 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/embed-certs-324808/id_rsa Username:docker}
	I0806 08:34:12.081433   79219 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0806 08:34:12.100701   79219 node_ready.go:35] waiting up to 6m0s for node "embed-certs-324808" to be "Ready" ...
	I0806 08:34:12.196131   79219 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0806 08:34:12.196151   79219 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0806 08:34:12.201603   79219 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0806 08:34:12.217590   79219 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0806 08:34:12.217626   79219 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0806 08:34:12.239354   79219 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0806 08:34:12.239377   79219 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0806 08:34:12.254418   79219 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0806 08:34:12.266126   79219 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0806 08:34:13.441725   79219 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.187269206s)
	I0806 08:34:13.441780   79219 main.go:141] libmachine: Making call to close driver server
	I0806 08:34:13.441791   79219 main.go:141] libmachine: (embed-certs-324808) Calling .Close
	I0806 08:34:13.441856   79219 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.240222324s)
	I0806 08:34:13.441895   79219 main.go:141] libmachine: Making call to close driver server
	I0806 08:34:13.441910   79219 main.go:141] libmachine: (embed-certs-324808) Calling .Close
	I0806 08:34:13.442096   79219 main.go:141] libmachine: (embed-certs-324808) DBG | Closing plugin on server side
	I0806 08:34:13.442116   79219 main.go:141] libmachine: Successfully made call to close driver server
	I0806 08:34:13.442128   79219 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 08:34:13.442136   79219 main.go:141] libmachine: Making call to close driver server
	I0806 08:34:13.442147   79219 main.go:141] libmachine: (embed-certs-324808) Calling .Close
	I0806 08:34:13.442211   79219 main.go:141] libmachine: Successfully made call to close driver server
	I0806 08:34:13.442225   79219 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 08:34:13.442235   79219 main.go:141] libmachine: Making call to close driver server
	I0806 08:34:13.442245   79219 main.go:141] libmachine: (embed-certs-324808) Calling .Close
	I0806 08:34:13.442414   79219 main.go:141] libmachine: Successfully made call to close driver server
	I0806 08:34:13.442424   79219 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 08:34:13.443716   79219 main.go:141] libmachine: (embed-certs-324808) DBG | Closing plugin on server side
	I0806 08:34:13.443727   79219 main.go:141] libmachine: Successfully made call to close driver server
	I0806 08:34:13.443748   79219 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 08:34:13.448919   79219 main.go:141] libmachine: Making call to close driver server
	I0806 08:34:13.448945   79219 main.go:141] libmachine: (embed-certs-324808) Calling .Close
	I0806 08:34:13.449225   79219 main.go:141] libmachine: Successfully made call to close driver server
	I0806 08:34:13.449242   79219 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 08:34:13.521481   79219 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.255292404s)
	I0806 08:34:13.521542   79219 main.go:141] libmachine: Making call to close driver server
	I0806 08:34:13.521561   79219 main.go:141] libmachine: (embed-certs-324808) Calling .Close
	I0806 08:34:13.521875   79219 main.go:141] libmachine: Successfully made call to close driver server
	I0806 08:34:13.521898   79219 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 08:34:13.521899   79219 main.go:141] libmachine: (embed-certs-324808) DBG | Closing plugin on server side
	I0806 08:34:13.521916   79219 main.go:141] libmachine: Making call to close driver server
	I0806 08:34:13.521927   79219 main.go:141] libmachine: (embed-certs-324808) Calling .Close
	I0806 08:34:13.522147   79219 main.go:141] libmachine: Successfully made call to close driver server
	I0806 08:34:13.522165   79219 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 08:34:13.522175   79219 addons.go:475] Verifying addon metrics-server=true in "embed-certs-324808"
	I0806 08:34:13.522185   79219 main.go:141] libmachine: (embed-certs-324808) DBG | Closing plugin on server side
	I0806 08:34:13.524209   79219 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0806 08:34:13.032813   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:13.033366   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Found IP for machine: 192.168.50.235
	I0806 08:34:13.033394   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Reserving static IP address...
	I0806 08:34:13.033414   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has current primary IP address 192.168.50.235 and MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:13.033956   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-990751", mac: "52:54:00:fb:51:bd", ip: "192.168.50.235"} in network mk-default-k8s-diff-port-990751: {Iface:virbr3 ExpiryTime:2024-08-06 09:34:06 +0000 UTC Type:0 Mac:52:54:00:fb:51:bd Iaid: IPaddr:192.168.50.235 Prefix:24 Hostname:default-k8s-diff-port-990751 Clientid:01:52:54:00:fb:51:bd}
	I0806 08:34:13.033992   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Reserved static IP address: 192.168.50.235
	I0806 08:34:13.034012   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | skip adding static IP to network mk-default-k8s-diff-port-990751 - found existing host DHCP lease matching {name: "default-k8s-diff-port-990751", mac: "52:54:00:fb:51:bd", ip: "192.168.50.235"}
	I0806 08:34:13.034031   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | Getting to WaitForSSH function...
	I0806 08:34:13.034046   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Waiting for SSH to be available...
	I0806 08:34:13.036483   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:13.036843   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:51:bd", ip: ""} in network mk-default-k8s-diff-port-990751: {Iface:virbr3 ExpiryTime:2024-08-06 09:34:06 +0000 UTC Type:0 Mac:52:54:00:fb:51:bd Iaid: IPaddr:192.168.50.235 Prefix:24 Hostname:default-k8s-diff-port-990751 Clientid:01:52:54:00:fb:51:bd}
	I0806 08:34:13.036874   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined IP address 192.168.50.235 and MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:13.037013   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | Using SSH client type: external
	I0806 08:34:13.037042   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | Using SSH private key: /home/jenkins/minikube-integration/19370-13057/.minikube/machines/default-k8s-diff-port-990751/id_rsa (-rw-------)
	I0806 08:34:13.037066   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.235 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19370-13057/.minikube/machines/default-k8s-diff-port-990751/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0806 08:34:13.037078   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | About to run SSH command:
	I0806 08:34:13.037119   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | exit 0
	I0806 08:34:13.164932   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | SSH cmd err, output: <nil>: 
	I0806 08:34:13.165301   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetConfigRaw
	I0806 08:34:13.165892   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetIP
	I0806 08:34:13.169003   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:13.169408   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:51:bd", ip: ""} in network mk-default-k8s-diff-port-990751: {Iface:virbr3 ExpiryTime:2024-08-06 09:34:06 +0000 UTC Type:0 Mac:52:54:00:fb:51:bd Iaid: IPaddr:192.168.50.235 Prefix:24 Hostname:default-k8s-diff-port-990751 Clientid:01:52:54:00:fb:51:bd}
	I0806 08:34:13.169449   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined IP address 192.168.50.235 and MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:13.169783   79614 profile.go:143] Saving config to /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/default-k8s-diff-port-990751/config.json ...
	I0806 08:34:13.170431   79614 machine.go:94] provisionDockerMachine start ...
	I0806 08:34:13.170452   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .DriverName
	I0806 08:34:13.170703   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHHostname
	I0806 08:34:13.173536   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:13.173994   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:51:bd", ip: ""} in network mk-default-k8s-diff-port-990751: {Iface:virbr3 ExpiryTime:2024-08-06 09:34:06 +0000 UTC Type:0 Mac:52:54:00:fb:51:bd Iaid: IPaddr:192.168.50.235 Prefix:24 Hostname:default-k8s-diff-port-990751 Clientid:01:52:54:00:fb:51:bd}
	I0806 08:34:13.174022   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined IP address 192.168.50.235 and MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:13.174241   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHPort
	I0806 08:34:13.174456   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHKeyPath
	I0806 08:34:13.174632   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHKeyPath
	I0806 08:34:13.174789   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHUsername
	I0806 08:34:13.175056   79614 main.go:141] libmachine: Using SSH client type: native
	I0806 08:34:13.175329   79614 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.235 22 <nil> <nil>}
	I0806 08:34:13.175344   79614 main.go:141] libmachine: About to run SSH command:
	hostname
	I0806 08:34:13.281288   79614 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0806 08:34:13.281322   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetMachineName
	I0806 08:34:13.281573   79614 buildroot.go:166] provisioning hostname "default-k8s-diff-port-990751"
	I0806 08:34:13.281606   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetMachineName
	I0806 08:34:13.281808   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHHostname
	I0806 08:34:13.284463   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:13.284825   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:51:bd", ip: ""} in network mk-default-k8s-diff-port-990751: {Iface:virbr3 ExpiryTime:2024-08-06 09:34:06 +0000 UTC Type:0 Mac:52:54:00:fb:51:bd Iaid: IPaddr:192.168.50.235 Prefix:24 Hostname:default-k8s-diff-port-990751 Clientid:01:52:54:00:fb:51:bd}
	I0806 08:34:13.284860   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined IP address 192.168.50.235 and MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:13.284988   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHPort
	I0806 08:34:13.285178   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHKeyPath
	I0806 08:34:13.285350   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHKeyPath
	I0806 08:34:13.285468   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHUsername
	I0806 08:34:13.285614   79614 main.go:141] libmachine: Using SSH client type: native
	I0806 08:34:13.285807   79614 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.235 22 <nil> <nil>}
	I0806 08:34:13.285821   79614 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-990751 && echo "default-k8s-diff-port-990751" | sudo tee /etc/hostname
	I0806 08:34:13.413970   79614 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-990751
	
	I0806 08:34:13.414005   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHHostname
	I0806 08:34:13.417279   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:13.417708   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:51:bd", ip: ""} in network mk-default-k8s-diff-port-990751: {Iface:virbr3 ExpiryTime:2024-08-06 09:34:06 +0000 UTC Type:0 Mac:52:54:00:fb:51:bd Iaid: IPaddr:192.168.50.235 Prefix:24 Hostname:default-k8s-diff-port-990751 Clientid:01:52:54:00:fb:51:bd}
	I0806 08:34:13.417757   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined IP address 192.168.50.235 and MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:13.417947   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHPort
	I0806 08:34:13.418153   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHKeyPath
	I0806 08:34:13.418337   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHKeyPath
	I0806 08:34:13.418495   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHUsername
	I0806 08:34:13.418683   79614 main.go:141] libmachine: Using SSH client type: native
	I0806 08:34:13.418935   79614 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.235 22 <nil> <nil>}
	I0806 08:34:13.418970   79614 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-990751' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-990751/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-990751' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0806 08:34:13.534550   79614 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0806 08:34:13.534578   79614 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19370-13057/.minikube CaCertPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19370-13057/.minikube}
	I0806 08:34:13.534630   79614 buildroot.go:174] setting up certificates
	I0806 08:34:13.534647   79614 provision.go:84] configureAuth start
	I0806 08:34:13.534664   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetMachineName
	I0806 08:34:13.534962   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetIP
	I0806 08:34:13.537582   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:13.538144   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:51:bd", ip: ""} in network mk-default-k8s-diff-port-990751: {Iface:virbr3 ExpiryTime:2024-08-06 09:34:06 +0000 UTC Type:0 Mac:52:54:00:fb:51:bd Iaid: IPaddr:192.168.50.235 Prefix:24 Hostname:default-k8s-diff-port-990751 Clientid:01:52:54:00:fb:51:bd}
	I0806 08:34:13.538173   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined IP address 192.168.50.235 and MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:13.538329   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHHostname
	I0806 08:34:13.541132   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:13.541422   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:51:bd", ip: ""} in network mk-default-k8s-diff-port-990751: {Iface:virbr3 ExpiryTime:2024-08-06 09:34:06 +0000 UTC Type:0 Mac:52:54:00:fb:51:bd Iaid: IPaddr:192.168.50.235 Prefix:24 Hostname:default-k8s-diff-port-990751 Clientid:01:52:54:00:fb:51:bd}
	I0806 08:34:13.541447   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined IP address 192.168.50.235 and MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:13.541638   79614 provision.go:143] copyHostCerts
	I0806 08:34:13.541693   79614 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-13057/.minikube/ca.pem, removing ...
	I0806 08:34:13.541703   79614 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-13057/.minikube/ca.pem
	I0806 08:34:13.541761   79614 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19370-13057/.minikube/ca.pem (1078 bytes)
	I0806 08:34:13.541862   79614 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-13057/.minikube/cert.pem, removing ...
	I0806 08:34:13.541870   79614 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-13057/.minikube/cert.pem
	I0806 08:34:13.541891   79614 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19370-13057/.minikube/cert.pem (1123 bytes)
	I0806 08:34:13.541943   79614 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-13057/.minikube/key.pem, removing ...
	I0806 08:34:13.541950   79614 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-13057/.minikube/key.pem
	I0806 08:34:13.541966   79614 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19370-13057/.minikube/key.pem (1675 bytes)
	I0806 08:34:13.542013   79614 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19370-13057/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-990751 san=[127.0.0.1 192.168.50.235 default-k8s-diff-port-990751 localhost minikube]
	I0806 08:34:13.632290   79614 provision.go:177] copyRemoteCerts
	I0806 08:34:13.632345   79614 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0806 08:34:13.632389   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHHostname
	I0806 08:34:13.635332   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:13.635619   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:51:bd", ip: ""} in network mk-default-k8s-diff-port-990751: {Iface:virbr3 ExpiryTime:2024-08-06 09:34:06 +0000 UTC Type:0 Mac:52:54:00:fb:51:bd Iaid: IPaddr:192.168.50.235 Prefix:24 Hostname:default-k8s-diff-port-990751 Clientid:01:52:54:00:fb:51:bd}
	I0806 08:34:13.635644   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined IP address 192.168.50.235 and MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:13.635868   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHPort
	I0806 08:34:13.636069   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHKeyPath
	I0806 08:34:13.636208   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHUsername
	I0806 08:34:13.636381   79614 sshutil.go:53] new ssh client: &{IP:192.168.50.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/default-k8s-diff-port-990751/id_rsa Username:docker}
	I0806 08:34:13.719415   79614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0806 08:34:13.745981   79614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0806 08:34:13.775906   79614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0806 08:34:13.806948   79614 provision.go:87] duration metric: took 272.283009ms to configureAuth
	I0806 08:34:13.806986   79614 buildroot.go:189] setting minikube options for container-runtime
	I0806 08:34:13.807234   79614 config.go:182] Loaded profile config "default-k8s-diff-port-990751": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0806 08:34:13.807328   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHHostname
	I0806 08:34:13.810500   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:13.810895   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:51:bd", ip: ""} in network mk-default-k8s-diff-port-990751: {Iface:virbr3 ExpiryTime:2024-08-06 09:34:06 +0000 UTC Type:0 Mac:52:54:00:fb:51:bd Iaid: IPaddr:192.168.50.235 Prefix:24 Hostname:default-k8s-diff-port-990751 Clientid:01:52:54:00:fb:51:bd}
	I0806 08:34:13.810928   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined IP address 192.168.50.235 and MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:13.811074   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHPort
	I0806 08:34:13.811302   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHKeyPath
	I0806 08:34:13.811497   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHKeyPath
	I0806 08:34:13.811627   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHUsername
	I0806 08:34:13.811896   79614 main.go:141] libmachine: Using SSH client type: native
	I0806 08:34:13.812124   79614 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.235 22 <nil> <nil>}
	I0806 08:34:13.812148   79614 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0806 08:34:13.525425   79219 addons.go:510] duration metric: took 1.623751418s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0806 08:34:14.104863   79219 node_ready.go:53] node "embed-certs-324808" has status "Ready":"False"
	I0806 08:34:14.353523   79927 start.go:364] duration metric: took 3m15.487259208s to acquireMachinesLock for "old-k8s-version-409903"
	I0806 08:34:14.353614   79927 start.go:96] Skipping create...Using existing machine configuration
	I0806 08:34:14.353629   79927 fix.go:54] fixHost starting: 
	I0806 08:34:14.354059   79927 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:34:14.354096   79927 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:34:14.373696   79927 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33985
	I0806 08:34:14.374148   79927 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:34:14.374646   79927 main.go:141] libmachine: Using API Version  1
	I0806 08:34:14.374672   79927 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:34:14.374995   79927 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:34:14.375277   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .DriverName
	I0806 08:34:14.375472   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetState
	I0806 08:34:14.377209   79927 fix.go:112] recreateIfNeeded on old-k8s-version-409903: state=Stopped err=<nil>
	I0806 08:34:14.377251   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .DriverName
	W0806 08:34:14.377387   79927 fix.go:138] unexpected machine state, will restart: <nil>
	I0806 08:34:14.379029   79927 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-409903" ...
	I0806 08:34:14.113926   79614 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0806 08:34:14.113953   79614 machine.go:97] duration metric: took 943.50747ms to provisionDockerMachine
	I0806 08:34:14.113967   79614 start.go:293] postStartSetup for "default-k8s-diff-port-990751" (driver="kvm2")
	I0806 08:34:14.113982   79614 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0806 08:34:14.114004   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .DriverName
	I0806 08:34:14.114313   79614 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0806 08:34:14.114339   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHHostname
	I0806 08:34:14.116618   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:14.117028   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:51:bd", ip: ""} in network mk-default-k8s-diff-port-990751: {Iface:virbr3 ExpiryTime:2024-08-06 09:34:06 +0000 UTC Type:0 Mac:52:54:00:fb:51:bd Iaid: IPaddr:192.168.50.235 Prefix:24 Hostname:default-k8s-diff-port-990751 Clientid:01:52:54:00:fb:51:bd}
	I0806 08:34:14.117067   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined IP address 192.168.50.235 and MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:14.117162   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHPort
	I0806 08:34:14.117371   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHKeyPath
	I0806 08:34:14.117545   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHUsername
	I0806 08:34:14.117738   79614 sshutil.go:53] new ssh client: &{IP:192.168.50.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/default-k8s-diff-port-990751/id_rsa Username:docker}
	I0806 08:34:14.199514   79614 ssh_runner.go:195] Run: cat /etc/os-release
	I0806 08:34:14.204063   79614 info.go:137] Remote host: Buildroot 2023.02.9
	I0806 08:34:14.204084   79614 filesync.go:126] Scanning /home/jenkins/minikube-integration/19370-13057/.minikube/addons for local assets ...
	I0806 08:34:14.204139   79614 filesync.go:126] Scanning /home/jenkins/minikube-integration/19370-13057/.minikube/files for local assets ...
	I0806 08:34:14.204219   79614 filesync.go:149] local asset: /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem -> 202462.pem in /etc/ssl/certs
	I0806 08:34:14.204308   79614 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0806 08:34:14.213847   79614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem --> /etc/ssl/certs/202462.pem (1708 bytes)
	I0806 08:34:14.241524   79614 start.go:296] duration metric: took 127.543182ms for postStartSetup
	I0806 08:34:14.241570   79614 fix.go:56] duration metric: took 19.588116399s for fixHost
	I0806 08:34:14.241590   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHHostname
	I0806 08:34:14.244333   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:14.244702   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:51:bd", ip: ""} in network mk-default-k8s-diff-port-990751: {Iface:virbr3 ExpiryTime:2024-08-06 09:34:06 +0000 UTC Type:0 Mac:52:54:00:fb:51:bd Iaid: IPaddr:192.168.50.235 Prefix:24 Hostname:default-k8s-diff-port-990751 Clientid:01:52:54:00:fb:51:bd}
	I0806 08:34:14.244733   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined IP address 192.168.50.235 and MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:14.244936   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHPort
	I0806 08:34:14.245178   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHKeyPath
	I0806 08:34:14.245338   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHKeyPath
	I0806 08:34:14.245497   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHUsername
	I0806 08:34:14.245637   79614 main.go:141] libmachine: Using SSH client type: native
	I0806 08:34:14.245783   79614 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.235 22 <nil> <nil>}
	I0806 08:34:14.245793   79614 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0806 08:34:14.353301   79614 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722933254.326161993
	
	I0806 08:34:14.353325   79614 fix.go:216] guest clock: 1722933254.326161993
	I0806 08:34:14.353339   79614 fix.go:229] Guest: 2024-08-06 08:34:14.326161993 +0000 UTC Remote: 2024-08-06 08:34:14.2415744 +0000 UTC m=+235.446489438 (delta=84.587593ms)
	I0806 08:34:14.353389   79614 fix.go:200] guest clock delta is within tolerance: 84.587593ms
	I0806 08:34:14.353398   79614 start.go:83] releasing machines lock for "default-k8s-diff-port-990751", held for 19.699973637s
	I0806 08:34:14.353432   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .DriverName
	I0806 08:34:14.353751   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetIP
	I0806 08:34:14.356590   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:14.357031   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:51:bd", ip: ""} in network mk-default-k8s-diff-port-990751: {Iface:virbr3 ExpiryTime:2024-08-06 09:34:06 +0000 UTC Type:0 Mac:52:54:00:fb:51:bd Iaid: IPaddr:192.168.50.235 Prefix:24 Hostname:default-k8s-diff-port-990751 Clientid:01:52:54:00:fb:51:bd}
	I0806 08:34:14.357065   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined IP address 192.168.50.235 and MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:14.357349   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .DriverName
	I0806 08:34:14.357896   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .DriverName
	I0806 08:34:14.358125   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .DriverName
	I0806 08:34:14.358222   79614 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0806 08:34:14.358265   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHHostname
	I0806 08:34:14.358387   79614 ssh_runner.go:195] Run: cat /version.json
	I0806 08:34:14.358415   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHHostname
	I0806 08:34:14.361131   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:14.361444   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:14.361590   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:51:bd", ip: ""} in network mk-default-k8s-diff-port-990751: {Iface:virbr3 ExpiryTime:2024-08-06 09:34:06 +0000 UTC Type:0 Mac:52:54:00:fb:51:bd Iaid: IPaddr:192.168.50.235 Prefix:24 Hostname:default-k8s-diff-port-990751 Clientid:01:52:54:00:fb:51:bd}
	I0806 08:34:14.361629   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined IP address 192.168.50.235 and MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:14.361801   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHPort
	I0806 08:34:14.361905   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:51:bd", ip: ""} in network mk-default-k8s-diff-port-990751: {Iface:virbr3 ExpiryTime:2024-08-06 09:34:06 +0000 UTC Type:0 Mac:52:54:00:fb:51:bd Iaid: IPaddr:192.168.50.235 Prefix:24 Hostname:default-k8s-diff-port-990751 Clientid:01:52:54:00:fb:51:bd}
	I0806 08:34:14.361940   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined IP address 192.168.50.235 and MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:14.362025   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHKeyPath
	I0806 08:34:14.362228   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHUsername
	I0806 08:34:14.362242   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHPort
	I0806 08:34:14.362420   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHKeyPath
	I0806 08:34:14.362424   79614 sshutil.go:53] new ssh client: &{IP:192.168.50.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/default-k8s-diff-port-990751/id_rsa Username:docker}
	I0806 08:34:14.362600   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHUsername
	I0806 08:34:14.362717   79614 sshutil.go:53] new ssh client: &{IP:192.168.50.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/default-k8s-diff-port-990751/id_rsa Username:docker}
	I0806 08:34:14.469232   79614 ssh_runner.go:195] Run: systemctl --version
	I0806 08:34:14.476929   79614 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0806 08:34:14.629487   79614 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0806 08:34:14.637276   79614 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0806 08:34:14.637349   79614 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0806 08:34:14.654612   79614 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0806 08:34:14.654634   79614 start.go:495] detecting cgroup driver to use...
	I0806 08:34:14.654699   79614 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0806 08:34:14.672255   79614 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0806 08:34:14.689135   79614 docker.go:217] disabling cri-docker service (if available) ...
	I0806 08:34:14.689203   79614 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0806 08:34:14.705745   79614 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0806 08:34:14.721267   79614 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0806 08:34:14.841773   79614 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0806 08:34:14.982757   79614 docker.go:233] disabling docker service ...
	I0806 08:34:14.982837   79614 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0806 08:34:14.997637   79614 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0806 08:34:15.011404   79614 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0806 08:34:15.167108   79614 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0806 08:34:15.286749   79614 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0806 08:34:15.301818   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0806 08:34:15.320848   79614 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0806 08:34:15.320920   79614 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:34:15.331331   79614 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0806 08:34:15.331414   79614 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:34:15.342763   79614 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:34:15.353534   79614 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:34:15.365593   79614 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0806 08:34:15.379804   79614 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:34:15.390200   79614 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:34:15.410545   79614 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:34:15.423050   79614 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0806 08:34:15.432810   79614 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0806 08:34:15.432878   79614 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0806 08:34:15.447298   79614 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0806 08:34:15.457992   79614 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 08:34:15.606288   79614 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0806 08:34:15.792322   79614 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0806 08:34:15.792411   79614 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0806 08:34:15.798556   79614 start.go:563] Will wait 60s for crictl version
	I0806 08:34:15.798615   79614 ssh_runner.go:195] Run: which crictl
	I0806 08:34:15.802732   79614 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0806 08:34:15.856880   79614 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0806 08:34:15.856963   79614 ssh_runner.go:195] Run: crio --version
	I0806 08:34:15.890496   79614 ssh_runner.go:195] Run: crio --version
	I0806 08:34:15.924486   79614 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0806 08:34:14.380306   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .Start
	I0806 08:34:14.380486   79927 main.go:141] libmachine: (old-k8s-version-409903) Ensuring networks are active...
	I0806 08:34:14.381155   79927 main.go:141] libmachine: (old-k8s-version-409903) Ensuring network default is active
	I0806 08:34:14.381514   79927 main.go:141] libmachine: (old-k8s-version-409903) Ensuring network mk-old-k8s-version-409903 is active
	I0806 08:34:14.381820   79927 main.go:141] libmachine: (old-k8s-version-409903) Getting domain xml...
	I0806 08:34:14.382679   79927 main.go:141] libmachine: (old-k8s-version-409903) Creating domain...
	I0806 08:34:15.725416   79927 main.go:141] libmachine: (old-k8s-version-409903) Waiting to get IP...
	I0806 08:34:15.726492   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:15.727078   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | unable to find current IP address of domain old-k8s-version-409903 in network mk-old-k8s-version-409903
	I0806 08:34:15.727276   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | I0806 08:34:15.727179   80849 retry.go:31] will retry after 303.235731ms: waiting for machine to come up
	I0806 08:34:16.031996   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:16.032592   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | unable to find current IP address of domain old-k8s-version-409903 in network mk-old-k8s-version-409903
	I0806 08:34:16.032621   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | I0806 08:34:16.032542   80849 retry.go:31] will retry after 355.466914ms: waiting for machine to come up
	I0806 08:34:16.390212   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:16.390736   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | unable to find current IP address of domain old-k8s-version-409903 in network mk-old-k8s-version-409903
	I0806 08:34:16.390768   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | I0806 08:34:16.390674   80849 retry.go:31] will retry after 432.942387ms: waiting for machine to come up
	I0806 08:34:16.825739   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:16.826366   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | unable to find current IP address of domain old-k8s-version-409903 in network mk-old-k8s-version-409903
	I0806 08:34:16.826390   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | I0806 08:34:16.826317   80849 retry.go:31] will retry after 481.17687ms: waiting for machine to come up
	I0806 08:34:17.309005   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:17.309768   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | unable to find current IP address of domain old-k8s-version-409903 in network mk-old-k8s-version-409903
	I0806 08:34:17.309799   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | I0806 08:34:17.309745   80849 retry.go:31] will retry after 631.184181ms: waiting for machine to come up
	I0806 08:34:17.942220   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:17.942742   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | unable to find current IP address of domain old-k8s-version-409903 in network mk-old-k8s-version-409903
	I0806 08:34:17.942766   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | I0806 08:34:17.942701   80849 retry.go:31] will retry after 602.253362ms: waiting for machine to come up
	I0806 08:34:18.546333   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:18.546918   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | unable to find current IP address of domain old-k8s-version-409903 in network mk-old-k8s-version-409903
	I0806 08:34:18.546944   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | I0806 08:34:18.546855   80849 retry.go:31] will retry after 1.061939923s: waiting for machine to come up
	I0806 08:34:15.925709   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetIP
	I0806 08:34:15.929420   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:15.929901   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:51:bd", ip: ""} in network mk-default-k8s-diff-port-990751: {Iface:virbr3 ExpiryTime:2024-08-06 09:34:06 +0000 UTC Type:0 Mac:52:54:00:fb:51:bd Iaid: IPaddr:192.168.50.235 Prefix:24 Hostname:default-k8s-diff-port-990751 Clientid:01:52:54:00:fb:51:bd}
	I0806 08:34:15.929931   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined IP address 192.168.50.235 and MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:15.930159   79614 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0806 08:34:15.934916   79614 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0806 08:34:15.949815   79614 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-990751 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-990751 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.235 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0806 08:34:15.949971   79614 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0806 08:34:15.950054   79614 ssh_runner.go:195] Run: sudo crictl images --output json
	I0806 08:34:16.004551   79614 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0806 08:34:16.004631   79614 ssh_runner.go:195] Run: which lz4
	I0806 08:34:16.011388   79614 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0806 08:34:16.017475   79614 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0806 08:34:16.017511   79614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0806 08:34:17.564750   79614 crio.go:462] duration metric: took 1.553401521s to copy over tarball
	I0806 08:34:17.564840   79614 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0806 08:34:16.106922   79219 node_ready.go:53] node "embed-certs-324808" has status "Ready":"False"
	I0806 08:34:17.104735   79219 node_ready.go:49] node "embed-certs-324808" has status "Ready":"True"
	I0806 08:34:17.104778   79219 node_ready.go:38] duration metric: took 5.004030652s for node "embed-certs-324808" to be "Ready" ...
	I0806 08:34:17.104790   79219 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0806 08:34:17.113668   79219 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-4xxwm" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:17.120777   79219 pod_ready.go:92] pod "coredns-7db6d8ff4d-4xxwm" in "kube-system" namespace has status "Ready":"True"
	I0806 08:34:17.120801   79219 pod_ready.go:81] duration metric: took 7.097875ms for pod "coredns-7db6d8ff4d-4xxwm" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:17.120811   79219 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-324808" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:17.126581   79219 pod_ready.go:92] pod "etcd-embed-certs-324808" in "kube-system" namespace has status "Ready":"True"
	I0806 08:34:17.126606   79219 pod_ready.go:81] duration metric: took 5.788426ms for pod "etcd-embed-certs-324808" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:17.126619   79219 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-324808" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:18.136739   79219 pod_ready.go:92] pod "kube-apiserver-embed-certs-324808" in "kube-system" namespace has status "Ready":"True"
	I0806 08:34:18.136767   79219 pod_ready.go:81] duration metric: took 1.010139212s for pod "kube-apiserver-embed-certs-324808" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:18.136781   79219 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-324808" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:19.610165   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:19.610567   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | unable to find current IP address of domain old-k8s-version-409903 in network mk-old-k8s-version-409903
	I0806 08:34:19.610598   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | I0806 08:34:19.610542   80849 retry.go:31] will retry after 1.339464672s: waiting for machine to come up
	I0806 08:34:20.951505   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:20.952049   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | unable to find current IP address of domain old-k8s-version-409903 in network mk-old-k8s-version-409903
	I0806 08:34:20.952080   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | I0806 08:34:20.951975   80849 retry.go:31] will retry after 1.622574331s: waiting for machine to come up
	I0806 08:34:22.576150   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:22.576559   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | unable to find current IP address of domain old-k8s-version-409903 in network mk-old-k8s-version-409903
	I0806 08:34:22.576582   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | I0806 08:34:22.576508   80849 retry.go:31] will retry after 1.408827574s: waiting for machine to come up
	I0806 08:34:19.920581   79614 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.355705846s)
	I0806 08:34:19.920609   79614 crio.go:469] duration metric: took 2.355825462s to extract the tarball
	I0806 08:34:19.920619   79614 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0806 08:34:19.959118   79614 ssh_runner.go:195] Run: sudo crictl images --output json
	I0806 08:34:20.002413   79614 crio.go:514] all images are preloaded for cri-o runtime.
	I0806 08:34:20.002449   79614 cache_images.go:84] Images are preloaded, skipping loading
	I0806 08:34:20.002461   79614 kubeadm.go:934] updating node { 192.168.50.235 8444 v1.30.3 crio true true} ...
	I0806 08:34:20.002588   79614 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-990751 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.235
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-990751 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0806 08:34:20.002655   79614 ssh_runner.go:195] Run: crio config
	I0806 08:34:20.058943   79614 cni.go:84] Creating CNI manager for ""
	I0806 08:34:20.058970   79614 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0806 08:34:20.058981   79614 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0806 08:34:20.059010   79614 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.235 APIServerPort:8444 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-990751 NodeName:default-k8s-diff-port-990751 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.235"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.235 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0806 08:34:20.059247   79614 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.235
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-990751"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.235
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.235"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0806 08:34:20.059337   79614 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0806 08:34:20.072766   79614 binaries.go:44] Found k8s binaries, skipping transfer
	I0806 08:34:20.072876   79614 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0806 08:34:20.085730   79614 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0806 08:34:20.105230   79614 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0806 08:34:20.124800   79614 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0806 08:34:20.146547   79614 ssh_runner.go:195] Run: grep 192.168.50.235	control-plane.minikube.internal$ /etc/hosts
	I0806 08:34:20.150692   79614 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.235	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0806 08:34:20.164006   79614 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 08:34:20.286820   79614 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0806 08:34:20.304141   79614 certs.go:68] Setting up /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/default-k8s-diff-port-990751 for IP: 192.168.50.235
	I0806 08:34:20.304166   79614 certs.go:194] generating shared ca certs ...
	I0806 08:34:20.304187   79614 certs.go:226] acquiring lock for ca certs: {Name:mkf5c5aed91f4108054d9f2c742cad66c86afbed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 08:34:20.304386   79614 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19370-13057/.minikube/ca.key
	I0806 08:34:20.304448   79614 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.key
	I0806 08:34:20.304464   79614 certs.go:256] generating profile certs ...
	I0806 08:34:20.304567   79614 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/default-k8s-diff-port-990751/client.key
	I0806 08:34:20.304648   79614 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/default-k8s-diff-port-990751/apiserver.key.aa7eccb7
	I0806 08:34:20.304715   79614 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/default-k8s-diff-port-990751/proxy-client.key
	I0806 08:34:20.304872   79614 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/20246.pem (1338 bytes)
	W0806 08:34:20.304919   79614 certs.go:480] ignoring /home/jenkins/minikube-integration/19370-13057/.minikube/certs/20246_empty.pem, impossibly tiny 0 bytes
	I0806 08:34:20.304929   79614 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca-key.pem (1675 bytes)
	I0806 08:34:20.304960   79614 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem (1078 bytes)
	I0806 08:34:20.304989   79614 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/cert.pem (1123 bytes)
	I0806 08:34:20.305019   79614 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/key.pem (1675 bytes)
	I0806 08:34:20.305089   79614 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem (1708 bytes)
	I0806 08:34:20.305821   79614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0806 08:34:20.341201   79614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0806 08:34:20.375406   79614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0806 08:34:20.407409   79614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0806 08:34:20.449555   79614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/default-k8s-diff-port-990751/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0806 08:34:20.479479   79614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/default-k8s-diff-port-990751/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0806 08:34:20.518719   79614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/default-k8s-diff-port-990751/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0806 08:34:20.544212   79614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/default-k8s-diff-port-990751/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0806 08:34:20.569413   79614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0806 08:34:20.594573   79614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/certs/20246.pem --> /usr/share/ca-certificates/20246.pem (1338 bytes)
	I0806 08:34:20.620183   79614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem --> /usr/share/ca-certificates/202462.pem (1708 bytes)
	I0806 08:34:20.648311   79614 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0806 08:34:20.670845   79614 ssh_runner.go:195] Run: openssl version
	I0806 08:34:20.677437   79614 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202462.pem && ln -fs /usr/share/ca-certificates/202462.pem /etc/ssl/certs/202462.pem"
	I0806 08:34:20.688990   79614 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202462.pem
	I0806 08:34:20.694512   79614 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  6 07:17 /usr/share/ca-certificates/202462.pem
	I0806 08:34:20.694580   79614 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202462.pem
	I0806 08:34:20.701362   79614 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202462.pem /etc/ssl/certs/3ec20f2e.0"
	I0806 08:34:20.712966   79614 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0806 08:34:20.724455   79614 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0806 08:34:20.729111   79614 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  6 07:07 /usr/share/ca-certificates/minikubeCA.pem
	I0806 08:34:20.729165   79614 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0806 08:34:20.734973   79614 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0806 08:34:20.746387   79614 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20246.pem && ln -fs /usr/share/ca-certificates/20246.pem /etc/ssl/certs/20246.pem"
	I0806 08:34:20.757763   79614 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20246.pem
	I0806 08:34:20.762396   79614 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  6 07:17 /usr/share/ca-certificates/20246.pem
	I0806 08:34:20.762454   79614 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20246.pem
	I0806 08:34:20.768261   79614 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20246.pem /etc/ssl/certs/51391683.0"
	I0806 08:34:20.779553   79614 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0806 08:34:20.785854   79614 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0806 08:34:20.794054   79614 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0806 08:34:20.802652   79614 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0806 08:34:20.809929   79614 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0806 08:34:20.816886   79614 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0806 08:34:20.823667   79614 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0806 08:34:20.830355   79614 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-990751 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-990751 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.235 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 08:34:20.830456   79614 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0806 08:34:20.830527   79614 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0806 08:34:20.878368   79614 cri.go:89] found id: ""
	I0806 08:34:20.878455   79614 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0806 08:34:20.889175   79614 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0806 08:34:20.889196   79614 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0806 08:34:20.889250   79614 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0806 08:34:20.899842   79614 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0806 08:34:20.901037   79614 kubeconfig.go:125] found "default-k8s-diff-port-990751" server: "https://192.168.50.235:8444"
	I0806 08:34:20.902781   79614 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0806 08:34:20.914810   79614 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.235
	I0806 08:34:20.914847   79614 kubeadm.go:1160] stopping kube-system containers ...
	I0806 08:34:20.914859   79614 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0806 08:34:20.914905   79614 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0806 08:34:20.956719   79614 cri.go:89] found id: ""
	I0806 08:34:20.956772   79614 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0806 08:34:20.977367   79614 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0806 08:34:20.989491   79614 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0806 08:34:20.989513   79614 kubeadm.go:157] found existing configuration files:
	
	I0806 08:34:20.989565   79614 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0806 08:34:21.000759   79614 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0806 08:34:21.000820   79614 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0806 08:34:21.012404   79614 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0806 08:34:21.023449   79614 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0806 08:34:21.023536   79614 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0806 08:34:21.035354   79614 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0806 08:34:21.045285   79614 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0806 08:34:21.045347   79614 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0806 08:34:21.057182   79614 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0806 08:34:21.067431   79614 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0806 08:34:21.067497   79614 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0806 08:34:21.077955   79614 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0806 08:34:21.088824   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0806 08:34:21.209627   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0806 08:34:21.772624   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0806 08:34:22.022615   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0806 08:34:22.107829   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0806 08:34:22.235446   79614 api_server.go:52] waiting for apiserver process to appear ...
	I0806 08:34:22.235552   79614 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:22.735843   79614 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:23.236385   79614 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:23.314994   79614 api_server.go:72] duration metric: took 1.079547638s to wait for apiserver process to appear ...
	I0806 08:34:23.315025   79614 api_server.go:88] waiting for apiserver healthz status ...
	I0806 08:34:23.315071   79614 api_server.go:253] Checking apiserver healthz at https://192.168.50.235:8444/healthz ...
	I0806 08:34:23.315581   79614 api_server.go:269] stopped: https://192.168.50.235:8444/healthz: Get "https://192.168.50.235:8444/healthz": dial tcp 192.168.50.235:8444: connect: connection refused
	I0806 08:34:23.815428   79614 api_server.go:253] Checking apiserver healthz at https://192.168.50.235:8444/healthz ...
	I0806 08:34:20.145118   79219 pod_ready.go:102] pod "kube-controller-manager-embed-certs-324808" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:23.051556   79219 pod_ready.go:102] pod "kube-controller-manager-embed-certs-324808" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:23.077556   79219 pod_ready.go:92] pod "kube-controller-manager-embed-certs-324808" in "kube-system" namespace has status "Ready":"True"
	I0806 08:34:23.077581   79219 pod_ready.go:81] duration metric: took 4.94079164s for pod "kube-controller-manager-embed-certs-324808" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:23.077594   79219 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8lbzv" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:23.091745   79219 pod_ready.go:92] pod "kube-proxy-8lbzv" in "kube-system" namespace has status "Ready":"True"
	I0806 08:34:23.091773   79219 pod_ready.go:81] duration metric: took 14.170363ms for pod "kube-proxy-8lbzv" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:23.091789   79219 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-324808" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:23.105567   79219 pod_ready.go:92] pod "kube-scheduler-embed-certs-324808" in "kube-system" namespace has status "Ready":"True"
	I0806 08:34:23.105597   79219 pod_ready.go:81] duration metric: took 13.798762ms for pod "kube-scheduler-embed-certs-324808" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:23.105612   79219 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:23.987256   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:23.987658   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | unable to find current IP address of domain old-k8s-version-409903 in network mk-old-k8s-version-409903
	I0806 08:34:23.987683   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | I0806 08:34:23.987617   80849 retry.go:31] will retry after 2.000071561s: waiting for machine to come up
	I0806 08:34:25.989501   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:25.989880   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | unable to find current IP address of domain old-k8s-version-409903 in network mk-old-k8s-version-409903
	I0806 08:34:25.989906   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | I0806 08:34:25.989838   80849 retry.go:31] will retry after 2.508269501s: waiting for machine to come up
	I0806 08:34:28.501449   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:28.501842   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | unable to find current IP address of domain old-k8s-version-409903 in network mk-old-k8s-version-409903
	I0806 08:34:28.501874   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | I0806 08:34:28.501802   80849 retry.go:31] will retry after 2.7583106s: waiting for machine to come up
	I0806 08:34:26.696273   79614 api_server.go:279] https://192.168.50.235:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0806 08:34:26.696325   79614 api_server.go:103] status: https://192.168.50.235:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0806 08:34:26.696342   79614 api_server.go:253] Checking apiserver healthz at https://192.168.50.235:8444/healthz ...
	I0806 08:34:26.704047   79614 api_server.go:279] https://192.168.50.235:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0806 08:34:26.704078   79614 api_server.go:103] status: https://192.168.50.235:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0806 08:34:26.815341   79614 api_server.go:253] Checking apiserver healthz at https://192.168.50.235:8444/healthz ...
	I0806 08:34:26.819686   79614 api_server.go:279] https://192.168.50.235:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0806 08:34:26.819718   79614 api_server.go:103] status: https://192.168.50.235:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0806 08:34:27.315242   79614 api_server.go:253] Checking apiserver healthz at https://192.168.50.235:8444/healthz ...
	I0806 08:34:27.320753   79614 api_server.go:279] https://192.168.50.235:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0806 08:34:27.320789   79614 api_server.go:103] status: https://192.168.50.235:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0806 08:34:27.815164   79614 api_server.go:253] Checking apiserver healthz at https://192.168.50.235:8444/healthz ...
	I0806 08:34:27.821943   79614 api_server.go:279] https://192.168.50.235:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0806 08:34:27.821967   79614 api_server.go:103] status: https://192.168.50.235:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0806 08:34:28.315169   79614 api_server.go:253] Checking apiserver healthz at https://192.168.50.235:8444/healthz ...
	I0806 08:34:28.321921   79614 api_server.go:279] https://192.168.50.235:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0806 08:34:28.321945   79614 api_server.go:103] status: https://192.168.50.235:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0806 08:34:28.815717   79614 api_server.go:253] Checking apiserver healthz at https://192.168.50.235:8444/healthz ...
	I0806 08:34:28.819946   79614 api_server.go:279] https://192.168.50.235:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0806 08:34:28.819973   79614 api_server.go:103] status: https://192.168.50.235:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0806 08:34:25.112333   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:27.112488   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:29.315643   79614 api_server.go:253] Checking apiserver healthz at https://192.168.50.235:8444/healthz ...
	I0806 08:34:29.319784   79614 api_server.go:279] https://192.168.50.235:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0806 08:34:29.319813   79614 api_server.go:103] status: https://192.168.50.235:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0806 08:34:29.815945   79614 api_server.go:253] Checking apiserver healthz at https://192.168.50.235:8444/healthz ...
	I0806 08:34:29.821441   79614 api_server.go:279] https://192.168.50.235:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0806 08:34:29.821476   79614 api_server.go:103] status: https://192.168.50.235:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0806 08:34:30.315997   79614 api_server.go:253] Checking apiserver healthz at https://192.168.50.235:8444/healthz ...
	I0806 08:34:30.321179   79614 api_server.go:279] https://192.168.50.235:8444/healthz returned 200:
	ok
	I0806 08:34:30.327083   79614 api_server.go:141] control plane version: v1.30.3
	I0806 08:34:30.327113   79614 api_server.go:131] duration metric: took 7.012080775s to wait for apiserver health ...
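	(The repeated 403 and 500 responses above show the apiserver's post-start hooks — rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes, apiservice-discovery-controller — completing one by one until /healthz finally returns 200. The sketch below is only an illustration of that kind of poll loop under assumed timeouts and intervals; it is not minikube's api_server.go code. TLS verification is skipped here because the probe runs anonymously against the cluster's self-signed certificate.)

	    // Illustrative sketch only; names, timeouts and intervals are assumptions.
	    package main

	    import (
	    	"crypto/tls"
	    	"fmt"
	    	"io"
	    	"net/http"
	    	"time"
	    )

	    // waitForHealthz polls the apiserver /healthz endpoint until it returns 200
	    // or the overall timeout expires.
	    func waitForHealthz(url string, timeout time.Duration) error {
	    	client := &http.Client{
	    		// Self-signed apiserver cert, anonymous probe: skip verification for this sketch.
	    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	    		Timeout:   5 * time.Second,
	    	}
	    	deadline := time.Now().Add(timeout)
	    	for time.Now().Before(deadline) {
	    		resp, err := client.Get(url)
	    		if err == nil {
	    			body, _ := io.ReadAll(resp.Body)
	    			resp.Body.Close()
	    			if resp.StatusCode == http.StatusOK {
	    				return nil // healthz returned "ok"
	    			}
	    			// 403 before RBAC bootstraps, 500 while post-start hooks are still failing.
	    			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
	    		}
	    		time.Sleep(500 * time.Millisecond)
	    	}
	    	return fmt.Errorf("apiserver at %s never became healthy", url)
	    }

	    func main() {
	    	if err := waitForHealthz("https://192.168.50.235:8444/healthz", 2*time.Minute); err != nil {
	    		fmt.Println(err)
	    	}
	    }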
	I0806 08:34:30.327122   79614 cni.go:84] Creating CNI manager for ""
	I0806 08:34:30.327128   79614 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0806 08:34:30.328860   79614 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0806 08:34:30.330232   79614 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0806 08:34:30.341069   79614 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0806 08:34:30.359470   79614 system_pods.go:43] waiting for kube-system pods to appear ...
	I0806 08:34:30.369165   79614 system_pods.go:59] 8 kube-system pods found
	I0806 08:34:30.369209   79614 system_pods.go:61] "coredns-7db6d8ff4d-gx8tt" [48f83ba8-a238-4baf-94af-88370fefcb02] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0806 08:34:30.369227   79614 system_pods.go:61] "etcd-default-k8s-diff-port-990751" [bf342ba4-1e11-40bf-80fd-02b402043ecf] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0806 08:34:30.369246   79614 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-990751" [11ee42c4-c9e9-41cf-ace2-deac0f30153a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0806 08:34:30.369256   79614 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-990751" [f17fca30-0afc-4ed8-a8cc-cb64e69723f3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0806 08:34:30.369265   79614 system_pods.go:61] "kube-proxy-lpf88" [866c10f0-2c44-4c03-b460-ff2194bd1ec6] Running
	I0806 08:34:30.369272   79614 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-990751" [2e591ea7-07bd-464f-bf0e-bc24bfcca016] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0806 08:34:30.369282   79614 system_pods.go:61] "metrics-server-569cc877fc-hnxd4" [f369ac70-49b7-448a-9852-89232fb66da9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0806 08:34:30.369290   79614 system_pods.go:61] "storage-provisioner" [0faff3fc-d34b-418e-972b-95a2e36b577a] Running
	I0806 08:34:30.369299   79614 system_pods.go:74] duration metric: took 9.80217ms to wait for pod list to return data ...
	I0806 08:34:30.369311   79614 node_conditions.go:102] verifying NodePressure condition ...
	I0806 08:34:30.372222   79614 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0806 08:34:30.372250   79614 node_conditions.go:123] node cpu capacity is 2
	I0806 08:34:30.372261   79614 node_conditions.go:105] duration metric: took 2.944552ms to run NodePressure ...
	I0806 08:34:30.372279   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0806 08:34:30.639677   79614 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0806 08:34:30.643923   79614 kubeadm.go:739] kubelet initialised
	I0806 08:34:30.643952   79614 kubeadm.go:740] duration metric: took 4.248408ms waiting for restarted kubelet to initialise ...
	I0806 08:34:30.643962   79614 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0806 08:34:30.648766   79614 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-gx8tt" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:30.653672   79614 pod_ready.go:97] node "default-k8s-diff-port-990751" hosting pod "coredns-7db6d8ff4d-gx8tt" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-990751" has status "Ready":"False"
	I0806 08:34:30.653703   79614 pod_ready.go:81] duration metric: took 4.90961ms for pod "coredns-7db6d8ff4d-gx8tt" in "kube-system" namespace to be "Ready" ...
	E0806 08:34:30.653714   79614 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-990751" hosting pod "coredns-7db6d8ff4d-gx8tt" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-990751" has status "Ready":"False"
	I0806 08:34:30.653723   79614 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-990751" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:30.658080   79614 pod_ready.go:97] node "default-k8s-diff-port-990751" hosting pod "etcd-default-k8s-diff-port-990751" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-990751" has status "Ready":"False"
	I0806 08:34:30.658106   79614 pod_ready.go:81] duration metric: took 4.370083ms for pod "etcd-default-k8s-diff-port-990751" in "kube-system" namespace to be "Ready" ...
	E0806 08:34:30.658119   79614 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-990751" hosting pod "etcd-default-k8s-diff-port-990751" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-990751" has status "Ready":"False"
	I0806 08:34:30.658125   79614 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-990751" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:30.662246   79614 pod_ready.go:97] node "default-k8s-diff-port-990751" hosting pod "kube-apiserver-default-k8s-diff-port-990751" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-990751" has status "Ready":"False"
	I0806 08:34:30.662271   79614 pod_ready.go:81] duration metric: took 4.139553ms for pod "kube-apiserver-default-k8s-diff-port-990751" in "kube-system" namespace to be "Ready" ...
	E0806 08:34:30.662280   79614 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-990751" hosting pod "kube-apiserver-default-k8s-diff-port-990751" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-990751" has status "Ready":"False"
	I0806 08:34:30.662286   79614 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-990751" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:30.763246   79614 pod_ready.go:97] node "default-k8s-diff-port-990751" hosting pod "kube-controller-manager-default-k8s-diff-port-990751" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-990751" has status "Ready":"False"
	I0806 08:34:30.763286   79614 pod_ready.go:81] duration metric: took 100.991212ms for pod "kube-controller-manager-default-k8s-diff-port-990751" in "kube-system" namespace to be "Ready" ...
	E0806 08:34:30.763302   79614 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-990751" hosting pod "kube-controller-manager-default-k8s-diff-port-990751" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-990751" has status "Ready":"False"
	I0806 08:34:30.763311   79614 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-lpf88" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:31.162972   79614 pod_ready.go:97] node "default-k8s-diff-port-990751" hosting pod "kube-proxy-lpf88" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-990751" has status "Ready":"False"
	I0806 08:34:31.162998   79614 pod_ready.go:81] duration metric: took 399.678401ms for pod "kube-proxy-lpf88" in "kube-system" namespace to be "Ready" ...
	E0806 08:34:31.163006   79614 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-990751" hosting pod "kube-proxy-lpf88" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-990751" has status "Ready":"False"
	I0806 08:34:31.163012   79614 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-990751" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:31.562538   79614 pod_ready.go:97] node "default-k8s-diff-port-990751" hosting pod "kube-scheduler-default-k8s-diff-port-990751" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-990751" has status "Ready":"False"
	I0806 08:34:31.562564   79614 pod_ready.go:81] duration metric: took 399.544055ms for pod "kube-scheduler-default-k8s-diff-port-990751" in "kube-system" namespace to be "Ready" ...
	E0806 08:34:31.562575   79614 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-990751" hosting pod "kube-scheduler-default-k8s-diff-port-990751" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-990751" has status "Ready":"False"
	I0806 08:34:31.562582   79614 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:31.963410   79614 pod_ready.go:97] node "default-k8s-diff-port-990751" hosting pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-990751" has status "Ready":"False"
	I0806 08:34:31.963438   79614 pod_ready.go:81] duration metric: took 400.849931ms for pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace to be "Ready" ...
	E0806 08:34:31.963449   79614 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-990751" hosting pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-990751" has status "Ready":"False"
	I0806 08:34:31.963456   79614 pod_ready.go:38] duration metric: took 1.319483278s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
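	(The pod_ready waits logged above amount to repeatedly fetching each system-critical pod and checking its Ready condition, skipping pods whose node is not yet Ready. The client-go sketch below illustrates that pattern under assumed names, paths and poll intervals; it is not minikube's pod_ready.go implementation.)

	    // Illustrative sketch only; the kubeconfig path, pod name and interval are assumptions.
	    package main

	    import (
	    	"context"
	    	"fmt"
	    	"time"

	    	corev1 "k8s.io/api/core/v1"
	    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	    	"k8s.io/client-go/kubernetes"
	    	"k8s.io/client-go/tools/clientcmd"
	    )

	    // isPodReady reports whether the pod's Ready condition is True.
	    func isPodReady(pod *corev1.Pod) bool {
	    	for _, c := range pod.Status.Conditions {
	    		if c.Type == corev1.PodReady {
	    			return c.Status == corev1.ConditionTrue
	    		}
	    	}
	    	return false
	    }

	    // waitForPodReady polls the pod until it is Ready or the timeout expires.
	    func waitForPodReady(cs *kubernetes.Clientset, namespace, name string, timeout time.Duration) error {
	    	deadline := time.Now().Add(timeout)
	    	for time.Now().Before(deadline) {
	    		pod, err := cs.CoreV1().Pods(namespace).Get(context.TODO(), name, metav1.GetOptions{})
	    		if err == nil && isPodReady(pod) {
	    			return nil
	    		}
	    		time.Sleep(2 * time.Second) // poll interval is an assumption
	    	}
	    	return fmt.Errorf("pod %s/%s not Ready within %v", namespace, name, timeout)
	    }

	    func main() {
	    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // hypothetical path
	    	if err != nil {
	    		panic(err)
	    	}
	    	cs, err := kubernetes.NewForConfig(cfg)
	    	if err != nil {
	    		panic(err)
	    	}
	    	if err := waitForPodReady(cs, "kube-system", "metrics-server-569cc877fc-hnxd4", 4*time.Minute); err != nil {
	    		fmt.Println(err)
	    	}
	    }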
	I0806 08:34:31.963472   79614 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0806 08:34:31.975982   79614 ops.go:34] apiserver oom_adj: -16
	I0806 08:34:31.976006   79614 kubeadm.go:597] duration metric: took 11.086801084s to restartPrimaryControlPlane
	I0806 08:34:31.976016   79614 kubeadm.go:394] duration metric: took 11.145668347s to StartCluster
	I0806 08:34:31.976035   79614 settings.go:142] acquiring lock: {Name:mkb0bfbcae3b4205ef43487a9de84cfd8afd2e5e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 08:34:31.976131   79614 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19370-13057/kubeconfig
	I0806 08:34:31.977626   79614 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-13057/kubeconfig: {Name:mk5c066914acf8e36556b29792f75b98076ca34a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 08:34:31.977871   79614 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.235 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0806 08:34:31.977927   79614 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0806 08:34:31.978004   79614 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-990751"
	I0806 08:34:31.978058   79614 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-990751"
	W0806 08:34:31.978070   79614 addons.go:243] addon storage-provisioner should already be in state true
	I0806 08:34:31.978060   79614 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-990751"
	I0806 08:34:31.978092   79614 config.go:182] Loaded profile config "default-k8s-diff-port-990751": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0806 08:34:31.978076   79614 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-990751"
	I0806 08:34:31.978124   79614 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-990751"
	I0806 08:34:31.978154   79614 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-990751"
	I0806 08:34:31.978111   79614 host.go:66] Checking if "default-k8s-diff-port-990751" exists ...
	W0806 08:34:31.978172   79614 addons.go:243] addon metrics-server should already be in state true
	I0806 08:34:31.978250   79614 host.go:66] Checking if "default-k8s-diff-port-990751" exists ...
	I0806 08:34:31.978595   79614 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:34:31.978651   79614 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:34:31.978602   79614 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:34:31.978691   79614 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:34:31.978650   79614 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:34:31.978806   79614 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:34:31.979749   79614 out.go:177] * Verifying Kubernetes components...
	I0806 08:34:31.981485   79614 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 08:34:31.994362   79614 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36289
	I0806 08:34:31.995043   79614 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:34:31.995594   79614 main.go:141] libmachine: Using API Version  1
	I0806 08:34:31.995610   79614 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:34:31.995940   79614 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:34:31.996152   79614 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36601
	I0806 08:34:31.996485   79614 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:34:31.996569   79614 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:34:31.996613   79614 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:34:31.996785   79614 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42273
	I0806 08:34:31.997043   79614 main.go:141] libmachine: Using API Version  1
	I0806 08:34:31.997062   79614 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:34:31.997335   79614 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:34:31.997424   79614 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:34:31.997601   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetState
	I0806 08:34:31.997759   79614 main.go:141] libmachine: Using API Version  1
	I0806 08:34:31.997775   79614 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:34:31.998047   79614 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:34:31.998511   79614 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:34:31.998546   79614 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:34:32.001381   79614 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-990751"
	W0806 08:34:32.001397   79614 addons.go:243] addon default-storageclass should already be in state true
	I0806 08:34:32.001427   79614 host.go:66] Checking if "default-k8s-diff-port-990751" exists ...
	I0806 08:34:32.001690   79614 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:34:32.001721   79614 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:34:32.015095   79614 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41633
	I0806 08:34:32.015597   79614 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:34:32.016197   79614 main.go:141] libmachine: Using API Version  1
	I0806 08:34:32.016220   79614 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:34:32.016245   79614 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33731
	I0806 08:34:32.016951   79614 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:34:32.016953   79614 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:34:32.017491   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetState
	I0806 08:34:32.017572   79614 main.go:141] libmachine: Using API Version  1
	I0806 08:34:32.017601   79614 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:34:32.017951   79614 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:34:32.018222   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetState
	I0806 08:34:32.020339   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .DriverName
	I0806 08:34:32.020347   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .DriverName
	I0806 08:34:32.021417   79614 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40949
	I0806 08:34:32.021814   79614 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:34:32.022232   79614 main.go:141] libmachine: Using API Version  1
	I0806 08:34:32.022254   79614 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:34:32.022609   79614 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:34:32.022690   79614 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0806 08:34:32.022690   79614 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0806 08:34:32.023365   79614 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:34:32.023410   79614 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:34:32.024304   79614 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0806 08:34:32.024326   79614 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0806 08:34:32.024343   79614 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0806 08:34:32.024358   79614 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0806 08:34:32.024387   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHHostname
	I0806 08:34:32.024346   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHHostname
	I0806 08:34:32.029046   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:32.029292   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:32.029490   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:51:bd", ip: ""} in network mk-default-k8s-diff-port-990751: {Iface:virbr3 ExpiryTime:2024-08-06 09:34:06 +0000 UTC Type:0 Mac:52:54:00:fb:51:bd Iaid: IPaddr:192.168.50.235 Prefix:24 Hostname:default-k8s-diff-port-990751 Clientid:01:52:54:00:fb:51:bd}
	I0806 08:34:32.029516   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined IP address 192.168.50.235 and MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:32.029672   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:51:bd", ip: ""} in network mk-default-k8s-diff-port-990751: {Iface:virbr3 ExpiryTime:2024-08-06 09:34:06 +0000 UTC Type:0 Mac:52:54:00:fb:51:bd Iaid: IPaddr:192.168.50.235 Prefix:24 Hostname:default-k8s-diff-port-990751 Clientid:01:52:54:00:fb:51:bd}
	I0806 08:34:32.029676   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHPort
	I0806 08:34:32.029694   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined IP address 192.168.50.235 and MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:32.029840   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHPort
	I0806 08:34:32.029866   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHKeyPath
	I0806 08:34:32.030000   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHKeyPath
	I0806 08:34:32.030122   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHUsername
	I0806 08:34:32.030124   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHUsername
	I0806 08:34:32.030265   79614 sshutil.go:53] new ssh client: &{IP:192.168.50.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/default-k8s-diff-port-990751/id_rsa Username:docker}
	I0806 08:34:32.030265   79614 sshutil.go:53] new ssh client: &{IP:192.168.50.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/default-k8s-diff-port-990751/id_rsa Username:docker}
	I0806 08:34:32.043523   79614 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40021
	I0806 08:34:32.043915   79614 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:34:32.044560   79614 main.go:141] libmachine: Using API Version  1
	I0806 08:34:32.044583   79614 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:34:32.044969   79614 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:34:32.045214   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetState
	I0806 08:34:32.046980   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .DriverName
	I0806 08:34:32.047261   79614 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0806 08:34:32.047277   79614 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0806 08:34:32.047296   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHHostname
	I0806 08:34:32.050654   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:32.050991   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:51:bd", ip: ""} in network mk-default-k8s-diff-port-990751: {Iface:virbr3 ExpiryTime:2024-08-06 09:34:06 +0000 UTC Type:0 Mac:52:54:00:fb:51:bd Iaid: IPaddr:192.168.50.235 Prefix:24 Hostname:default-k8s-diff-port-990751 Clientid:01:52:54:00:fb:51:bd}
	I0806 08:34:32.051020   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined IP address 192.168.50.235 and MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:32.051152   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHPort
	I0806 08:34:32.051329   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHKeyPath
	I0806 08:34:32.051495   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHUsername
	I0806 08:34:32.051613   79614 sshutil.go:53] new ssh client: &{IP:192.168.50.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/default-k8s-diff-port-990751/id_rsa Username:docker}
	I0806 08:34:32.192056   79614 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0806 08:34:32.231428   79614 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-990751" to be "Ready" ...
	I0806 08:34:32.278049   79614 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0806 08:34:32.378290   79614 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0806 08:34:32.378317   79614 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0806 08:34:32.399728   79614 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0806 08:34:32.419412   79614 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0806 08:34:32.419441   79614 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0806 08:34:32.470691   79614 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0806 08:34:32.470726   79614 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0806 08:34:32.514191   79614 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0806 08:34:32.590737   79614 main.go:141] libmachine: Making call to close driver server
	I0806 08:34:32.590772   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .Close
	I0806 08:34:32.591110   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | Closing plugin on server side
	I0806 08:34:32.591177   79614 main.go:141] libmachine: Successfully made call to close driver server
	I0806 08:34:32.591190   79614 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 08:34:32.591202   79614 main.go:141] libmachine: Making call to close driver server
	I0806 08:34:32.591212   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .Close
	I0806 08:34:32.591465   79614 main.go:141] libmachine: Successfully made call to close driver server
	I0806 08:34:32.591492   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | Closing plugin on server side
	I0806 08:34:32.591505   79614 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 08:34:32.597276   79614 main.go:141] libmachine: Making call to close driver server
	I0806 08:34:32.597297   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .Close
	I0806 08:34:32.597547   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | Closing plugin on server side
	I0806 08:34:32.597564   79614 main.go:141] libmachine: Successfully made call to close driver server
	I0806 08:34:32.597579   79614 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 08:34:33.401000   79614 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.001229332s)
	I0806 08:34:33.401045   79614 main.go:141] libmachine: Making call to close driver server
	I0806 08:34:33.401062   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .Close
	I0806 08:34:33.401346   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | Closing plugin on server side
	I0806 08:34:33.401402   79614 main.go:141] libmachine: Successfully made call to close driver server
	I0806 08:34:33.401426   79614 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 08:34:33.401443   79614 main.go:141] libmachine: Making call to close driver server
	I0806 08:34:33.401452   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .Close
	I0806 08:34:33.401692   79614 main.go:141] libmachine: Successfully made call to close driver server
	I0806 08:34:33.401720   79614 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 08:34:33.455633   79614 main.go:141] libmachine: Making call to close driver server
	I0806 08:34:33.455663   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .Close
	I0806 08:34:33.455947   79614 main.go:141] libmachine: Successfully made call to close driver server
	I0806 08:34:33.455964   79614 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 08:34:33.455964   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | Closing plugin on server side
	I0806 08:34:33.455972   79614 main.go:141] libmachine: Making call to close driver server
	I0806 08:34:33.455980   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .Close
	I0806 08:34:33.456288   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | Closing plugin on server side
	I0806 08:34:33.456286   79614 main.go:141] libmachine: Successfully made call to close driver server
	I0806 08:34:33.456328   79614 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 08:34:33.456347   79614 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-990751"
	I0806 08:34:33.459411   79614 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0806 08:34:31.262960   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:31.263282   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | unable to find current IP address of domain old-k8s-version-409903 in network mk-old-k8s-version-409903
	I0806 08:34:31.263304   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | I0806 08:34:31.263238   80849 retry.go:31] will retry after 4.76596973s: waiting for machine to come up
	I0806 08:34:33.460780   79614 addons.go:510] duration metric: took 1.482853001s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0806 08:34:29.611714   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:31.612385   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:34.111936   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:37.353538   78922 start.go:364] duration metric: took 58.142260904s to acquireMachinesLock for "no-preload-703340"
	I0806 08:34:37.353587   78922 start.go:96] Skipping create...Using existing machine configuration
	I0806 08:34:37.353598   78922 fix.go:54] fixHost starting: 
	I0806 08:34:37.354016   78922 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:34:37.354050   78922 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:34:37.373838   78922 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37209
	I0806 08:34:37.374283   78922 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:34:37.374874   78922 main.go:141] libmachine: Using API Version  1
	I0806 08:34:37.374910   78922 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:34:37.375250   78922 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:34:37.375448   78922 main.go:141] libmachine: (no-preload-703340) Calling .DriverName
	I0806 08:34:37.375584   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetState
	I0806 08:34:37.377251   78922 fix.go:112] recreateIfNeeded on no-preload-703340: state=Stopped err=<nil>
	I0806 08:34:37.377275   78922 main.go:141] libmachine: (no-preload-703340) Calling .DriverName
	W0806 08:34:37.377421   78922 fix.go:138] unexpected machine state, will restart: <nil>
	I0806 08:34:37.379343   78922 out.go:177] * Restarting existing kvm2 VM for "no-preload-703340" ...
	I0806 08:34:36.030512   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:36.030950   79927 main.go:141] libmachine: (old-k8s-version-409903) Found IP for machine: 192.168.61.158
	I0806 08:34:36.030972   79927 main.go:141] libmachine: (old-k8s-version-409903) Reserving static IP address...
	I0806 08:34:36.030989   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has current primary IP address 192.168.61.158 and MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:36.031349   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | found host DHCP lease matching {name: "old-k8s-version-409903", mac: "52:54:00:12:30:88", ip: "192.168.61.158"} in network mk-old-k8s-version-409903: {Iface:virbr1 ExpiryTime:2024-08-06 09:34:26 +0000 UTC Type:0 Mac:52:54:00:12:30:88 Iaid: IPaddr:192.168.61.158 Prefix:24 Hostname:old-k8s-version-409903 Clientid:01:52:54:00:12:30:88}
	I0806 08:34:36.031376   79927 main.go:141] libmachine: (old-k8s-version-409903) Reserved static IP address: 192.168.61.158
	I0806 08:34:36.031398   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | skip adding static IP to network mk-old-k8s-version-409903 - found existing host DHCP lease matching {name: "old-k8s-version-409903", mac: "52:54:00:12:30:88", ip: "192.168.61.158"}
	I0806 08:34:36.031416   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | Getting to WaitForSSH function...
	I0806 08:34:36.031428   79927 main.go:141] libmachine: (old-k8s-version-409903) Waiting for SSH to be available...
	I0806 08:34:36.033684   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:36.034094   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:30:88", ip: ""} in network mk-old-k8s-version-409903: {Iface:virbr1 ExpiryTime:2024-08-06 09:34:26 +0000 UTC Type:0 Mac:52:54:00:12:30:88 Iaid: IPaddr:192.168.61.158 Prefix:24 Hostname:old-k8s-version-409903 Clientid:01:52:54:00:12:30:88}
	I0806 08:34:36.034124   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined IP address 192.168.61.158 and MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:36.034327   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | Using SSH client type: external
	I0806 08:34:36.034366   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | Using SSH private key: /home/jenkins/minikube-integration/19370-13057/.minikube/machines/old-k8s-version-409903/id_rsa (-rw-------)
	I0806 08:34:36.034395   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.158 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19370-13057/.minikube/machines/old-k8s-version-409903/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0806 08:34:36.034407   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | About to run SSH command:
	I0806 08:34:36.034415   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | exit 0
	I0806 08:34:36.164605   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | SSH cmd err, output: <nil>: 
	I0806 08:34:36.164952   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetConfigRaw
	I0806 08:34:36.165509   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetIP
	I0806 08:34:36.168135   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:36.168471   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:30:88", ip: ""} in network mk-old-k8s-version-409903: {Iface:virbr1 ExpiryTime:2024-08-06 09:34:26 +0000 UTC Type:0 Mac:52:54:00:12:30:88 Iaid: IPaddr:192.168.61.158 Prefix:24 Hostname:old-k8s-version-409903 Clientid:01:52:54:00:12:30:88}
	I0806 08:34:36.168499   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined IP address 192.168.61.158 and MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:36.168756   79927 profile.go:143] Saving config to /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/old-k8s-version-409903/config.json ...
	I0806 08:34:36.168956   79927 machine.go:94] provisionDockerMachine start ...
	I0806 08:34:36.168974   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .DriverName
	I0806 08:34:36.169186   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHHostname
	I0806 08:34:36.171367   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:36.171702   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:30:88", ip: ""} in network mk-old-k8s-version-409903: {Iface:virbr1 ExpiryTime:2024-08-06 09:34:26 +0000 UTC Type:0 Mac:52:54:00:12:30:88 Iaid: IPaddr:192.168.61.158 Prefix:24 Hostname:old-k8s-version-409903 Clientid:01:52:54:00:12:30:88}
	I0806 08:34:36.171723   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined IP address 192.168.61.158 and MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:36.171902   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHPort
	I0806 08:34:36.172080   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHKeyPath
	I0806 08:34:36.172240   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHKeyPath
	I0806 08:34:36.172400   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHUsername
	I0806 08:34:36.172569   79927 main.go:141] libmachine: Using SSH client type: native
	I0806 08:34:36.172798   79927 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.158 22 <nil> <nil>}
	I0806 08:34:36.172812   79927 main.go:141] libmachine: About to run SSH command:
	hostname
	I0806 08:34:36.288738   79927 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0806 08:34:36.288770   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetMachineName
	I0806 08:34:36.289073   79927 buildroot.go:166] provisioning hostname "old-k8s-version-409903"
	I0806 08:34:36.289088   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetMachineName
	I0806 08:34:36.289272   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHHostname
	I0806 08:34:36.291958   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:36.292326   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:30:88", ip: ""} in network mk-old-k8s-version-409903: {Iface:virbr1 ExpiryTime:2024-08-06 09:34:26 +0000 UTC Type:0 Mac:52:54:00:12:30:88 Iaid: IPaddr:192.168.61.158 Prefix:24 Hostname:old-k8s-version-409903 Clientid:01:52:54:00:12:30:88}
	I0806 08:34:36.292354   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined IP address 192.168.61.158 and MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:36.292462   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHPort
	I0806 08:34:36.292662   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHKeyPath
	I0806 08:34:36.292807   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHKeyPath
	I0806 08:34:36.292953   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHUsername
	I0806 08:34:36.293122   79927 main.go:141] libmachine: Using SSH client type: native
	I0806 08:34:36.293287   79927 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.158 22 <nil> <nil>}
	I0806 08:34:36.293300   79927 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-409903 && echo "old-k8s-version-409903" | sudo tee /etc/hostname
	I0806 08:34:36.427591   79927 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-409903
	
	I0806 08:34:36.427620   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHHostname
	I0806 08:34:36.430395   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:36.430724   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:30:88", ip: ""} in network mk-old-k8s-version-409903: {Iface:virbr1 ExpiryTime:2024-08-06 09:34:26 +0000 UTC Type:0 Mac:52:54:00:12:30:88 Iaid: IPaddr:192.168.61.158 Prefix:24 Hostname:old-k8s-version-409903 Clientid:01:52:54:00:12:30:88}
	I0806 08:34:36.430758   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined IP address 192.168.61.158 and MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:36.430900   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHPort
	I0806 08:34:36.431133   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHKeyPath
	I0806 08:34:36.431311   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHKeyPath
	I0806 08:34:36.431476   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHUsername
	I0806 08:34:36.431662   79927 main.go:141] libmachine: Using SSH client type: native
	I0806 08:34:36.431876   79927 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.158 22 <nil> <nil>}
	I0806 08:34:36.431895   79927 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-409903' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-409903/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-409903' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0806 08:34:36.554542   79927 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0806 08:34:36.554574   79927 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19370-13057/.minikube CaCertPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19370-13057/.minikube}
	I0806 08:34:36.554613   79927 buildroot.go:174] setting up certificates
	I0806 08:34:36.554629   79927 provision.go:84] configureAuth start
	I0806 08:34:36.554646   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetMachineName
	I0806 08:34:36.554932   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetIP
	I0806 08:34:36.557899   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:36.558351   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:30:88", ip: ""} in network mk-old-k8s-version-409903: {Iface:virbr1 ExpiryTime:2024-08-06 09:34:26 +0000 UTC Type:0 Mac:52:54:00:12:30:88 Iaid: IPaddr:192.168.61.158 Prefix:24 Hostname:old-k8s-version-409903 Clientid:01:52:54:00:12:30:88}
	I0806 08:34:36.558373   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined IP address 192.168.61.158 and MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:36.558530   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHHostname
	I0806 08:34:36.560878   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:36.561153   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:30:88", ip: ""} in network mk-old-k8s-version-409903: {Iface:virbr1 ExpiryTime:2024-08-06 09:34:26 +0000 UTC Type:0 Mac:52:54:00:12:30:88 Iaid: IPaddr:192.168.61.158 Prefix:24 Hostname:old-k8s-version-409903 Clientid:01:52:54:00:12:30:88}
	I0806 08:34:36.561182   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined IP address 192.168.61.158 and MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:36.561339   79927 provision.go:143] copyHostCerts
	I0806 08:34:36.561400   79927 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-13057/.minikube/ca.pem, removing ...
	I0806 08:34:36.561410   79927 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-13057/.minikube/ca.pem
	I0806 08:34:36.561462   79927 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19370-13057/.minikube/ca.pem (1078 bytes)
	I0806 08:34:36.561563   79927 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-13057/.minikube/cert.pem, removing ...
	I0806 08:34:36.561572   79927 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-13057/.minikube/cert.pem
	I0806 08:34:36.561594   79927 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19370-13057/.minikube/cert.pem (1123 bytes)
	I0806 08:34:36.561647   79927 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-13057/.minikube/key.pem, removing ...
	I0806 08:34:36.561653   79927 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-13057/.minikube/key.pem
	I0806 08:34:36.561670   79927 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19370-13057/.minikube/key.pem (1675 bytes)
	I0806 08:34:36.561752   79927 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19370-13057/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-409903 san=[127.0.0.1 192.168.61.158 localhost minikube old-k8s-version-409903]
	I0806 08:34:36.646180   79927 provision.go:177] copyRemoteCerts
	I0806 08:34:36.646233   79927 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0806 08:34:36.646257   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHHostname
	I0806 08:34:36.649020   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:36.649380   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:30:88", ip: ""} in network mk-old-k8s-version-409903: {Iface:virbr1 ExpiryTime:2024-08-06 09:34:26 +0000 UTC Type:0 Mac:52:54:00:12:30:88 Iaid: IPaddr:192.168.61.158 Prefix:24 Hostname:old-k8s-version-409903 Clientid:01:52:54:00:12:30:88}
	I0806 08:34:36.649405   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined IP address 192.168.61.158 and MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:36.649564   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHPort
	I0806 08:34:36.649791   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHKeyPath
	I0806 08:34:36.649962   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHUsername
	I0806 08:34:36.650092   79927 sshutil.go:53] new ssh client: &{IP:192.168.61.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/old-k8s-version-409903/id_rsa Username:docker}
	I0806 08:34:36.738988   79927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0806 08:34:36.765104   79927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0806 08:34:36.791946   79927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0806 08:34:36.817128   79927 provision.go:87] duration metric: took 262.482885ms to configureAuth
	I0806 08:34:36.817157   79927 buildroot.go:189] setting minikube options for container-runtime
	I0806 08:34:36.817352   79927 config.go:182] Loaded profile config "old-k8s-version-409903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0806 08:34:36.817435   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHHostname
	I0806 08:34:36.820142   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:36.820480   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:30:88", ip: ""} in network mk-old-k8s-version-409903: {Iface:virbr1 ExpiryTime:2024-08-06 09:34:26 +0000 UTC Type:0 Mac:52:54:00:12:30:88 Iaid: IPaddr:192.168.61.158 Prefix:24 Hostname:old-k8s-version-409903 Clientid:01:52:54:00:12:30:88}
	I0806 08:34:36.820520   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined IP address 192.168.61.158 and MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:36.820659   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHPort
	I0806 08:34:36.820892   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHKeyPath
	I0806 08:34:36.821095   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHKeyPath
	I0806 08:34:36.821289   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHUsername
	I0806 08:34:36.821459   79927 main.go:141] libmachine: Using SSH client type: native
	I0806 08:34:36.821703   79927 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.158 22 <nil> <nil>}
	I0806 08:34:36.821722   79927 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0806 08:34:37.099949   79927 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0806 08:34:37.099979   79927 machine.go:97] duration metric: took 931.010538ms to provisionDockerMachine
	I0806 08:34:37.099993   79927 start.go:293] postStartSetup for "old-k8s-version-409903" (driver="kvm2")
	I0806 08:34:37.100007   79927 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0806 08:34:37.100028   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .DriverName
	I0806 08:34:37.100379   79927 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0806 08:34:37.100414   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHHostname
	I0806 08:34:37.103297   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:37.103691   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:30:88", ip: ""} in network mk-old-k8s-version-409903: {Iface:virbr1 ExpiryTime:2024-08-06 09:34:26 +0000 UTC Type:0 Mac:52:54:00:12:30:88 Iaid: IPaddr:192.168.61.158 Prefix:24 Hostname:old-k8s-version-409903 Clientid:01:52:54:00:12:30:88}
	I0806 08:34:37.103717   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined IP address 192.168.61.158 and MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:37.103887   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHPort
	I0806 08:34:37.104106   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHKeyPath
	I0806 08:34:37.104263   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHUsername
	I0806 08:34:37.104402   79927 sshutil.go:53] new ssh client: &{IP:192.168.61.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/old-k8s-version-409903/id_rsa Username:docker}
	I0806 08:34:37.192035   79927 ssh_runner.go:195] Run: cat /etc/os-release
	I0806 08:34:37.196693   79927 info.go:137] Remote host: Buildroot 2023.02.9
	I0806 08:34:37.196712   79927 filesync.go:126] Scanning /home/jenkins/minikube-integration/19370-13057/.minikube/addons for local assets ...
	I0806 08:34:37.196776   79927 filesync.go:126] Scanning /home/jenkins/minikube-integration/19370-13057/.minikube/files for local assets ...
	I0806 08:34:37.196861   79927 filesync.go:149] local asset: /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem -> 202462.pem in /etc/ssl/certs
	I0806 08:34:37.196957   79927 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0806 08:34:37.206906   79927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem --> /etc/ssl/certs/202462.pem (1708 bytes)
	I0806 08:34:37.232270   79927 start.go:296] duration metric: took 132.262362ms for postStartSetup
	I0806 08:34:37.232311   79927 fix.go:56] duration metric: took 22.878683419s for fixHost
	I0806 08:34:37.232332   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHHostname
	I0806 08:34:37.235278   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:37.235721   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:30:88", ip: ""} in network mk-old-k8s-version-409903: {Iface:virbr1 ExpiryTime:2024-08-06 09:34:26 +0000 UTC Type:0 Mac:52:54:00:12:30:88 Iaid: IPaddr:192.168.61.158 Prefix:24 Hostname:old-k8s-version-409903 Clientid:01:52:54:00:12:30:88}
	I0806 08:34:37.235751   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined IP address 192.168.61.158 and MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:37.235946   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHPort
	I0806 08:34:37.236168   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHKeyPath
	I0806 08:34:37.236337   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHKeyPath
	I0806 08:34:37.236491   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHUsername
	I0806 08:34:37.236640   79927 main.go:141] libmachine: Using SSH client type: native
	I0806 08:34:37.236793   79927 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.158 22 <nil> <nil>}
	I0806 08:34:37.236803   79927 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0806 08:34:37.353370   79927 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722933277.327882532
	
	I0806 08:34:37.353397   79927 fix.go:216] guest clock: 1722933277.327882532
	I0806 08:34:37.353408   79927 fix.go:229] Guest: 2024-08-06 08:34:37.327882532 +0000 UTC Remote: 2024-08-06 08:34:37.232315481 +0000 UTC m=+218.506631326 (delta=95.567051ms)
	I0806 08:34:37.353439   79927 fix.go:200] guest clock delta is within tolerance: 95.567051ms
	I0806 08:34:37.353451   79927 start.go:83] releasing machines lock for "old-k8s-version-409903", held for 22.999876259s
	I0806 08:34:37.353490   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .DriverName
	I0806 08:34:37.353775   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetIP
	I0806 08:34:37.356788   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:37.357197   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:30:88", ip: ""} in network mk-old-k8s-version-409903: {Iface:virbr1 ExpiryTime:2024-08-06 09:34:26 +0000 UTC Type:0 Mac:52:54:00:12:30:88 Iaid: IPaddr:192.168.61.158 Prefix:24 Hostname:old-k8s-version-409903 Clientid:01:52:54:00:12:30:88}
	I0806 08:34:37.357224   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined IP address 192.168.61.158 and MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:37.357434   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .DriverName
	I0806 08:34:37.358035   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .DriverName
	I0806 08:34:37.358238   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .DriverName
	I0806 08:34:37.358320   79927 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0806 08:34:37.358360   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHHostname
	I0806 08:34:37.358478   79927 ssh_runner.go:195] Run: cat /version.json
	I0806 08:34:37.358505   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHHostname
	I0806 08:34:37.361125   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:37.361429   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:37.361464   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:30:88", ip: ""} in network mk-old-k8s-version-409903: {Iface:virbr1 ExpiryTime:2024-08-06 09:34:26 +0000 UTC Type:0 Mac:52:54:00:12:30:88 Iaid: IPaddr:192.168.61.158 Prefix:24 Hostname:old-k8s-version-409903 Clientid:01:52:54:00:12:30:88}
	I0806 08:34:37.361490   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined IP address 192.168.61.158 and MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:37.361681   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHPort
	I0806 08:34:37.361846   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHKeyPath
	I0806 08:34:37.361889   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:30:88", ip: ""} in network mk-old-k8s-version-409903: {Iface:virbr1 ExpiryTime:2024-08-06 09:34:26 +0000 UTC Type:0 Mac:52:54:00:12:30:88 Iaid: IPaddr:192.168.61.158 Prefix:24 Hostname:old-k8s-version-409903 Clientid:01:52:54:00:12:30:88}
	I0806 08:34:37.361913   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined IP address 192.168.61.158 and MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:37.362018   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHUsername
	I0806 08:34:37.362078   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHPort
	I0806 08:34:37.362188   79927 sshutil.go:53] new ssh client: &{IP:192.168.61.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/old-k8s-version-409903/id_rsa Username:docker}
	I0806 08:34:37.362319   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHKeyPath
	I0806 08:34:37.362452   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHUsername
	I0806 08:34:37.362596   79927 sshutil.go:53] new ssh client: &{IP:192.168.61.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/old-k8s-version-409903/id_rsa Username:docker}
	I0806 08:34:37.468465   79927 ssh_runner.go:195] Run: systemctl --version
	I0806 08:34:37.475436   79927 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0806 08:34:37.627725   79927 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0806 08:34:37.635346   79927 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0806 08:34:37.635432   79927 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0806 08:34:37.652546   79927 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0806 08:34:37.652571   79927 start.go:495] detecting cgroup driver to use...
	I0806 08:34:37.652642   79927 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0806 08:34:37.671386   79927 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0806 08:34:37.686408   79927 docker.go:217] disabling cri-docker service (if available) ...
	I0806 08:34:37.686473   79927 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0806 08:34:37.701731   79927 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0806 08:34:37.717528   79927 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0806 08:34:37.845160   79927 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0806 08:34:37.999278   79927 docker.go:233] disabling docker service ...
	I0806 08:34:37.999361   79927 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0806 08:34:38.017101   79927 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0806 08:34:38.033189   79927 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0806 08:34:38.213620   79927 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0806 08:34:38.374790   79927 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0806 08:34:38.393394   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0806 08:34:38.419807   79927 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0806 08:34:38.419878   79927 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:34:38.434870   79927 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0806 08:34:38.434946   79927 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:34:38.449029   79927 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:34:38.460440   79927 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:34:38.473654   79927 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0806 08:34:38.486032   79927 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0806 08:34:38.495861   79927 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0806 08:34:38.495927   79927 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0806 08:34:38.511021   79927 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0806 08:34:38.521985   79927 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 08:34:38.651919   79927 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0806 08:34:34.235598   79614 node_ready.go:53] node "default-k8s-diff-port-990751" has status "Ready":"False"
	I0806 08:34:36.734741   79614 node_ready.go:53] node "default-k8s-diff-port-990751" has status "Ready":"False"
	I0806 08:34:37.235469   79614 node_ready.go:49] node "default-k8s-diff-port-990751" has status "Ready":"True"
	I0806 08:34:37.235496   79614 node_ready.go:38] duration metric: took 5.004029432s for node "default-k8s-diff-port-990751" to be "Ready" ...
	I0806 08:34:37.235506   79614 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0806 08:34:37.242299   79614 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-gx8tt" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:37.247917   79614 pod_ready.go:92] pod "coredns-7db6d8ff4d-gx8tt" in "kube-system" namespace has status "Ready":"True"
	I0806 08:34:37.247936   79614 pod_ready.go:81] duration metric: took 5.613775ms for pod "coredns-7db6d8ff4d-gx8tt" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:37.247944   79614 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-990751" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:38.822367   79927 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0806 08:34:38.822444   79927 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0806 08:34:38.828148   79927 start.go:563] Will wait 60s for crictl version
	I0806 08:34:38.828211   79927 ssh_runner.go:195] Run: which crictl
	I0806 08:34:38.832084   79927 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0806 08:34:38.872571   79927 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0806 08:34:38.872658   79927 ssh_runner.go:195] Run: crio --version
	I0806 08:34:38.903507   79927 ssh_runner.go:195] Run: crio --version
	I0806 08:34:38.934955   79927 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0806 08:34:36.612552   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:38.614644   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:37.380808   78922 main.go:141] libmachine: (no-preload-703340) Calling .Start
	I0806 08:34:37.381005   78922 main.go:141] libmachine: (no-preload-703340) Ensuring networks are active...
	I0806 08:34:37.381749   78922 main.go:141] libmachine: (no-preload-703340) Ensuring network default is active
	I0806 08:34:37.382052   78922 main.go:141] libmachine: (no-preload-703340) Ensuring network mk-no-preload-703340 is active
	I0806 08:34:37.382556   78922 main.go:141] libmachine: (no-preload-703340) Getting domain xml...
	I0806 08:34:37.383386   78922 main.go:141] libmachine: (no-preload-703340) Creating domain...
	I0806 08:34:38.779397   78922 main.go:141] libmachine: (no-preload-703340) Waiting to get IP...
	I0806 08:34:38.780262   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:38.780737   78922 main.go:141] libmachine: (no-preload-703340) DBG | unable to find current IP address of domain no-preload-703340 in network mk-no-preload-703340
	I0806 08:34:38.780809   78922 main.go:141] libmachine: (no-preload-703340) DBG | I0806 08:34:38.780723   81046 retry.go:31] will retry after 194.059215ms: waiting for machine to come up
	I0806 08:34:38.976254   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:38.976734   78922 main.go:141] libmachine: (no-preload-703340) DBG | unable to find current IP address of domain no-preload-703340 in network mk-no-preload-703340
	I0806 08:34:38.976769   78922 main.go:141] libmachine: (no-preload-703340) DBG | I0806 08:34:38.976653   81046 retry.go:31] will retry after 288.140112ms: waiting for machine to come up
	I0806 08:34:39.266297   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:39.267031   78922 main.go:141] libmachine: (no-preload-703340) DBG | unable to find current IP address of domain no-preload-703340 in network mk-no-preload-703340
	I0806 08:34:39.267060   78922 main.go:141] libmachine: (no-preload-703340) DBG | I0806 08:34:39.266948   81046 retry.go:31] will retry after 354.804756ms: waiting for machine to come up
	I0806 08:34:39.623548   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:39.624083   78922 main.go:141] libmachine: (no-preload-703340) DBG | unable to find current IP address of domain no-preload-703340 in network mk-no-preload-703340
	I0806 08:34:39.624111   78922 main.go:141] libmachine: (no-preload-703340) DBG | I0806 08:34:39.623994   81046 retry.go:31] will retry after 418.851823ms: waiting for machine to come up
	I0806 08:34:40.044734   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:40.045142   78922 main.go:141] libmachine: (no-preload-703340) DBG | unable to find current IP address of domain no-preload-703340 in network mk-no-preload-703340
	I0806 08:34:40.045165   78922 main.go:141] libmachine: (no-preload-703340) DBG | I0806 08:34:40.045103   81046 retry.go:31] will retry after 525.29917ms: waiting for machine to come up
	I0806 08:34:40.571928   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:40.572395   78922 main.go:141] libmachine: (no-preload-703340) DBG | unable to find current IP address of domain no-preload-703340 in network mk-no-preload-703340
	I0806 08:34:40.572431   78922 main.go:141] libmachine: (no-preload-703340) DBG | I0806 08:34:40.572332   81046 retry.go:31] will retry after 609.965809ms: waiting for machine to come up
	I0806 08:34:41.184209   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:41.184793   78922 main.go:141] libmachine: (no-preload-703340) DBG | unable to find current IP address of domain no-preload-703340 in network mk-no-preload-703340
	I0806 08:34:41.184820   78922 main.go:141] libmachine: (no-preload-703340) DBG | I0806 08:34:41.184739   81046 retry.go:31] will retry after 1.120596s: waiting for machine to come up
	I0806 08:34:38.936206   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetIP
	I0806 08:34:38.939581   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:38.940049   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:30:88", ip: ""} in network mk-old-k8s-version-409903: {Iface:virbr1 ExpiryTime:2024-08-06 09:34:26 +0000 UTC Type:0 Mac:52:54:00:12:30:88 Iaid: IPaddr:192.168.61.158 Prefix:24 Hostname:old-k8s-version-409903 Clientid:01:52:54:00:12:30:88}
	I0806 08:34:38.940076   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined IP address 192.168.61.158 and MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:38.940336   79927 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0806 08:34:38.944801   79927 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0806 08:34:38.959454   79927 kubeadm.go:883] updating cluster {Name:old-k8s-version-409903 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-409903 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.158 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0806 08:34:38.959581   79927 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0806 08:34:38.959647   79927 ssh_runner.go:195] Run: sudo crictl images --output json
	I0806 08:34:39.014798   79927 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0806 08:34:39.014908   79927 ssh_runner.go:195] Run: which lz4
	I0806 08:34:39.019642   79927 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0806 08:34:39.025125   79927 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0806 08:34:39.025165   79927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0806 08:34:40.772902   79927 crio.go:462] duration metric: took 1.753273519s to copy over tarball
	I0806 08:34:40.772994   79927 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0806 08:34:39.256627   79614 pod_ready.go:102] pod "etcd-default-k8s-diff-port-990751" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:41.257158   79614 pod_ready.go:102] pod "etcd-default-k8s-diff-port-990751" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:42.754699   79614 pod_ready.go:92] pod "etcd-default-k8s-diff-port-990751" in "kube-system" namespace has status "Ready":"True"
	I0806 08:34:42.754729   79614 pod_ready.go:81] duration metric: took 5.506777292s for pod "etcd-default-k8s-diff-port-990751" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:42.754742   79614 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-990751" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:42.761260   79614 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-990751" in "kube-system" namespace has status "Ready":"True"
	I0806 08:34:42.761282   79614 pod_ready.go:81] duration metric: took 6.531767ms for pod "kube-apiserver-default-k8s-diff-port-990751" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:42.761292   79614 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-990751" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:42.769860   79614 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-990751" in "kube-system" namespace has status "Ready":"True"
	I0806 08:34:42.769884   79614 pod_ready.go:81] duration metric: took 8.585216ms for pod "kube-controller-manager-default-k8s-diff-port-990751" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:42.769898   79614 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-lpf88" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:42.775420   79614 pod_ready.go:92] pod "kube-proxy-lpf88" in "kube-system" namespace has status "Ready":"True"
	I0806 08:34:42.775445   79614 pod_ready.go:81] duration metric: took 5.540587ms for pod "kube-proxy-lpf88" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:42.775453   79614 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-990751" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:42.781317   79614 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-990751" in "kube-system" namespace has status "Ready":"True"
	I0806 08:34:42.781341   79614 pod_ready.go:81] duration metric: took 5.88019ms for pod "kube-scheduler-default-k8s-diff-port-990751" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:42.781353   79614 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:40.614686   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:43.114736   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:42.306869   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:42.307378   78922 main.go:141] libmachine: (no-preload-703340) DBG | unable to find current IP address of domain no-preload-703340 in network mk-no-preload-703340
	I0806 08:34:42.307408   78922 main.go:141] libmachine: (no-preload-703340) DBG | I0806 08:34:42.307341   81046 retry.go:31] will retry after 1.26489185s: waiting for machine to come up
	I0806 08:34:43.574072   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:43.574494   78922 main.go:141] libmachine: (no-preload-703340) DBG | unable to find current IP address of domain no-preload-703340 in network mk-no-preload-703340
	I0806 08:34:43.574517   78922 main.go:141] libmachine: (no-preload-703340) DBG | I0806 08:34:43.574440   81046 retry.go:31] will retry after 1.547737608s: waiting for machine to come up
	I0806 08:34:45.124236   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:45.124794   78922 main.go:141] libmachine: (no-preload-703340) DBG | unable to find current IP address of domain no-preload-703340 in network mk-no-preload-703340
	I0806 08:34:45.124822   78922 main.go:141] libmachine: (no-preload-703340) DBG | I0806 08:34:45.124745   81046 retry.go:31] will retry after 1.491615644s: waiting for machine to come up
	I0806 08:34:46.617663   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:46.618159   78922 main.go:141] libmachine: (no-preload-703340) DBG | unable to find current IP address of domain no-preload-703340 in network mk-no-preload-703340
	I0806 08:34:46.618183   78922 main.go:141] libmachine: (no-preload-703340) DBG | I0806 08:34:46.618117   81046 retry.go:31] will retry after 2.284217057s: waiting for machine to come up
	I0806 08:34:43.977375   79927 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.204351244s)
	I0806 08:34:43.977404   79927 crio.go:469] duration metric: took 3.204457213s to extract the tarball
	I0806 08:34:43.977412   79927 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0806 08:34:44.023819   79927 ssh_runner.go:195] Run: sudo crictl images --output json
	I0806 08:34:44.070991   79927 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0806 08:34:44.071016   79927 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0806 08:34:44.071089   79927 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0806 08:34:44.071129   79927 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0806 08:34:44.071138   79927 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0806 08:34:44.071090   79927 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0806 08:34:44.071184   79927 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0806 08:34:44.071192   79927 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0806 08:34:44.071210   79927 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0806 08:34:44.071255   79927 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0806 08:34:44.072641   79927 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0806 08:34:44.072728   79927 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0806 08:34:44.072743   79927 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0806 08:34:44.072642   79927 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0806 08:34:44.072749   79927 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0806 08:34:44.072811   79927 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0806 08:34:44.073085   79927 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0806 08:34:44.073298   79927 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0806 08:34:44.239872   79927 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0806 08:34:44.287412   79927 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0806 08:34:44.287458   79927 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0806 08:34:44.287504   79927 ssh_runner.go:195] Run: which crictl
	I0806 08:34:44.292050   79927 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0806 08:34:44.309229   79927 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0806 08:34:44.336611   79927 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0806 08:34:44.371389   79927 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0806 08:34:44.371447   79927 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0806 08:34:44.371500   79927 ssh_runner.go:195] Run: which crictl
	I0806 08:34:44.375821   79927 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0806 08:34:44.412139   79927 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0806 08:34:44.415159   79927 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0806 08:34:44.415339   79927 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0806 08:34:44.418296   79927 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0806 08:34:44.423718   79927 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0806 08:34:44.437866   79927 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0806 08:34:44.494716   79927 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0806 08:34:44.494765   79927 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0806 08:34:44.494819   79927 ssh_runner.go:195] Run: which crictl
	I0806 08:34:44.526849   79927 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0806 08:34:44.526898   79927 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0806 08:34:44.526961   79927 ssh_runner.go:195] Run: which crictl
	I0806 08:34:44.555530   79927 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0806 08:34:44.555587   79927 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0806 08:34:44.555620   79927 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0806 08:34:44.555634   79927 ssh_runner.go:195] Run: which crictl
	I0806 08:34:44.555655   79927 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0806 08:34:44.555692   79927 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0806 08:34:44.555707   79927 ssh_runner.go:195] Run: which crictl
	I0806 08:34:44.555714   79927 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0806 08:34:44.555743   79927 ssh_runner.go:195] Run: which crictl
	I0806 08:34:44.555748   79927 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0806 08:34:44.555792   79927 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0806 08:34:44.561072   79927 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0806 08:34:44.624216   79927 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0806 08:34:44.639877   79927 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0806 08:34:44.639895   79927 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0806 08:34:44.639979   79927 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0806 08:34:44.653684   79927 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0806 08:34:44.711774   79927 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0806 08:34:44.711804   79927 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0806 08:34:44.964733   79927 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0806 08:34:45.112727   79927 cache_images.go:92] duration metric: took 1.041682516s to LoadCachedImages
	W0806 08:34:45.112813   79927 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I0806 08:34:45.112837   79927 kubeadm.go:934] updating node { 192.168.61.158 8443 v1.20.0 crio true true} ...
	I0806 08:34:45.112964   79927 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-409903 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.158
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-409903 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0806 08:34:45.113047   79927 ssh_runner.go:195] Run: crio config
	I0806 08:34:45.165383   79927 cni.go:84] Creating CNI manager for ""
	I0806 08:34:45.165413   79927 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0806 08:34:45.165429   79927 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0806 08:34:45.165455   79927 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.158 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-409903 NodeName:old-k8s-version-409903 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.158"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.158 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0806 08:34:45.165687   79927 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.158
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-409903"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.158
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.158"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
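	The kubeadm/kubelet/kube-proxy configuration printed above is what gets written to /var/tmp/minikube/kubeadm.yaml.new and later copied to /var/tmp/minikube/kubeadm.yaml in the steps below. As a hypothetical sanity check outside this test run (assuming python3 with PyYAML is available on the node), the rendered file can be confirmed to be well-formed multi-document YAML before kubeadm consumes it:
	
	    # not part of the test run; only checks YAML syntax, not kubeadm semantics
	    python3 -c 'import sys, yaml; list(yaml.safe_load_all(open(sys.argv[1]))); print("ok")' /var/tmp/minikube/kubeadm.yaml.new
	
	Schema-level validation still happens when the "kubeadm init phase ..." commands further down read the file.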
	I0806 08:34:45.165760   79927 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0806 08:34:45.177296   79927 binaries.go:44] Found k8s binaries, skipping transfer
	I0806 08:34:45.177371   79927 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0806 08:34:45.187299   79927 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0806 08:34:45.205986   79927 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0806 08:34:45.225072   79927 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0806 08:34:45.244222   79927 ssh_runner.go:195] Run: grep 192.168.61.158	control-plane.minikube.internal$ /etc/hosts
	I0806 08:34:45.248728   79927 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.158	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0806 08:34:45.263094   79927 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 08:34:45.399106   79927 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0806 08:34:45.417589   79927 certs.go:68] Setting up /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/old-k8s-version-409903 for IP: 192.168.61.158
	I0806 08:34:45.417611   79927 certs.go:194] generating shared ca certs ...
	I0806 08:34:45.417630   79927 certs.go:226] acquiring lock for ca certs: {Name:mkf5c5aed91f4108054d9f2c742cad66c86afbed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 08:34:45.417779   79927 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19370-13057/.minikube/ca.key
	I0806 08:34:45.417820   79927 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.key
	I0806 08:34:45.417829   79927 certs.go:256] generating profile certs ...
	I0806 08:34:45.417913   79927 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/old-k8s-version-409903/client.key
	I0806 08:34:45.417970   79927 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/old-k8s-version-409903/apiserver.key.1ffd6785
	I0806 08:34:45.418060   79927 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/old-k8s-version-409903/proxy-client.key
	I0806 08:34:45.418242   79927 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/20246.pem (1338 bytes)
	W0806 08:34:45.418273   79927 certs.go:480] ignoring /home/jenkins/minikube-integration/19370-13057/.minikube/certs/20246_empty.pem, impossibly tiny 0 bytes
	I0806 08:34:45.418282   79927 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca-key.pem (1675 bytes)
	I0806 08:34:45.418302   79927 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem (1078 bytes)
	I0806 08:34:45.418322   79927 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/cert.pem (1123 bytes)
	I0806 08:34:45.418350   79927 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/key.pem (1675 bytes)
	I0806 08:34:45.418389   79927 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem (1708 bytes)
	I0806 08:34:45.419018   79927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0806 08:34:45.464550   79927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0806 08:34:45.500764   79927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0806 08:34:45.549225   79927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0806 08:34:45.597480   79927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/old-k8s-version-409903/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0806 08:34:45.639834   79927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/old-k8s-version-409903/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0806 08:34:45.680789   79927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/old-k8s-version-409903/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0806 08:34:45.737986   79927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/old-k8s-version-409903/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0806 08:34:45.766471   79927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem --> /usr/share/ca-certificates/202462.pem (1708 bytes)
	I0806 08:34:45.797047   79927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0806 08:34:45.825789   79927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/certs/20246.pem --> /usr/share/ca-certificates/20246.pem (1338 bytes)
	I0806 08:34:45.860217   79927 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0806 08:34:45.883841   79927 ssh_runner.go:195] Run: openssl version
	I0806 08:34:45.891103   79927 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202462.pem && ln -fs /usr/share/ca-certificates/202462.pem /etc/ssl/certs/202462.pem"
	I0806 08:34:45.905260   79927 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202462.pem
	I0806 08:34:45.910759   79927 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  6 07:17 /usr/share/ca-certificates/202462.pem
	I0806 08:34:45.910837   79927 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202462.pem
	I0806 08:34:45.918440   79927 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202462.pem /etc/ssl/certs/3ec20f2e.0"
	I0806 08:34:45.932479   79927 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0806 08:34:45.945678   79927 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0806 08:34:45.951612   79927 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  6 07:07 /usr/share/ca-certificates/minikubeCA.pem
	I0806 08:34:45.951691   79927 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0806 08:34:45.957749   79927 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0806 08:34:45.969987   79927 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20246.pem && ln -fs /usr/share/ca-certificates/20246.pem /etc/ssl/certs/20246.pem"
	I0806 08:34:45.982878   79927 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20246.pem
	I0806 08:34:45.987953   79927 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  6 07:17 /usr/share/ca-certificates/20246.pem
	I0806 08:34:45.988032   79927 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20246.pem
	I0806 08:34:45.994219   79927 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20246.pem /etc/ssl/certs/51391683.0"
	I0806 08:34:46.006666   79927 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0806 08:34:46.011624   79927 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0806 08:34:46.018416   79927 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0806 08:34:46.025220   79927 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0806 08:34:46.031854   79927 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0806 08:34:46.038570   79927 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0806 08:34:46.045142   79927 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
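	For reference, the openssl runs above use -checkend 86400, which exits non-zero when a certificate expires within the next 86400 seconds; the test is therefore confirming each cert stays valid for at least 24 hours. A standalone equivalent (a sketch, not output from this run) would be:
	
	    openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
	      && echo 'valid for at least 24h' || echo 'expires within 24h'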
	I0806 08:34:46.052198   79927 kubeadm.go:392] StartCluster: {Name:old-k8s-version-409903 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-409903 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.158 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 08:34:46.052313   79927 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0806 08:34:46.052372   79927 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0806 08:34:46.098382   79927 cri.go:89] found id: ""
	I0806 08:34:46.098454   79927 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0806 08:34:46.110861   79927 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0806 08:34:46.110888   79927 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0806 08:34:46.110956   79927 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0806 08:34:46.123420   79927 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0806 08:34:46.124617   79927 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-409903" does not appear in /home/jenkins/minikube-integration/19370-13057/kubeconfig
	I0806 08:34:46.125562   79927 kubeconfig.go:62] /home/jenkins/minikube-integration/19370-13057/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-409903" cluster setting kubeconfig missing "old-k8s-version-409903" context setting]
	I0806 08:34:46.127017   79927 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-13057/kubeconfig: {Name:mk5c066914acf8e36556b29792f75b98076ca34a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 08:34:46.173767   79927 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0806 08:34:46.186472   79927 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.158
	I0806 08:34:46.186517   79927 kubeadm.go:1160] stopping kube-system containers ...
	I0806 08:34:46.186532   79927 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0806 08:34:46.186592   79927 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0806 08:34:46.227817   79927 cri.go:89] found id: ""
	I0806 08:34:46.227914   79927 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0806 08:34:46.248248   79927 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0806 08:34:46.259555   79927 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0806 08:34:46.259578   79927 kubeadm.go:157] found existing configuration files:
	
	I0806 08:34:46.259647   79927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0806 08:34:46.271246   79927 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0806 08:34:46.271311   79927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0806 08:34:46.283559   79927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0806 08:34:46.295584   79927 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0806 08:34:46.295660   79927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0806 08:34:46.307223   79927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0806 08:34:46.318046   79927 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0806 08:34:46.318136   79927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0806 08:34:46.328827   79927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0806 08:34:46.339577   79927 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0806 08:34:46.339665   79927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0806 08:34:46.352107   79927 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0806 08:34:46.369776   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0806 08:34:46.505082   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0806 08:34:47.036627   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0806 08:34:47.285779   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0806 08:34:47.397777   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0806 08:34:47.517913   79927 api_server.go:52] waiting for apiserver process to appear ...
	I0806 08:34:47.518028   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:48.018399   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:48.518708   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:44.788404   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:46.789746   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:45.115046   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:47.611343   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:48.905056   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:48.905522   78922 main.go:141] libmachine: (no-preload-703340) DBG | unable to find current IP address of domain no-preload-703340 in network mk-no-preload-703340
	I0806 08:34:48.905548   78922 main.go:141] libmachine: (no-preload-703340) DBG | I0806 08:34:48.905484   81046 retry.go:31] will retry after 2.647234658s: waiting for machine to come up
	I0806 08:34:51.556304   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:51.556689   78922 main.go:141] libmachine: (no-preload-703340) DBG | unable to find current IP address of domain no-preload-703340 in network mk-no-preload-703340
	I0806 08:34:51.556712   78922 main.go:141] libmachine: (no-preload-703340) DBG | I0806 08:34:51.556663   81046 retry.go:31] will retry after 4.275379711s: waiting for machine to come up
	I0806 08:34:49.018572   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:49.518975   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:50.018355   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:50.518866   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:51.018156   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:51.518855   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:52.018270   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:52.518584   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:53.018606   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:53.518118   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
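	The repeated pgrep invocations above are the apiserver wait loop: the timestamps show minikube re-running the check roughly every 500ms until a kube-apiserver process appears. A rough standalone equivalent, assuming a 120s overall timeout (the timeout value is an assumption, not taken from this run), would be:
	
	    timeout 120 bash -c 'until sudo pgrep -xnf "kube-apiserver.*minikube.*" >/dev/null; do sleep 0.5; done'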
	I0806 08:34:49.288691   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:51.787435   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:53.787824   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:49.612621   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:52.112274   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:55.834423   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:55.834961   78922 main.go:141] libmachine: (no-preload-703340) Found IP for machine: 192.168.72.126
	I0806 08:34:55.834980   78922 main.go:141] libmachine: (no-preload-703340) Reserving static IP address...
	I0806 08:34:55.834991   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has current primary IP address 192.168.72.126 and MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:55.835314   78922 main.go:141] libmachine: (no-preload-703340) DBG | found host DHCP lease matching {name: "no-preload-703340", mac: "52:54:00:a1:fa:d3", ip: "192.168.72.126"} in network mk-no-preload-703340: {Iface:virbr4 ExpiryTime:2024-08-06 09:34:49 +0000 UTC Type:0 Mac:52:54:00:a1:fa:d3 Iaid: IPaddr:192.168.72.126 Prefix:24 Hostname:no-preload-703340 Clientid:01:52:54:00:a1:fa:d3}
	I0806 08:34:55.835348   78922 main.go:141] libmachine: (no-preload-703340) Reserved static IP address: 192.168.72.126
	I0806 08:34:55.835363   78922 main.go:141] libmachine: (no-preload-703340) DBG | skip adding static IP to network mk-no-preload-703340 - found existing host DHCP lease matching {name: "no-preload-703340", mac: "52:54:00:a1:fa:d3", ip: "192.168.72.126"}
	I0806 08:34:55.835376   78922 main.go:141] libmachine: (no-preload-703340) DBG | Getting to WaitForSSH function...
	I0806 08:34:55.835388   78922 main.go:141] libmachine: (no-preload-703340) Waiting for SSH to be available...
	I0806 08:34:55.837645   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:55.838001   78922 main.go:141] libmachine: (no-preload-703340) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fa:d3", ip: ""} in network mk-no-preload-703340: {Iface:virbr4 ExpiryTime:2024-08-06 09:34:49 +0000 UTC Type:0 Mac:52:54:00:a1:fa:d3 Iaid: IPaddr:192.168.72.126 Prefix:24 Hostname:no-preload-703340 Clientid:01:52:54:00:a1:fa:d3}
	I0806 08:34:55.838030   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined IP address 192.168.72.126 and MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:55.838167   78922 main.go:141] libmachine: (no-preload-703340) DBG | Using SSH client type: external
	I0806 08:34:55.838193   78922 main.go:141] libmachine: (no-preload-703340) DBG | Using SSH private key: /home/jenkins/minikube-integration/19370-13057/.minikube/machines/no-preload-703340/id_rsa (-rw-------)
	I0806 08:34:55.838234   78922 main.go:141] libmachine: (no-preload-703340) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.126 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19370-13057/.minikube/machines/no-preload-703340/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0806 08:34:55.838256   78922 main.go:141] libmachine: (no-preload-703340) DBG | About to run SSH command:
	I0806 08:34:55.838272   78922 main.go:141] libmachine: (no-preload-703340) DBG | exit 0
	I0806 08:34:55.968557   78922 main.go:141] libmachine: (no-preload-703340) DBG | SSH cmd err, output: <nil>: 
	I0806 08:34:55.968893   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetConfigRaw
	I0806 08:34:55.969485   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetIP
	I0806 08:34:55.971998   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:55.972457   78922 main.go:141] libmachine: (no-preload-703340) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fa:d3", ip: ""} in network mk-no-preload-703340: {Iface:virbr4 ExpiryTime:2024-08-06 09:34:49 +0000 UTC Type:0 Mac:52:54:00:a1:fa:d3 Iaid: IPaddr:192.168.72.126 Prefix:24 Hostname:no-preload-703340 Clientid:01:52:54:00:a1:fa:d3}
	I0806 08:34:55.972487   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined IP address 192.168.72.126 and MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:55.972694   78922 profile.go:143] Saving config to /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/no-preload-703340/config.json ...
	I0806 08:34:55.972889   78922 machine.go:94] provisionDockerMachine start ...
	I0806 08:34:55.972908   78922 main.go:141] libmachine: (no-preload-703340) Calling .DriverName
	I0806 08:34:55.973115   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHHostname
	I0806 08:34:55.975361   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:55.975734   78922 main.go:141] libmachine: (no-preload-703340) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fa:d3", ip: ""} in network mk-no-preload-703340: {Iface:virbr4 ExpiryTime:2024-08-06 09:34:49 +0000 UTC Type:0 Mac:52:54:00:a1:fa:d3 Iaid: IPaddr:192.168.72.126 Prefix:24 Hostname:no-preload-703340 Clientid:01:52:54:00:a1:fa:d3}
	I0806 08:34:55.975753   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined IP address 192.168.72.126 and MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:55.975905   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHPort
	I0806 08:34:55.976051   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHKeyPath
	I0806 08:34:55.976265   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHKeyPath
	I0806 08:34:55.976424   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHUsername
	I0806 08:34:55.976635   78922 main.go:141] libmachine: Using SSH client type: native
	I0806 08:34:55.976891   78922 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.126 22 <nil> <nil>}
	I0806 08:34:55.976905   78922 main.go:141] libmachine: About to run SSH command:
	hostname
	I0806 08:34:56.084844   78922 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0806 08:34:56.084905   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetMachineName
	I0806 08:34:56.085226   78922 buildroot.go:166] provisioning hostname "no-preload-703340"
	I0806 08:34:56.085253   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetMachineName
	I0806 08:34:56.085477   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHHostname
	I0806 08:34:56.088358   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:56.088775   78922 main.go:141] libmachine: (no-preload-703340) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fa:d3", ip: ""} in network mk-no-preload-703340: {Iface:virbr4 ExpiryTime:2024-08-06 09:34:49 +0000 UTC Type:0 Mac:52:54:00:a1:fa:d3 Iaid: IPaddr:192.168.72.126 Prefix:24 Hostname:no-preload-703340 Clientid:01:52:54:00:a1:fa:d3}
	I0806 08:34:56.088806   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined IP address 192.168.72.126 and MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:56.088904   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHPort
	I0806 08:34:56.089070   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHKeyPath
	I0806 08:34:56.089262   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHKeyPath
	I0806 08:34:56.089417   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHUsername
	I0806 08:34:56.089600   78922 main.go:141] libmachine: Using SSH client type: native
	I0806 08:34:56.089810   78922 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.126 22 <nil> <nil>}
	I0806 08:34:56.089840   78922 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-703340 && echo "no-preload-703340" | sudo tee /etc/hostname
	I0806 08:34:56.216217   78922 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-703340
	
	I0806 08:34:56.216257   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHHostname
	I0806 08:34:56.218949   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:56.219274   78922 main.go:141] libmachine: (no-preload-703340) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fa:d3", ip: ""} in network mk-no-preload-703340: {Iface:virbr4 ExpiryTime:2024-08-06 09:34:49 +0000 UTC Type:0 Mac:52:54:00:a1:fa:d3 Iaid: IPaddr:192.168.72.126 Prefix:24 Hostname:no-preload-703340 Clientid:01:52:54:00:a1:fa:d3}
	I0806 08:34:56.219309   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined IP address 192.168.72.126 and MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:56.219488   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHPort
	I0806 08:34:56.219699   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHKeyPath
	I0806 08:34:56.219899   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHKeyPath
	I0806 08:34:56.220054   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHUsername
	I0806 08:34:56.220231   78922 main.go:141] libmachine: Using SSH client type: native
	I0806 08:34:56.220513   78922 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.126 22 <nil> <nil>}
	I0806 08:34:56.220544   78922 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-703340' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-703340/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-703340' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0806 08:34:56.337357   78922 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0806 08:34:56.337390   78922 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19370-13057/.minikube CaCertPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19370-13057/.minikube}
	I0806 08:34:56.337428   78922 buildroot.go:174] setting up certificates
	I0806 08:34:56.337439   78922 provision.go:84] configureAuth start
	I0806 08:34:56.337454   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetMachineName
	I0806 08:34:56.337736   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetIP
	I0806 08:34:56.340297   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:56.340655   78922 main.go:141] libmachine: (no-preload-703340) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fa:d3", ip: ""} in network mk-no-preload-703340: {Iface:virbr4 ExpiryTime:2024-08-06 09:34:49 +0000 UTC Type:0 Mac:52:54:00:a1:fa:d3 Iaid: IPaddr:192.168.72.126 Prefix:24 Hostname:no-preload-703340 Clientid:01:52:54:00:a1:fa:d3}
	I0806 08:34:56.340680   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined IP address 192.168.72.126 and MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:56.340898   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHHostname
	I0806 08:34:56.343013   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:56.343356   78922 main.go:141] libmachine: (no-preload-703340) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fa:d3", ip: ""} in network mk-no-preload-703340: {Iface:virbr4 ExpiryTime:2024-08-06 09:34:49 +0000 UTC Type:0 Mac:52:54:00:a1:fa:d3 Iaid: IPaddr:192.168.72.126 Prefix:24 Hostname:no-preload-703340 Clientid:01:52:54:00:a1:fa:d3}
	I0806 08:34:56.343395   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined IP address 192.168.72.126 and MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:56.343544   78922 provision.go:143] copyHostCerts
	I0806 08:34:56.343613   78922 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-13057/.minikube/ca.pem, removing ...
	I0806 08:34:56.343626   78922 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-13057/.minikube/ca.pem
	I0806 08:34:56.343692   78922 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19370-13057/.minikube/ca.pem (1078 bytes)
	I0806 08:34:56.343806   78922 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-13057/.minikube/cert.pem, removing ...
	I0806 08:34:56.343815   78922 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-13057/.minikube/cert.pem
	I0806 08:34:56.343838   78922 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19370-13057/.minikube/cert.pem (1123 bytes)
	I0806 08:34:56.343915   78922 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-13057/.minikube/key.pem, removing ...
	I0806 08:34:56.343929   78922 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-13057/.minikube/key.pem
	I0806 08:34:56.343948   78922 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19370-13057/.minikube/key.pem (1675 bytes)
	I0806 08:34:56.344044   78922 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19370-13057/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca-key.pem org=jenkins.no-preload-703340 san=[127.0.0.1 192.168.72.126 localhost minikube no-preload-703340]
	I0806 08:34:56.441531   78922 provision.go:177] copyRemoteCerts
	I0806 08:34:56.441588   78922 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0806 08:34:56.441610   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHHostname
	I0806 08:34:56.444202   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:56.444549   78922 main.go:141] libmachine: (no-preload-703340) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fa:d3", ip: ""} in network mk-no-preload-703340: {Iface:virbr4 ExpiryTime:2024-08-06 09:34:49 +0000 UTC Type:0 Mac:52:54:00:a1:fa:d3 Iaid: IPaddr:192.168.72.126 Prefix:24 Hostname:no-preload-703340 Clientid:01:52:54:00:a1:fa:d3}
	I0806 08:34:56.444574   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined IP address 192.168.72.126 and MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:56.444786   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHPort
	I0806 08:34:56.444977   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHKeyPath
	I0806 08:34:56.445125   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHUsername
	I0806 08:34:56.445327   78922 sshutil.go:53] new ssh client: &{IP:192.168.72.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/no-preload-703340/id_rsa Username:docker}
	I0806 08:34:56.530980   78922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0806 08:34:56.557880   78922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0806 08:34:56.584108   78922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0806 08:34:56.608415   78922 provision.go:87] duration metric: took 270.961494ms to configureAuth
	I0806 08:34:56.608449   78922 buildroot.go:189] setting minikube options for container-runtime
	I0806 08:34:56.608675   78922 config.go:182] Loaded profile config "no-preload-703340": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-rc.0
	I0806 08:34:56.608794   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHHostname
	I0806 08:34:56.611937   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:56.612383   78922 main.go:141] libmachine: (no-preload-703340) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fa:d3", ip: ""} in network mk-no-preload-703340: {Iface:virbr4 ExpiryTime:2024-08-06 09:34:49 +0000 UTC Type:0 Mac:52:54:00:a1:fa:d3 Iaid: IPaddr:192.168.72.126 Prefix:24 Hostname:no-preload-703340 Clientid:01:52:54:00:a1:fa:d3}
	I0806 08:34:56.612409   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined IP address 192.168.72.126 and MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:56.612585   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHPort
	I0806 08:34:56.612774   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHKeyPath
	I0806 08:34:56.612953   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHKeyPath
	I0806 08:34:56.613134   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHUsername
	I0806 08:34:56.613282   78922 main.go:141] libmachine: Using SSH client type: native
	I0806 08:34:56.613436   78922 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.126 22 <nil> <nil>}
	I0806 08:34:56.613450   78922 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0806 08:34:56.904446   78922 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
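	The SSH command above writes CRIO_MINIKUBE_OPTIONS (an --insecure-registry entry for the service CIDR) into /etc/sysconfig/crio.minikube and restarts CRI-O. A manual spot-check on the node, as a hypothetical follow-up that the test itself does not perform, could be:
	
	    sudo cat /etc/sysconfig/crio.minikube
	    systemctl is-active crio && sudo crictl info >/dev/null && echo 'CRI-O responding'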
	I0806 08:34:56.904474   78922 machine.go:97] duration metric: took 931.572346ms to provisionDockerMachine
	I0806 08:34:56.904484   78922 start.go:293] postStartSetup for "no-preload-703340" (driver="kvm2")
	I0806 08:34:56.904496   78922 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0806 08:34:56.904517   78922 main.go:141] libmachine: (no-preload-703340) Calling .DriverName
	I0806 08:34:56.905043   78922 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0806 08:34:56.905076   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHHostname
	I0806 08:34:56.907707   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:56.908101   78922 main.go:141] libmachine: (no-preload-703340) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fa:d3", ip: ""} in network mk-no-preload-703340: {Iface:virbr4 ExpiryTime:2024-08-06 09:34:49 +0000 UTC Type:0 Mac:52:54:00:a1:fa:d3 Iaid: IPaddr:192.168.72.126 Prefix:24 Hostname:no-preload-703340 Clientid:01:52:54:00:a1:fa:d3}
	I0806 08:34:56.908124   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined IP address 192.168.72.126 and MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:56.908356   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHPort
	I0806 08:34:56.908586   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHKeyPath
	I0806 08:34:56.908753   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHUsername
	I0806 08:34:56.908924   78922 sshutil.go:53] new ssh client: &{IP:192.168.72.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/no-preload-703340/id_rsa Username:docker}
	I0806 08:34:56.995555   78922 ssh_runner.go:195] Run: cat /etc/os-release
	I0806 08:34:56.999971   78922 info.go:137] Remote host: Buildroot 2023.02.9
	I0806 08:34:57.000004   78922 filesync.go:126] Scanning /home/jenkins/minikube-integration/19370-13057/.minikube/addons for local assets ...
	I0806 08:34:57.000079   78922 filesync.go:126] Scanning /home/jenkins/minikube-integration/19370-13057/.minikube/files for local assets ...
	I0806 08:34:57.000150   78922 filesync.go:149] local asset: /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem -> 202462.pem in /etc/ssl/certs
	I0806 08:34:57.000243   78922 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0806 08:34:57.010178   78922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem --> /etc/ssl/certs/202462.pem (1708 bytes)
	I0806 08:34:57.035704   78922 start.go:296] duration metric: took 131.206204ms for postStartSetup
	I0806 08:34:57.035751   78922 fix.go:56] duration metric: took 19.682153204s for fixHost
	I0806 08:34:57.035775   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHHostname
	I0806 08:34:57.038230   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:57.038565   78922 main.go:141] libmachine: (no-preload-703340) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fa:d3", ip: ""} in network mk-no-preload-703340: {Iface:virbr4 ExpiryTime:2024-08-06 09:34:49 +0000 UTC Type:0 Mac:52:54:00:a1:fa:d3 Iaid: IPaddr:192.168.72.126 Prefix:24 Hostname:no-preload-703340 Clientid:01:52:54:00:a1:fa:d3}
	I0806 08:34:57.038596   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined IP address 192.168.72.126 and MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:57.038809   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHPort
	I0806 08:34:57.039016   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHKeyPath
	I0806 08:34:57.039197   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHKeyPath
	I0806 08:34:57.039308   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHUsername
	I0806 08:34:57.039486   78922 main.go:141] libmachine: Using SSH client type: native
	I0806 08:34:57.039697   78922 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.126 22 <nil> <nil>}
	I0806 08:34:57.039712   78922 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0806 08:34:57.149272   78922 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722933297.121300330
	
	I0806 08:34:57.149291   78922 fix.go:216] guest clock: 1722933297.121300330
	I0806 08:34:57.149298   78922 fix.go:229] Guest: 2024-08-06 08:34:57.12130033 +0000 UTC Remote: 2024-08-06 08:34:57.035756139 +0000 UTC m=+360.415626547 (delta=85.544191ms)
	I0806 08:34:57.149319   78922 fix.go:200] guest clock delta is within tolerance: 85.544191ms
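The two fix.go lines above compare the guest VM clock against the host and only resync when the skew exceeds a tolerance. A minimal, self-contained Go sketch of that kind of check (function name and the one-second tolerance are illustrative assumptions, not minikube's actual fix.go code):

package main

import (
	"fmt"
	"time"
)

// withinTolerance reports the absolute guest/host clock skew and whether it
// is small enough to skip resetting the guest clock.
func withinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	host := time.Now()
	guest := host.Add(85 * time.Millisecond) // pretend the VM clock is ~85ms ahead, as in the log
	delta, ok := withinTolerance(guest, host, time.Second)
	fmt.Printf("delta=%v within tolerance: %v\n", delta, ok)
}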
	I0806 08:34:57.149326   78922 start.go:83] releasing machines lock for "no-preload-703340", held for 19.795759576s
	I0806 08:34:57.149348   78922 main.go:141] libmachine: (no-preload-703340) Calling .DriverName
	I0806 08:34:57.149624   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetIP
	I0806 08:34:57.152286   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:57.152699   78922 main.go:141] libmachine: (no-preload-703340) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fa:d3", ip: ""} in network mk-no-preload-703340: {Iface:virbr4 ExpiryTime:2024-08-06 09:34:49 +0000 UTC Type:0 Mac:52:54:00:a1:fa:d3 Iaid: IPaddr:192.168.72.126 Prefix:24 Hostname:no-preload-703340 Clientid:01:52:54:00:a1:fa:d3}
	I0806 08:34:57.152731   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined IP address 192.168.72.126 and MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:57.152905   78922 main.go:141] libmachine: (no-preload-703340) Calling .DriverName
	I0806 08:34:57.153488   78922 main.go:141] libmachine: (no-preload-703340) Calling .DriverName
	I0806 08:34:57.153680   78922 main.go:141] libmachine: (no-preload-703340) Calling .DriverName
	I0806 08:34:57.153764   78922 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0806 08:34:57.153803   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHHostname
	I0806 08:34:57.154068   78922 ssh_runner.go:195] Run: cat /version.json
	I0806 08:34:57.154090   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHHostname
	I0806 08:34:57.156678   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:57.157026   78922 main.go:141] libmachine: (no-preload-703340) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fa:d3", ip: ""} in network mk-no-preload-703340: {Iface:virbr4 ExpiryTime:2024-08-06 09:34:49 +0000 UTC Type:0 Mac:52:54:00:a1:fa:d3 Iaid: IPaddr:192.168.72.126 Prefix:24 Hostname:no-preload-703340 Clientid:01:52:54:00:a1:fa:d3}
	I0806 08:34:57.157053   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined IP address 192.168.72.126 and MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:57.157123   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:57.157327   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHPort
	I0806 08:34:57.157492   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHKeyPath
	I0806 08:34:57.157580   78922 main.go:141] libmachine: (no-preload-703340) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fa:d3", ip: ""} in network mk-no-preload-703340: {Iface:virbr4 ExpiryTime:2024-08-06 09:34:49 +0000 UTC Type:0 Mac:52:54:00:a1:fa:d3 Iaid: IPaddr:192.168.72.126 Prefix:24 Hostname:no-preload-703340 Clientid:01:52:54:00:a1:fa:d3}
	I0806 08:34:57.157621   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHUsername
	I0806 08:34:57.157602   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined IP address 192.168.72.126 and MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:57.157701   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHPort
	I0806 08:34:57.157919   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHKeyPath
	I0806 08:34:57.157933   78922 sshutil.go:53] new ssh client: &{IP:192.168.72.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/no-preload-703340/id_rsa Username:docker}
	I0806 08:34:57.158089   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHUsername
	I0806 08:34:57.158243   78922 sshutil.go:53] new ssh client: &{IP:192.168.72.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/no-preload-703340/id_rsa Username:docker}
	I0806 08:34:57.237574   78922 ssh_runner.go:195] Run: systemctl --version
	I0806 08:34:57.260498   78922 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0806 08:34:57.403619   78922 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0806 08:34:57.410198   78922 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0806 08:34:57.410267   78922 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0806 08:34:57.426321   78922 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0806 08:34:57.426345   78922 start.go:495] detecting cgroup driver to use...
	I0806 08:34:57.426414   78922 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0806 08:34:57.442340   78922 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0806 08:34:57.458008   78922 docker.go:217] disabling cri-docker service (if available) ...
	I0806 08:34:57.458061   78922 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0806 08:34:57.471516   78922 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0806 08:34:57.484996   78922 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0806 08:34:57.604605   78922 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0806 08:34:57.753590   78922 docker.go:233] disabling docker service ...
	I0806 08:34:57.753651   78922 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0806 08:34:57.768534   78922 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0806 08:34:57.782187   78922 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0806 08:34:57.936046   78922 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0806 08:34:58.076402   78922 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0806 08:34:58.091386   78922 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0806 08:34:58.112810   78922 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0806 08:34:58.112873   78922 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:34:58.123376   78922 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0806 08:34:58.123438   78922 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:34:58.133981   78922 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:34:58.144891   78922 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:34:58.156276   78922 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0806 08:34:58.166927   78922 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:34:58.176956   78922 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:34:58.193969   78922 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:34:58.204122   78922 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0806 08:34:58.213745   78922 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0806 08:34:58.213859   78922 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0806 08:34:58.226556   78922 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0806 08:34:58.236413   78922 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 08:34:58.359776   78922 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0806 08:34:58.505947   78922 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0806 08:34:58.506016   78922 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0806 08:34:58.511591   78922 start.go:563] Will wait 60s for crictl version
	I0806 08:34:58.511649   78922 ssh_runner.go:195] Run: which crictl
	I0806 08:34:58.516259   78922 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0806 08:34:58.568580   78922 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0806 08:34:58.568658   78922 ssh_runner.go:195] Run: crio --version
	I0806 08:34:58.607964   78922 ssh_runner.go:195] Run: crio --version
	I0806 08:34:58.642564   78922 out.go:177] * Preparing Kubernetes v1.31.0-rc.0 on CRI-O 1.29.1 ...
	I0806 08:34:54.018818   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:54.518987   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:55.018584   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:55.518848   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:56.018094   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:56.518052   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:57.018765   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:57.518128   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:58.018977   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:58.518233   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:55.789183   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:58.288863   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:54.612520   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:56.614634   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:59.115966   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:58.643802   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetIP
	I0806 08:34:58.646395   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:58.646714   78922 main.go:141] libmachine: (no-preload-703340) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fa:d3", ip: ""} in network mk-no-preload-703340: {Iface:virbr4 ExpiryTime:2024-08-06 09:34:49 +0000 UTC Type:0 Mac:52:54:00:a1:fa:d3 Iaid: IPaddr:192.168.72.126 Prefix:24 Hostname:no-preload-703340 Clientid:01:52:54:00:a1:fa:d3}
	I0806 08:34:58.646744   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined IP address 192.168.72.126 and MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:58.646948   78922 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0806 08:34:58.651136   78922 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0806 08:34:58.664347   78922 kubeadm.go:883] updating cluster {Name:no-preload-703340 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0-rc.0 ClusterName:no-preload-703340 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.126 Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m
0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0806 08:34:58.664479   78922 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime crio
	I0806 08:34:58.664511   78922 ssh_runner.go:195] Run: sudo crictl images --output json
	I0806 08:34:58.705043   78922 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-rc.0". assuming images are not preloaded.
	I0806 08:34:58.705080   78922 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0-rc.0 registry.k8s.io/kube-controller-manager:v1.31.0-rc.0 registry.k8s.io/kube-scheduler:v1.31.0-rc.0 registry.k8s.io/kube-proxy:v1.31.0-rc.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0806 08:34:58.705157   78922 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0806 08:34:58.705170   78922 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0806 08:34:58.705177   78922 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-rc.0
	I0806 08:34:58.705245   78922 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-rc.0
	I0806 08:34:58.705272   78922 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-rc.0
	I0806 08:34:58.705281   78922 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0806 08:34:58.705160   78922 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-rc.0
	I0806 08:34:58.705469   78922 image.go:134] retrieving image: registry.k8s.io/pause:3.10
	I0806 08:34:58.706962   78922 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-rc.0
	I0806 08:34:58.706993   78922 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0806 08:34:58.706986   78922 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-rc.0
	I0806 08:34:58.707018   78922 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-rc.0
	I0806 08:34:58.706968   78922 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0806 08:34:58.706989   78922 image.go:177] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0806 08:34:58.707066   78922 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0806 08:34:58.707315   78922 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-rc.0
	I0806 08:34:58.835957   78922 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0-rc.0
	I0806 08:34:58.839572   78922 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0806 08:34:58.846955   78922 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0806 08:34:58.865707   78922 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0-rc.0
	I0806 08:34:58.872729   78922 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0806 08:34:58.878484   78922 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0-rc.0
	I0806 08:34:58.898935   78922 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0-rc.0
	I0806 08:34:58.919621   78922 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0-rc.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0-rc.0" does not exist at hash "fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c" in container runtime
	I0806 08:34:58.919654   78922 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0-rc.0
	I0806 08:34:58.919691   78922 ssh_runner.go:195] Run: which crictl
	I0806 08:34:58.949493   78922 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0806 08:34:58.949544   78922 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0806 08:34:58.949597   78922 ssh_runner.go:195] Run: which crictl
	I0806 08:34:58.971429   78922 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0806 08:34:58.971476   78922 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0806 08:34:58.971522   78922 ssh_runner.go:195] Run: which crictl
	I0806 08:34:59.015321   78922 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0-rc.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0-rc.0" does not exist at hash "0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c" in container runtime
	I0806 08:34:59.015359   78922 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0-rc.0
	I0806 08:34:59.015397   78922 ssh_runner.go:195] Run: which crictl
	I0806 08:34:59.107718   78922 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0-rc.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0-rc.0" does not exist at hash "41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318" in container runtime
	I0806 08:34:59.107761   78922 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0-rc.0
	I0806 08:34:59.107797   78922 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0-rc.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0-rc.0" does not exist at hash "c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0" in container runtime
	I0806 08:34:59.107867   78922 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0-rc.0
	I0806 08:34:59.107904   78922 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0806 08:34:59.107987   78922 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0-rc.0
	I0806 08:34:59.107809   78922 ssh_runner.go:195] Run: which crictl
	I0806 08:34:59.108037   78922 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-rc.0
	I0806 08:34:59.107994   78922 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0806 08:34:59.108037   78922 ssh_runner.go:195] Run: which crictl
	I0806 08:34:59.120323   78922 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-rc.0
	I0806 08:34:59.218864   78922 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-rc.0
	I0806 08:34:59.218982   78922 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0-rc.0
	I0806 08:34:59.222696   78922 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0-rc.0
	I0806 08:34:59.222735   78922 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0806 08:34:59.222794   78922 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-rc.0
	I0806 08:34:59.222820   78922 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0806 08:34:59.222854   78922 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0806 08:34:59.222882   78922 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0-rc.0
	I0806 08:34:59.222900   78922 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-rc.0
	I0806 08:34:59.222933   78922 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0806 08:34:59.222964   78922 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0-rc.0
	I0806 08:34:59.227260   78922 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0-rc.0 (exists)
	I0806 08:34:59.227280   78922 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0-rc.0
	I0806 08:34:59.227319   78922 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-rc.0
	I0806 08:34:59.268220   78922 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0-rc.0 (exists)
	I0806 08:34:59.268240   78922 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-rc.0
	I0806 08:34:59.268291   78922 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0806 08:34:59.268325   78922 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0806 08:34:59.268332   78922 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0-rc.0
	I0806 08:34:59.268349   78922 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0-rc.0 (exists)
	I0806 08:34:59.613734   78922 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0806 08:35:01.623115   78922 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-rc.0: (2.395772878s)
	I0806 08:35:01.623140   78922 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0-rc.0: (2.354795538s)
	I0806 08:35:01.623152   78922 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-rc.0 from cache
	I0806 08:35:01.623161   78922 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0-rc.0 (exists)
	I0806 08:35:01.623168   78922 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0-rc.0
	I0806 08:35:01.623217   78922 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-rc.0
	I0806 08:35:01.623218   78922 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.009459709s)
	I0806 08:35:01.623310   78922 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0806 08:35:01.623340   78922 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0806 08:35:01.623366   78922 ssh_runner.go:195] Run: which crictl
	I0806 08:34:59.019117   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:59.518121   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:00.018102   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:00.518402   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:01.019125   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:01.518141   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:02.018063   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:02.518834   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:03.018074   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:03.518904   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:00.789181   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:02.790704   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:01.612449   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:03.655402   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:03.781865   78922 ssh_runner.go:235] Completed: which crictl: (2.158478864s)
	I0806 08:35:03.781942   78922 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0806 08:35:03.782003   78922 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-rc.0: (2.158736373s)
	I0806 08:35:03.782023   78922 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-rc.0 from cache
	I0806 08:35:03.782042   78922 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0-rc.0
	I0806 08:35:03.782112   78922 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-rc.0
	I0806 08:35:05.247976   78922 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.466007397s)
	I0806 08:35:05.248028   78922 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-rc.0: (1.465889339s)
	I0806 08:35:05.248049   78922 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0806 08:35:05.248054   78922 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-rc.0 from cache
	I0806 08:35:05.248065   78922 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0806 08:35:05.248117   78922 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0806 08:35:05.248166   78922 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0806 08:35:04.018047   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:04.518891   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:05.018447   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:05.518506   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:06.019133   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:06.518070   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:07.018362   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:07.518320   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:08.019034   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:08.518740   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:05.288137   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:07.288969   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:06.113473   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:08.612266   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:09.135551   78922 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.887407478s)
	I0806 08:35:09.135591   78922 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0806 08:35:09.135592   78922 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (3.887405091s)
	I0806 08:35:09.135605   78922 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0806 08:35:09.135618   78922 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0806 08:35:09.135645   78922 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0806 08:35:11.131984   78922 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.996319459s)
	I0806 08:35:11.132013   78922 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0806 08:35:11.132024   78922 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0-rc.0
	I0806 08:35:11.132070   78922 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-rc.0
	I0806 08:35:09.018349   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:09.518189   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:10.018824   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:10.518260   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:11.018914   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:11.518709   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:12.018349   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:12.518719   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:13.019022   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:13.518318   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:09.787576   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:11.789165   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:10.614185   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:13.112391   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:13.395512   78922 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-rc.0: (2.263417409s)
	I0806 08:35:13.395548   78922 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-rc.0 from cache
	I0806 08:35:13.395560   78922 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0806 08:35:13.395608   78922 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0806 08:35:14.150250   78922 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0806 08:35:14.150301   78922 cache_images.go:123] Successfully loaded all cached images
	I0806 08:35:14.150309   78922 cache_images.go:92] duration metric: took 15.445214072s to LoadCachedImages
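The image-cache phase above follows a simple pattern: inspect each image in the container runtime, and only when it is missing transfer the cached tarball and `podman load` it. A rough local sketch of that decision in Go, assuming podman is available on the machine (the image name and tarball path in main are illustrative; minikube runs the same commands on the VM over SSH via ssh_runner):

package main

import (
	"fmt"
	"os/exec"
)

// ensureImage loads a cached tarball into the podman/CRI-O image store only
// when the image is not already present.
func ensureImage(image, tarball string) error {
	// Equivalent of: sudo podman image inspect --format {{.Id}} <image>
	if err := exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", image).Run(); err == nil {
		return nil // already present, nothing to transfer
	}
	// Equivalent of: sudo podman load -i <tarball>
	out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput()
	if err != nil {
		return fmt.Errorf("podman load %s: %v: %s", tarball, err, out)
	}
	return nil
}

func main() {
	if err := ensureImage("registry.k8s.io/pause:3.10", "/var/lib/minikube/images/pause_3.10"); err != nil {
		fmt.Println(err)
	}
}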
	I0806 08:35:14.150322   78922 kubeadm.go:934] updating node { 192.168.72.126 8443 v1.31.0-rc.0 crio true true} ...
	I0806 08:35:14.150540   78922 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-rc.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-703340 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.126
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-rc.0 ClusterName:no-preload-703340 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0806 08:35:14.150640   78922 ssh_runner.go:195] Run: crio config
	I0806 08:35:14.210763   78922 cni.go:84] Creating CNI manager for ""
	I0806 08:35:14.210794   78922 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0806 08:35:14.210806   78922 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0806 08:35:14.210842   78922 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.126 APIServerPort:8443 KubernetesVersion:v1.31.0-rc.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-703340 NodeName:no-preload-703340 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.126"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.126 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticP
odPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0806 08:35:14.210979   78922 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.126
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-703340"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.126
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.126"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-rc.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0806 08:35:14.211036   78922 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-rc.0
	I0806 08:35:14.222106   78922 binaries.go:44] Found k8s binaries, skipping transfer
	I0806 08:35:14.222189   78922 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0806 08:35:14.231944   78922 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I0806 08:35:14.250491   78922 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0806 08:35:14.268955   78922 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2166 bytes)
	I0806 08:35:14.289661   78922 ssh_runner.go:195] Run: grep 192.168.72.126	control-plane.minikube.internal$ /etc/hosts
	I0806 08:35:14.294005   78922 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.126	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0806 08:35:14.307700   78922 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 08:35:14.451462   78922 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0806 08:35:14.478155   78922 certs.go:68] Setting up /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/no-preload-703340 for IP: 192.168.72.126
	I0806 08:35:14.478177   78922 certs.go:194] generating shared ca certs ...
	I0806 08:35:14.478191   78922 certs.go:226] acquiring lock for ca certs: {Name:mkf5c5aed91f4108054d9f2c742cad66c86afbed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 08:35:14.478365   78922 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19370-13057/.minikube/ca.key
	I0806 08:35:14.478420   78922 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.key
	I0806 08:35:14.478434   78922 certs.go:256] generating profile certs ...
	I0806 08:35:14.478529   78922 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/no-preload-703340/client.key
	I0806 08:35:14.478615   78922 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/no-preload-703340/apiserver.key.0b170e14
	I0806 08:35:14.478665   78922 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/no-preload-703340/proxy-client.key
	I0806 08:35:14.478822   78922 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/20246.pem (1338 bytes)
	W0806 08:35:14.478871   78922 certs.go:480] ignoring /home/jenkins/minikube-integration/19370-13057/.minikube/certs/20246_empty.pem, impossibly tiny 0 bytes
	I0806 08:35:14.478885   78922 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca-key.pem (1675 bytes)
	I0806 08:35:14.478941   78922 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem (1078 bytes)
	I0806 08:35:14.478978   78922 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/cert.pem (1123 bytes)
	I0806 08:35:14.479014   78922 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/key.pem (1675 bytes)
	I0806 08:35:14.479078   78922 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem (1708 bytes)
	I0806 08:35:14.479773   78922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0806 08:35:14.527838   78922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0806 08:35:14.579788   78922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0806 08:35:14.614739   78922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0806 08:35:14.657559   78922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/no-preload-703340/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0806 08:35:14.699354   78922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/no-preload-703340/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0806 08:35:14.724875   78922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/no-preload-703340/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0806 08:35:14.751020   78922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/no-preload-703340/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0806 08:35:14.778300   78922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem --> /usr/share/ca-certificates/202462.pem (1708 bytes)
	I0806 08:35:14.805018   78922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0806 08:35:14.830791   78922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/certs/20246.pem --> /usr/share/ca-certificates/20246.pem (1338 bytes)
	I0806 08:35:14.856120   78922 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0806 08:35:14.874909   78922 ssh_runner.go:195] Run: openssl version
	I0806 08:35:14.880931   78922 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202462.pem && ln -fs /usr/share/ca-certificates/202462.pem /etc/ssl/certs/202462.pem"
	I0806 08:35:14.892741   78922 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202462.pem
	I0806 08:35:14.897430   78922 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  6 07:17 /usr/share/ca-certificates/202462.pem
	I0806 08:35:14.897489   78922 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202462.pem
	I0806 08:35:14.903324   78922 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202462.pem /etc/ssl/certs/3ec20f2e.0"
	I0806 08:35:14.914505   78922 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0806 08:35:14.925465   78922 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0806 08:35:14.930490   78922 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  6 07:07 /usr/share/ca-certificates/minikubeCA.pem
	I0806 08:35:14.930559   78922 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0806 08:35:14.936384   78922 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0806 08:35:14.947836   78922 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20246.pem && ln -fs /usr/share/ca-certificates/20246.pem /etc/ssl/certs/20246.pem"
	I0806 08:35:14.959376   78922 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20246.pem
	I0806 08:35:14.964118   78922 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  6 07:17 /usr/share/ca-certificates/20246.pem
	I0806 08:35:14.964168   78922 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20246.pem
	I0806 08:35:14.970266   78922 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20246.pem /etc/ssl/certs/51391683.0"
	I0806 08:35:14.982244   78922 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0806 08:35:14.987367   78922 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0806 08:35:14.993508   78922 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0806 08:35:14.999400   78922 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0806 08:35:15.005751   78922 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0806 08:35:15.011588   78922 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0806 08:35:15.017766   78922 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
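The openssl `-checkend 86400` calls above ask whether each certificate expires within the next 24 hours. A hedged, equivalent sketch in Go using the standard crypto/x509 package (the certificate path in main is only an example taken from the log):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin mirrors "openssl x509 -checkend": it reports whether the
// certificate at path expires within the given duration.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	expiring, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("expires within 24h:", expiring)
}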
	I0806 08:35:15.023853   78922 kubeadm.go:392] StartCluster: {Name:no-preload-703340 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.0-rc.0 ClusterName:no-preload-703340 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.126 Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s
Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 08:35:15.024060   78922 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0806 08:35:15.024113   78922 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0806 08:35:15.072318   78922 cri.go:89] found id: ""
	I0806 08:35:15.072415   78922 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0806 08:35:15.083158   78922 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0806 08:35:15.083182   78922 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0806 08:35:15.083225   78922 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0806 08:35:15.093301   78922 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0806 08:35:15.094369   78922 kubeconfig.go:125] found "no-preload-703340" server: "https://192.168.72.126:8443"
	I0806 08:35:15.096545   78922 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0806 08:35:15.106190   78922 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.126
	I0806 08:35:15.106226   78922 kubeadm.go:1160] stopping kube-system containers ...
	I0806 08:35:15.106237   78922 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0806 08:35:15.106288   78922 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0806 08:35:15.145974   78922 cri.go:89] found id: ""
	I0806 08:35:15.146060   78922 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0806 08:35:15.163821   78922 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0806 08:35:15.174473   78922 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0806 08:35:15.174498   78922 kubeadm.go:157] found existing configuration files:
	
	I0806 08:35:15.174562   78922 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0806 08:35:15.185227   78922 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0806 08:35:15.185309   78922 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0806 08:35:15.195614   78922 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0806 08:35:15.205197   78922 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0806 08:35:15.205261   78922 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0806 08:35:15.215511   78922 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0806 08:35:15.225354   78922 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0806 08:35:15.225412   78922 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0806 08:35:15.234912   78922 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0806 08:35:15.244638   78922 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0806 08:35:15.244703   78922 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0806 08:35:15.255080   78922 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0806 08:35:15.265861   78922 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0806 08:35:15.399192   78922 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0806 08:35:16.182816   78922 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0806 08:35:16.397386   78922 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0806 08:35:16.463980   78922 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0806 08:35:16.550229   78922 api_server.go:52] waiting for apiserver process to appear ...
	I0806 08:35:16.550347   78922 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:14.018930   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:14.518251   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:15.018981   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:15.518764   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:16.018114   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:16.518157   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:17.018124   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:17.518604   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:18.018742   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:18.518755   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:14.290050   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:16.788472   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:18.788526   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:15.112664   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:17.113214   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:17.050447   78922 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:17.551220   78922 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:17.603538   78922 api_server.go:72] duration metric: took 1.053308623s to wait for apiserver process to appear ...
	I0806 08:35:17.603574   78922 api_server.go:88] waiting for apiserver healthz status ...
	I0806 08:35:17.603595   78922 api_server.go:253] Checking apiserver healthz at https://192.168.72.126:8443/healthz ...
	I0806 08:35:17.604064   78922 api_server.go:269] stopped: https://192.168.72.126:8443/healthz: Get "https://192.168.72.126:8443/healthz": dial tcp 192.168.72.126:8443: connect: connection refused
	I0806 08:35:18.104696   78922 api_server.go:253] Checking apiserver healthz at https://192.168.72.126:8443/healthz ...
	I0806 08:35:20.458841   78922 api_server.go:279] https://192.168.72.126:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0806 08:35:20.458876   78922 api_server.go:103] status: https://192.168.72.126:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0806 08:35:20.458893   78922 api_server.go:253] Checking apiserver healthz at https://192.168.72.126:8443/healthz ...
	I0806 08:35:20.495570   78922 api_server.go:279] https://192.168.72.126:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0806 08:35:20.495605   78922 api_server.go:103] status: https://192.168.72.126:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0806 08:35:20.603728   78922 api_server.go:253] Checking apiserver healthz at https://192.168.72.126:8443/healthz ...
	I0806 08:35:20.628351   78922 api_server.go:279] https://192.168.72.126:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0806 08:35:20.628409   78922 api_server.go:103] status: https://192.168.72.126:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0806 08:35:21.103717   78922 api_server.go:253] Checking apiserver healthz at https://192.168.72.126:8443/healthz ...
	I0806 08:35:21.109031   78922 api_server.go:279] https://192.168.72.126:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0806 08:35:21.109064   78922 api_server.go:103] status: https://192.168.72.126:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0806 08:35:21.604400   78922 api_server.go:253] Checking apiserver healthz at https://192.168.72.126:8443/healthz ...
	I0806 08:35:21.613667   78922 api_server.go:279] https://192.168.72.126:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0806 08:35:21.613706   78922 api_server.go:103] status: https://192.168.72.126:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0806 08:35:22.103921   78922 api_server.go:253] Checking apiserver healthz at https://192.168.72.126:8443/healthz ...
	I0806 08:35:22.111933   78922 api_server.go:279] https://192.168.72.126:8443/healthz returned 200:
	ok
	I0806 08:35:22.118406   78922 api_server.go:141] control plane version: v1.31.0-rc.0
	I0806 08:35:22.118430   78922 api_server.go:131] duration metric: took 4.514848902s to wait for apiserver health ...
	I0806 08:35:22.118438   78922 cni.go:84] Creating CNI manager for ""
	I0806 08:35:22.118444   78922 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0806 08:35:22.119689   78922 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0806 08:35:19.018302   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:19.518693   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:20.018804   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:20.518878   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:21.018974   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:21.518747   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:22.018530   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:22.519082   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:23.019088   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:23.518932   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:20.788766   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:23.288382   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:19.614878   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:22.113841   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:22.121321   78922 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0806 08:35:22.132760   78922 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0806 08:35:22.151770   78922 system_pods.go:43] waiting for kube-system pods to appear ...
	I0806 08:35:22.161775   78922 system_pods.go:59] 8 kube-system pods found
	I0806 08:35:22.161806   78922 system_pods.go:61] "coredns-6f6b679f8f-ft9vg" [0f9ea981-764b-4777-bf78-8798570b20d8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0806 08:35:22.161818   78922 system_pods.go:61] "etcd-no-preload-703340" [6adb80b2-0d33-4d53-a868-f83226cc7c26] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0806 08:35:22.161826   78922 system_pods.go:61] "kube-apiserver-no-preload-703340" [4a415305-2f94-42c5-901d-49f0b0114cf6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0806 08:35:22.161832   78922 system_pods.go:61] "kube-controller-manager-no-preload-703340" [0351f8f4-5046-4068-ada3-103394ee9ae2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0806 08:35:22.161837   78922 system_pods.go:61] "kube-proxy-7t2sd" [27f3aa37-eca2-450b-a98d-b3246a7e48ac] Running
	I0806 08:35:22.161841   78922 system_pods.go:61] "kube-scheduler-no-preload-703340" [ad4f6239-f5e1-4a76-91ea-8da5478e8992] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0806 08:35:22.161845   78922 system_pods.go:61] "metrics-server-6867b74b74-qz4vx" [800676b3-4940-4ae2-ad37-7adbd67ea8fa] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0806 08:35:22.161849   78922 system_pods.go:61] "storage-provisioner" [8fa414bf-3923-4c20-acd7-a48fba6f4044] Running
	I0806 08:35:22.161855   78922 system_pods.go:74] duration metric: took 10.062633ms to wait for pod list to return data ...
	I0806 08:35:22.161865   78922 node_conditions.go:102] verifying NodePressure condition ...
	I0806 08:35:22.165604   78922 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0806 08:35:22.165626   78922 node_conditions.go:123] node cpu capacity is 2
	I0806 08:35:22.165637   78922 node_conditions.go:105] duration metric: took 3.767899ms to run NodePressure ...
	I0806 08:35:22.165656   78922 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0806 08:35:22.445769   78922 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0806 08:35:22.451056   78922 kubeadm.go:739] kubelet initialised
	I0806 08:35:22.451078   78922 kubeadm.go:740] duration metric: took 5.279835ms waiting for restarted kubelet to initialise ...
	I0806 08:35:22.451085   78922 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0806 08:35:22.458199   78922 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6f6b679f8f-ft9vg" in "kube-system" namespace to be "Ready" ...
	I0806 08:35:22.463913   78922 pod_ready.go:97] node "no-preload-703340" hosting pod "coredns-6f6b679f8f-ft9vg" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-703340" has status "Ready":"False"
	I0806 08:35:22.463936   78922 pod_ready.go:81] duration metric: took 5.712258ms for pod "coredns-6f6b679f8f-ft9vg" in "kube-system" namespace to be "Ready" ...
	E0806 08:35:22.463945   78922 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-703340" hosting pod "coredns-6f6b679f8f-ft9vg" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-703340" has status "Ready":"False"
	I0806 08:35:22.463952   78922 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-703340" in "kube-system" namespace to be "Ready" ...
	I0806 08:35:22.471896   78922 pod_ready.go:97] node "no-preload-703340" hosting pod "etcd-no-preload-703340" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-703340" has status "Ready":"False"
	I0806 08:35:22.471921   78922 pod_ready.go:81] duration metric: took 7.962236ms for pod "etcd-no-preload-703340" in "kube-system" namespace to be "Ready" ...
	E0806 08:35:22.471930   78922 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-703340" hosting pod "etcd-no-preload-703340" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-703340" has status "Ready":"False"
	I0806 08:35:22.471936   78922 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-703340" in "kube-system" namespace to be "Ready" ...
	I0806 08:35:22.478365   78922 pod_ready.go:97] node "no-preload-703340" hosting pod "kube-apiserver-no-preload-703340" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-703340" has status "Ready":"False"
	I0806 08:35:22.478394   78922 pod_ready.go:81] duration metric: took 6.451038ms for pod "kube-apiserver-no-preload-703340" in "kube-system" namespace to be "Ready" ...
	E0806 08:35:22.478405   78922 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-703340" hosting pod "kube-apiserver-no-preload-703340" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-703340" has status "Ready":"False"
	I0806 08:35:22.478414   78922 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-703340" in "kube-system" namespace to be "Ready" ...
	I0806 08:35:22.559022   78922 pod_ready.go:97] node "no-preload-703340" hosting pod "kube-controller-manager-no-preload-703340" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-703340" has status "Ready":"False"
	I0806 08:35:22.559058   78922 pod_ready.go:81] duration metric: took 80.632686ms for pod "kube-controller-manager-no-preload-703340" in "kube-system" namespace to be "Ready" ...
	E0806 08:35:22.559071   78922 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-703340" hosting pod "kube-controller-manager-no-preload-703340" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-703340" has status "Ready":"False"
	I0806 08:35:22.559080   78922 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-7t2sd" in "kube-system" namespace to be "Ready" ...
	I0806 08:35:22.955121   78922 pod_ready.go:92] pod "kube-proxy-7t2sd" in "kube-system" namespace has status "Ready":"True"
	I0806 08:35:22.955142   78922 pod_ready.go:81] duration metric: took 396.053517ms for pod "kube-proxy-7t2sd" in "kube-system" namespace to be "Ready" ...
	I0806 08:35:22.955152   78922 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-703340" in "kube-system" namespace to be "Ready" ...
	I0806 08:35:24.961266   78922 pod_ready.go:102] pod "kube-scheduler-no-preload-703340" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:24.018764   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:24.519010   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:25.019090   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:25.518260   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:26.018784   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:26.518903   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:27.019011   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:27.518740   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:28.018299   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:28.518391   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:25.787771   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:27.788719   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:24.612790   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:26.612992   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:29.112518   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:26.962028   78922 pod_ready.go:102] pod "kube-scheduler-no-preload-703340" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:29.461941   78922 pod_ready.go:102] pod "kube-scheduler-no-preload-703340" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:29.018449   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:29.518446   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:30.018179   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:30.518270   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:31.018913   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:31.518140   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:32.018621   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:32.518573   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:33.018512   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:33.518604   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:29.790220   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:32.287668   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:31.112680   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:33.612484   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:31.961716   78922 pod_ready.go:102] pod "kube-scheduler-no-preload-703340" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:34.461368   78922 pod_ready.go:102] pod "kube-scheduler-no-preload-703340" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:34.961460   78922 pod_ready.go:92] pod "kube-scheduler-no-preload-703340" in "kube-system" namespace has status "Ready":"True"
	I0806 08:35:34.961483   78922 pod_ready.go:81] duration metric: took 12.006324097s for pod "kube-scheduler-no-preload-703340" in "kube-system" namespace to be "Ready" ...
	I0806 08:35:34.961495   78922 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace to be "Ready" ...
	I0806 08:35:34.018132   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:34.518476   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:35.018797   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:35.518507   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:36.018976   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:36.518062   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:37.018463   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:37.518648   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:38.018861   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:38.518236   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:34.288032   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:36.787530   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:38.789235   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:35.612530   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:38.114833   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:36.967646   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:38.969106   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:41.469056   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:39.018424   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:39.518065   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:40.018381   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:40.518979   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:41.018310   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:41.518277   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:42.018881   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:42.518157   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:43.018166   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:43.518740   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:41.287534   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:43.288666   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:40.115187   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:42.611895   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:43.967648   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:45.968277   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:44.018075   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:44.518545   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:45.018342   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:45.519066   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:46.018796   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:46.519031   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:47.018876   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:47.518649   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:35:47.518746   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:35:47.559141   79927 cri.go:89] found id: ""
	I0806 08:35:47.559175   79927 logs.go:276] 0 containers: []
	W0806 08:35:47.559188   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:35:47.559195   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:35:47.559253   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:35:47.596840   79927 cri.go:89] found id: ""
	I0806 08:35:47.596900   79927 logs.go:276] 0 containers: []
	W0806 08:35:47.596913   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:35:47.596921   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:35:47.596995   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:35:47.634932   79927 cri.go:89] found id: ""
	I0806 08:35:47.634961   79927 logs.go:276] 0 containers: []
	W0806 08:35:47.634969   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:35:47.634975   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:35:47.635031   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:35:47.672170   79927 cri.go:89] found id: ""
	I0806 08:35:47.672202   79927 logs.go:276] 0 containers: []
	W0806 08:35:47.672213   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:35:47.672220   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:35:47.672284   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:35:47.709643   79927 cri.go:89] found id: ""
	I0806 08:35:47.709672   79927 logs.go:276] 0 containers: []
	W0806 08:35:47.709683   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:35:47.709690   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:35:47.709752   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:35:47.750036   79927 cri.go:89] found id: ""
	I0806 08:35:47.750067   79927 logs.go:276] 0 containers: []
	W0806 08:35:47.750076   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:35:47.750082   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:35:47.750149   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:35:47.791837   79927 cri.go:89] found id: ""
	I0806 08:35:47.791862   79927 logs.go:276] 0 containers: []
	W0806 08:35:47.791872   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:35:47.791879   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:35:47.791938   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:35:47.831972   79927 cri.go:89] found id: ""
	I0806 08:35:47.831997   79927 logs.go:276] 0 containers: []
	W0806 08:35:47.832005   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:35:47.832014   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:35:47.832028   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:35:47.962228   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:35:47.962252   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:35:47.962267   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:35:48.031276   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:35:48.031314   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:35:48.075869   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:35:48.075912   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:35:48.131817   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:35:48.131859   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:35:45.788042   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:47.789501   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:44.612796   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:47.112009   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:48.468289   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:50.469101   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:50.647351   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:50.661916   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:35:50.662002   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:35:50.707294   79927 cri.go:89] found id: ""
	I0806 08:35:50.707323   79927 logs.go:276] 0 containers: []
	W0806 08:35:50.707335   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:35:50.707342   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:35:50.707396   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:35:50.744111   79927 cri.go:89] found id: ""
	I0806 08:35:50.744133   79927 logs.go:276] 0 containers: []
	W0806 08:35:50.744149   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:35:50.744160   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:35:50.744205   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:35:50.781482   79927 cri.go:89] found id: ""
	I0806 08:35:50.781513   79927 logs.go:276] 0 containers: []
	W0806 08:35:50.781524   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:35:50.781531   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:35:50.781591   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:35:50.828788   79927 cri.go:89] found id: ""
	I0806 08:35:50.828817   79927 logs.go:276] 0 containers: []
	W0806 08:35:50.828827   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:35:50.828834   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:35:50.828896   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:35:50.865168   79927 cri.go:89] found id: ""
	I0806 08:35:50.865199   79927 logs.go:276] 0 containers: []
	W0806 08:35:50.865211   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:35:50.865219   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:35:50.865273   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:35:50.902814   79927 cri.go:89] found id: ""
	I0806 08:35:50.902842   79927 logs.go:276] 0 containers: []
	W0806 08:35:50.902851   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:35:50.902856   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:35:50.902907   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:35:50.939861   79927 cri.go:89] found id: ""
	I0806 08:35:50.939891   79927 logs.go:276] 0 containers: []
	W0806 08:35:50.939899   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:35:50.939905   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:35:50.939961   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:35:50.984743   79927 cri.go:89] found id: ""
	I0806 08:35:50.984773   79927 logs.go:276] 0 containers: []
	W0806 08:35:50.984782   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:35:50.984793   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:35:50.984809   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:35:50.998476   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:35:50.998515   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:35:51.081100   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:35:51.081126   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:35:51.081139   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:35:51.157634   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:35:51.157679   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:35:51.210685   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:35:51.210716   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:35:50.288343   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:52.804477   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:49.612661   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:52.113950   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:54.114200   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:52.969251   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:55.468041   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:53.778620   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:53.793090   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:35:53.793154   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:35:53.837558   79927 cri.go:89] found id: ""
	I0806 08:35:53.837584   79927 logs.go:276] 0 containers: []
	W0806 08:35:53.837594   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:35:53.837599   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:35:53.837670   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:35:53.878855   79927 cri.go:89] found id: ""
	I0806 08:35:53.878884   79927 logs.go:276] 0 containers: []
	W0806 08:35:53.878893   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:35:53.878898   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:35:53.878945   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:35:53.918330   79927 cri.go:89] found id: ""
	I0806 08:35:53.918357   79927 logs.go:276] 0 containers: []
	W0806 08:35:53.918366   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:35:53.918371   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:35:53.918422   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:35:53.959156   79927 cri.go:89] found id: ""
	I0806 08:35:53.959186   79927 logs.go:276] 0 containers: []
	W0806 08:35:53.959196   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:35:53.959204   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:35:53.959309   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:35:53.996775   79927 cri.go:89] found id: ""
	I0806 08:35:53.996802   79927 logs.go:276] 0 containers: []
	W0806 08:35:53.996810   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:35:53.996816   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:35:53.996874   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:35:54.033047   79927 cri.go:89] found id: ""
	I0806 08:35:54.033072   79927 logs.go:276] 0 containers: []
	W0806 08:35:54.033082   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:35:54.033089   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:35:54.033152   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:35:54.069426   79927 cri.go:89] found id: ""
	I0806 08:35:54.069458   79927 logs.go:276] 0 containers: []
	W0806 08:35:54.069468   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:35:54.069475   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:35:54.069536   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:35:54.109114   79927 cri.go:89] found id: ""
	I0806 08:35:54.109143   79927 logs.go:276] 0 containers: []
	W0806 08:35:54.109153   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:35:54.109164   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:35:54.109179   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:35:54.161264   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:35:54.161298   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:35:54.175216   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:35:54.175247   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:35:54.254970   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:35:54.254999   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:35:54.255014   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:35:54.331561   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:35:54.331596   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:35:56.873110   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:56.889302   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:35:56.889378   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:35:56.931875   79927 cri.go:89] found id: ""
	I0806 08:35:56.931904   79927 logs.go:276] 0 containers: []
	W0806 08:35:56.931913   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:35:56.931919   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:35:56.931975   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:35:56.995549   79927 cri.go:89] found id: ""
	I0806 08:35:56.995587   79927 logs.go:276] 0 containers: []
	W0806 08:35:56.995601   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:35:56.995608   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:35:56.995675   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:35:57.042708   79927 cri.go:89] found id: ""
	I0806 08:35:57.042735   79927 logs.go:276] 0 containers: []
	W0806 08:35:57.042743   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:35:57.042749   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:35:57.042833   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:35:57.079160   79927 cri.go:89] found id: ""
	I0806 08:35:57.079187   79927 logs.go:276] 0 containers: []
	W0806 08:35:57.079197   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:35:57.079203   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:35:57.079260   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:35:57.116358   79927 cri.go:89] found id: ""
	I0806 08:35:57.116409   79927 logs.go:276] 0 containers: []
	W0806 08:35:57.116421   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:35:57.116429   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:35:57.116489   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:35:57.151051   79927 cri.go:89] found id: ""
	I0806 08:35:57.151079   79927 logs.go:276] 0 containers: []
	W0806 08:35:57.151087   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:35:57.151096   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:35:57.151148   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:35:57.189747   79927 cri.go:89] found id: ""
	I0806 08:35:57.189791   79927 logs.go:276] 0 containers: []
	W0806 08:35:57.189800   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:35:57.189806   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:35:57.189871   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:35:57.225251   79927 cri.go:89] found id: ""
	I0806 08:35:57.225277   79927 logs.go:276] 0 containers: []
	W0806 08:35:57.225287   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:35:57.225298   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:35:57.225314   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:35:57.238746   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:35:57.238773   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:35:57.312931   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:35:57.312959   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:35:57.312976   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:35:57.394104   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:35:57.394139   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:35:57.435993   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:35:57.436028   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:35:55.288712   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:57.789512   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:56.612875   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:58.613571   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:57.468429   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:59.969618   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:59.989556   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:36:00.004129   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:36:00.004205   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:36:00.042192   79927 cri.go:89] found id: ""
	I0806 08:36:00.042215   79927 logs.go:276] 0 containers: []
	W0806 08:36:00.042223   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:36:00.042228   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:36:00.042273   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:36:00.080921   79927 cri.go:89] found id: ""
	I0806 08:36:00.080956   79927 logs.go:276] 0 containers: []
	W0806 08:36:00.080966   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:36:00.080973   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:36:00.081033   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:36:00.123186   79927 cri.go:89] found id: ""
	I0806 08:36:00.123206   79927 logs.go:276] 0 containers: []
	W0806 08:36:00.123216   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:36:00.123223   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:36:00.123274   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:36:00.165252   79927 cri.go:89] found id: ""
	I0806 08:36:00.165281   79927 logs.go:276] 0 containers: []
	W0806 08:36:00.165289   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:36:00.165294   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:36:00.165340   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:36:00.201601   79927 cri.go:89] found id: ""
	I0806 08:36:00.201639   79927 logs.go:276] 0 containers: []
	W0806 08:36:00.201653   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:36:00.201662   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:36:00.201730   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:36:00.236933   79927 cri.go:89] found id: ""
	I0806 08:36:00.236964   79927 logs.go:276] 0 containers: []
	W0806 08:36:00.236975   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:36:00.236983   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:36:00.237043   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:36:00.273215   79927 cri.go:89] found id: ""
	I0806 08:36:00.273247   79927 logs.go:276] 0 containers: []
	W0806 08:36:00.273256   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:36:00.273261   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:36:00.273319   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:36:00.308075   79927 cri.go:89] found id: ""
	I0806 08:36:00.308105   79927 logs.go:276] 0 containers: []
	W0806 08:36:00.308116   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:36:00.308126   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:36:00.308141   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:36:00.360096   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:36:00.360146   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:36:00.374484   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:36:00.374521   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:36:00.454297   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:36:00.454318   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:36:00.454334   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:36:00.542665   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:36:00.542702   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:36:03.084297   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:36:03.098661   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:36:03.098737   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:36:03.134751   79927 cri.go:89] found id: ""
	I0806 08:36:03.134786   79927 logs.go:276] 0 containers: []
	W0806 08:36:03.134797   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:36:03.134805   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:36:03.134873   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:36:03.177412   79927 cri.go:89] found id: ""
	I0806 08:36:03.177442   79927 logs.go:276] 0 containers: []
	W0806 08:36:03.177453   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:36:03.177461   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:36:03.177516   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:36:03.213177   79927 cri.go:89] found id: ""
	I0806 08:36:03.213209   79927 logs.go:276] 0 containers: []
	W0806 08:36:03.213221   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:36:03.213229   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:36:03.213291   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:36:03.254569   79927 cri.go:89] found id: ""
	I0806 08:36:03.254594   79927 logs.go:276] 0 containers: []
	W0806 08:36:03.254603   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:36:03.254608   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:36:03.254656   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:36:03.292564   79927 cri.go:89] found id: ""
	I0806 08:36:03.292590   79927 logs.go:276] 0 containers: []
	W0806 08:36:03.292597   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:36:03.292610   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:36:03.292671   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:36:03.331572   79927 cri.go:89] found id: ""
	I0806 08:36:03.331596   79927 logs.go:276] 0 containers: []
	W0806 08:36:03.331605   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:36:03.331610   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:36:03.331658   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:36:03.366368   79927 cri.go:89] found id: ""
	I0806 08:36:03.366398   79927 logs.go:276] 0 containers: []
	W0806 08:36:03.366409   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:36:03.366417   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:36:03.366469   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:36:03.404345   79927 cri.go:89] found id: ""
	I0806 08:36:03.404387   79927 logs.go:276] 0 containers: []
	W0806 08:36:03.404399   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:36:03.404411   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:36:03.404425   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:36:03.454825   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:36:03.454864   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:36:03.469437   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:36:03.469466   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:36:03.553247   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:36:03.553274   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:36:03.553290   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:36:03.635461   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:36:03.635509   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:36:00.287469   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:02.787634   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:01.111573   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:03.613750   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:02.468084   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:04.468169   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:06.469036   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:06.175216   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:36:06.190083   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:36:06.190158   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:36:06.226428   79927 cri.go:89] found id: ""
	I0806 08:36:06.226459   79927 logs.go:276] 0 containers: []
	W0806 08:36:06.226468   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:36:06.226473   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:36:06.226529   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:36:06.263781   79927 cri.go:89] found id: ""
	I0806 08:36:06.263805   79927 logs.go:276] 0 containers: []
	W0806 08:36:06.263813   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:36:06.263819   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:36:06.263895   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:36:06.304335   79927 cri.go:89] found id: ""
	I0806 08:36:06.304386   79927 logs.go:276] 0 containers: []
	W0806 08:36:06.304397   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:36:06.304404   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:36:06.304466   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:36:06.341138   79927 cri.go:89] found id: ""
	I0806 08:36:06.341166   79927 logs.go:276] 0 containers: []
	W0806 08:36:06.341177   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:36:06.341184   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:36:06.341248   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:36:06.381547   79927 cri.go:89] found id: ""
	I0806 08:36:06.381577   79927 logs.go:276] 0 containers: []
	W0806 08:36:06.381587   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:36:06.381594   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:36:06.381662   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:36:06.417606   79927 cri.go:89] found id: ""
	I0806 08:36:06.417630   79927 logs.go:276] 0 containers: []
	W0806 08:36:06.417637   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:36:06.417643   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:36:06.417700   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:36:06.456258   79927 cri.go:89] found id: ""
	I0806 08:36:06.456287   79927 logs.go:276] 0 containers: []
	W0806 08:36:06.456298   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:36:06.456307   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:36:06.456404   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:36:06.494268   79927 cri.go:89] found id: ""
	I0806 08:36:06.494309   79927 logs.go:276] 0 containers: []
	W0806 08:36:06.494321   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:36:06.494332   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:36:06.494346   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:36:06.547670   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:36:06.547710   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:36:06.562020   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:36:06.562049   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:36:06.632691   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:36:06.632717   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:36:06.632729   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:36:06.721737   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:36:06.721780   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:36:04.787931   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:07.287463   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:06.111545   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:08.113637   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:08.968767   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:11.468847   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:09.265154   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:36:09.278664   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:36:09.278743   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:36:09.313464   79927 cri.go:89] found id: ""
	I0806 08:36:09.313494   79927 logs.go:276] 0 containers: []
	W0806 08:36:09.313506   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:36:09.313513   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:36:09.313561   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:36:09.352107   79927 cri.go:89] found id: ""
	I0806 08:36:09.352133   79927 logs.go:276] 0 containers: []
	W0806 08:36:09.352146   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:36:09.352153   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:36:09.352215   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:36:09.387547   79927 cri.go:89] found id: ""
	I0806 08:36:09.387581   79927 logs.go:276] 0 containers: []
	W0806 08:36:09.387594   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:36:09.387601   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:36:09.387684   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:36:09.422157   79927 cri.go:89] found id: ""
	I0806 08:36:09.422193   79927 logs.go:276] 0 containers: []
	W0806 08:36:09.422202   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:36:09.422207   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:36:09.422275   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:36:09.463065   79927 cri.go:89] found id: ""
	I0806 08:36:09.463106   79927 logs.go:276] 0 containers: []
	W0806 08:36:09.463122   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:36:09.463145   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:36:09.463209   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:36:09.499450   79927 cri.go:89] found id: ""
	I0806 08:36:09.499474   79927 logs.go:276] 0 containers: []
	W0806 08:36:09.499484   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:36:09.499491   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:36:09.499550   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:36:09.536411   79927 cri.go:89] found id: ""
	I0806 08:36:09.536442   79927 logs.go:276] 0 containers: []
	W0806 08:36:09.536453   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:36:09.536460   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:36:09.536533   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:36:09.572044   79927 cri.go:89] found id: ""
	I0806 08:36:09.572070   79927 logs.go:276] 0 containers: []
	W0806 08:36:09.572079   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:36:09.572087   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:36:09.572138   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:36:09.612385   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:36:09.612418   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:36:09.666790   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:36:09.666852   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:36:09.680707   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:36:09.680753   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:36:09.753446   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:36:09.753473   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:36:09.753491   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:36:12.336824   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:36:12.352545   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:36:12.352626   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:36:12.393158   79927 cri.go:89] found id: ""
	I0806 08:36:12.393188   79927 logs.go:276] 0 containers: []
	W0806 08:36:12.393199   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:36:12.393206   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:36:12.393255   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:36:12.428501   79927 cri.go:89] found id: ""
	I0806 08:36:12.428525   79927 logs.go:276] 0 containers: []
	W0806 08:36:12.428534   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:36:12.428539   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:36:12.428596   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:36:12.463033   79927 cri.go:89] found id: ""
	I0806 08:36:12.463062   79927 logs.go:276] 0 containers: []
	W0806 08:36:12.463072   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:36:12.463079   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:36:12.463136   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:36:12.500350   79927 cri.go:89] found id: ""
	I0806 08:36:12.500402   79927 logs.go:276] 0 containers: []
	W0806 08:36:12.500413   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:36:12.500421   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:36:12.500477   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:36:12.537323   79927 cri.go:89] found id: ""
	I0806 08:36:12.537350   79927 logs.go:276] 0 containers: []
	W0806 08:36:12.537360   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:36:12.537366   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:36:12.537414   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:36:12.575525   79927 cri.go:89] found id: ""
	I0806 08:36:12.575551   79927 logs.go:276] 0 containers: []
	W0806 08:36:12.575561   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:36:12.575568   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:36:12.575629   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:36:12.615375   79927 cri.go:89] found id: ""
	I0806 08:36:12.615436   79927 logs.go:276] 0 containers: []
	W0806 08:36:12.615451   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:36:12.615459   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:36:12.615531   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:36:12.655269   79927 cri.go:89] found id: ""
	I0806 08:36:12.655300   79927 logs.go:276] 0 containers: []
	W0806 08:36:12.655311   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:36:12.655324   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:36:12.655342   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:36:12.709097   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:36:12.709134   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:36:12.724330   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:36:12.724380   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:36:12.801075   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:36:12.801099   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:36:12.801111   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:36:12.885670   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:36:12.885704   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:36:09.288192   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:11.789794   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:10.613121   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:12.613812   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:13.969367   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:16.468276   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:15.427222   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:36:15.442092   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:36:15.442206   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:36:15.485634   79927 cri.go:89] found id: ""
	I0806 08:36:15.485657   79927 logs.go:276] 0 containers: []
	W0806 08:36:15.485665   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:36:15.485671   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:36:15.485715   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:36:15.520791   79927 cri.go:89] found id: ""
	I0806 08:36:15.520820   79927 logs.go:276] 0 containers: []
	W0806 08:36:15.520830   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:36:15.520837   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:36:15.520897   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:36:15.554041   79927 cri.go:89] found id: ""
	I0806 08:36:15.554065   79927 logs.go:276] 0 containers: []
	W0806 08:36:15.554073   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:36:15.554079   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:36:15.554143   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:36:15.587550   79927 cri.go:89] found id: ""
	I0806 08:36:15.587579   79927 logs.go:276] 0 containers: []
	W0806 08:36:15.587591   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:36:15.587598   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:36:15.587659   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:36:15.628657   79927 cri.go:89] found id: ""
	I0806 08:36:15.628681   79927 logs.go:276] 0 containers: []
	W0806 08:36:15.628689   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:36:15.628695   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:36:15.628750   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:36:15.665212   79927 cri.go:89] found id: ""
	I0806 08:36:15.665239   79927 logs.go:276] 0 containers: []
	W0806 08:36:15.665247   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:36:15.665252   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:36:15.665298   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:36:15.705675   79927 cri.go:89] found id: ""
	I0806 08:36:15.705701   79927 logs.go:276] 0 containers: []
	W0806 08:36:15.705709   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:36:15.705714   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:36:15.705762   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:36:15.739835   79927 cri.go:89] found id: ""
	I0806 08:36:15.739862   79927 logs.go:276] 0 containers: []
	W0806 08:36:15.739871   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:36:15.739879   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:36:15.739892   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:36:15.795719   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:36:15.795749   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:36:15.809940   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:36:15.809967   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:36:15.888710   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:36:15.888737   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:36:15.888753   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:36:15.972728   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:36:15.972772   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:36:18.513565   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:36:18.527480   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:36:18.527538   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:36:18.562783   79927 cri.go:89] found id: ""
	I0806 08:36:18.562812   79927 logs.go:276] 0 containers: []
	W0806 08:36:18.562824   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:36:18.562832   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:36:18.562896   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:36:18.600613   79927 cri.go:89] found id: ""
	I0806 08:36:18.600645   79927 logs.go:276] 0 containers: []
	W0806 08:36:18.600659   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:36:18.600668   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:36:18.600735   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:36:18.638074   79927 cri.go:89] found id: ""
	I0806 08:36:18.638104   79927 logs.go:276] 0 containers: []
	W0806 08:36:18.638114   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:36:18.638119   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:36:18.638181   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:36:18.679795   79927 cri.go:89] found id: ""
	I0806 08:36:18.679825   79927 logs.go:276] 0 containers: []
	W0806 08:36:18.679836   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:36:18.679844   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:36:18.679907   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:36:18.715535   79927 cri.go:89] found id: ""
	I0806 08:36:18.715568   79927 logs.go:276] 0 containers: []
	W0806 08:36:18.715577   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:36:18.715585   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:36:18.715650   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:36:18.752705   79927 cri.go:89] found id: ""
	I0806 08:36:18.752736   79927 logs.go:276] 0 containers: []
	W0806 08:36:18.752748   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:36:18.752759   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:36:18.752821   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:36:14.288048   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:16.288422   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:18.788928   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:15.112857   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:17.114585   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:18.468697   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:20.967985   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:18.791653   79927 cri.go:89] found id: ""
	I0806 08:36:18.791678   79927 logs.go:276] 0 containers: []
	W0806 08:36:18.791689   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:36:18.791696   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:36:18.791763   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:36:18.828328   79927 cri.go:89] found id: ""
	I0806 08:36:18.828357   79927 logs.go:276] 0 containers: []
	W0806 08:36:18.828378   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:36:18.828389   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:36:18.828408   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:36:18.888111   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:36:18.888148   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:36:18.902718   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:36:18.902750   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:36:18.986327   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:36:18.986352   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:36:18.986365   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:36:19.072858   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:36:19.072899   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:36:21.615497   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:36:21.631434   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:36:21.631492   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:36:21.671446   79927 cri.go:89] found id: ""
	I0806 08:36:21.671471   79927 logs.go:276] 0 containers: []
	W0806 08:36:21.671479   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:36:21.671484   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:36:21.671540   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:36:21.707598   79927 cri.go:89] found id: ""
	I0806 08:36:21.707624   79927 logs.go:276] 0 containers: []
	W0806 08:36:21.707634   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:36:21.707640   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:36:21.707686   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:36:21.753647   79927 cri.go:89] found id: ""
	I0806 08:36:21.753675   79927 logs.go:276] 0 containers: []
	W0806 08:36:21.753686   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:36:21.753692   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:36:21.753749   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:36:21.795284   79927 cri.go:89] found id: ""
	I0806 08:36:21.795311   79927 logs.go:276] 0 containers: []
	W0806 08:36:21.795321   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:36:21.795329   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:36:21.795388   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:36:21.832436   79927 cri.go:89] found id: ""
	I0806 08:36:21.832466   79927 logs.go:276] 0 containers: []
	W0806 08:36:21.832476   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:36:21.832483   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:36:21.832550   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:36:21.870295   79927 cri.go:89] found id: ""
	I0806 08:36:21.870327   79927 logs.go:276] 0 containers: []
	W0806 08:36:21.870338   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:36:21.870345   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:36:21.870412   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:36:21.908210   79927 cri.go:89] found id: ""
	I0806 08:36:21.908242   79927 logs.go:276] 0 containers: []
	W0806 08:36:21.908253   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:36:21.908261   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:36:21.908322   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:36:21.944055   79927 cri.go:89] found id: ""
	I0806 08:36:21.944084   79927 logs.go:276] 0 containers: []
	W0806 08:36:21.944095   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:36:21.944104   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:36:21.944125   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:36:22.000467   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:36:22.000506   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:36:22.014590   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:36:22.014629   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:36:22.083873   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:36:22.083907   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:36:22.083927   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:36:22.175993   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:36:22.176056   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:36:21.288435   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:23.788424   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:19.612021   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:21.612092   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:23.612557   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:22.968059   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:24.968111   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:24.720476   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:36:24.734469   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:36:24.734551   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:36:24.770663   79927 cri.go:89] found id: ""
	I0806 08:36:24.770692   79927 logs.go:276] 0 containers: []
	W0806 08:36:24.770705   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:36:24.770712   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:36:24.770768   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:36:24.807861   79927 cri.go:89] found id: ""
	I0806 08:36:24.807889   79927 logs.go:276] 0 containers: []
	W0806 08:36:24.807897   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:36:24.807908   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:36:24.807957   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:36:24.849827   79927 cri.go:89] found id: ""
	I0806 08:36:24.849857   79927 logs.go:276] 0 containers: []
	W0806 08:36:24.849869   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:36:24.849877   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:36:24.849937   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:36:24.885260   79927 cri.go:89] found id: ""
	I0806 08:36:24.885291   79927 logs.go:276] 0 containers: []
	W0806 08:36:24.885303   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:36:24.885315   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:36:24.885381   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:36:24.919918   79927 cri.go:89] found id: ""
	I0806 08:36:24.919949   79927 logs.go:276] 0 containers: []
	W0806 08:36:24.919957   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:36:24.919964   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:36:24.920026   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:36:24.956010   79927 cri.go:89] found id: ""
	I0806 08:36:24.956036   79927 logs.go:276] 0 containers: []
	W0806 08:36:24.956045   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:36:24.956050   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:36:24.956109   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:36:24.993126   79927 cri.go:89] found id: ""
	I0806 08:36:24.993154   79927 logs.go:276] 0 containers: []
	W0806 08:36:24.993165   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:36:24.993173   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:36:24.993240   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:36:25.026732   79927 cri.go:89] found id: ""
	I0806 08:36:25.026756   79927 logs.go:276] 0 containers: []
	W0806 08:36:25.026763   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:36:25.026771   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:36:25.026781   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:36:25.064076   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:36:25.064113   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:36:25.118820   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:36:25.118856   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:36:25.133640   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:36:25.133671   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:36:25.201686   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:36:25.201714   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:36:25.201729   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:36:27.801947   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:36:27.815472   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:36:27.815539   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:36:27.851073   79927 cri.go:89] found id: ""
	I0806 08:36:27.851103   79927 logs.go:276] 0 containers: []
	W0806 08:36:27.851119   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:36:27.851127   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:36:27.851184   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:36:27.886121   79927 cri.go:89] found id: ""
	I0806 08:36:27.886151   79927 logs.go:276] 0 containers: []
	W0806 08:36:27.886162   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:36:27.886169   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:36:27.886231   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:36:27.920950   79927 cri.go:89] found id: ""
	I0806 08:36:27.920979   79927 logs.go:276] 0 containers: []
	W0806 08:36:27.920990   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:36:27.920997   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:36:27.921055   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:36:27.956529   79927 cri.go:89] found id: ""
	I0806 08:36:27.956562   79927 logs.go:276] 0 containers: []
	W0806 08:36:27.956574   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:36:27.956581   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:36:27.956631   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:36:27.992272   79927 cri.go:89] found id: ""
	I0806 08:36:27.992298   79927 logs.go:276] 0 containers: []
	W0806 08:36:27.992306   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:36:27.992322   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:36:27.992405   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:36:28.027242   79927 cri.go:89] found id: ""
	I0806 08:36:28.027269   79927 logs.go:276] 0 containers: []
	W0806 08:36:28.027277   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:36:28.027283   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:36:28.027335   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:36:28.063043   79927 cri.go:89] found id: ""
	I0806 08:36:28.063068   79927 logs.go:276] 0 containers: []
	W0806 08:36:28.063076   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:36:28.063083   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:36:28.063148   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:36:28.099743   79927 cri.go:89] found id: ""
	I0806 08:36:28.099769   79927 logs.go:276] 0 containers: []
	W0806 08:36:28.099780   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:36:28.099791   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:36:28.099806   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:36:28.151354   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:36:28.151392   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:36:28.166704   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:36:28.166731   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:36:28.242621   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:36:28.242642   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:36:28.242656   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:36:28.324352   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:36:28.324405   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
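	The loop above is minikube's log-gathering pass while it waits for an apiserver to appear: every few seconds it checks for each control-plane container with crictl, then collects kubelet, dmesg, CRI-O and node-describe output. A minimal sketch of the same checks, runnable by hand on the node (assuming shell access, e.g. via minikube ssh; the kubectl binary path and kubeconfig location are copied from the log and may differ per run):

	    # check whether any control-plane containers exist (every check in this log comes back empty)
	    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager; do
	      sudo crictl ps -a --quiet --name="$c"
	    done
	    # the same log sources minikube gathers on each pass
	    sudo journalctl -u kubelet -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    sudo journalctl -u crio -n 400
	    # fails with "connection to the server localhost:8443 was refused" while no apiserver is running
	    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig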
	I0806 08:36:25.789745   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:28.290656   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:26.112264   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:28.112769   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:27.466228   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:29.469998   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:30.869063   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:36:30.882309   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:36:30.882377   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:36:30.916264   79927 cri.go:89] found id: ""
	I0806 08:36:30.916296   79927 logs.go:276] 0 containers: []
	W0806 08:36:30.916306   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:36:30.916313   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:36:30.916390   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:36:30.951995   79927 cri.go:89] found id: ""
	I0806 08:36:30.952021   79927 logs.go:276] 0 containers: []
	W0806 08:36:30.952028   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:36:30.952034   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:36:30.952111   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:36:31.005229   79927 cri.go:89] found id: ""
	I0806 08:36:31.005270   79927 logs.go:276] 0 containers: []
	W0806 08:36:31.005281   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:36:31.005288   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:36:31.005350   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:36:31.040439   79927 cri.go:89] found id: ""
	I0806 08:36:31.040465   79927 logs.go:276] 0 containers: []
	W0806 08:36:31.040476   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:36:31.040482   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:36:31.040534   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:36:31.078880   79927 cri.go:89] found id: ""
	I0806 08:36:31.078910   79927 logs.go:276] 0 containers: []
	W0806 08:36:31.078920   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:36:31.078934   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:36:31.078996   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:36:31.115520   79927 cri.go:89] found id: ""
	I0806 08:36:31.115551   79927 logs.go:276] 0 containers: []
	W0806 08:36:31.115562   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:36:31.115569   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:36:31.115615   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:36:31.150450   79927 cri.go:89] found id: ""
	I0806 08:36:31.150483   79927 logs.go:276] 0 containers: []
	W0806 08:36:31.150497   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:36:31.150506   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:36:31.150565   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:36:31.187449   79927 cri.go:89] found id: ""
	I0806 08:36:31.187476   79927 logs.go:276] 0 containers: []
	W0806 08:36:31.187485   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:36:31.187493   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:36:31.187507   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:36:31.246232   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:36:31.246272   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:36:31.260691   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:36:31.260720   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:36:31.342356   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:36:31.342383   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:36:31.342401   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:36:31.426820   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:36:31.426865   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:36:30.787370   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:32.787711   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:30.611466   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:32.611550   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:31.968048   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:34.469062   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:33.974768   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:36:33.988377   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:36:33.988442   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:36:34.023284   79927 cri.go:89] found id: ""
	I0806 08:36:34.023316   79927 logs.go:276] 0 containers: []
	W0806 08:36:34.023328   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:36:34.023336   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:36:34.023397   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:36:34.058633   79927 cri.go:89] found id: ""
	I0806 08:36:34.058676   79927 logs.go:276] 0 containers: []
	W0806 08:36:34.058695   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:36:34.058704   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:36:34.058767   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:36:34.098343   79927 cri.go:89] found id: ""
	I0806 08:36:34.098379   79927 logs.go:276] 0 containers: []
	W0806 08:36:34.098390   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:36:34.098398   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:36:34.098471   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:36:34.134638   79927 cri.go:89] found id: ""
	I0806 08:36:34.134667   79927 logs.go:276] 0 containers: []
	W0806 08:36:34.134677   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:36:34.134683   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:36:34.134729   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:36:34.170118   79927 cri.go:89] found id: ""
	I0806 08:36:34.170144   79927 logs.go:276] 0 containers: []
	W0806 08:36:34.170153   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:36:34.170159   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:36:34.170207   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:36:34.204789   79927 cri.go:89] found id: ""
	I0806 08:36:34.204815   79927 logs.go:276] 0 containers: []
	W0806 08:36:34.204823   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:36:34.204829   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:36:34.204877   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:36:34.245170   79927 cri.go:89] found id: ""
	I0806 08:36:34.245194   79927 logs.go:276] 0 containers: []
	W0806 08:36:34.245203   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:36:34.245208   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:36:34.245264   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:36:34.280026   79927 cri.go:89] found id: ""
	I0806 08:36:34.280063   79927 logs.go:276] 0 containers: []
	W0806 08:36:34.280076   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:36:34.280086   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:36:34.280101   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:36:34.335363   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:36:34.335408   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:36:34.350201   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:36:34.350226   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:36:34.425205   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:36:34.425231   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:36:34.425244   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:36:34.506385   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:36:34.506419   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:36:37.047263   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:36:37.060179   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:36:37.060254   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:36:37.093168   79927 cri.go:89] found id: ""
	I0806 08:36:37.093201   79927 logs.go:276] 0 containers: []
	W0806 08:36:37.093209   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:36:37.093216   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:36:37.093264   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:36:37.131315   79927 cri.go:89] found id: ""
	I0806 08:36:37.131343   79927 logs.go:276] 0 containers: []
	W0806 08:36:37.131353   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:36:37.131360   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:36:37.131426   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:36:37.169799   79927 cri.go:89] found id: ""
	I0806 08:36:37.169829   79927 logs.go:276] 0 containers: []
	W0806 08:36:37.169839   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:36:37.169845   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:36:37.169898   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:36:37.205852   79927 cri.go:89] found id: ""
	I0806 08:36:37.205878   79927 logs.go:276] 0 containers: []
	W0806 08:36:37.205885   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:36:37.205891   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:36:37.205938   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:36:37.242469   79927 cri.go:89] found id: ""
	I0806 08:36:37.242493   79927 logs.go:276] 0 containers: []
	W0806 08:36:37.242501   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:36:37.242507   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:36:37.242554   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:36:37.278515   79927 cri.go:89] found id: ""
	I0806 08:36:37.278545   79927 logs.go:276] 0 containers: []
	W0806 08:36:37.278556   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:36:37.278563   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:36:37.278627   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:36:37.316157   79927 cri.go:89] found id: ""
	I0806 08:36:37.316181   79927 logs.go:276] 0 containers: []
	W0806 08:36:37.316189   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:36:37.316195   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:36:37.316249   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:36:37.353752   79927 cri.go:89] found id: ""
	I0806 08:36:37.353778   79927 logs.go:276] 0 containers: []
	W0806 08:36:37.353785   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:36:37.353796   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:36:37.353807   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:36:37.439435   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:36:37.439471   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:36:37.488866   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:36:37.488903   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:36:37.540031   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:36:37.540072   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:36:37.556109   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:36:37.556140   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:36:37.629023   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:36:35.288565   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:37.788336   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:35.113182   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:37.615523   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:36.967665   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:38.973884   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:41.467445   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:40.129579   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:36:40.143971   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:36:40.144050   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:36:40.180490   79927 cri.go:89] found id: ""
	I0806 08:36:40.180515   79927 logs.go:276] 0 containers: []
	W0806 08:36:40.180524   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:36:40.180530   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:36:40.180586   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:36:40.219812   79927 cri.go:89] found id: ""
	I0806 08:36:40.219860   79927 logs.go:276] 0 containers: []
	W0806 08:36:40.219869   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:36:40.219877   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:36:40.219939   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:36:40.256048   79927 cri.go:89] found id: ""
	I0806 08:36:40.256076   79927 logs.go:276] 0 containers: []
	W0806 08:36:40.256088   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:36:40.256106   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:36:40.256166   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:36:40.294693   79927 cri.go:89] found id: ""
	I0806 08:36:40.294720   79927 logs.go:276] 0 containers: []
	W0806 08:36:40.294729   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:36:40.294734   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:36:40.294792   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:36:40.331246   79927 cri.go:89] found id: ""
	I0806 08:36:40.331276   79927 logs.go:276] 0 containers: []
	W0806 08:36:40.331287   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:36:40.331295   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:36:40.331351   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:36:40.373039   79927 cri.go:89] found id: ""
	I0806 08:36:40.373068   79927 logs.go:276] 0 containers: []
	W0806 08:36:40.373095   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:36:40.373103   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:36:40.373163   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:36:40.408137   79927 cri.go:89] found id: ""
	I0806 08:36:40.408167   79927 logs.go:276] 0 containers: []
	W0806 08:36:40.408177   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:36:40.408184   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:36:40.408249   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:36:40.442996   79927 cri.go:89] found id: ""
	I0806 08:36:40.443035   79927 logs.go:276] 0 containers: []
	W0806 08:36:40.443043   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:36:40.443051   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:36:40.443063   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:36:40.499559   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:36:40.499599   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:36:40.514687   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:36:40.514716   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:36:40.586034   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:36:40.586059   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:36:40.586071   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:36:40.669756   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:36:40.669796   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:36:43.213555   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:36:43.229683   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:36:43.229751   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:36:43.267107   79927 cri.go:89] found id: ""
	I0806 08:36:43.267140   79927 logs.go:276] 0 containers: []
	W0806 08:36:43.267153   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:36:43.267161   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:36:43.267235   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:36:43.303405   79927 cri.go:89] found id: ""
	I0806 08:36:43.303435   79927 logs.go:276] 0 containers: []
	W0806 08:36:43.303443   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:36:43.303449   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:36:43.303509   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:36:43.338583   79927 cri.go:89] found id: ""
	I0806 08:36:43.338610   79927 logs.go:276] 0 containers: []
	W0806 08:36:43.338619   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:36:43.338628   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:36:43.338692   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:36:43.377413   79927 cri.go:89] found id: ""
	I0806 08:36:43.377441   79927 logs.go:276] 0 containers: []
	W0806 08:36:43.377449   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:36:43.377454   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:36:43.377501   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:36:43.413036   79927 cri.go:89] found id: ""
	I0806 08:36:43.413133   79927 logs.go:276] 0 containers: []
	W0806 08:36:43.413152   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:36:43.413160   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:36:43.413221   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:36:43.450216   79927 cri.go:89] found id: ""
	I0806 08:36:43.450245   79927 logs.go:276] 0 containers: []
	W0806 08:36:43.450253   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:36:43.450259   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:36:43.450305   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:36:43.485811   79927 cri.go:89] found id: ""
	I0806 08:36:43.485856   79927 logs.go:276] 0 containers: []
	W0806 08:36:43.485864   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:36:43.485870   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:36:43.485919   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:36:43.521586   79927 cri.go:89] found id: ""
	I0806 08:36:43.521617   79927 logs.go:276] 0 containers: []
	W0806 08:36:43.521628   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:36:43.521650   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:36:43.521673   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:36:43.561842   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:36:43.561889   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:36:43.617652   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:36:43.617683   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:36:43.631239   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:36:43.631267   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:36:43.707297   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:36:43.707317   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:36:43.707330   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:36:39.789019   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:42.287817   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:40.111420   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:42.112503   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:43.468602   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:45.974699   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
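	The interleaved pod_ready.go lines come from the other test clusters in this group polling their metrics-server pods, which never reach the Ready condition. An equivalent manual check, as a sketch (pod name copied from the log; it must be run against the matching cluster's kubeconfig/context, which is not shown here):

	    # print the Ready condition of the metrics-server pod the log is waiting on
	    kubectl -n kube-system get pod metrics-server-6867b74b74-qz4vx \
	      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'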
	I0806 08:36:46.287309   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:36:46.301917   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:36:46.301991   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:36:46.335040   79927 cri.go:89] found id: ""
	I0806 08:36:46.335069   79927 logs.go:276] 0 containers: []
	W0806 08:36:46.335080   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:36:46.335088   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:36:46.335177   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:36:46.371902   79927 cri.go:89] found id: ""
	I0806 08:36:46.371930   79927 logs.go:276] 0 containers: []
	W0806 08:36:46.371939   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:36:46.371944   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:36:46.372000   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:36:46.411945   79927 cri.go:89] found id: ""
	I0806 08:36:46.412119   79927 logs.go:276] 0 containers: []
	W0806 08:36:46.412134   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:36:46.412145   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:36:46.412248   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:36:46.448578   79927 cri.go:89] found id: ""
	I0806 08:36:46.448612   79927 logs.go:276] 0 containers: []
	W0806 08:36:46.448624   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:36:46.448631   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:36:46.448691   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:36:46.492231   79927 cri.go:89] found id: ""
	I0806 08:36:46.492263   79927 logs.go:276] 0 containers: []
	W0806 08:36:46.492276   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:36:46.492283   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:36:46.492338   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:36:46.538502   79927 cri.go:89] found id: ""
	I0806 08:36:46.538528   79927 logs.go:276] 0 containers: []
	W0806 08:36:46.538547   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:36:46.538555   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:36:46.538610   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:36:46.577750   79927 cri.go:89] found id: ""
	I0806 08:36:46.577778   79927 logs.go:276] 0 containers: []
	W0806 08:36:46.577785   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:36:46.577790   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:36:46.577872   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:36:46.611717   79927 cri.go:89] found id: ""
	I0806 08:36:46.611742   79927 logs.go:276] 0 containers: []
	W0806 08:36:46.611750   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:36:46.611758   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:36:46.611770   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:36:46.629144   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:36:46.629178   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:36:46.714703   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:36:46.714723   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:36:46.714736   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:36:46.798978   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:36:46.799025   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:36:46.837373   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:36:46.837407   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:36:44.288387   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:46.288470   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:48.788762   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:44.611739   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:46.615083   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:49.113041   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:48.469024   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:50.967906   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:49.393231   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:36:49.407372   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:36:49.407436   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:36:49.444210   79927 cri.go:89] found id: ""
	I0806 08:36:49.444240   79927 logs.go:276] 0 containers: []
	W0806 08:36:49.444249   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:36:49.444254   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:36:49.444313   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:36:49.484165   79927 cri.go:89] found id: ""
	I0806 08:36:49.484190   79927 logs.go:276] 0 containers: []
	W0806 08:36:49.484200   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:36:49.484206   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:36:49.484278   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:36:49.519880   79927 cri.go:89] found id: ""
	I0806 08:36:49.519921   79927 logs.go:276] 0 containers: []
	W0806 08:36:49.519932   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:36:49.519939   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:36:49.519999   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:36:49.554954   79927 cri.go:89] found id: ""
	I0806 08:36:49.554979   79927 logs.go:276] 0 containers: []
	W0806 08:36:49.554988   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:36:49.554995   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:36:49.555045   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:36:49.593332   79927 cri.go:89] found id: ""
	I0806 08:36:49.593359   79927 logs.go:276] 0 containers: []
	W0806 08:36:49.593367   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:36:49.593373   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:36:49.593432   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:36:49.630736   79927 cri.go:89] found id: ""
	I0806 08:36:49.630766   79927 logs.go:276] 0 containers: []
	W0806 08:36:49.630774   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:36:49.630780   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:36:49.630839   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:36:49.667460   79927 cri.go:89] found id: ""
	I0806 08:36:49.667487   79927 logs.go:276] 0 containers: []
	W0806 08:36:49.667499   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:36:49.667505   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:36:49.667566   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:36:49.700161   79927 cri.go:89] found id: ""
	I0806 08:36:49.700190   79927 logs.go:276] 0 containers: []
	W0806 08:36:49.700199   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:36:49.700207   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:36:49.700218   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:36:49.786656   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:36:49.786704   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:36:49.825711   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:36:49.825745   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:36:49.879696   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:36:49.879731   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:36:49.893336   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:36:49.893369   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:36:49.969462   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:36:52.469939   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:36:52.483906   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:36:52.483974   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:36:52.523575   79927 cri.go:89] found id: ""
	I0806 08:36:52.523603   79927 logs.go:276] 0 containers: []
	W0806 08:36:52.523615   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:36:52.523622   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:36:52.523684   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:36:52.561590   79927 cri.go:89] found id: ""
	I0806 08:36:52.561620   79927 logs.go:276] 0 containers: []
	W0806 08:36:52.561630   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:36:52.561639   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:36:52.561698   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:36:52.598204   79927 cri.go:89] found id: ""
	I0806 08:36:52.598227   79927 logs.go:276] 0 containers: []
	W0806 08:36:52.598236   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:36:52.598241   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:36:52.598302   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:36:52.636993   79927 cri.go:89] found id: ""
	I0806 08:36:52.637019   79927 logs.go:276] 0 containers: []
	W0806 08:36:52.637027   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:36:52.637034   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:36:52.637085   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:36:52.675090   79927 cri.go:89] found id: ""
	I0806 08:36:52.675118   79927 logs.go:276] 0 containers: []
	W0806 08:36:52.675130   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:36:52.675138   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:36:52.675199   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:36:52.711045   79927 cri.go:89] found id: ""
	I0806 08:36:52.711072   79927 logs.go:276] 0 containers: []
	W0806 08:36:52.711081   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:36:52.711086   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:36:52.711151   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:36:52.749001   79927 cri.go:89] found id: ""
	I0806 08:36:52.749029   79927 logs.go:276] 0 containers: []
	W0806 08:36:52.749039   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:36:52.749046   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:36:52.749113   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:36:52.783692   79927 cri.go:89] found id: ""
	I0806 08:36:52.783720   79927 logs.go:276] 0 containers: []
	W0806 08:36:52.783737   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:36:52.783748   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:36:52.783762   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:36:52.863240   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:36:52.863281   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:36:52.902471   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:36:52.902496   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:36:52.953299   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:36:52.953341   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:36:52.968786   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:36:52.968818   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:36:53.045904   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:36:51.288280   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:53.289276   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:51.612945   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:54.112574   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:52.968486   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:55.468458   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:55.546407   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:36:55.560166   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:36:55.560243   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:36:55.592961   79927 cri.go:89] found id: ""
	I0806 08:36:55.592992   79927 logs.go:276] 0 containers: []
	W0806 08:36:55.593004   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:36:55.593012   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:36:55.593075   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:36:55.629546   79927 cri.go:89] found id: ""
	I0806 08:36:55.629568   79927 logs.go:276] 0 containers: []
	W0806 08:36:55.629578   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:36:55.629585   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:36:55.629639   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:36:55.668677   79927 cri.go:89] found id: ""
	I0806 08:36:55.668707   79927 logs.go:276] 0 containers: []
	W0806 08:36:55.668718   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:36:55.668726   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:36:55.668784   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:36:55.704141   79927 cri.go:89] found id: ""
	I0806 08:36:55.704173   79927 logs.go:276] 0 containers: []
	W0806 08:36:55.704183   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:36:55.704188   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:36:55.704257   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:36:55.740125   79927 cri.go:89] found id: ""
	I0806 08:36:55.740152   79927 logs.go:276] 0 containers: []
	W0806 08:36:55.740160   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:36:55.740166   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:36:55.740210   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:36:55.773709   79927 cri.go:89] found id: ""
	I0806 08:36:55.773739   79927 logs.go:276] 0 containers: []
	W0806 08:36:55.773750   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:36:55.773759   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:36:55.773827   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:36:55.813913   79927 cri.go:89] found id: ""
	I0806 08:36:55.813940   79927 logs.go:276] 0 containers: []
	W0806 08:36:55.813953   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:36:55.813959   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:36:55.814004   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:36:55.850807   79927 cri.go:89] found id: ""
	I0806 08:36:55.850841   79927 logs.go:276] 0 containers: []
	W0806 08:36:55.850853   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:36:55.850864   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:36:55.850878   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:36:55.908202   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:36:55.908247   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:36:55.922278   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:36:55.922311   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:36:55.996448   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:36:55.996476   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:36:55.996489   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:36:56.076612   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:36:56.076647   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:36:58.625610   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:36:58.640939   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:36:58.641008   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:36:58.675638   79927 cri.go:89] found id: ""
	I0806 08:36:58.675666   79927 logs.go:276] 0 containers: []
	W0806 08:36:58.675674   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:36:58.675680   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:36:58.675728   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:36:58.712020   79927 cri.go:89] found id: ""
	I0806 08:36:58.712055   79927 logs.go:276] 0 containers: []
	W0806 08:36:58.712065   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:36:58.712072   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:36:58.712126   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:36:58.749683   79927 cri.go:89] found id: ""
	I0806 08:36:58.749712   79927 logs.go:276] 0 containers: []
	W0806 08:36:58.749724   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:36:58.749731   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:36:58.749792   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:36:55.788883   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:58.288825   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:56.613051   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:59.113176   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:57.968553   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:59.968580   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:58.790005   79927 cri.go:89] found id: ""
	I0806 08:36:58.790026   79927 logs.go:276] 0 containers: []
	W0806 08:36:58.790033   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:36:58.790039   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:36:58.790089   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:36:58.828781   79927 cri.go:89] found id: ""
	I0806 08:36:58.828806   79927 logs.go:276] 0 containers: []
	W0806 08:36:58.828815   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:36:58.828821   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:36:58.828873   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:36:58.867039   79927 cri.go:89] found id: ""
	I0806 08:36:58.867067   79927 logs.go:276] 0 containers: []
	W0806 08:36:58.867075   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:36:58.867081   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:36:58.867125   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:36:58.901362   79927 cri.go:89] found id: ""
	I0806 08:36:58.901396   79927 logs.go:276] 0 containers: []
	W0806 08:36:58.901415   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:36:58.901423   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:36:58.901472   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:36:58.940351   79927 cri.go:89] found id: ""
	I0806 08:36:58.940388   79927 logs.go:276] 0 containers: []
	W0806 08:36:58.940399   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:36:58.940410   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:36:58.940425   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:36:58.995737   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:36:58.995776   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:36:59.009191   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:36:59.009222   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:36:59.085109   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:36:59.085133   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:36:59.085150   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:36:59.164778   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:36:59.164819   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:37:01.712282   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:37:01.725701   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:37:01.725759   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:37:01.762005   79927 cri.go:89] found id: ""
	I0806 08:37:01.762029   79927 logs.go:276] 0 containers: []
	W0806 08:37:01.762038   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:37:01.762044   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:37:01.762092   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:37:01.799277   79927 cri.go:89] found id: ""
	I0806 08:37:01.799304   79927 logs.go:276] 0 containers: []
	W0806 08:37:01.799316   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:37:01.799322   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:37:01.799391   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:37:01.845100   79927 cri.go:89] found id: ""
	I0806 08:37:01.845133   79927 logs.go:276] 0 containers: []
	W0806 08:37:01.845147   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:37:01.845154   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:37:01.845214   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:37:01.882019   79927 cri.go:89] found id: ""
	I0806 08:37:01.882050   79927 logs.go:276] 0 containers: []
	W0806 08:37:01.882061   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:37:01.882069   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:37:01.882160   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:37:01.919224   79927 cri.go:89] found id: ""
	I0806 08:37:01.919251   79927 logs.go:276] 0 containers: []
	W0806 08:37:01.919262   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:37:01.919269   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:37:01.919327   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:37:01.957300   79927 cri.go:89] found id: ""
	I0806 08:37:01.957327   79927 logs.go:276] 0 containers: []
	W0806 08:37:01.957335   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:37:01.957342   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:37:01.957389   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:37:02.000717   79927 cri.go:89] found id: ""
	I0806 08:37:02.000740   79927 logs.go:276] 0 containers: []
	W0806 08:37:02.000749   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:37:02.000754   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:37:02.000801   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:37:02.034710   79927 cri.go:89] found id: ""
	I0806 08:37:02.034739   79927 logs.go:276] 0 containers: []
	W0806 08:37:02.034753   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:37:02.034763   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:37:02.034776   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:37:02.090553   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:37:02.090593   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:37:02.104247   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:37:02.104276   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:37:02.181990   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:37:02.182015   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:37:02.182030   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:37:02.268127   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:37:02.268167   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:37:00.789157   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:03.288510   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:01.611956   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:04.111772   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:02.467718   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:04.968898   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:04.814144   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:37:04.829120   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:37:04.829194   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:37:04.873503   79927 cri.go:89] found id: ""
	I0806 08:37:04.873537   79927 logs.go:276] 0 containers: []
	W0806 08:37:04.873548   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:37:04.873557   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:37:04.873614   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:37:04.913878   79927 cri.go:89] found id: ""
	I0806 08:37:04.913910   79927 logs.go:276] 0 containers: []
	W0806 08:37:04.913918   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:37:04.913924   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:37:04.913984   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:37:04.950273   79927 cri.go:89] found id: ""
	I0806 08:37:04.950303   79927 logs.go:276] 0 containers: []
	W0806 08:37:04.950316   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:37:04.950323   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:37:04.950388   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:37:04.985108   79927 cri.go:89] found id: ""
	I0806 08:37:04.985135   79927 logs.go:276] 0 containers: []
	W0806 08:37:04.985145   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:37:04.985151   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:37:04.985215   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:37:05.021033   79927 cri.go:89] found id: ""
	I0806 08:37:05.021056   79927 logs.go:276] 0 containers: []
	W0806 08:37:05.021064   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:37:05.021069   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:37:05.021120   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:37:05.058416   79927 cri.go:89] found id: ""
	I0806 08:37:05.058440   79927 logs.go:276] 0 containers: []
	W0806 08:37:05.058448   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:37:05.058454   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:37:05.058510   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:37:05.093709   79927 cri.go:89] found id: ""
	I0806 08:37:05.093737   79927 logs.go:276] 0 containers: []
	W0806 08:37:05.093745   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:37:05.093751   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:37:05.093815   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:37:05.129391   79927 cri.go:89] found id: ""
	I0806 08:37:05.129416   79927 logs.go:276] 0 containers: []
	W0806 08:37:05.129424   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:37:05.129432   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:37:05.129444   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:37:05.172149   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:37:05.172176   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:37:05.224745   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:37:05.224781   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:37:05.238967   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:37:05.238997   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:37:05.314834   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:37:05.314868   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:37:05.314883   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:37:07.903732   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:37:07.918551   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:37:07.918619   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:37:07.966068   79927 cri.go:89] found id: ""
	I0806 08:37:07.966093   79927 logs.go:276] 0 containers: []
	W0806 08:37:07.966104   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:37:07.966112   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:37:07.966180   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:37:08.001557   79927 cri.go:89] found id: ""
	I0806 08:37:08.001586   79927 logs.go:276] 0 containers: []
	W0806 08:37:08.001595   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:37:08.001602   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:37:08.001663   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:37:08.034214   79927 cri.go:89] found id: ""
	I0806 08:37:08.034239   79927 logs.go:276] 0 containers: []
	W0806 08:37:08.034249   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:37:08.034256   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:37:08.034316   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:37:08.070645   79927 cri.go:89] found id: ""
	I0806 08:37:08.070673   79927 logs.go:276] 0 containers: []
	W0806 08:37:08.070683   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:37:08.070689   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:37:08.070761   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:37:08.110537   79927 cri.go:89] found id: ""
	I0806 08:37:08.110565   79927 logs.go:276] 0 containers: []
	W0806 08:37:08.110574   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:37:08.110581   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:37:08.110635   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:37:08.146179   79927 cri.go:89] found id: ""
	I0806 08:37:08.146203   79927 logs.go:276] 0 containers: []
	W0806 08:37:08.146211   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:37:08.146224   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:37:08.146272   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:37:08.182924   79927 cri.go:89] found id: ""
	I0806 08:37:08.182953   79927 logs.go:276] 0 containers: []
	W0806 08:37:08.182962   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:37:08.182968   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:37:08.183024   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:37:08.219547   79927 cri.go:89] found id: ""
	I0806 08:37:08.219570   79927 logs.go:276] 0 containers: []
	W0806 08:37:08.219576   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:37:08.219585   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:37:08.219599   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:37:08.298919   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:37:08.298956   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:37:08.338475   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:37:08.338512   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:37:08.390859   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:37:08.390898   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:37:08.404854   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:37:08.404891   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:37:08.473553   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:37:05.788534   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:08.288635   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:06.112204   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:08.612960   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:06.969444   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:09.468423   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:11.469280   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:10.973824   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:37:10.987720   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:37:10.987777   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:37:11.022637   79927 cri.go:89] found id: ""
	I0806 08:37:11.022667   79927 logs.go:276] 0 containers: []
	W0806 08:37:11.022677   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:37:11.022683   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:37:11.022739   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:37:11.062625   79927 cri.go:89] found id: ""
	I0806 08:37:11.062655   79927 logs.go:276] 0 containers: []
	W0806 08:37:11.062673   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:37:11.062681   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:37:11.062736   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:37:11.099906   79927 cri.go:89] found id: ""
	I0806 08:37:11.099937   79927 logs.go:276] 0 containers: []
	W0806 08:37:11.099945   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:37:11.099951   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:37:11.100019   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:37:11.139461   79927 cri.go:89] found id: ""
	I0806 08:37:11.139485   79927 logs.go:276] 0 containers: []
	W0806 08:37:11.139493   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:37:11.139498   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:37:11.139541   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:37:11.177916   79927 cri.go:89] found id: ""
	I0806 08:37:11.177940   79927 logs.go:276] 0 containers: []
	W0806 08:37:11.177948   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:37:11.177954   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:37:11.178007   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:37:11.216335   79927 cri.go:89] found id: ""
	I0806 08:37:11.216377   79927 logs.go:276] 0 containers: []
	W0806 08:37:11.216389   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:37:11.216398   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:37:11.216453   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:37:11.278457   79927 cri.go:89] found id: ""
	I0806 08:37:11.278479   79927 logs.go:276] 0 containers: []
	W0806 08:37:11.278487   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:37:11.278493   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:37:11.278541   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:37:11.316325   79927 cri.go:89] found id: ""
	I0806 08:37:11.316348   79927 logs.go:276] 0 containers: []
	W0806 08:37:11.316356   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:37:11.316381   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:37:11.316397   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:37:11.357435   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:37:11.357469   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:37:11.413151   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:37:11.413184   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:37:11.427149   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:37:11.427182   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:37:11.500691   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:37:11.500723   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:37:11.500738   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:37:10.787717   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:12.788263   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:11.113471   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:13.612216   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:13.968959   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:16.468099   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:14.077817   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:37:14.090888   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:37:14.090960   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:37:14.126454   79927 cri.go:89] found id: ""
	I0806 08:37:14.126476   79927 logs.go:276] 0 containers: []
	W0806 08:37:14.126484   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:37:14.126489   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:37:14.126535   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:37:14.163910   79927 cri.go:89] found id: ""
	I0806 08:37:14.163939   79927 logs.go:276] 0 containers: []
	W0806 08:37:14.163950   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:37:14.163957   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:37:14.164025   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:37:14.202918   79927 cri.go:89] found id: ""
	I0806 08:37:14.202942   79927 logs.go:276] 0 containers: []
	W0806 08:37:14.202950   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:37:14.202956   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:37:14.202999   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:37:14.240997   79927 cri.go:89] found id: ""
	I0806 08:37:14.241021   79927 logs.go:276] 0 containers: []
	W0806 08:37:14.241030   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:37:14.241036   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:37:14.241083   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:37:14.277424   79927 cri.go:89] found id: ""
	I0806 08:37:14.277448   79927 logs.go:276] 0 containers: []
	W0806 08:37:14.277465   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:37:14.277473   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:37:14.277545   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:37:14.317402   79927 cri.go:89] found id: ""
	I0806 08:37:14.317426   79927 logs.go:276] 0 containers: []
	W0806 08:37:14.317435   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:37:14.317440   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:37:14.317487   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:37:14.364310   79927 cri.go:89] found id: ""
	I0806 08:37:14.364341   79927 logs.go:276] 0 containers: []
	W0806 08:37:14.364351   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:37:14.364358   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:37:14.364445   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:37:14.398836   79927 cri.go:89] found id: ""
	I0806 08:37:14.398866   79927 logs.go:276] 0 containers: []
	W0806 08:37:14.398874   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:37:14.398891   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:37:14.398904   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:37:14.450550   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:37:14.450586   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:37:14.467190   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:37:14.467220   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:37:14.538786   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:37:14.538809   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:37:14.538824   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:37:14.615049   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:37:14.615080   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:37:17.155906   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:37:17.170401   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:37:17.170479   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:37:17.208276   79927 cri.go:89] found id: ""
	I0806 08:37:17.208311   79927 logs.go:276] 0 containers: []
	W0806 08:37:17.208322   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:37:17.208330   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:37:17.208403   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:37:17.258671   79927 cri.go:89] found id: ""
	I0806 08:37:17.258696   79927 logs.go:276] 0 containers: []
	W0806 08:37:17.258705   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:37:17.258711   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:37:17.258771   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:37:17.296999   79927 cri.go:89] found id: ""
	I0806 08:37:17.297027   79927 logs.go:276] 0 containers: []
	W0806 08:37:17.297035   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:37:17.297041   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:37:17.297098   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:37:17.335818   79927 cri.go:89] found id: ""
	I0806 08:37:17.335841   79927 logs.go:276] 0 containers: []
	W0806 08:37:17.335849   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:37:17.335855   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:37:17.335903   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:37:17.371650   79927 cri.go:89] found id: ""
	I0806 08:37:17.371674   79927 logs.go:276] 0 containers: []
	W0806 08:37:17.371682   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:37:17.371688   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:37:17.371737   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:37:17.408413   79927 cri.go:89] found id: ""
	I0806 08:37:17.408440   79927 logs.go:276] 0 containers: []
	W0806 08:37:17.408448   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:37:17.408454   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:37:17.408502   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:37:17.443877   79927 cri.go:89] found id: ""
	I0806 08:37:17.443909   79927 logs.go:276] 0 containers: []
	W0806 08:37:17.443920   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:37:17.443928   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:37:17.443988   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:37:17.484292   79927 cri.go:89] found id: ""
	I0806 08:37:17.484321   79927 logs.go:276] 0 containers: []
	W0806 08:37:17.484331   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:37:17.484341   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:37:17.484356   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:37:17.541365   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:37:17.541418   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:37:17.555961   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:37:17.555993   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:37:17.625904   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:37:17.625923   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:37:17.625941   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:37:17.708997   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:37:17.709045   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:37:14.788822   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:17.288567   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:16.112034   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:18.612595   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:18.468580   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:20.967605   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:20.250351   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:37:20.264560   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:37:20.264625   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:37:20.301216   79927 cri.go:89] found id: ""
	I0806 08:37:20.301244   79927 logs.go:276] 0 containers: []
	W0806 08:37:20.301252   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:37:20.301257   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:37:20.301317   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:37:20.345820   79927 cri.go:89] found id: ""
	I0806 08:37:20.345849   79927 logs.go:276] 0 containers: []
	W0806 08:37:20.345860   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:37:20.345867   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:37:20.345923   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:37:20.382333   79927 cri.go:89] found id: ""
	I0806 08:37:20.382357   79927 logs.go:276] 0 containers: []
	W0806 08:37:20.382365   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:37:20.382371   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:37:20.382426   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:37:20.420193   79927 cri.go:89] found id: ""
	I0806 08:37:20.420228   79927 logs.go:276] 0 containers: []
	W0806 08:37:20.420238   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:37:20.420249   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:37:20.420316   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:37:20.456200   79927 cri.go:89] found id: ""
	I0806 08:37:20.456227   79927 logs.go:276] 0 containers: []
	W0806 08:37:20.456235   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:37:20.456241   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:37:20.456298   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:37:20.493321   79927 cri.go:89] found id: ""
	I0806 08:37:20.493349   79927 logs.go:276] 0 containers: []
	W0806 08:37:20.493356   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:37:20.493362   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:37:20.493414   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:37:20.531007   79927 cri.go:89] found id: ""
	I0806 08:37:20.531034   79927 logs.go:276] 0 containers: []
	W0806 08:37:20.531044   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:37:20.531051   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:37:20.531118   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:37:20.572014   79927 cri.go:89] found id: ""
	I0806 08:37:20.572044   79927 logs.go:276] 0 containers: []
	W0806 08:37:20.572062   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:37:20.572071   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:37:20.572083   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:37:20.623368   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:37:20.623399   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:37:20.638941   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:37:20.638984   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:37:20.717287   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:37:20.717313   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:37:20.717324   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:37:20.800157   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:37:20.800192   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:37:23.342061   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:37:23.356954   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:37:23.357026   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:37:23.391415   79927 cri.go:89] found id: ""
	I0806 08:37:23.391445   79927 logs.go:276] 0 containers: []
	W0806 08:37:23.391456   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:37:23.391463   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:37:23.391526   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:37:23.426460   79927 cri.go:89] found id: ""
	I0806 08:37:23.426482   79927 logs.go:276] 0 containers: []
	W0806 08:37:23.426491   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:37:23.426496   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:37:23.426541   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:37:23.461078   79927 cri.go:89] found id: ""
	I0806 08:37:23.461111   79927 logs.go:276] 0 containers: []
	W0806 08:37:23.461122   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:37:23.461129   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:37:23.461183   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:37:23.496566   79927 cri.go:89] found id: ""
	I0806 08:37:23.496594   79927 logs.go:276] 0 containers: []
	W0806 08:37:23.496602   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:37:23.496607   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:37:23.496665   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:37:23.529097   79927 cri.go:89] found id: ""
	I0806 08:37:23.529123   79927 logs.go:276] 0 containers: []
	W0806 08:37:23.529134   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:37:23.529153   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:37:23.529243   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:37:23.563690   79927 cri.go:89] found id: ""
	I0806 08:37:23.563720   79927 logs.go:276] 0 containers: []
	W0806 08:37:23.563730   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:37:23.563737   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:37:23.563796   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:37:23.598787   79927 cri.go:89] found id: ""
	I0806 08:37:23.598861   79927 logs.go:276] 0 containers: []
	W0806 08:37:23.598875   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:37:23.598884   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:37:23.598942   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:37:23.655675   79927 cri.go:89] found id: ""
	I0806 08:37:23.655705   79927 logs.go:276] 0 containers: []
	W0806 08:37:23.655715   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:37:23.655723   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:37:23.655734   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:37:23.715853   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:37:23.715887   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:37:23.735805   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:37:23.735844   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 08:37:19.288768   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:21.788292   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:23.788808   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:20.612865   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:22.613432   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:22.968064   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:24.968171   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	W0806 08:37:23.815238   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:37:23.815261   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:37:23.815274   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:37:23.899436   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:37:23.899479   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:37:26.441837   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:37:26.456410   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:37:26.456472   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:37:26.497707   79927 cri.go:89] found id: ""
	I0806 08:37:26.497733   79927 logs.go:276] 0 containers: []
	W0806 08:37:26.497741   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:37:26.497746   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:37:26.497795   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:37:26.536101   79927 cri.go:89] found id: ""
	I0806 08:37:26.536126   79927 logs.go:276] 0 containers: []
	W0806 08:37:26.536133   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:37:26.536145   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:37:26.536190   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:37:26.573067   79927 cri.go:89] found id: ""
	I0806 08:37:26.573091   79927 logs.go:276] 0 containers: []
	W0806 08:37:26.573102   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:37:26.573110   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:37:26.573162   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:37:26.609820   79927 cri.go:89] found id: ""
	I0806 08:37:26.609846   79927 logs.go:276] 0 containers: []
	W0806 08:37:26.609854   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:37:26.609860   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:37:26.609914   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:37:26.645721   79927 cri.go:89] found id: ""
	I0806 08:37:26.645748   79927 logs.go:276] 0 containers: []
	W0806 08:37:26.645758   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:37:26.645764   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:37:26.645825   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:37:26.679941   79927 cri.go:89] found id: ""
	I0806 08:37:26.679988   79927 logs.go:276] 0 containers: []
	W0806 08:37:26.679999   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:37:26.680006   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:37:26.680068   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:37:26.717247   79927 cri.go:89] found id: ""
	I0806 08:37:26.717277   79927 logs.go:276] 0 containers: []
	W0806 08:37:26.717288   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:37:26.717295   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:37:26.717382   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:37:26.760157   79927 cri.go:89] found id: ""
	I0806 08:37:26.760184   79927 logs.go:276] 0 containers: []
	W0806 08:37:26.760194   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:37:26.760205   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:37:26.760221   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:37:26.816034   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:37:26.816071   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:37:26.830141   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:37:26.830170   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:37:26.900078   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:37:26.900103   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:37:26.900121   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:37:26.984460   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:37:26.984501   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:37:26.287692   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:28.288990   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:25.112666   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:27.611170   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:27.466925   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:29.469989   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:29.527184   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:37:29.541974   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:37:29.542052   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:37:29.576062   79927 cri.go:89] found id: ""
	I0806 08:37:29.576095   79927 logs.go:276] 0 containers: []
	W0806 08:37:29.576107   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:37:29.576114   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:37:29.576176   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:37:29.610841   79927 cri.go:89] found id: ""
	I0806 08:37:29.610865   79927 logs.go:276] 0 containers: []
	W0806 08:37:29.610881   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:37:29.610892   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:37:29.610948   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:37:29.650326   79927 cri.go:89] found id: ""
	I0806 08:37:29.650350   79927 logs.go:276] 0 containers: []
	W0806 08:37:29.650360   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:37:29.650367   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:37:29.650434   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:37:29.689382   79927 cri.go:89] found id: ""
	I0806 08:37:29.689418   79927 logs.go:276] 0 containers: []
	W0806 08:37:29.689429   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:37:29.689436   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:37:29.689495   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:37:29.725893   79927 cri.go:89] found id: ""
	I0806 08:37:29.725918   79927 logs.go:276] 0 containers: []
	W0806 08:37:29.725926   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:37:29.725932   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:37:29.725980   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:37:29.767354   79927 cri.go:89] found id: ""
	I0806 08:37:29.767379   79927 logs.go:276] 0 containers: []
	W0806 08:37:29.767386   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:37:29.767392   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:37:29.767441   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:37:29.811139   79927 cri.go:89] found id: ""
	I0806 08:37:29.811161   79927 logs.go:276] 0 containers: []
	W0806 08:37:29.811169   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:37:29.811174   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:37:29.811219   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:37:29.848252   79927 cri.go:89] found id: ""
	I0806 08:37:29.848285   79927 logs.go:276] 0 containers: []
	W0806 08:37:29.848294   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:37:29.848303   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:37:29.848315   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:37:29.899850   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:37:29.899884   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:37:29.913744   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:37:29.913774   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:37:29.988016   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:37:29.988039   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:37:29.988054   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:37:30.069701   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:37:30.069735   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
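The cri.go/logs.go lines above repeatedly list CRI containers for every control-plane component and find none. A minimal Go sketch of that kind of per-component check, shelling out to crictl the same way the log records (illustrative only; the component list, the helper name and the use of os/exec are assumptions, not minikube's actual cri.go):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainerIDs runs `crictl ps -a --quiet --name=<component>`, which prints
// one container ID per line, or nothing when no matching container exists.
func listContainerIDs(component string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for _, c := range components {
		ids, err := listContainerIDs(c)
		if err != nil {
			fmt.Printf("listing %q failed: %v\n", c, err)
			continue
		}
		if len(ids) == 0 {
			// Mirrors the warnings in the log: no container found for this component.
			fmt.Printf("no container was found matching %q\n", c)
			continue
		}
		fmt.Printf("%q containers: %v\n", c, ids)
	}
}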
	I0806 08:37:32.606958   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:37:32.622621   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:37:32.622684   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:37:32.658816   79927 cri.go:89] found id: ""
	I0806 08:37:32.658844   79927 logs.go:276] 0 containers: []
	W0806 08:37:32.658855   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:37:32.658862   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:37:32.658928   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:37:32.697734   79927 cri.go:89] found id: ""
	I0806 08:37:32.697769   79927 logs.go:276] 0 containers: []
	W0806 08:37:32.697780   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:37:32.697786   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:37:32.697868   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:37:32.735540   79927 cri.go:89] found id: ""
	I0806 08:37:32.735572   79927 logs.go:276] 0 containers: []
	W0806 08:37:32.735584   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:37:32.735591   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:37:32.735658   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:37:32.771319   79927 cri.go:89] found id: ""
	I0806 08:37:32.771343   79927 logs.go:276] 0 containers: []
	W0806 08:37:32.771357   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:37:32.771365   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:37:32.771421   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:37:32.808137   79927 cri.go:89] found id: ""
	I0806 08:37:32.808162   79927 logs.go:276] 0 containers: []
	W0806 08:37:32.808172   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:37:32.808179   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:37:32.808234   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:37:32.843307   79927 cri.go:89] found id: ""
	I0806 08:37:32.843333   79927 logs.go:276] 0 containers: []
	W0806 08:37:32.843343   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:37:32.843350   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:37:32.843404   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:37:32.883855   79927 cri.go:89] found id: ""
	I0806 08:37:32.883922   79927 logs.go:276] 0 containers: []
	W0806 08:37:32.883935   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:37:32.883944   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:37:32.884014   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:37:32.920965   79927 cri.go:89] found id: ""
	I0806 08:37:32.921008   79927 logs.go:276] 0 containers: []
	W0806 08:37:32.921019   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:37:32.921027   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:37:32.921039   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:37:32.974933   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:37:32.974966   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:37:32.989326   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:37:32.989352   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:37:33.058274   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:37:33.058302   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:37:33.058314   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:37:33.136632   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:37:33.136670   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:37:30.789771   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:33.289001   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:29.613189   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:32.111086   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:34.112398   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:31.973697   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:34.469358   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
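The interleaved pod_ready.go entries are polling metrics-server pods whose Ready condition stays False. A rough client-go sketch of such a readiness check (the kubeconfig path and pod name are placeholders taken from the log; this is not minikube's pod_ready.go):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's PodReady condition is True.
func podReady(client kubernetes.Interface, namespace, name string) (bool, error) {
	pod, err := client.CoreV1().Pods(namespace).Get(context.Background(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	// Placeholder kubeconfig path; a real caller would point at the cluster under test.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ready, err := podReady(client, "kube-system", "metrics-server-569cc877fc-hnxd4")
	fmt.Println("ready:", ready, "err:", err)
}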
	I0806 08:37:35.680315   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:37:35.694438   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:37:35.694497   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:37:35.729541   79927 cri.go:89] found id: ""
	I0806 08:37:35.729569   79927 logs.go:276] 0 containers: []
	W0806 08:37:35.729580   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:37:35.729587   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:37:35.729650   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:37:35.765783   79927 cri.go:89] found id: ""
	I0806 08:37:35.765813   79927 logs.go:276] 0 containers: []
	W0806 08:37:35.765822   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:37:35.765827   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:37:35.765885   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:37:35.802975   79927 cri.go:89] found id: ""
	I0806 08:37:35.803002   79927 logs.go:276] 0 containers: []
	W0806 08:37:35.803010   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:37:35.803016   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:37:35.803062   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:37:35.838588   79927 cri.go:89] found id: ""
	I0806 08:37:35.838614   79927 logs.go:276] 0 containers: []
	W0806 08:37:35.838623   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:37:35.838629   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:37:35.838675   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:37:35.874413   79927 cri.go:89] found id: ""
	I0806 08:37:35.874442   79927 logs.go:276] 0 containers: []
	W0806 08:37:35.874452   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:37:35.874460   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:37:35.874528   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:37:35.913816   79927 cri.go:89] found id: ""
	I0806 08:37:35.913852   79927 logs.go:276] 0 containers: []
	W0806 08:37:35.913871   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:37:35.913885   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:37:35.913963   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:37:35.951719   79927 cri.go:89] found id: ""
	I0806 08:37:35.951750   79927 logs.go:276] 0 containers: []
	W0806 08:37:35.951761   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:37:35.951770   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:37:35.951828   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:37:35.987720   79927 cri.go:89] found id: ""
	I0806 08:37:35.987749   79927 logs.go:276] 0 containers: []
	W0806 08:37:35.987760   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:37:35.987769   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:37:35.987780   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:37:36.043300   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:37:36.043339   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:37:36.057138   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:37:36.057169   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:37:36.126382   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:37:36.126399   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:37:36.126415   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:37:36.205243   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:37:36.205284   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
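Each "Gathering logs for ..." pass above shells out to journalctl, dmesg and kubectl and keeps going when one collector fails (as describe nodes does here). A simplified Go sketch of that best-effort collection, reusing the commands visible in the log (the gather helper and its ordering are illustrative, not the logs.go implementation):

package main

import (
	"fmt"
	"os/exec"
)

// gather runs one diagnostic command through bash and prints its combined
// output; a failure is reported but does not stop the remaining collectors.
func gather(label, cmd string) {
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	if err != nil {
		fmt.Printf("gathering %s failed: %v\n%s\n", label, err, out)
		return
	}
	fmt.Printf("=== %s ===\n%s\n", label, out)
}

func main() {
	gather("kubelet", "sudo journalctl -u kubelet -n 400")
	gather("dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
	gather("describe nodes", "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig")
	gather("CRI-O", "sudo journalctl -u crio -n 400")
	gather("container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
}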
	I0806 08:37:38.747978   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:37:35.289346   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:37.788785   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:36.114392   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:38.612174   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:36.967654   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:38.969141   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:40.969752   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:38.762123   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:37:38.762190   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:37:38.805074   79927 cri.go:89] found id: ""
	I0806 08:37:38.805101   79927 logs.go:276] 0 containers: []
	W0806 08:37:38.805111   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:37:38.805117   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:37:38.805185   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:37:38.839305   79927 cri.go:89] found id: ""
	I0806 08:37:38.839330   79927 logs.go:276] 0 containers: []
	W0806 08:37:38.839338   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:37:38.839344   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:37:38.839406   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:37:38.874174   79927 cri.go:89] found id: ""
	I0806 08:37:38.874203   79927 logs.go:276] 0 containers: []
	W0806 08:37:38.874214   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:37:38.874221   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:37:38.874277   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:37:38.909607   79927 cri.go:89] found id: ""
	I0806 08:37:38.909630   79927 logs.go:276] 0 containers: []
	W0806 08:37:38.909639   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:37:38.909645   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:37:38.909702   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:37:38.945584   79927 cri.go:89] found id: ""
	I0806 08:37:38.945615   79927 logs.go:276] 0 containers: []
	W0806 08:37:38.945626   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:37:38.945634   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:37:38.945696   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:37:38.988993   79927 cri.go:89] found id: ""
	I0806 08:37:38.989022   79927 logs.go:276] 0 containers: []
	W0806 08:37:38.989033   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:37:38.989041   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:37:38.989100   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:37:39.028902   79927 cri.go:89] found id: ""
	I0806 08:37:39.028934   79927 logs.go:276] 0 containers: []
	W0806 08:37:39.028942   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:37:39.028948   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:37:39.029005   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:37:39.070163   79927 cri.go:89] found id: ""
	I0806 08:37:39.070192   79927 logs.go:276] 0 containers: []
	W0806 08:37:39.070202   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:37:39.070213   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:37:39.070229   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:37:39.124891   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:37:39.124927   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:37:39.139898   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:37:39.139926   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:37:39.216157   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:37:39.216181   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:37:39.216196   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:37:39.301089   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:37:39.301136   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:37:41.840165   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:37:41.855659   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:37:41.855792   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:37:41.892620   79927 cri.go:89] found id: ""
	I0806 08:37:41.892648   79927 logs.go:276] 0 containers: []
	W0806 08:37:41.892660   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:37:41.892668   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:37:41.892728   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:37:41.929064   79927 cri.go:89] found id: ""
	I0806 08:37:41.929097   79927 logs.go:276] 0 containers: []
	W0806 08:37:41.929107   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:37:41.929114   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:37:41.929165   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:37:41.968334   79927 cri.go:89] found id: ""
	I0806 08:37:41.968371   79927 logs.go:276] 0 containers: []
	W0806 08:37:41.968382   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:37:41.968389   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:37:41.968450   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:37:42.008566   79927 cri.go:89] found id: ""
	I0806 08:37:42.008587   79927 logs.go:276] 0 containers: []
	W0806 08:37:42.008595   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:37:42.008601   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:37:42.008649   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:37:42.044281   79927 cri.go:89] found id: ""
	I0806 08:37:42.044308   79927 logs.go:276] 0 containers: []
	W0806 08:37:42.044315   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:37:42.044321   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:37:42.044392   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:37:42.081987   79927 cri.go:89] found id: ""
	I0806 08:37:42.082012   79927 logs.go:276] 0 containers: []
	W0806 08:37:42.082020   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:37:42.082026   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:37:42.082072   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:37:42.120818   79927 cri.go:89] found id: ""
	I0806 08:37:42.120847   79927 logs.go:276] 0 containers: []
	W0806 08:37:42.120857   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:37:42.120865   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:37:42.120926   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:37:42.160685   79927 cri.go:89] found id: ""
	I0806 08:37:42.160719   79927 logs.go:276] 0 containers: []
	W0806 08:37:42.160731   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:37:42.160742   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:37:42.160759   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:37:42.236322   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:37:42.236344   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:37:42.236357   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:37:42.328488   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:37:42.328526   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:37:42.369520   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:37:42.369545   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:37:42.425449   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:37:42.425490   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:37:39.788990   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:41.789236   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:43.789318   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:40.614326   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:43.112061   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:43.469284   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:45.469725   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
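Every retry cycle starts with "sudo pgrep -xnf kube-apiserver.*minikube.*" to see whether an apiserver process exists before probing the container runtime. A hypothetical equivalent of that check (the exit-status handling relies on pgrep's documented behavior of exiting 1 when nothing matches; this is not the ssh_runner code):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// pgrep exits 0 when at least one process matches and 1 when none do.
	cmd := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*")
	out, err := cmd.Output()
	if err != nil {
		if exitErr, ok := err.(*exec.ExitError); ok && exitErr.ExitCode() == 1 {
			fmt.Println("no kube-apiserver process found; falling back to crictl checks")
			return
		}
		fmt.Println("pgrep failed:", err)
		return
	}
	fmt.Printf("kube-apiserver PID(s): %s", out)
}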
	I0806 08:37:44.941557   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:37:44.954984   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:37:44.955043   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:37:44.992186   79927 cri.go:89] found id: ""
	I0806 08:37:44.992216   79927 logs.go:276] 0 containers: []
	W0806 08:37:44.992228   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:37:44.992234   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:37:44.992285   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:37:45.027114   79927 cri.go:89] found id: ""
	I0806 08:37:45.027143   79927 logs.go:276] 0 containers: []
	W0806 08:37:45.027154   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:37:45.027161   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:37:45.027222   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:37:45.062629   79927 cri.go:89] found id: ""
	I0806 08:37:45.062660   79927 logs.go:276] 0 containers: []
	W0806 08:37:45.062672   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:37:45.062679   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:37:45.062741   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:37:45.097499   79927 cri.go:89] found id: ""
	I0806 08:37:45.097530   79927 logs.go:276] 0 containers: []
	W0806 08:37:45.097541   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:37:45.097548   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:37:45.097605   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:37:45.137130   79927 cri.go:89] found id: ""
	I0806 08:37:45.137162   79927 logs.go:276] 0 containers: []
	W0806 08:37:45.137184   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:37:45.137191   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:37:45.137264   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:37:45.170617   79927 cri.go:89] found id: ""
	I0806 08:37:45.170642   79927 logs.go:276] 0 containers: []
	W0806 08:37:45.170659   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:37:45.170667   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:37:45.170726   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:37:45.205729   79927 cri.go:89] found id: ""
	I0806 08:37:45.205758   79927 logs.go:276] 0 containers: []
	W0806 08:37:45.205768   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:37:45.205774   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:37:45.205827   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:37:45.242345   79927 cri.go:89] found id: ""
	I0806 08:37:45.242373   79927 logs.go:276] 0 containers: []
	W0806 08:37:45.242384   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:37:45.242397   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:37:45.242414   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:37:45.283778   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:37:45.283810   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:37:45.341421   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:37:45.341457   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:37:45.356581   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:37:45.356606   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:37:45.428512   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:37:45.428541   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:37:45.428559   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:37:48.011744   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:37:48.025452   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:37:48.025533   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:37:48.064315   79927 cri.go:89] found id: ""
	I0806 08:37:48.064339   79927 logs.go:276] 0 containers: []
	W0806 08:37:48.064348   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:37:48.064353   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:37:48.064430   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:37:48.103855   79927 cri.go:89] found id: ""
	I0806 08:37:48.103891   79927 logs.go:276] 0 containers: []
	W0806 08:37:48.103902   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:37:48.103909   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:37:48.103968   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:37:48.141402   79927 cri.go:89] found id: ""
	I0806 08:37:48.141427   79927 logs.go:276] 0 containers: []
	W0806 08:37:48.141435   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:37:48.141440   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:37:48.141491   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:37:48.177449   79927 cri.go:89] found id: ""
	I0806 08:37:48.177479   79927 logs.go:276] 0 containers: []
	W0806 08:37:48.177490   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:37:48.177497   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:37:48.177566   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:37:48.214613   79927 cri.go:89] found id: ""
	I0806 08:37:48.214637   79927 logs.go:276] 0 containers: []
	W0806 08:37:48.214644   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:37:48.214650   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:37:48.214695   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:37:48.254049   79927 cri.go:89] found id: ""
	I0806 08:37:48.254075   79927 logs.go:276] 0 containers: []
	W0806 08:37:48.254083   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:37:48.254089   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:37:48.254147   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:37:48.296206   79927 cri.go:89] found id: ""
	I0806 08:37:48.296230   79927 logs.go:276] 0 containers: []
	W0806 08:37:48.296240   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:37:48.296246   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:37:48.296306   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:37:48.337614   79927 cri.go:89] found id: ""
	I0806 08:37:48.337641   79927 logs.go:276] 0 containers: []
	W0806 08:37:48.337651   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:37:48.337662   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:37:48.337679   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:37:48.391932   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:37:48.391971   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:37:48.405422   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:37:48.405450   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:37:48.477374   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:37:48.477402   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:37:48.477419   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:37:48.559252   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:37:48.559288   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:37:46.287167   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:48.288652   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:45.113308   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:47.611780   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:47.968109   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:50.469370   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:51.104881   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:37:51.120113   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:37:51.120181   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:37:51.155703   79927 cri.go:89] found id: ""
	I0806 08:37:51.155733   79927 logs.go:276] 0 containers: []
	W0806 08:37:51.155741   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:37:51.155747   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:37:51.155792   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:37:51.191211   79927 cri.go:89] found id: ""
	I0806 08:37:51.191240   79927 logs.go:276] 0 containers: []
	W0806 08:37:51.191250   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:37:51.191257   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:37:51.191324   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:37:51.226740   79927 cri.go:89] found id: ""
	I0806 08:37:51.226771   79927 logs.go:276] 0 containers: []
	W0806 08:37:51.226783   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:37:51.226790   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:37:51.226852   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:37:51.269101   79927 cri.go:89] found id: ""
	I0806 08:37:51.269132   79927 logs.go:276] 0 containers: []
	W0806 08:37:51.269141   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:37:51.269146   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:37:51.269196   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:37:51.307176   79927 cri.go:89] found id: ""
	I0806 08:37:51.307206   79927 logs.go:276] 0 containers: []
	W0806 08:37:51.307216   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:37:51.307224   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:37:51.307290   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:37:51.342824   79927 cri.go:89] found id: ""
	I0806 08:37:51.342854   79927 logs.go:276] 0 containers: []
	W0806 08:37:51.342864   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:37:51.342871   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:37:51.342935   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:37:51.383309   79927 cri.go:89] found id: ""
	I0806 08:37:51.383337   79927 logs.go:276] 0 containers: []
	W0806 08:37:51.383348   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:37:51.383355   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:37:51.383420   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:37:51.424851   79927 cri.go:89] found id: ""
	I0806 08:37:51.424896   79927 logs.go:276] 0 containers: []
	W0806 08:37:51.424908   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:37:51.424918   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:37:51.424950   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:37:51.507286   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:37:51.507307   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:37:51.507318   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:37:51.590512   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:37:51.590556   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:37:51.640453   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:37:51.640482   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:37:51.694699   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:37:51.694740   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:37:50.788163   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:52.788396   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:49.616347   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:52.112323   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:54.112911   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:52.470121   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:54.473264   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
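The recurring "connection to the server localhost:8443 was refused" stderr means nothing is answering on the apiserver port, which is why describe nodes keeps failing in every cycle. A quick, hypothetical way to confirm that from Go with a plain TCP dial (a diagnostic aid, not part of the test suite):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// A refused or timed-out dial here matches the kubectl error seen in the log.
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver port not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on localhost:8443")
}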
	I0806 08:37:54.211106   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:37:54.225858   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:37:54.225936   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:37:54.267219   79927 cri.go:89] found id: ""
	I0806 08:37:54.267247   79927 logs.go:276] 0 containers: []
	W0806 08:37:54.267258   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:37:54.267266   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:37:54.267323   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:37:54.306734   79927 cri.go:89] found id: ""
	I0806 08:37:54.306763   79927 logs.go:276] 0 containers: []
	W0806 08:37:54.306774   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:37:54.306781   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:37:54.306839   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:37:54.342257   79927 cri.go:89] found id: ""
	I0806 08:37:54.342294   79927 logs.go:276] 0 containers: []
	W0806 08:37:54.342306   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:37:54.342314   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:37:54.342383   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:37:54.380661   79927 cri.go:89] found id: ""
	I0806 08:37:54.380685   79927 logs.go:276] 0 containers: []
	W0806 08:37:54.380693   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:37:54.380699   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:37:54.380743   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:37:54.416503   79927 cri.go:89] found id: ""
	I0806 08:37:54.416525   79927 logs.go:276] 0 containers: []
	W0806 08:37:54.416533   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:37:54.416539   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:37:54.416589   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:37:54.453470   79927 cri.go:89] found id: ""
	I0806 08:37:54.453508   79927 logs.go:276] 0 containers: []
	W0806 08:37:54.453520   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:37:54.453528   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:37:54.453591   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:37:54.497630   79927 cri.go:89] found id: ""
	I0806 08:37:54.497671   79927 logs.go:276] 0 containers: []
	W0806 08:37:54.497682   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:37:54.497691   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:37:54.497750   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:37:54.536516   79927 cri.go:89] found id: ""
	I0806 08:37:54.536546   79927 logs.go:276] 0 containers: []
	W0806 08:37:54.536557   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:37:54.536568   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:37:54.536587   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:37:54.592997   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:37:54.593032   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:37:54.607568   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:37:54.607598   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:37:54.679369   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:37:54.679402   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:37:54.679423   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:37:54.760001   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:37:54.760041   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:37:57.305405   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:37:57.319427   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:37:57.319493   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:37:57.357465   79927 cri.go:89] found id: ""
	I0806 08:37:57.357493   79927 logs.go:276] 0 containers: []
	W0806 08:37:57.357504   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:37:57.357513   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:37:57.357571   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:37:57.390131   79927 cri.go:89] found id: ""
	I0806 08:37:57.390155   79927 logs.go:276] 0 containers: []
	W0806 08:37:57.390165   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:37:57.390172   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:37:57.390238   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:37:57.428571   79927 cri.go:89] found id: ""
	I0806 08:37:57.428600   79927 logs.go:276] 0 containers: []
	W0806 08:37:57.428608   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:37:57.428613   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:37:57.428675   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:37:57.462688   79927 cri.go:89] found id: ""
	I0806 08:37:57.462717   79927 logs.go:276] 0 containers: []
	W0806 08:37:57.462727   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:37:57.462733   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:37:57.462791   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:37:57.525008   79927 cri.go:89] found id: ""
	I0806 08:37:57.525033   79927 logs.go:276] 0 containers: []
	W0806 08:37:57.525041   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:37:57.525046   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:37:57.525108   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:37:57.565992   79927 cri.go:89] found id: ""
	I0806 08:37:57.566024   79927 logs.go:276] 0 containers: []
	W0806 08:37:57.566035   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:37:57.566045   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:37:57.566123   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:37:57.610170   79927 cri.go:89] found id: ""
	I0806 08:37:57.610198   79927 logs.go:276] 0 containers: []
	W0806 08:37:57.610208   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:37:57.610215   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:37:57.610262   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:37:57.647054   79927 cri.go:89] found id: ""
	I0806 08:37:57.647085   79927 logs.go:276] 0 containers: []
	W0806 08:37:57.647096   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:37:57.647106   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:37:57.647121   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:37:57.698177   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:37:57.698214   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:37:57.712244   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:37:57.712271   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:37:57.781683   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:37:57.781707   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:37:57.781723   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:37:57.861097   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:37:57.861133   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:37:54.789102   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:57.287564   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:56.613662   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:59.112248   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:56.967534   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:59.469122   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:00.401186   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:38:00.414546   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:38:00.414602   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:38:00.454163   79927 cri.go:89] found id: ""
	I0806 08:38:00.454190   79927 logs.go:276] 0 containers: []
	W0806 08:38:00.454201   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:38:00.454208   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:38:00.454269   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:38:00.497427   79927 cri.go:89] found id: ""
	I0806 08:38:00.497455   79927 logs.go:276] 0 containers: []
	W0806 08:38:00.497466   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:38:00.497473   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:38:00.497538   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:38:00.537590   79927 cri.go:89] found id: ""
	I0806 08:38:00.537619   79927 logs.go:276] 0 containers: []
	W0806 08:38:00.537627   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:38:00.537633   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:38:00.537695   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:38:00.572337   79927 cri.go:89] found id: ""
	I0806 08:38:00.572383   79927 logs.go:276] 0 containers: []
	W0806 08:38:00.572396   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:38:00.572403   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:38:00.572460   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:38:00.607729   79927 cri.go:89] found id: ""
	I0806 08:38:00.607772   79927 logs.go:276] 0 containers: []
	W0806 08:38:00.607783   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:38:00.607791   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:38:00.607852   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:38:00.644777   79927 cri.go:89] found id: ""
	I0806 08:38:00.644843   79927 logs.go:276] 0 containers: []
	W0806 08:38:00.644856   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:38:00.644864   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:38:00.644935   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:38:00.684698   79927 cri.go:89] found id: ""
	I0806 08:38:00.684727   79927 logs.go:276] 0 containers: []
	W0806 08:38:00.684739   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:38:00.684747   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:38:00.684863   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:38:00.718472   79927 cri.go:89] found id: ""
	I0806 08:38:00.718506   79927 logs.go:276] 0 containers: []
	W0806 08:38:00.718518   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:38:00.718529   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:38:00.718546   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:38:00.770265   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:38:00.770309   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:38:00.787438   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:38:00.787469   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:38:00.854538   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:38:00.854558   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:38:00.854570   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:38:00.936810   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:38:00.936847   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
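(The cycle above is the wait-for-apiserver diagnostics loop from the 79927 run: it pgreps for a kube-apiserver process, asks the CRI runtime for each control-plane container by name, and, finding none, falls back to collecting kubelet, dmesg, describe-nodes and CRI-O logs. A rough manual equivalent of the container checks, run on the node; the loop wrapper below is illustrative shell, not minikube code:)

    # Is there a kube-apiserver process at all?
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'

    # Ask the CRI runtime for each expected control-plane container by name
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager; do
        ids=$(sudo crictl ps -a --quiet --name="$name")
        if [ -z "$ids" ]; then
            echo "no container found matching \"$name\""
        else
            echo "$name: $ids"
        fi
    done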
	I0806 08:38:03.477109   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:38:03.494379   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:38:03.494439   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:38:03.536166   79927 cri.go:89] found id: ""
	I0806 08:38:03.536193   79927 logs.go:276] 0 containers: []
	W0806 08:38:03.536202   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:38:03.536208   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:38:03.536255   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:38:03.577497   79927 cri.go:89] found id: ""
	I0806 08:38:03.577522   79927 logs.go:276] 0 containers: []
	W0806 08:38:03.577531   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:38:03.577536   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:38:03.577585   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:38:03.614534   79927 cri.go:89] found id: ""
	I0806 08:38:03.614560   79927 logs.go:276] 0 containers: []
	W0806 08:38:03.614569   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:38:03.614576   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:38:03.614634   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:38:03.655824   79927 cri.go:89] found id: ""
	I0806 08:38:03.655860   79927 logs.go:276] 0 containers: []
	W0806 08:38:03.655871   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:38:03.655878   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:38:03.655952   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:38:03.699843   79927 cri.go:89] found id: ""
	I0806 08:38:03.699875   79927 logs.go:276] 0 containers: []
	W0806 08:38:03.699888   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:38:03.699896   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:38:03.699957   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:38:03.740131   79927 cri.go:89] found id: ""
	I0806 08:38:03.740159   79927 logs.go:276] 0 containers: []
	W0806 08:38:03.740168   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:38:03.740174   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:38:03.740223   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:37:59.787634   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:01.788942   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:03.790134   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:01.112582   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:03.113004   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:01.972073   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:04.468917   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
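(The interleaved pod_ready lines belong to the other clusters in this run; each test polls its metrics-server pod's Ready condition every couple of seconds against a 4-minute deadline. A hedged kubectl equivalent of that check, assuming direct kubectl access to the cluster in question and using one of the pod names seen above:)

    # One-shot Ready-condition check; prints "True" once the pod is Ready
    kubectl -n kube-system get pod metrics-server-569cc877fc-hnxd4 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'

    # Or block with a deadline, mirroring the test's 4m timeout
    kubectl -n kube-system wait pod/metrics-server-569cc877fc-hnxd4 \
      --for=condition=Ready --timeout=4m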
	I0806 08:38:03.774103   79927 cri.go:89] found id: ""
	I0806 08:38:03.774149   79927 logs.go:276] 0 containers: []
	W0806 08:38:03.774163   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:38:03.774170   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:38:03.774230   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:38:03.814051   79927 cri.go:89] found id: ""
	I0806 08:38:03.814075   79927 logs.go:276] 0 containers: []
	W0806 08:38:03.814083   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:38:03.814091   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:38:03.814102   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:38:03.854869   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:38:03.854898   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:38:03.908284   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:38:03.908318   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:38:03.923265   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:38:03.923303   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:38:04.001785   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:38:04.001806   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:38:04.001819   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:38:06.583809   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:38:06.596764   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:38:06.596827   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:38:06.636611   79927 cri.go:89] found id: ""
	I0806 08:38:06.636639   79927 logs.go:276] 0 containers: []
	W0806 08:38:06.636650   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:38:06.636657   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:38:06.636707   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:38:06.675392   79927 cri.go:89] found id: ""
	I0806 08:38:06.675419   79927 logs.go:276] 0 containers: []
	W0806 08:38:06.675430   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:38:06.675438   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:38:06.675492   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:38:06.711270   79927 cri.go:89] found id: ""
	I0806 08:38:06.711302   79927 logs.go:276] 0 containers: []
	W0806 08:38:06.711313   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:38:06.711320   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:38:06.711388   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:38:06.747252   79927 cri.go:89] found id: ""
	I0806 08:38:06.747280   79927 logs.go:276] 0 containers: []
	W0806 08:38:06.747291   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:38:06.747297   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:38:06.747360   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:38:06.781999   79927 cri.go:89] found id: ""
	I0806 08:38:06.782028   79927 logs.go:276] 0 containers: []
	W0806 08:38:06.782038   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:38:06.782043   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:38:06.782095   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:38:06.818810   79927 cri.go:89] found id: ""
	I0806 08:38:06.818847   79927 logs.go:276] 0 containers: []
	W0806 08:38:06.818860   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:38:06.818869   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:38:06.818935   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:38:06.857937   79927 cri.go:89] found id: ""
	I0806 08:38:06.857972   79927 logs.go:276] 0 containers: []
	W0806 08:38:06.857983   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:38:06.857989   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:38:06.858049   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:38:06.897920   79927 cri.go:89] found id: ""
	I0806 08:38:06.897949   79927 logs.go:276] 0 containers: []
	W0806 08:38:06.897957   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:38:06.897965   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:38:06.897979   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:38:06.912199   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:38:06.912228   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:38:06.983129   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:38:06.983157   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:38:06.983173   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:38:07.067783   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:38:07.067853   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:38:07.107756   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:38:07.107787   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:38:06.286856   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:08.287070   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:05.113664   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:07.611129   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:06.973447   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:09.468163   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:11.469075   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:09.657958   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:38:09.670726   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:38:09.670807   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:38:09.709184   79927 cri.go:89] found id: ""
	I0806 08:38:09.709220   79927 logs.go:276] 0 containers: []
	W0806 08:38:09.709232   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:38:09.709239   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:38:09.709299   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:38:09.747032   79927 cri.go:89] found id: ""
	I0806 08:38:09.747061   79927 logs.go:276] 0 containers: []
	W0806 08:38:09.747073   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:38:09.747080   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:38:09.747141   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:38:09.785134   79927 cri.go:89] found id: ""
	I0806 08:38:09.785170   79927 logs.go:276] 0 containers: []
	W0806 08:38:09.785180   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:38:09.785188   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:38:09.785248   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:38:09.825136   79927 cri.go:89] found id: ""
	I0806 08:38:09.825168   79927 logs.go:276] 0 containers: []
	W0806 08:38:09.825179   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:38:09.825186   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:38:09.825249   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:38:09.865292   79927 cri.go:89] found id: ""
	I0806 08:38:09.865327   79927 logs.go:276] 0 containers: []
	W0806 08:38:09.865336   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:38:09.865345   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:38:09.865415   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:38:09.911145   79927 cri.go:89] found id: ""
	I0806 08:38:09.911172   79927 logs.go:276] 0 containers: []
	W0806 08:38:09.911183   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:38:09.911190   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:38:09.911247   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:38:09.961259   79927 cri.go:89] found id: ""
	I0806 08:38:09.961289   79927 logs.go:276] 0 containers: []
	W0806 08:38:09.961299   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:38:09.961307   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:38:09.961364   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:38:10.009572   79927 cri.go:89] found id: ""
	I0806 08:38:10.009594   79927 logs.go:276] 0 containers: []
	W0806 08:38:10.009602   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:38:10.009610   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:38:10.009623   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:38:10.063733   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:38:10.063770   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:38:10.080004   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:38:10.080044   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:38:10.158792   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:38:10.158823   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:38:10.158840   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:38:10.242126   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:38:10.242162   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:38:12.781497   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:38:12.796715   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:38:12.796804   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:38:12.831223   79927 cri.go:89] found id: ""
	I0806 08:38:12.831254   79927 logs.go:276] 0 containers: []
	W0806 08:38:12.831265   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:38:12.831273   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:38:12.831332   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:38:12.872202   79927 cri.go:89] found id: ""
	I0806 08:38:12.872224   79927 logs.go:276] 0 containers: []
	W0806 08:38:12.872231   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:38:12.872236   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:38:12.872284   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:38:12.913457   79927 cri.go:89] found id: ""
	I0806 08:38:12.913486   79927 logs.go:276] 0 containers: []
	W0806 08:38:12.913497   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:38:12.913505   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:38:12.913563   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:38:12.950382   79927 cri.go:89] found id: ""
	I0806 08:38:12.950413   79927 logs.go:276] 0 containers: []
	W0806 08:38:12.950424   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:38:12.950432   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:38:12.950497   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:38:12.988956   79927 cri.go:89] found id: ""
	I0806 08:38:12.988984   79927 logs.go:276] 0 containers: []
	W0806 08:38:12.988992   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:38:12.988998   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:38:12.989054   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:38:13.025308   79927 cri.go:89] found id: ""
	I0806 08:38:13.025334   79927 logs.go:276] 0 containers: []
	W0806 08:38:13.025342   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:38:13.025348   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:38:13.025401   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:38:13.060156   79927 cri.go:89] found id: ""
	I0806 08:38:13.060184   79927 logs.go:276] 0 containers: []
	W0806 08:38:13.060196   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:38:13.060204   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:38:13.060268   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:38:13.096302   79927 cri.go:89] found id: ""
	I0806 08:38:13.096331   79927 logs.go:276] 0 containers: []
	W0806 08:38:13.096344   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:38:13.096360   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:38:13.096387   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:38:13.149567   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:38:13.149603   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:38:13.164680   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:38:13.164715   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:38:13.236695   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:38:13.236718   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:38:13.236733   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:38:13.323043   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:38:13.323081   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:38:10.288582   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:12.788556   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:09.611677   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:12.111364   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:14.112265   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:13.469940   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:15.969381   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:15.867128   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:38:15.885504   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:38:15.885579   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:38:15.929060   79927 cri.go:89] found id: ""
	I0806 08:38:15.929087   79927 logs.go:276] 0 containers: []
	W0806 08:38:15.929098   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:38:15.929106   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:38:15.929157   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:38:15.996919   79927 cri.go:89] found id: ""
	I0806 08:38:15.996946   79927 logs.go:276] 0 containers: []
	W0806 08:38:15.996954   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:38:15.996959   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:38:15.997013   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:38:16.033299   79927 cri.go:89] found id: ""
	I0806 08:38:16.033323   79927 logs.go:276] 0 containers: []
	W0806 08:38:16.033331   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:38:16.033338   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:38:16.033399   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:38:16.070754   79927 cri.go:89] found id: ""
	I0806 08:38:16.070781   79927 logs.go:276] 0 containers: []
	W0806 08:38:16.070789   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:38:16.070795   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:38:16.070853   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:38:16.107906   79927 cri.go:89] found id: ""
	I0806 08:38:16.107931   79927 logs.go:276] 0 containers: []
	W0806 08:38:16.107943   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:38:16.107951   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:38:16.108011   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:38:16.145814   79927 cri.go:89] found id: ""
	I0806 08:38:16.145844   79927 logs.go:276] 0 containers: []
	W0806 08:38:16.145853   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:38:16.145861   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:38:16.145921   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:38:16.183752   79927 cri.go:89] found id: ""
	I0806 08:38:16.183782   79927 logs.go:276] 0 containers: []
	W0806 08:38:16.183793   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:38:16.183828   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:38:16.183890   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:38:16.220973   79927 cri.go:89] found id: ""
	I0806 08:38:16.221000   79927 logs.go:276] 0 containers: []
	W0806 08:38:16.221010   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:38:16.221020   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:38:16.221034   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:38:16.274514   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:38:16.274555   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:38:16.291387   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:38:16.291417   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:38:16.365737   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:38:16.365769   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:38:16.365785   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:38:16.446578   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:38:16.446617   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:38:14.789440   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:17.288354   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:16.112334   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:18.612150   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:18.468700   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:20.469520   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:18.989406   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:38:19.003113   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:38:19.003174   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:38:19.038794   79927 cri.go:89] found id: ""
	I0806 08:38:19.038821   79927 logs.go:276] 0 containers: []
	W0806 08:38:19.038832   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:38:19.038839   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:38:19.038899   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:38:19.072521   79927 cri.go:89] found id: ""
	I0806 08:38:19.072546   79927 logs.go:276] 0 containers: []
	W0806 08:38:19.072554   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:38:19.072561   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:38:19.072606   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:38:19.105949   79927 cri.go:89] found id: ""
	I0806 08:38:19.105982   79927 logs.go:276] 0 containers: []
	W0806 08:38:19.105994   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:38:19.106002   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:38:19.106053   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:38:19.141277   79927 cri.go:89] found id: ""
	I0806 08:38:19.141303   79927 logs.go:276] 0 containers: []
	W0806 08:38:19.141311   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:38:19.141317   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:38:19.141373   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:38:19.179144   79927 cri.go:89] found id: ""
	I0806 08:38:19.179173   79927 logs.go:276] 0 containers: []
	W0806 08:38:19.179184   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:38:19.179191   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:38:19.179252   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:38:19.218970   79927 cri.go:89] found id: ""
	I0806 08:38:19.218998   79927 logs.go:276] 0 containers: []
	W0806 08:38:19.219010   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:38:19.219022   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:38:19.219084   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:38:19.257623   79927 cri.go:89] found id: ""
	I0806 08:38:19.257652   79927 logs.go:276] 0 containers: []
	W0806 08:38:19.257663   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:38:19.257671   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:38:19.257740   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:38:19.294222   79927 cri.go:89] found id: ""
	I0806 08:38:19.294250   79927 logs.go:276] 0 containers: []
	W0806 08:38:19.294258   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:38:19.294267   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:38:19.294279   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:38:19.374327   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:38:19.374352   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:38:19.374368   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:38:19.458570   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:38:19.458604   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:38:19.504329   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:38:19.504382   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:38:19.554199   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:38:19.554237   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
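(Every "describe nodes" attempt above fails the same way because kubectl is pointed at localhost:8443 and nothing is listening there; the kube-apiserver container never came up in this cluster. Two quick checks that would confirm this on the node, assuming the standard ss and curl tools are present in the guest:)

    # Is anything listening on the apiserver port?
    sudo ss -ltnp | grep 8443 || echo "nothing listening on :8443"

    # Does the endpoint answer at all? (connection refused is the expected result here)
    curl -sk https://localhost:8443/healthz || echo "apiserver unreachable"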
	I0806 08:38:22.069574   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:38:22.082919   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:38:22.082981   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:38:22.115952   79927 cri.go:89] found id: ""
	I0806 08:38:22.115974   79927 logs.go:276] 0 containers: []
	W0806 08:38:22.115981   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:38:22.115987   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:38:22.116031   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:38:22.158322   79927 cri.go:89] found id: ""
	I0806 08:38:22.158349   79927 logs.go:276] 0 containers: []
	W0806 08:38:22.158359   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:38:22.158366   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:38:22.158425   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:38:22.203375   79927 cri.go:89] found id: ""
	I0806 08:38:22.203403   79927 logs.go:276] 0 containers: []
	W0806 08:38:22.203412   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:38:22.203418   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:38:22.203473   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:38:22.240808   79927 cri.go:89] found id: ""
	I0806 08:38:22.240837   79927 logs.go:276] 0 containers: []
	W0806 08:38:22.240849   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:38:22.240857   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:38:22.240913   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:38:22.277885   79927 cri.go:89] found id: ""
	I0806 08:38:22.277914   79927 logs.go:276] 0 containers: []
	W0806 08:38:22.277925   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:38:22.277932   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:38:22.277993   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:38:22.314990   79927 cri.go:89] found id: ""
	I0806 08:38:22.315020   79927 logs.go:276] 0 containers: []
	W0806 08:38:22.315031   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:38:22.315039   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:38:22.315098   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:38:22.352159   79927 cri.go:89] found id: ""
	I0806 08:38:22.352189   79927 logs.go:276] 0 containers: []
	W0806 08:38:22.352198   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:38:22.352204   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:38:22.352267   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:38:22.387349   79927 cri.go:89] found id: ""
	I0806 08:38:22.387382   79927 logs.go:276] 0 containers: []
	W0806 08:38:22.387393   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:38:22.387404   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:38:22.387416   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:38:22.425834   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:38:22.425871   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:38:22.479680   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:38:22.479712   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:38:22.495104   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:38:22.495140   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:38:22.567013   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:38:22.567030   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:38:22.567043   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:38:19.288896   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:21.291455   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:23.789282   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:21.111745   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:23.111513   79219 pod_ready.go:81] duration metric: took 4m0.005888372s for pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace to be "Ready" ...
	E0806 08:38:23.111545   79219 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0806 08:38:23.111555   79219 pod_ready.go:38] duration metric: took 4m6.006750463s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0806 08:38:23.111579   79219 api_server.go:52] waiting for apiserver process to appear ...
	I0806 08:38:23.111610   79219 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:38:23.111669   79219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:38:23.182880   79219 cri.go:89] found id: "f7edb6bece3347d88a263663a1d6549165dafdcb67ad723494b937766cba6da6"
	I0806 08:38:23.182907   79219 cri.go:89] found id: ""
	I0806 08:38:23.182918   79219 logs.go:276] 1 containers: [f7edb6bece3347d88a263663a1d6549165dafdcb67ad723494b937766cba6da6]
	I0806 08:38:23.182974   79219 ssh_runner.go:195] Run: which crictl
	I0806 08:38:23.188039   79219 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:38:23.188103   79219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:38:23.235090   79219 cri.go:89] found id: "df2efdb62a4af10f89a1b37599d4258ac3b72660353795f8c2141f3ef70c1f65"
	I0806 08:38:23.235113   79219 cri.go:89] found id: ""
	I0806 08:38:23.235135   79219 logs.go:276] 1 containers: [df2efdb62a4af10f89a1b37599d4258ac3b72660353795f8c2141f3ef70c1f65]
	I0806 08:38:23.235195   79219 ssh_runner.go:195] Run: which crictl
	I0806 08:38:23.240260   79219 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:38:23.240335   79219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:38:23.280619   79219 cri.go:89] found id: "9e1c91bc3a920b6f7db4477b8219f13fa092c68ff53abe2390325a68dad253e5"
	I0806 08:38:23.280645   79219 cri.go:89] found id: ""
	I0806 08:38:23.280655   79219 logs.go:276] 1 containers: [9e1c91bc3a920b6f7db4477b8219f13fa092c68ff53abe2390325a68dad253e5]
	I0806 08:38:23.280715   79219 ssh_runner.go:195] Run: which crictl
	I0806 08:38:23.286155   79219 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:38:23.286234   79219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:38:23.329583   79219 cri.go:89] found id: "a25c5660a3a2b7226bf308b44307a3912efa7a9a73e79ec63f83133b80a44683"
	I0806 08:38:23.329609   79219 cri.go:89] found id: ""
	I0806 08:38:23.329618   79219 logs.go:276] 1 containers: [a25c5660a3a2b7226bf308b44307a3912efa7a9a73e79ec63f83133b80a44683]
	I0806 08:38:23.329678   79219 ssh_runner.go:195] Run: which crictl
	I0806 08:38:23.334342   79219 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:38:23.334407   79219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:38:23.372955   79219 cri.go:89] found id: "612fc6a8f2c1208097a6d6287c5fecb20026608a278af5d0d4efc9662e065eb9"
	I0806 08:38:23.372974   79219 cri.go:89] found id: ""
	I0806 08:38:23.372981   79219 logs.go:276] 1 containers: [612fc6a8f2c1208097a6d6287c5fecb20026608a278af5d0d4efc9662e065eb9]
	I0806 08:38:23.373037   79219 ssh_runner.go:195] Run: which crictl
	I0806 08:38:23.377637   79219 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:38:23.377692   79219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:38:23.416165   79219 cri.go:89] found id: "d8651a7ed6615e826b950eab003841c2ec872b48160f207e77dd32a684125a23"
	I0806 08:38:23.416188   79219 cri.go:89] found id: ""
	I0806 08:38:23.416197   79219 logs.go:276] 1 containers: [d8651a7ed6615e826b950eab003841c2ec872b48160f207e77dd32a684125a23]
	I0806 08:38:23.416248   79219 ssh_runner.go:195] Run: which crictl
	I0806 08:38:23.420970   79219 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:38:23.421020   79219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:38:23.459247   79219 cri.go:89] found id: ""
	I0806 08:38:23.459277   79219 logs.go:276] 0 containers: []
	W0806 08:38:23.459287   79219 logs.go:278] No container was found matching "kindnet"
	I0806 08:38:23.459295   79219 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0806 08:38:23.459354   79219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0806 08:38:23.503329   79219 cri.go:89] found id: "367eb6487187a37acafb1fa74364a2d9ec5d9241888ed2e871fb14f2af161975"
	I0806 08:38:23.503354   79219 cri.go:89] found id: "78abe2efa92d2d6c9171ed65bd4d941693be3207554f9318a7cefb91544eeb20"
	I0806 08:38:23.503360   79219 cri.go:89] found id: ""
	I0806 08:38:23.503369   79219 logs.go:276] 2 containers: [367eb6487187a37acafb1fa74364a2d9ec5d9241888ed2e871fb14f2af161975 78abe2efa92d2d6c9171ed65bd4d941693be3207554f9318a7cefb91544eeb20]
	I0806 08:38:23.503431   79219 ssh_runner.go:195] Run: which crictl
	I0806 08:38:23.508375   79219 ssh_runner.go:195] Run: which crictl
	I0806 08:38:23.512355   79219 logs.go:123] Gathering logs for kube-scheduler [a25c5660a3a2b7226bf308b44307a3912efa7a9a73e79ec63f83133b80a44683] ...
	I0806 08:38:23.512401   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a25c5660a3a2b7226bf308b44307a3912efa7a9a73e79ec63f83133b80a44683"
	I0806 08:38:23.548545   79219 logs.go:123] Gathering logs for kube-controller-manager [d8651a7ed6615e826b950eab003841c2ec872b48160f207e77dd32a684125a23] ...
	I0806 08:38:23.548574   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8651a7ed6615e826b950eab003841c2ec872b48160f207e77dd32a684125a23"
	I0806 08:38:23.607970   79219 logs.go:123] Gathering logs for storage-provisioner [78abe2efa92d2d6c9171ed65bd4d941693be3207554f9318a7cefb91544eeb20] ...
	I0806 08:38:23.608004   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 78abe2efa92d2d6c9171ed65bd4d941693be3207554f9318a7cefb91544eeb20"
	I0806 08:38:23.646553   79219 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:38:23.646582   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:38:24.180521   79219 logs.go:123] Gathering logs for container status ...
	I0806 08:38:24.180558   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:38:24.228240   79219 logs.go:123] Gathering logs for kubelet ...
	I0806 08:38:24.228273   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:38:24.285975   79219 logs.go:123] Gathering logs for dmesg ...
	I0806 08:38:24.286007   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:38:24.301117   79219 logs.go:123] Gathering logs for etcd [df2efdb62a4af10f89a1b37599d4258ac3b72660353795f8c2141f3ef70c1f65] ...
	I0806 08:38:24.301152   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 df2efdb62a4af10f89a1b37599d4258ac3b72660353795f8c2141f3ef70c1f65"
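(Unlike the 79927 run, the 79219 cluster does have running control-plane containers, so the log gatherer resolves each component's container ID with crictl and tails its logs. The manual form of that step, using the commands shown in the lines above:)

    # Resolve the container ID for a component, then tail its logs
    ID=$(sudo crictl ps -a --quiet --name=etcd)
    sudo crictl logs --tail 400 "$ID"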
	I0806 08:38:22.968463   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:25.469138   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:25.146383   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:38:25.160013   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:38:25.160078   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:38:25.200981   79927 cri.go:89] found id: ""
	I0806 08:38:25.201006   79927 logs.go:276] 0 containers: []
	W0806 08:38:25.201018   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:38:25.201026   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:38:25.201091   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:38:25.239852   79927 cri.go:89] found id: ""
	I0806 08:38:25.239895   79927 logs.go:276] 0 containers: []
	W0806 08:38:25.239907   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:38:25.239914   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:38:25.239975   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:38:25.275521   79927 cri.go:89] found id: ""
	I0806 08:38:25.275549   79927 logs.go:276] 0 containers: []
	W0806 08:38:25.275561   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:38:25.275568   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:38:25.275630   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:38:25.311921   79927 cri.go:89] found id: ""
	I0806 08:38:25.311952   79927 logs.go:276] 0 containers: []
	W0806 08:38:25.311962   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:38:25.311969   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:38:25.312036   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:38:25.353047   79927 cri.go:89] found id: ""
	I0806 08:38:25.353081   79927 logs.go:276] 0 containers: []
	W0806 08:38:25.353093   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:38:25.353102   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:38:25.353177   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:38:25.389442   79927 cri.go:89] found id: ""
	I0806 08:38:25.389470   79927 logs.go:276] 0 containers: []
	W0806 08:38:25.389479   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:38:25.389485   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:38:25.389529   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:38:25.426435   79927 cri.go:89] found id: ""
	I0806 08:38:25.426464   79927 logs.go:276] 0 containers: []
	W0806 08:38:25.426474   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:38:25.426481   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:38:25.426541   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:38:25.460095   79927 cri.go:89] found id: ""
	I0806 08:38:25.460132   79927 logs.go:276] 0 containers: []
	W0806 08:38:25.460157   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:38:25.460169   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:38:25.460192   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:38:25.511363   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:38:25.511398   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:38:25.525953   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:38:25.525980   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:38:25.598690   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:38:25.598711   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:38:25.598723   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:38:25.674996   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:38:25.675034   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:38:28.216533   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:38:28.230080   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:38:28.230171   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:38:28.269239   79927 cri.go:89] found id: ""
	I0806 08:38:28.269274   79927 logs.go:276] 0 containers: []
	W0806 08:38:28.269288   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:38:28.269296   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:38:28.269361   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:38:28.311773   79927 cri.go:89] found id: ""
	I0806 08:38:28.311809   79927 logs.go:276] 0 containers: []
	W0806 08:38:28.311820   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:38:28.311828   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:38:28.311896   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:38:28.351492   79927 cri.go:89] found id: ""
	I0806 08:38:28.351522   79927 logs.go:276] 0 containers: []
	W0806 08:38:28.351530   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:38:28.351535   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:38:28.351581   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:38:28.389039   79927 cri.go:89] found id: ""
	I0806 08:38:28.389067   79927 logs.go:276] 0 containers: []
	W0806 08:38:28.389079   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:38:28.389086   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:38:28.389175   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:38:28.432249   79927 cri.go:89] found id: ""
	I0806 08:38:28.432277   79927 logs.go:276] 0 containers: []
	W0806 08:38:28.432287   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:38:28.432296   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:38:28.432354   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:38:28.478277   79927 cri.go:89] found id: ""
	I0806 08:38:28.478303   79927 logs.go:276] 0 containers: []
	W0806 08:38:28.478313   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:38:28.478321   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:38:28.478380   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:38:28.516167   79927 cri.go:89] found id: ""
	I0806 08:38:28.516199   79927 logs.go:276] 0 containers: []
	W0806 08:38:28.516211   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:38:28.516219   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:38:28.516286   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:38:28.555493   79927 cri.go:89] found id: ""
	I0806 08:38:28.555523   79927 logs.go:276] 0 containers: []
	W0806 08:38:28.555534   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:38:28.555545   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:38:28.555560   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:38:28.627101   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:38:28.627124   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:38:28.627147   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:38:28.716655   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:38:28.716709   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:38:28.755469   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:38:28.755510   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:38:26.286925   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:28.288654   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:24.355686   79219 logs.go:123] Gathering logs for coredns [9e1c91bc3a920b6f7db4477b8219f13fa092c68ff53abe2390325a68dad253e5] ...
	I0806 08:38:24.355729   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e1c91bc3a920b6f7db4477b8219f13fa092c68ff53abe2390325a68dad253e5"
	I0806 08:38:24.396350   79219 logs.go:123] Gathering logs for kube-proxy [612fc6a8f2c1208097a6d6287c5fecb20026608a278af5d0d4efc9662e065eb9] ...
	I0806 08:38:24.396402   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 612fc6a8f2c1208097a6d6287c5fecb20026608a278af5d0d4efc9662e065eb9"
	I0806 08:38:24.435066   79219 logs.go:123] Gathering logs for storage-provisioner [367eb6487187a37acafb1fa74364a2d9ec5d9241888ed2e871fb14f2af161975] ...
	I0806 08:38:24.435098   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 367eb6487187a37acafb1fa74364a2d9ec5d9241888ed2e871fb14f2af161975"
	I0806 08:38:24.474159   79219 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:38:24.474188   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 08:38:24.601081   79219 logs.go:123] Gathering logs for kube-apiserver [f7edb6bece3347d88a263663a1d6549165dafdcb67ad723494b937766cba6da6] ...
	I0806 08:38:24.601112   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f7edb6bece3347d88a263663a1d6549165dafdcb67ad723494b937766cba6da6"
	I0806 08:38:27.170209   79219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:38:27.187573   79219 api_server.go:72] duration metric: took 4m15.285938182s to wait for apiserver process to appear ...
	I0806 08:38:27.187597   79219 api_server.go:88] waiting for apiserver healthz status ...
	I0806 08:38:27.187626   79219 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:38:27.187673   79219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:38:27.224204   79219 cri.go:89] found id: "f7edb6bece3347d88a263663a1d6549165dafdcb67ad723494b937766cba6da6"
	I0806 08:38:27.224227   79219 cri.go:89] found id: ""
	I0806 08:38:27.224237   79219 logs.go:276] 1 containers: [f7edb6bece3347d88a263663a1d6549165dafdcb67ad723494b937766cba6da6]
	I0806 08:38:27.224284   79219 ssh_runner.go:195] Run: which crictl
	I0806 08:38:27.228883   79219 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:38:27.228957   79219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:38:27.275914   79219 cri.go:89] found id: "df2efdb62a4af10f89a1b37599d4258ac3b72660353795f8c2141f3ef70c1f65"
	I0806 08:38:27.275939   79219 cri.go:89] found id: ""
	I0806 08:38:27.275948   79219 logs.go:276] 1 containers: [df2efdb62a4af10f89a1b37599d4258ac3b72660353795f8c2141f3ef70c1f65]
	I0806 08:38:27.276009   79219 ssh_runner.go:195] Run: which crictl
	I0806 08:38:27.280979   79219 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:38:27.281037   79219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:38:27.326292   79219 cri.go:89] found id: "9e1c91bc3a920b6f7db4477b8219f13fa092c68ff53abe2390325a68dad253e5"
	I0806 08:38:27.326312   79219 cri.go:89] found id: ""
	I0806 08:38:27.326319   79219 logs.go:276] 1 containers: [9e1c91bc3a920b6f7db4477b8219f13fa092c68ff53abe2390325a68dad253e5]
	I0806 08:38:27.326365   79219 ssh_runner.go:195] Run: which crictl
	I0806 08:38:27.331249   79219 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:38:27.331326   79219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:38:27.374229   79219 cri.go:89] found id: "a25c5660a3a2b7226bf308b44307a3912efa7a9a73e79ec63f83133b80a44683"
	I0806 08:38:27.374250   79219 cri.go:89] found id: ""
	I0806 08:38:27.374257   79219 logs.go:276] 1 containers: [a25c5660a3a2b7226bf308b44307a3912efa7a9a73e79ec63f83133b80a44683]
	I0806 08:38:27.374300   79219 ssh_runner.go:195] Run: which crictl
	I0806 08:38:27.378875   79219 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:38:27.378957   79219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:38:27.416159   79219 cri.go:89] found id: "612fc6a8f2c1208097a6d6287c5fecb20026608a278af5d0d4efc9662e065eb9"
	I0806 08:38:27.416183   79219 cri.go:89] found id: ""
	I0806 08:38:27.416192   79219 logs.go:276] 1 containers: [612fc6a8f2c1208097a6d6287c5fecb20026608a278af5d0d4efc9662e065eb9]
	I0806 08:38:27.416243   79219 ssh_runner.go:195] Run: which crictl
	I0806 08:38:27.420463   79219 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:38:27.420531   79219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:38:27.461673   79219 cri.go:89] found id: "d8651a7ed6615e826b950eab003841c2ec872b48160f207e77dd32a684125a23"
	I0806 08:38:27.461700   79219 cri.go:89] found id: ""
	I0806 08:38:27.461710   79219 logs.go:276] 1 containers: [d8651a7ed6615e826b950eab003841c2ec872b48160f207e77dd32a684125a23]
	I0806 08:38:27.461762   79219 ssh_runner.go:195] Run: which crictl
	I0806 08:38:27.466404   79219 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:38:27.466474   79219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:38:27.507684   79219 cri.go:89] found id: ""
	I0806 08:38:27.507708   79219 logs.go:276] 0 containers: []
	W0806 08:38:27.507716   79219 logs.go:278] No container was found matching "kindnet"
	I0806 08:38:27.507722   79219 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0806 08:38:27.507787   79219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0806 08:38:27.547812   79219 cri.go:89] found id: "367eb6487187a37acafb1fa74364a2d9ec5d9241888ed2e871fb14f2af161975"
	I0806 08:38:27.547844   79219 cri.go:89] found id: "78abe2efa92d2d6c9171ed65bd4d941693be3207554f9318a7cefb91544eeb20"
	I0806 08:38:27.547850   79219 cri.go:89] found id: ""
	I0806 08:38:27.547859   79219 logs.go:276] 2 containers: [367eb6487187a37acafb1fa74364a2d9ec5d9241888ed2e871fb14f2af161975 78abe2efa92d2d6c9171ed65bd4d941693be3207554f9318a7cefb91544eeb20]
	I0806 08:38:27.547929   79219 ssh_runner.go:195] Run: which crictl
	I0806 08:38:27.552639   79219 ssh_runner.go:195] Run: which crictl
	I0806 08:38:27.556993   79219 logs.go:123] Gathering logs for kube-proxy [612fc6a8f2c1208097a6d6287c5fecb20026608a278af5d0d4efc9662e065eb9] ...
	I0806 08:38:27.557022   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 612fc6a8f2c1208097a6d6287c5fecb20026608a278af5d0d4efc9662e065eb9"
	I0806 08:38:27.599317   79219 logs.go:123] Gathering logs for storage-provisioner [78abe2efa92d2d6c9171ed65bd4d941693be3207554f9318a7cefb91544eeb20] ...
	I0806 08:38:27.599345   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 78abe2efa92d2d6c9171ed65bd4d941693be3207554f9318a7cefb91544eeb20"
	I0806 08:38:27.643812   79219 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:38:27.643845   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:38:28.119873   79219 logs.go:123] Gathering logs for container status ...
	I0806 08:38:28.119909   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:38:28.160698   79219 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:38:28.160744   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 08:38:28.275895   79219 logs.go:123] Gathering logs for coredns [9e1c91bc3a920b6f7db4477b8219f13fa092c68ff53abe2390325a68dad253e5] ...
	I0806 08:38:28.275927   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e1c91bc3a920b6f7db4477b8219f13fa092c68ff53abe2390325a68dad253e5"
	I0806 08:38:28.324525   79219 logs.go:123] Gathering logs for kube-scheduler [a25c5660a3a2b7226bf308b44307a3912efa7a9a73e79ec63f83133b80a44683] ...
	I0806 08:38:28.324559   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a25c5660a3a2b7226bf308b44307a3912efa7a9a73e79ec63f83133b80a44683"
	I0806 08:38:28.372860   79219 logs.go:123] Gathering logs for etcd [df2efdb62a4af10f89a1b37599d4258ac3b72660353795f8c2141f3ef70c1f65] ...
	I0806 08:38:28.372891   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 df2efdb62a4af10f89a1b37599d4258ac3b72660353795f8c2141f3ef70c1f65"
	I0806 08:38:28.438909   79219 logs.go:123] Gathering logs for kube-controller-manager [d8651a7ed6615e826b950eab003841c2ec872b48160f207e77dd32a684125a23] ...
	I0806 08:38:28.438943   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8651a7ed6615e826b950eab003841c2ec872b48160f207e77dd32a684125a23"
	I0806 08:38:28.514238   79219 logs.go:123] Gathering logs for storage-provisioner [367eb6487187a37acafb1fa74364a2d9ec5d9241888ed2e871fb14f2af161975] ...
	I0806 08:38:28.514279   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 367eb6487187a37acafb1fa74364a2d9ec5d9241888ed2e871fb14f2af161975"
	I0806 08:38:28.552420   79219 logs.go:123] Gathering logs for kubelet ...
	I0806 08:38:28.552451   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:38:28.605414   79219 logs.go:123] Gathering logs for dmesg ...
	I0806 08:38:28.605452   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:38:28.621364   79219 logs.go:123] Gathering logs for kube-apiserver [f7edb6bece3347d88a263663a1d6549165dafdcb67ad723494b937766cba6da6] ...
	I0806 08:38:28.621404   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f7edb6bece3347d88a263663a1d6549165dafdcb67ad723494b937766cba6da6"
	I0806 08:38:27.469521   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:29.967734   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:28.812203   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:38:28.812237   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:38:31.327103   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:38:31.340745   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:38:31.340826   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:38:31.382470   79927 cri.go:89] found id: ""
	I0806 08:38:31.382503   79927 logs.go:276] 0 containers: []
	W0806 08:38:31.382515   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:38:31.382523   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:38:31.382593   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:38:31.423319   79927 cri.go:89] found id: ""
	I0806 08:38:31.423343   79927 logs.go:276] 0 containers: []
	W0806 08:38:31.423353   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:38:31.423359   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:38:31.423423   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:38:31.463073   79927 cri.go:89] found id: ""
	I0806 08:38:31.463099   79927 logs.go:276] 0 containers: []
	W0806 08:38:31.463108   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:38:31.463115   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:38:31.463165   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:38:31.499169   79927 cri.go:89] found id: ""
	I0806 08:38:31.499209   79927 logs.go:276] 0 containers: []
	W0806 08:38:31.499220   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:38:31.499228   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:38:31.499284   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:38:31.535030   79927 cri.go:89] found id: ""
	I0806 08:38:31.535060   79927 logs.go:276] 0 containers: []
	W0806 08:38:31.535071   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:38:31.535079   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:38:31.535145   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:38:31.575872   79927 cri.go:89] found id: ""
	I0806 08:38:31.575894   79927 logs.go:276] 0 containers: []
	W0806 08:38:31.575902   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:38:31.575907   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:38:31.575952   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:38:31.614766   79927 cri.go:89] found id: ""
	I0806 08:38:31.614787   79927 logs.go:276] 0 containers: []
	W0806 08:38:31.614797   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:38:31.614804   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:38:31.614860   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:38:31.655929   79927 cri.go:89] found id: ""
	I0806 08:38:31.655954   79927 logs.go:276] 0 containers: []
	W0806 08:38:31.655964   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:38:31.655992   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:38:31.656009   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:38:31.728563   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:38:31.728611   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:38:31.745445   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:38:31.745479   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:38:31.827965   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:38:31.827998   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:38:31.828014   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:38:31.935051   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:38:31.935093   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:38:30.788021   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:32.788768   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:31.179830   79219 api_server.go:253] Checking apiserver healthz at https://192.168.39.190:8443/healthz ...
	I0806 08:38:31.184262   79219 api_server.go:279] https://192.168.39.190:8443/healthz returned 200:
	ok
	I0806 08:38:31.185245   79219 api_server.go:141] control plane version: v1.30.3
	I0806 08:38:31.185267   79219 api_server.go:131] duration metric: took 3.997664057s to wait for apiserver health ...
	I0806 08:38:31.185274   79219 system_pods.go:43] waiting for kube-system pods to appear ...
	I0806 08:38:31.185297   79219 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:38:31.185339   79219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:38:31.222960   79219 cri.go:89] found id: "f7edb6bece3347d88a263663a1d6549165dafdcb67ad723494b937766cba6da6"
	I0806 08:38:31.222986   79219 cri.go:89] found id: ""
	I0806 08:38:31.222993   79219 logs.go:276] 1 containers: [f7edb6bece3347d88a263663a1d6549165dafdcb67ad723494b937766cba6da6]
	I0806 08:38:31.223042   79219 ssh_runner.go:195] Run: which crictl
	I0806 08:38:31.227448   79219 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:38:31.227514   79219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:38:31.265326   79219 cri.go:89] found id: "df2efdb62a4af10f89a1b37599d4258ac3b72660353795f8c2141f3ef70c1f65"
	I0806 08:38:31.265349   79219 cri.go:89] found id: ""
	I0806 08:38:31.265356   79219 logs.go:276] 1 containers: [df2efdb62a4af10f89a1b37599d4258ac3b72660353795f8c2141f3ef70c1f65]
	I0806 08:38:31.265402   79219 ssh_runner.go:195] Run: which crictl
	I0806 08:38:31.269672   79219 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:38:31.269732   79219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:38:31.310202   79219 cri.go:89] found id: "9e1c91bc3a920b6f7db4477b8219f13fa092c68ff53abe2390325a68dad253e5"
	I0806 08:38:31.310228   79219 cri.go:89] found id: ""
	I0806 08:38:31.310237   79219 logs.go:276] 1 containers: [9e1c91bc3a920b6f7db4477b8219f13fa092c68ff53abe2390325a68dad253e5]
	I0806 08:38:31.310283   79219 ssh_runner.go:195] Run: which crictl
	I0806 08:38:31.314486   79219 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:38:31.314540   79219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:38:31.352612   79219 cri.go:89] found id: "a25c5660a3a2b7226bf308b44307a3912efa7a9a73e79ec63f83133b80a44683"
	I0806 08:38:31.352637   79219 cri.go:89] found id: ""
	I0806 08:38:31.352647   79219 logs.go:276] 1 containers: [a25c5660a3a2b7226bf308b44307a3912efa7a9a73e79ec63f83133b80a44683]
	I0806 08:38:31.352707   79219 ssh_runner.go:195] Run: which crictl
	I0806 08:38:31.358230   79219 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:38:31.358300   79219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:38:31.402435   79219 cri.go:89] found id: "612fc6a8f2c1208097a6d6287c5fecb20026608a278af5d0d4efc9662e065eb9"
	I0806 08:38:31.402462   79219 cri.go:89] found id: ""
	I0806 08:38:31.402472   79219 logs.go:276] 1 containers: [612fc6a8f2c1208097a6d6287c5fecb20026608a278af5d0d4efc9662e065eb9]
	I0806 08:38:31.402530   79219 ssh_runner.go:195] Run: which crictl
	I0806 08:38:31.407723   79219 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:38:31.407793   79219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:38:31.455484   79219 cri.go:89] found id: "d8651a7ed6615e826b950eab003841c2ec872b48160f207e77dd32a684125a23"
	I0806 08:38:31.455511   79219 cri.go:89] found id: ""
	I0806 08:38:31.455520   79219 logs.go:276] 1 containers: [d8651a7ed6615e826b950eab003841c2ec872b48160f207e77dd32a684125a23]
	I0806 08:38:31.455582   79219 ssh_runner.go:195] Run: which crictl
	I0806 08:38:31.461155   79219 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:38:31.461231   79219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:38:31.501317   79219 cri.go:89] found id: ""
	I0806 08:38:31.501342   79219 logs.go:276] 0 containers: []
	W0806 08:38:31.501352   79219 logs.go:278] No container was found matching "kindnet"
	I0806 08:38:31.501359   79219 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0806 08:38:31.501418   79219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0806 08:38:31.552712   79219 cri.go:89] found id: "367eb6487187a37acafb1fa74364a2d9ec5d9241888ed2e871fb14f2af161975"
	I0806 08:38:31.552742   79219 cri.go:89] found id: "78abe2efa92d2d6c9171ed65bd4d941693be3207554f9318a7cefb91544eeb20"
	I0806 08:38:31.552748   79219 cri.go:89] found id: ""
	I0806 08:38:31.552758   79219 logs.go:276] 2 containers: [367eb6487187a37acafb1fa74364a2d9ec5d9241888ed2e871fb14f2af161975 78abe2efa92d2d6c9171ed65bd4d941693be3207554f9318a7cefb91544eeb20]
	I0806 08:38:31.552820   79219 ssh_runner.go:195] Run: which crictl
	I0806 08:38:31.561505   79219 ssh_runner.go:195] Run: which crictl
	I0806 08:38:31.566613   79219 logs.go:123] Gathering logs for storage-provisioner [367eb6487187a37acafb1fa74364a2d9ec5d9241888ed2e871fb14f2af161975] ...
	I0806 08:38:31.566640   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 367eb6487187a37acafb1fa74364a2d9ec5d9241888ed2e871fb14f2af161975"
	I0806 08:38:31.613033   79219 logs.go:123] Gathering logs for storage-provisioner [78abe2efa92d2d6c9171ed65bd4d941693be3207554f9318a7cefb91544eeb20] ...
	I0806 08:38:31.613073   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 78abe2efa92d2d6c9171ed65bd4d941693be3207554f9318a7cefb91544eeb20"
	I0806 08:38:31.655353   79219 logs.go:123] Gathering logs for kube-scheduler [a25c5660a3a2b7226bf308b44307a3912efa7a9a73e79ec63f83133b80a44683] ...
	I0806 08:38:31.655389   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a25c5660a3a2b7226bf308b44307a3912efa7a9a73e79ec63f83133b80a44683"
	I0806 08:38:31.703934   79219 logs.go:123] Gathering logs for kube-controller-manager [d8651a7ed6615e826b950eab003841c2ec872b48160f207e77dd32a684125a23] ...
	I0806 08:38:31.703960   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8651a7ed6615e826b950eab003841c2ec872b48160f207e77dd32a684125a23"
	I0806 08:38:31.768880   79219 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:38:31.768916   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 08:38:31.896045   79219 logs.go:123] Gathering logs for kube-apiserver [f7edb6bece3347d88a263663a1d6549165dafdcb67ad723494b937766cba6da6] ...
	I0806 08:38:31.896079   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f7edb6bece3347d88a263663a1d6549165dafdcb67ad723494b937766cba6da6"
	I0806 08:38:31.966646   79219 logs.go:123] Gathering logs for etcd [df2efdb62a4af10f89a1b37599d4258ac3b72660353795f8c2141f3ef70c1f65] ...
	I0806 08:38:31.966682   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 df2efdb62a4af10f89a1b37599d4258ac3b72660353795f8c2141f3ef70c1f65"
	I0806 08:38:32.029471   79219 logs.go:123] Gathering logs for coredns [9e1c91bc3a920b6f7db4477b8219f13fa092c68ff53abe2390325a68dad253e5] ...
	I0806 08:38:32.029509   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e1c91bc3a920b6f7db4477b8219f13fa092c68ff53abe2390325a68dad253e5"
	I0806 08:38:32.071406   79219 logs.go:123] Gathering logs for kube-proxy [612fc6a8f2c1208097a6d6287c5fecb20026608a278af5d0d4efc9662e065eb9] ...
	I0806 08:38:32.071435   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 612fc6a8f2c1208097a6d6287c5fecb20026608a278af5d0d4efc9662e065eb9"
	I0806 08:38:32.122333   79219 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:38:32.122368   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:38:32.478494   79219 logs.go:123] Gathering logs for kubelet ...
	I0806 08:38:32.478540   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:38:32.530649   79219 logs.go:123] Gathering logs for dmesg ...
	I0806 08:38:32.530696   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:38:32.545866   79219 logs.go:123] Gathering logs for container status ...
	I0806 08:38:32.545895   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:38:35.101370   79219 system_pods.go:59] 8 kube-system pods found
	I0806 08:38:35.101397   79219 system_pods.go:61] "coredns-7db6d8ff4d-4xxwm" [eeb47dea-32e9-458f-9ad9-2af6d4e68cc0] Running
	I0806 08:38:35.101402   79219 system_pods.go:61] "etcd-embed-certs-324808" [1e0a82cd-6704-4da3-8992-15c68a7aaf70] Running
	I0806 08:38:35.101405   79219 system_pods.go:61] "kube-apiserver-embed-certs-324808" [a33c54d9-1fc5-4243-8fe9-d535c40908c3] Running
	I0806 08:38:35.101409   79219 system_pods.go:61] "kube-controller-manager-embed-certs-324808" [b1c519b8-9c96-4c87-90f3-cc277cfe7763] Running
	I0806 08:38:35.101412   79219 system_pods.go:61] "kube-proxy-8lbzv" [0c45e6c0-9b7f-4c48-bff5-38d4a5949bec] Running
	I0806 08:38:35.101415   79219 system_pods.go:61] "kube-scheduler-embed-certs-324808" [78c2827f-341a-4441-bf7e-eff947fc3ea8] Running
	I0806 08:38:35.101421   79219 system_pods.go:61] "metrics-server-569cc877fc-8xb9k" [7a5fb326-11a7-48fa-bcde-a22026a1dccb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0806 08:38:35.101424   79219 system_pods.go:61] "storage-provisioner" [15c7d89a-e994-4cca-931c-f646157a15e8] Running
	I0806 08:38:35.101431   79219 system_pods.go:74] duration metric: took 3.916151772s to wait for pod list to return data ...
	I0806 08:38:35.101438   79219 default_sa.go:34] waiting for default service account to be created ...
	I0806 08:38:35.103518   79219 default_sa.go:45] found service account: "default"
	I0806 08:38:35.103536   79219 default_sa.go:55] duration metric: took 2.092582ms for default service account to be created ...
	I0806 08:38:35.103546   79219 system_pods.go:116] waiting for k8s-apps to be running ...
	I0806 08:38:35.108383   79219 system_pods.go:86] 8 kube-system pods found
	I0806 08:38:35.108409   79219 system_pods.go:89] "coredns-7db6d8ff4d-4xxwm" [eeb47dea-32e9-458f-9ad9-2af6d4e68cc0] Running
	I0806 08:38:35.108417   79219 system_pods.go:89] "etcd-embed-certs-324808" [1e0a82cd-6704-4da3-8992-15c68a7aaf70] Running
	I0806 08:38:35.108423   79219 system_pods.go:89] "kube-apiserver-embed-certs-324808" [a33c54d9-1fc5-4243-8fe9-d535c40908c3] Running
	I0806 08:38:35.108427   79219 system_pods.go:89] "kube-controller-manager-embed-certs-324808" [b1c519b8-9c96-4c87-90f3-cc277cfe7763] Running
	I0806 08:38:35.108431   79219 system_pods.go:89] "kube-proxy-8lbzv" [0c45e6c0-9b7f-4c48-bff5-38d4a5949bec] Running
	I0806 08:38:35.108436   79219 system_pods.go:89] "kube-scheduler-embed-certs-324808" [78c2827f-341a-4441-bf7e-eff947fc3ea8] Running
	I0806 08:38:35.108442   79219 system_pods.go:89] "metrics-server-569cc877fc-8xb9k" [7a5fb326-11a7-48fa-bcde-a22026a1dccb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0806 08:38:35.108447   79219 system_pods.go:89] "storage-provisioner" [15c7d89a-e994-4cca-931c-f646157a15e8] Running
	I0806 08:38:35.108453   79219 system_pods.go:126] duration metric: took 4.901954ms to wait for k8s-apps to be running ...
	I0806 08:38:35.108463   79219 system_svc.go:44] waiting for kubelet service to be running ....
	I0806 08:38:35.108505   79219 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0806 08:38:35.126018   79219 system_svc.go:56] duration metric: took 17.549117ms WaitForService to wait for kubelet
	I0806 08:38:35.126045   79219 kubeadm.go:582] duration metric: took 4m23.224415251s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0806 08:38:35.126065   79219 node_conditions.go:102] verifying NodePressure condition ...
	I0806 08:38:35.129214   79219 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0806 08:38:35.129231   79219 node_conditions.go:123] node cpu capacity is 2
	I0806 08:38:35.129241   79219 node_conditions.go:105] duration metric: took 3.172893ms to run NodePressure ...
	I0806 08:38:35.129252   79219 start.go:241] waiting for startup goroutines ...
	I0806 08:38:35.129259   79219 start.go:246] waiting for cluster config update ...
	I0806 08:38:35.129273   79219 start.go:255] writing updated cluster config ...
	I0806 08:38:35.129540   79219 ssh_runner.go:195] Run: rm -f paused
	I0806 08:38:35.178651   79219 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0806 08:38:35.180921   79219 out.go:177] * Done! kubectl is now configured to use "embed-certs-324808" cluster and "default" namespace by default
	I0806 08:38:31.968170   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:34.468911   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:34.478890   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:38:34.493573   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:38:34.493650   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:38:34.529054   79927 cri.go:89] found id: ""
	I0806 08:38:34.529083   79927 logs.go:276] 0 containers: []
	W0806 08:38:34.529094   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:38:34.529101   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:38:34.529171   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:38:34.570214   79927 cri.go:89] found id: ""
	I0806 08:38:34.570245   79927 logs.go:276] 0 containers: []
	W0806 08:38:34.570256   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:38:34.570264   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:38:34.570325   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:38:34.607198   79927 cri.go:89] found id: ""
	I0806 08:38:34.607232   79927 logs.go:276] 0 containers: []
	W0806 08:38:34.607243   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:38:34.607250   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:38:34.607296   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:38:34.646238   79927 cri.go:89] found id: ""
	I0806 08:38:34.646266   79927 logs.go:276] 0 containers: []
	W0806 08:38:34.646278   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:38:34.646287   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:38:34.646354   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:38:34.682137   79927 cri.go:89] found id: ""
	I0806 08:38:34.682161   79927 logs.go:276] 0 containers: []
	W0806 08:38:34.682169   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:38:34.682175   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:38:34.682227   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:38:34.721211   79927 cri.go:89] found id: ""
	I0806 08:38:34.721238   79927 logs.go:276] 0 containers: []
	W0806 08:38:34.721246   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:38:34.721252   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:38:34.721306   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:38:34.759693   79927 cri.go:89] found id: ""
	I0806 08:38:34.759728   79927 logs.go:276] 0 containers: []
	W0806 08:38:34.759737   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:38:34.759743   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:38:34.759793   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:38:34.799224   79927 cri.go:89] found id: ""
	I0806 08:38:34.799246   79927 logs.go:276] 0 containers: []
	W0806 08:38:34.799254   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:38:34.799262   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:38:34.799273   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:38:34.852682   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:38:34.852727   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:38:34.866456   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:38:34.866494   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:38:34.933029   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:38:34.933055   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:38:34.933076   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:38:35.017215   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:38:35.017253   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:38:37.567662   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:38:37.582220   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:38:37.582284   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:38:37.622112   79927 cri.go:89] found id: ""
	I0806 08:38:37.622145   79927 logs.go:276] 0 containers: []
	W0806 08:38:37.622153   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:38:37.622164   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:38:37.622228   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:38:37.657767   79927 cri.go:89] found id: ""
	I0806 08:38:37.657791   79927 logs.go:276] 0 containers: []
	W0806 08:38:37.657798   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:38:37.657804   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:38:37.657849   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:38:37.691442   79927 cri.go:89] found id: ""
	I0806 08:38:37.691471   79927 logs.go:276] 0 containers: []
	W0806 08:38:37.691480   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:38:37.691485   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:38:37.691539   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:38:37.725810   79927 cri.go:89] found id: ""
	I0806 08:38:37.725837   79927 logs.go:276] 0 containers: []
	W0806 08:38:37.725859   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:38:37.725866   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:38:37.725922   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:38:37.764708   79927 cri.go:89] found id: ""
	I0806 08:38:37.764737   79927 logs.go:276] 0 containers: []
	W0806 08:38:37.764748   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:38:37.764755   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:38:37.764818   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:38:37.802915   79927 cri.go:89] found id: ""
	I0806 08:38:37.802939   79927 logs.go:276] 0 containers: []
	W0806 08:38:37.802947   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:38:37.802953   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:38:37.803008   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:38:37.838492   79927 cri.go:89] found id: ""
	I0806 08:38:37.838525   79927 logs.go:276] 0 containers: []
	W0806 08:38:37.838536   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:38:37.838544   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:38:37.838607   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:38:37.873937   79927 cri.go:89] found id: ""
	I0806 08:38:37.873969   79927 logs.go:276] 0 containers: []
	W0806 08:38:37.873979   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:38:37.873990   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:38:37.874005   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:38:37.912169   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:38:37.912193   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:38:37.968269   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:38:37.968305   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:38:37.983414   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:38:37.983440   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:38:38.055031   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:38:38.055049   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:38:38.055062   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:38:35.289081   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:37.789220   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:36.968687   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:39.468051   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:41.468488   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:40.635449   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:38:40.649962   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:38:40.650045   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:38:40.688975   79927 cri.go:89] found id: ""
	I0806 08:38:40.689006   79927 logs.go:276] 0 containers: []
	W0806 08:38:40.689017   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:38:40.689029   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:38:40.689093   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:38:40.726740   79927 cri.go:89] found id: ""
	I0806 08:38:40.726770   79927 logs.go:276] 0 containers: []
	W0806 08:38:40.726780   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:38:40.726788   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:38:40.726886   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:38:40.767402   79927 cri.go:89] found id: ""
	I0806 08:38:40.767432   79927 logs.go:276] 0 containers: []
	W0806 08:38:40.767445   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:38:40.767453   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:38:40.767510   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:38:40.809994   79927 cri.go:89] found id: ""
	I0806 08:38:40.810023   79927 logs.go:276] 0 containers: []
	W0806 08:38:40.810033   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:38:40.810042   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:38:40.810102   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:38:40.845624   79927 cri.go:89] found id: ""
	I0806 08:38:40.845653   79927 logs.go:276] 0 containers: []
	W0806 08:38:40.845662   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:38:40.845668   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:38:40.845715   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:38:40.883830   79927 cri.go:89] found id: ""
	I0806 08:38:40.883856   79927 logs.go:276] 0 containers: []
	W0806 08:38:40.883871   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:38:40.883879   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:38:40.883937   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:38:40.919930   79927 cri.go:89] found id: ""
	I0806 08:38:40.919958   79927 logs.go:276] 0 containers: []
	W0806 08:38:40.919966   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:38:40.919971   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:38:40.920018   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:38:40.954563   79927 cri.go:89] found id: ""
	I0806 08:38:40.954594   79927 logs.go:276] 0 containers: []
	W0806 08:38:40.954605   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:38:40.954616   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:38:40.954629   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:38:41.008780   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:38:41.008813   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:38:41.022292   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:38:41.022321   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:38:41.090241   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:38:41.090273   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:38:41.090294   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:38:41.171739   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:38:41.171779   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:38:43.712410   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:38:43.726676   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:38:43.726747   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:38:40.288192   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:42.289039   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:42.789083   79614 pod_ready.go:81] duration metric: took 4m0.00771712s for pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace to be "Ready" ...
	E0806 08:38:42.789107   79614 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0806 08:38:42.789117   79614 pod_ready.go:38] duration metric: took 4m5.553596655s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0806 08:38:42.789135   79614 api_server.go:52] waiting for apiserver process to appear ...
	I0806 08:38:42.789163   79614 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:38:42.789215   79614 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:38:42.837092   79614 cri.go:89] found id: "49843be03d16addb301ffd26e517770ff29b00f049efb524a2139b8b2153325c"
	I0806 08:38:42.837119   79614 cri.go:89] found id: ""
	I0806 08:38:42.837128   79614 logs.go:276] 1 containers: [49843be03d16addb301ffd26e517770ff29b00f049efb524a2139b8b2153325c]
	I0806 08:38:42.837188   79614 ssh_runner.go:195] Run: which crictl
	I0806 08:38:42.842270   79614 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:38:42.842334   79614 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:38:42.878018   79614 cri.go:89] found id: "b5fa6fe8f1db6b59f47ea637515efb2864e0408ce1df0e6851b3bda0e240f5d3"
	I0806 08:38:42.878044   79614 cri.go:89] found id: ""
	I0806 08:38:42.878053   79614 logs.go:276] 1 containers: [b5fa6fe8f1db6b59f47ea637515efb2864e0408ce1df0e6851b3bda0e240f5d3]
	I0806 08:38:42.878115   79614 ssh_runner.go:195] Run: which crictl
	I0806 08:38:42.882437   79614 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:38:42.882509   79614 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:38:42.925635   79614 cri.go:89] found id: "59f63eeae644f7c8944cdb7b1d05d9e49fe84eb28e9ccbad2d2dd79846d7eb52"
	I0806 08:38:42.925655   79614 cri.go:89] found id: ""
	I0806 08:38:42.925662   79614 logs.go:276] 1 containers: [59f63eeae644f7c8944cdb7b1d05d9e49fe84eb28e9ccbad2d2dd79846d7eb52]
	I0806 08:38:42.925704   79614 ssh_runner.go:195] Run: which crictl
	I0806 08:38:42.930525   79614 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:38:42.930583   79614 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:38:42.972716   79614 cri.go:89] found id: "e596abcad3dc13d27548701f515beebba051e04a9e20f8ce1d018731d880ff1c"
	I0806 08:38:42.972736   79614 cri.go:89] found id: ""
	I0806 08:38:42.972744   79614 logs.go:276] 1 containers: [e596abcad3dc13d27548701f515beebba051e04a9e20f8ce1d018731d880ff1c]
	I0806 08:38:42.972791   79614 ssh_runner.go:195] Run: which crictl
	I0806 08:38:42.977840   79614 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:38:42.977894   79614 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:38:43.017316   79614 cri.go:89] found id: "75990046864b4a4db9f8c02f44de4eefbb0cf45584880b053bce4ca0b677ce4e"
	I0806 08:38:43.017339   79614 cri.go:89] found id: ""
	I0806 08:38:43.017346   79614 logs.go:276] 1 containers: [75990046864b4a4db9f8c02f44de4eefbb0cf45584880b053bce4ca0b677ce4e]
	I0806 08:38:43.017396   79614 ssh_runner.go:195] Run: which crictl
	I0806 08:38:43.022341   79614 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:38:43.022416   79614 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:38:43.067113   79614 cri.go:89] found id: "05bbffa4c989e42bd020ebe6b943459ec1d7b31a6c5e5ba9a37056da15542658"
	I0806 08:38:43.067145   79614 cri.go:89] found id: ""
	I0806 08:38:43.067155   79614 logs.go:276] 1 containers: [05bbffa4c989e42bd020ebe6b943459ec1d7b31a6c5e5ba9a37056da15542658]
	I0806 08:38:43.067211   79614 ssh_runner.go:195] Run: which crictl
	I0806 08:38:43.071674   79614 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:38:43.071748   79614 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:38:43.114123   79614 cri.go:89] found id: ""
	I0806 08:38:43.114146   79614 logs.go:276] 0 containers: []
	W0806 08:38:43.114154   79614 logs.go:278] No container was found matching "kindnet"
	I0806 08:38:43.114159   79614 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0806 08:38:43.114211   79614 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0806 08:38:43.155927   79614 cri.go:89] found id: "e668e4a5fa59e279bc227b6110983cc4660b639d5b1e1291ae60d361f682d82e"
	I0806 08:38:43.155955   79614 cri.go:89] found id: "abd677ac30faba3fef9e9d562601ad0012d7f1b48852e08b8f2684efda6b0f2e"
	I0806 08:38:43.155961   79614 cri.go:89] found id: ""
	I0806 08:38:43.155969   79614 logs.go:276] 2 containers: [e668e4a5fa59e279bc227b6110983cc4660b639d5b1e1291ae60d361f682d82e abd677ac30faba3fef9e9d562601ad0012d7f1b48852e08b8f2684efda6b0f2e]
	I0806 08:38:43.156017   79614 ssh_runner.go:195] Run: which crictl
	I0806 08:38:43.165735   79614 ssh_runner.go:195] Run: which crictl
	I0806 08:38:43.171834   79614 logs.go:123] Gathering logs for kubelet ...
	I0806 08:38:43.171861   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:38:43.232588   79614 logs.go:123] Gathering logs for dmesg ...
	I0806 08:38:43.232628   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:38:43.248033   79614 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:38:43.248080   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 08:38:43.383729   79614 logs.go:123] Gathering logs for etcd [b5fa6fe8f1db6b59f47ea637515efb2864e0408ce1df0e6851b3bda0e240f5d3] ...
	I0806 08:38:43.383760   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b5fa6fe8f1db6b59f47ea637515efb2864e0408ce1df0e6851b3bda0e240f5d3"
	I0806 08:38:43.428082   79614 logs.go:123] Gathering logs for coredns [59f63eeae644f7c8944cdb7b1d05d9e49fe84eb28e9ccbad2d2dd79846d7eb52] ...
	I0806 08:38:43.428116   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 59f63eeae644f7c8944cdb7b1d05d9e49fe84eb28e9ccbad2d2dd79846d7eb52"
	I0806 08:38:43.467054   79614 logs.go:123] Gathering logs for storage-provisioner [e668e4a5fa59e279bc227b6110983cc4660b639d5b1e1291ae60d361f682d82e] ...
	I0806 08:38:43.467081   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e668e4a5fa59e279bc227b6110983cc4660b639d5b1e1291ae60d361f682d82e"
	I0806 08:38:43.502900   79614 logs.go:123] Gathering logs for storage-provisioner [abd677ac30faba3fef9e9d562601ad0012d7f1b48852e08b8f2684efda6b0f2e] ...
	I0806 08:38:43.502928   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 abd677ac30faba3fef9e9d562601ad0012d7f1b48852e08b8f2684efda6b0f2e"
	I0806 08:38:43.539338   79614 logs.go:123] Gathering logs for kube-apiserver [49843be03d16addb301ffd26e517770ff29b00f049efb524a2139b8b2153325c] ...
	I0806 08:38:43.539369   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 49843be03d16addb301ffd26e517770ff29b00f049efb524a2139b8b2153325c"
	I0806 08:38:43.585813   79614 logs.go:123] Gathering logs for kube-scheduler [e596abcad3dc13d27548701f515beebba051e04a9e20f8ce1d018731d880ff1c] ...
	I0806 08:38:43.585855   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e596abcad3dc13d27548701f515beebba051e04a9e20f8ce1d018731d880ff1c"
	I0806 08:38:43.627141   79614 logs.go:123] Gathering logs for kube-proxy [75990046864b4a4db9f8c02f44de4eefbb0cf45584880b053bce4ca0b677ce4e] ...
	I0806 08:38:43.627173   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 75990046864b4a4db9f8c02f44de4eefbb0cf45584880b053bce4ca0b677ce4e"
	I0806 08:38:43.663754   79614 logs.go:123] Gathering logs for kube-controller-manager [05bbffa4c989e42bd020ebe6b943459ec1d7b31a6c5e5ba9a37056da15542658] ...
	I0806 08:38:43.663783   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 05bbffa4c989e42bd020ebe6b943459ec1d7b31a6c5e5ba9a37056da15542658"
	I0806 08:38:43.717594   79614 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:38:43.717631   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:38:43.469134   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:45.967796   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:43.765343   79927 cri.go:89] found id: ""
	I0806 08:38:43.765379   79927 logs.go:276] 0 containers: []
	W0806 08:38:43.765390   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:38:43.765398   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:38:43.765457   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:38:43.803813   79927 cri.go:89] found id: ""
	I0806 08:38:43.803846   79927 logs.go:276] 0 containers: []
	W0806 08:38:43.803859   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:38:43.803868   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:38:43.803933   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:38:43.841307   79927 cri.go:89] found id: ""
	I0806 08:38:43.841339   79927 logs.go:276] 0 containers: []
	W0806 08:38:43.841349   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:38:43.841356   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:38:43.841418   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:38:43.876488   79927 cri.go:89] found id: ""
	I0806 08:38:43.876521   79927 logs.go:276] 0 containers: []
	W0806 08:38:43.876534   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:38:43.876544   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:38:43.876606   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:38:43.910972   79927 cri.go:89] found id: ""
	I0806 08:38:43.911000   79927 logs.go:276] 0 containers: []
	W0806 08:38:43.911010   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:38:43.911017   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:38:43.911077   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:38:43.945541   79927 cri.go:89] found id: ""
	I0806 08:38:43.945573   79927 logs.go:276] 0 containers: []
	W0806 08:38:43.945584   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:38:43.945592   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:38:43.945655   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:38:43.981366   79927 cri.go:89] found id: ""
	I0806 08:38:43.981399   79927 logs.go:276] 0 containers: []
	W0806 08:38:43.981408   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:38:43.981413   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:38:43.981463   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:38:44.015349   79927 cri.go:89] found id: ""
	I0806 08:38:44.015375   79927 logs.go:276] 0 containers: []
	W0806 08:38:44.015384   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:38:44.015394   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:38:44.015408   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:38:44.068089   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:38:44.068129   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:38:44.082496   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:38:44.082524   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:38:44.157765   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:38:44.157788   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:38:44.157805   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:38:44.242842   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:38:44.242885   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:38:46.789280   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:38:46.803490   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:38:46.803562   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:38:46.844164   79927 cri.go:89] found id: ""
	I0806 08:38:46.844192   79927 logs.go:276] 0 containers: []
	W0806 08:38:46.844200   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:38:46.844206   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:38:46.844247   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:38:46.880968   79927 cri.go:89] found id: ""
	I0806 08:38:46.881003   79927 logs.go:276] 0 containers: []
	W0806 08:38:46.881015   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:38:46.881022   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:38:46.881082   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:38:46.919291   79927 cri.go:89] found id: ""
	I0806 08:38:46.919311   79927 logs.go:276] 0 containers: []
	W0806 08:38:46.919319   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:38:46.919324   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:38:46.919369   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:38:46.960591   79927 cri.go:89] found id: ""
	I0806 08:38:46.960616   79927 logs.go:276] 0 containers: []
	W0806 08:38:46.960626   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:38:46.960632   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:38:46.960681   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:38:46.995913   79927 cri.go:89] found id: ""
	I0806 08:38:46.995943   79927 logs.go:276] 0 containers: []
	W0806 08:38:46.995954   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:38:46.995961   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:38:46.996045   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:38:47.035185   79927 cri.go:89] found id: ""
	I0806 08:38:47.035213   79927 logs.go:276] 0 containers: []
	W0806 08:38:47.035225   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:38:47.035236   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:38:47.035292   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:38:47.079605   79927 cri.go:89] found id: ""
	I0806 08:38:47.079634   79927 logs.go:276] 0 containers: []
	W0806 08:38:47.079646   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:38:47.079653   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:38:47.079701   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:38:47.118819   79927 cri.go:89] found id: ""
	I0806 08:38:47.118849   79927 logs.go:276] 0 containers: []
	W0806 08:38:47.118861   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:38:47.118873   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:38:47.118888   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:38:47.181357   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:38:47.181394   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:38:47.197676   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:38:47.197707   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:38:47.270304   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:38:47.270331   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:38:47.270348   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:38:47.354145   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:38:47.354176   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:38:44.291978   79614 logs.go:123] Gathering logs for container status ...
	I0806 08:38:44.292026   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:38:46.843357   79614 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:38:46.860793   79614 api_server.go:72] duration metric: took 4m14.882891742s to wait for apiserver process to appear ...
	I0806 08:38:46.860820   79614 api_server.go:88] waiting for apiserver healthz status ...
	I0806 08:38:46.860858   79614 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:38:46.860923   79614 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:38:46.909044   79614 cri.go:89] found id: "49843be03d16addb301ffd26e517770ff29b00f049efb524a2139b8b2153325c"
	I0806 08:38:46.909068   79614 cri.go:89] found id: ""
	I0806 08:38:46.909076   79614 logs.go:276] 1 containers: [49843be03d16addb301ffd26e517770ff29b00f049efb524a2139b8b2153325c]
	I0806 08:38:46.909134   79614 ssh_runner.go:195] Run: which crictl
	I0806 08:38:46.913443   79614 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:38:46.913522   79614 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:38:46.955246   79614 cri.go:89] found id: "b5fa6fe8f1db6b59f47ea637515efb2864e0408ce1df0e6851b3bda0e240f5d3"
	I0806 08:38:46.955273   79614 cri.go:89] found id: ""
	I0806 08:38:46.955281   79614 logs.go:276] 1 containers: [b5fa6fe8f1db6b59f47ea637515efb2864e0408ce1df0e6851b3bda0e240f5d3]
	I0806 08:38:46.955347   79614 ssh_runner.go:195] Run: which crictl
	I0806 08:38:46.959914   79614 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:38:46.959994   79614 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:38:47.001639   79614 cri.go:89] found id: "59f63eeae644f7c8944cdb7b1d05d9e49fe84eb28e9ccbad2d2dd79846d7eb52"
	I0806 08:38:47.001665   79614 cri.go:89] found id: ""
	I0806 08:38:47.001673   79614 logs.go:276] 1 containers: [59f63eeae644f7c8944cdb7b1d05d9e49fe84eb28e9ccbad2d2dd79846d7eb52]
	I0806 08:38:47.001717   79614 ssh_runner.go:195] Run: which crictl
	I0806 08:38:47.006260   79614 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:38:47.006344   79614 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:38:47.049226   79614 cri.go:89] found id: "e596abcad3dc13d27548701f515beebba051e04a9e20f8ce1d018731d880ff1c"
	I0806 08:38:47.049252   79614 cri.go:89] found id: ""
	I0806 08:38:47.049261   79614 logs.go:276] 1 containers: [e596abcad3dc13d27548701f515beebba051e04a9e20f8ce1d018731d880ff1c]
	I0806 08:38:47.049317   79614 ssh_runner.go:195] Run: which crictl
	I0806 08:38:47.053544   79614 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:38:47.053626   79614 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:38:47.095805   79614 cri.go:89] found id: "75990046864b4a4db9f8c02f44de4eefbb0cf45584880b053bce4ca0b677ce4e"
	I0806 08:38:47.095832   79614 cri.go:89] found id: ""
	I0806 08:38:47.095841   79614 logs.go:276] 1 containers: [75990046864b4a4db9f8c02f44de4eefbb0cf45584880b053bce4ca0b677ce4e]
	I0806 08:38:47.095895   79614 ssh_runner.go:195] Run: which crictl
	I0806 08:38:47.100272   79614 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:38:47.100329   79614 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:38:47.146108   79614 cri.go:89] found id: "05bbffa4c989e42bd020ebe6b943459ec1d7b31a6c5e5ba9a37056da15542658"
	I0806 08:38:47.146132   79614 cri.go:89] found id: ""
	I0806 08:38:47.146142   79614 logs.go:276] 1 containers: [05bbffa4c989e42bd020ebe6b943459ec1d7b31a6c5e5ba9a37056da15542658]
	I0806 08:38:47.146200   79614 ssh_runner.go:195] Run: which crictl
	I0806 08:38:47.150808   79614 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:38:47.150863   79614 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:38:47.194122   79614 cri.go:89] found id: ""
	I0806 08:38:47.194157   79614 logs.go:276] 0 containers: []
	W0806 08:38:47.194169   79614 logs.go:278] No container was found matching "kindnet"
	I0806 08:38:47.194177   79614 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0806 08:38:47.194241   79614 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0806 08:38:47.237872   79614 cri.go:89] found id: "e668e4a5fa59e279bc227b6110983cc4660b639d5b1e1291ae60d361f682d82e"
	I0806 08:38:47.237906   79614 cri.go:89] found id: "abd677ac30faba3fef9e9d562601ad0012d7f1b48852e08b8f2684efda6b0f2e"
	I0806 08:38:47.237915   79614 cri.go:89] found id: ""
	I0806 08:38:47.237925   79614 logs.go:276] 2 containers: [e668e4a5fa59e279bc227b6110983cc4660b639d5b1e1291ae60d361f682d82e abd677ac30faba3fef9e9d562601ad0012d7f1b48852e08b8f2684efda6b0f2e]
	I0806 08:38:47.237988   79614 ssh_runner.go:195] Run: which crictl
	I0806 08:38:47.242528   79614 ssh_runner.go:195] Run: which crictl
	I0806 08:38:47.247317   79614 logs.go:123] Gathering logs for kube-proxy [75990046864b4a4db9f8c02f44de4eefbb0cf45584880b053bce4ca0b677ce4e] ...
	I0806 08:38:47.247341   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 75990046864b4a4db9f8c02f44de4eefbb0cf45584880b053bce4ca0b677ce4e"
	I0806 08:38:47.289612   79614 logs.go:123] Gathering logs for kube-controller-manager [05bbffa4c989e42bd020ebe6b943459ec1d7b31a6c5e5ba9a37056da15542658] ...
	I0806 08:38:47.289644   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 05bbffa4c989e42bd020ebe6b943459ec1d7b31a6c5e5ba9a37056da15542658"
	I0806 08:38:47.345619   79614 logs.go:123] Gathering logs for storage-provisioner [e668e4a5fa59e279bc227b6110983cc4660b639d5b1e1291ae60d361f682d82e] ...
	I0806 08:38:47.345653   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e668e4a5fa59e279bc227b6110983cc4660b639d5b1e1291ae60d361f682d82e"
	I0806 08:38:47.401281   79614 logs.go:123] Gathering logs for storage-provisioner [abd677ac30faba3fef9e9d562601ad0012d7f1b48852e08b8f2684efda6b0f2e] ...
	I0806 08:38:47.401306   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 abd677ac30faba3fef9e9d562601ad0012d7f1b48852e08b8f2684efda6b0f2e"
	I0806 08:38:47.438335   79614 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:38:47.438364   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 08:38:47.537334   79614 logs.go:123] Gathering logs for etcd [b5fa6fe8f1db6b59f47ea637515efb2864e0408ce1df0e6851b3bda0e240f5d3] ...
	I0806 08:38:47.537360   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b5fa6fe8f1db6b59f47ea637515efb2864e0408ce1df0e6851b3bda0e240f5d3"
	I0806 08:38:47.592101   79614 logs.go:123] Gathering logs for coredns [59f63eeae644f7c8944cdb7b1d05d9e49fe84eb28e9ccbad2d2dd79846d7eb52] ...
	I0806 08:38:47.592132   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 59f63eeae644f7c8944cdb7b1d05d9e49fe84eb28e9ccbad2d2dd79846d7eb52"
	I0806 08:38:47.626714   79614 logs.go:123] Gathering logs for kube-scheduler [e596abcad3dc13d27548701f515beebba051e04a9e20f8ce1d018731d880ff1c] ...
	I0806 08:38:47.626738   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e596abcad3dc13d27548701f515beebba051e04a9e20f8ce1d018731d880ff1c"
	I0806 08:38:47.663232   79614 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:38:47.663262   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:38:48.133308   79614 logs.go:123] Gathering logs for kubelet ...
	I0806 08:38:48.133344   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:38:48.189669   79614 logs.go:123] Gathering logs for dmesg ...
	I0806 08:38:48.189709   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:38:48.208530   79614 logs.go:123] Gathering logs for kube-apiserver [49843be03d16addb301ffd26e517770ff29b00f049efb524a2139b8b2153325c] ...
	I0806 08:38:48.208557   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 49843be03d16addb301ffd26e517770ff29b00f049efb524a2139b8b2153325c"
	I0806 08:38:48.264499   79614 logs.go:123] Gathering logs for container status ...
	I0806 08:38:48.264535   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:38:49.901481   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:38:49.914862   79927 kubeadm.go:597] duration metric: took 4m3.80396481s to restartPrimaryControlPlane
	W0806 08:38:49.914926   79927 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0806 08:38:49.914952   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0806 08:38:50.394829   79927 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0806 08:38:50.409425   79927 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0806 08:38:50.419349   79927 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0806 08:38:50.429688   79927 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0806 08:38:50.429711   79927 kubeadm.go:157] found existing configuration files:
	
	I0806 08:38:50.429756   79927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0806 08:38:50.440055   79927 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0806 08:38:50.440126   79927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0806 08:38:50.450734   79927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0806 08:38:50.461860   79927 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0806 08:38:50.461943   79927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0806 08:38:50.473324   79927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0806 08:38:50.483381   79927 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0806 08:38:50.483440   79927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0806 08:38:50.493247   79927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0806 08:38:50.502800   79927 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0806 08:38:50.502861   79927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0806 08:38:50.512897   79927 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0806 08:38:50.585658   79927 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0806 08:38:50.585795   79927 kubeadm.go:310] [preflight] Running pre-flight checks
	I0806 08:38:50.740936   79927 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0806 08:38:50.741079   79927 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0806 08:38:50.741200   79927 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0806 08:38:50.938086   79927 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0806 08:38:48.468228   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:50.968702   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:50.940205   79927 out.go:204]   - Generating certificates and keys ...
	I0806 08:38:50.940315   79927 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0806 08:38:50.940428   79927 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0806 08:38:50.940543   79927 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0806 08:38:50.940621   79927 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0806 08:38:50.940723   79927 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0806 08:38:50.940800   79927 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0806 08:38:50.940881   79927 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0806 08:38:50.941150   79927 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0806 08:38:50.941954   79927 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0806 08:38:50.942765   79927 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0806 08:38:50.942910   79927 kubeadm.go:310] [certs] Using the existing "sa" key
	I0806 08:38:50.942993   79927 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0806 08:38:51.039558   79927 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0806 08:38:51.265243   79927 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0806 08:38:51.348501   79927 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0806 08:38:51.509558   79927 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0806 08:38:51.530725   79927 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0806 08:38:51.534108   79927 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0806 08:38:51.534297   79927 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0806 08:38:51.680149   79927 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0806 08:38:51.682195   79927 out.go:204]   - Booting up control plane ...
	I0806 08:38:51.682331   79927 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0806 08:38:51.693736   79927 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0806 08:38:51.695315   79927 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0806 08:38:51.697107   79927 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0806 08:38:51.700906   79927 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0806 08:38:50.822580   79614 api_server.go:253] Checking apiserver healthz at https://192.168.50.235:8444/healthz ...
	I0806 08:38:50.827121   79614 api_server.go:279] https://192.168.50.235:8444/healthz returned 200:
	ok
	I0806 08:38:50.828327   79614 api_server.go:141] control plane version: v1.30.3
	I0806 08:38:50.828355   79614 api_server.go:131] duration metric: took 3.967524972s to wait for apiserver health ...
	I0806 08:38:50.828381   79614 system_pods.go:43] waiting for kube-system pods to appear ...
	I0806 08:38:50.828406   79614 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:38:50.828460   79614 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:38:50.875518   79614 cri.go:89] found id: "49843be03d16addb301ffd26e517770ff29b00f049efb524a2139b8b2153325c"
	I0806 08:38:50.875545   79614 cri.go:89] found id: ""
	I0806 08:38:50.875555   79614 logs.go:276] 1 containers: [49843be03d16addb301ffd26e517770ff29b00f049efb524a2139b8b2153325c]
	I0806 08:38:50.875611   79614 ssh_runner.go:195] Run: which crictl
	I0806 08:38:50.881260   79614 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:38:50.881321   79614 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:38:50.923252   79614 cri.go:89] found id: "b5fa6fe8f1db6b59f47ea637515efb2864e0408ce1df0e6851b3bda0e240f5d3"
	I0806 08:38:50.923281   79614 cri.go:89] found id: ""
	I0806 08:38:50.923292   79614 logs.go:276] 1 containers: [b5fa6fe8f1db6b59f47ea637515efb2864e0408ce1df0e6851b3bda0e240f5d3]
	I0806 08:38:50.923350   79614 ssh_runner.go:195] Run: which crictl
	I0806 08:38:50.928115   79614 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:38:50.928176   79614 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:38:50.971427   79614 cri.go:89] found id: "59f63eeae644f7c8944cdb7b1d05d9e49fe84eb28e9ccbad2d2dd79846d7eb52"
	I0806 08:38:50.971453   79614 cri.go:89] found id: ""
	I0806 08:38:50.971463   79614 logs.go:276] 1 containers: [59f63eeae644f7c8944cdb7b1d05d9e49fe84eb28e9ccbad2d2dd79846d7eb52]
	I0806 08:38:50.971520   79614 ssh_runner.go:195] Run: which crictl
	I0806 08:38:50.975665   79614 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:38:50.975789   79614 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:38:51.014601   79614 cri.go:89] found id: "e596abcad3dc13d27548701f515beebba051e04a9e20f8ce1d018731d880ff1c"
	I0806 08:38:51.014633   79614 cri.go:89] found id: ""
	I0806 08:38:51.014644   79614 logs.go:276] 1 containers: [e596abcad3dc13d27548701f515beebba051e04a9e20f8ce1d018731d880ff1c]
	I0806 08:38:51.014722   79614 ssh_runner.go:195] Run: which crictl
	I0806 08:38:51.019197   79614 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:38:51.019266   79614 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:38:51.057063   79614 cri.go:89] found id: "75990046864b4a4db9f8c02f44de4eefbb0cf45584880b053bce4ca0b677ce4e"
	I0806 08:38:51.057089   79614 cri.go:89] found id: ""
	I0806 08:38:51.057098   79614 logs.go:276] 1 containers: [75990046864b4a4db9f8c02f44de4eefbb0cf45584880b053bce4ca0b677ce4e]
	I0806 08:38:51.057159   79614 ssh_runner.go:195] Run: which crictl
	I0806 08:38:51.061504   79614 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:38:51.061571   79614 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:38:51.106496   79614 cri.go:89] found id: "05bbffa4c989e42bd020ebe6b943459ec1d7b31a6c5e5ba9a37056da15542658"
	I0806 08:38:51.106523   79614 cri.go:89] found id: ""
	I0806 08:38:51.106532   79614 logs.go:276] 1 containers: [05bbffa4c989e42bd020ebe6b943459ec1d7b31a6c5e5ba9a37056da15542658]
	I0806 08:38:51.106590   79614 ssh_runner.go:195] Run: which crictl
	I0806 08:38:51.111040   79614 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:38:51.111109   79614 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:38:51.156042   79614 cri.go:89] found id: ""
	I0806 08:38:51.156072   79614 logs.go:276] 0 containers: []
	W0806 08:38:51.156083   79614 logs.go:278] No container was found matching "kindnet"
	I0806 08:38:51.156090   79614 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0806 08:38:51.156152   79614 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0806 08:38:51.208712   79614 cri.go:89] found id: "e668e4a5fa59e279bc227b6110983cc4660b639d5b1e1291ae60d361f682d82e"
	I0806 08:38:51.208733   79614 cri.go:89] found id: "abd677ac30faba3fef9e9d562601ad0012d7f1b48852e08b8f2684efda6b0f2e"
	I0806 08:38:51.208737   79614 cri.go:89] found id: ""
	I0806 08:38:51.208744   79614 logs.go:276] 2 containers: [e668e4a5fa59e279bc227b6110983cc4660b639d5b1e1291ae60d361f682d82e abd677ac30faba3fef9e9d562601ad0012d7f1b48852e08b8f2684efda6b0f2e]
	I0806 08:38:51.208796   79614 ssh_runner.go:195] Run: which crictl
	I0806 08:38:51.213508   79614 ssh_runner.go:195] Run: which crictl
	I0806 08:38:51.219082   79614 logs.go:123] Gathering logs for etcd [b5fa6fe8f1db6b59f47ea637515efb2864e0408ce1df0e6851b3bda0e240f5d3] ...
	I0806 08:38:51.219104   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b5fa6fe8f1db6b59f47ea637515efb2864e0408ce1df0e6851b3bda0e240f5d3"
	I0806 08:38:51.266352   79614 logs.go:123] Gathering logs for storage-provisioner [e668e4a5fa59e279bc227b6110983cc4660b639d5b1e1291ae60d361f682d82e] ...
	I0806 08:38:51.266380   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e668e4a5fa59e279bc227b6110983cc4660b639d5b1e1291ae60d361f682d82e"
	I0806 08:38:51.315128   79614 logs.go:123] Gathering logs for storage-provisioner [abd677ac30faba3fef9e9d562601ad0012d7f1b48852e08b8f2684efda6b0f2e] ...
	I0806 08:38:51.315168   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 abd677ac30faba3fef9e9d562601ad0012d7f1b48852e08b8f2684efda6b0f2e"
	I0806 08:38:51.357872   79614 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:38:51.357914   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:38:51.768288   79614 logs.go:123] Gathering logs for container status ...
	I0806 08:38:51.768328   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:38:51.825304   79614 logs.go:123] Gathering logs for kubelet ...
	I0806 08:38:51.825341   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:38:51.888532   79614 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:38:51.888567   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 08:38:51.998271   79614 logs.go:123] Gathering logs for coredns [59f63eeae644f7c8944cdb7b1d05d9e49fe84eb28e9ccbad2d2dd79846d7eb52] ...
	I0806 08:38:51.998307   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 59f63eeae644f7c8944cdb7b1d05d9e49fe84eb28e9ccbad2d2dd79846d7eb52"
	I0806 08:38:52.034549   79614 logs.go:123] Gathering logs for kube-scheduler [e596abcad3dc13d27548701f515beebba051e04a9e20f8ce1d018731d880ff1c] ...
	I0806 08:38:52.034574   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e596abcad3dc13d27548701f515beebba051e04a9e20f8ce1d018731d880ff1c"
	I0806 08:38:52.079289   79614 logs.go:123] Gathering logs for kube-proxy [75990046864b4a4db9f8c02f44de4eefbb0cf45584880b053bce4ca0b677ce4e] ...
	I0806 08:38:52.079317   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 75990046864b4a4db9f8c02f44de4eefbb0cf45584880b053bce4ca0b677ce4e"
	I0806 08:38:52.124134   79614 logs.go:123] Gathering logs for kube-controller-manager [05bbffa4c989e42bd020ebe6b943459ec1d7b31a6c5e5ba9a37056da15542658] ...
	I0806 08:38:52.124163   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 05bbffa4c989e42bd020ebe6b943459ec1d7b31a6c5e5ba9a37056da15542658"
	I0806 08:38:52.191877   79614 logs.go:123] Gathering logs for dmesg ...
	I0806 08:38:52.191914   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:38:52.210712   79614 logs.go:123] Gathering logs for kube-apiserver [49843be03d16addb301ffd26e517770ff29b00f049efb524a2139b8b2153325c] ...
	I0806 08:38:52.210740   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 49843be03d16addb301ffd26e517770ff29b00f049efb524a2139b8b2153325c"
	I0806 08:38:54.770330   79614 system_pods.go:59] 8 kube-system pods found
	I0806 08:38:54.770356   79614 system_pods.go:61] "coredns-7db6d8ff4d-gx8tt" [48f83ba8-a238-4baf-94af-88370fefcb02] Running
	I0806 08:38:54.770361   79614 system_pods.go:61] "etcd-default-k8s-diff-port-990751" [bf342ba4-1e11-40bf-80fd-02b402043ecf] Running
	I0806 08:38:54.770365   79614 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-990751" [11ee42c4-c9e9-41cf-ace2-deac0f30153a] Running
	I0806 08:38:54.770369   79614 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-990751" [f17fca30-0afc-4ed8-a8cc-cb64e69723f3] Running
	I0806 08:38:54.770373   79614 system_pods.go:61] "kube-proxy-lpf88" [866c10f0-2c44-4c03-b460-ff2194bd1ec6] Running
	I0806 08:38:54.770377   79614 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-990751" [2e591ea7-07bd-464f-bf0e-bc24bfcca016] Running
	I0806 08:38:54.770383   79614 system_pods.go:61] "metrics-server-569cc877fc-hnxd4" [f369ac70-49b7-448a-9852-89232fb66da9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0806 08:38:54.770388   79614 system_pods.go:61] "storage-provisioner" [0faff3fc-d34b-418e-972b-95a2e36b577a] Running
	I0806 08:38:54.770395   79614 system_pods.go:74] duration metric: took 3.942006529s to wait for pod list to return data ...
	I0806 08:38:54.770401   79614 default_sa.go:34] waiting for default service account to be created ...
	I0806 08:38:54.773935   79614 default_sa.go:45] found service account: "default"
	I0806 08:38:54.773956   79614 default_sa.go:55] duration metric: took 3.549307ms for default service account to be created ...
	I0806 08:38:54.773963   79614 system_pods.go:116] waiting for k8s-apps to be running ...
	I0806 08:38:54.779748   79614 system_pods.go:86] 8 kube-system pods found
	I0806 08:38:54.779773   79614 system_pods.go:89] "coredns-7db6d8ff4d-gx8tt" [48f83ba8-a238-4baf-94af-88370fefcb02] Running
	I0806 08:38:54.779780   79614 system_pods.go:89] "etcd-default-k8s-diff-port-990751" [bf342ba4-1e11-40bf-80fd-02b402043ecf] Running
	I0806 08:38:54.779787   79614 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-990751" [11ee42c4-c9e9-41cf-ace2-deac0f30153a] Running
	I0806 08:38:54.779794   79614 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-990751" [f17fca30-0afc-4ed8-a8cc-cb64e69723f3] Running
	I0806 08:38:54.779801   79614 system_pods.go:89] "kube-proxy-lpf88" [866c10f0-2c44-4c03-b460-ff2194bd1ec6] Running
	I0806 08:38:54.779807   79614 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-990751" [2e591ea7-07bd-464f-bf0e-bc24bfcca016] Running
	I0806 08:38:54.779818   79614 system_pods.go:89] "metrics-server-569cc877fc-hnxd4" [f369ac70-49b7-448a-9852-89232fb66da9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0806 08:38:54.779825   79614 system_pods.go:89] "storage-provisioner" [0faff3fc-d34b-418e-972b-95a2e36b577a] Running
	I0806 08:38:54.779834   79614 system_pods.go:126] duration metric: took 5.864748ms to wait for k8s-apps to be running ...
	I0806 08:38:54.779842   79614 system_svc.go:44] waiting for kubelet service to be running ....
	I0806 08:38:54.779898   79614 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0806 08:38:54.796325   79614 system_svc.go:56] duration metric: took 16.472895ms WaitForService to wait for kubelet
	I0806 08:38:54.796353   79614 kubeadm.go:582] duration metric: took 4m22.818456744s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0806 08:38:54.796397   79614 node_conditions.go:102] verifying NodePressure condition ...
	I0806 08:38:54.800536   79614 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0806 08:38:54.800564   79614 node_conditions.go:123] node cpu capacity is 2
	I0806 08:38:54.800577   79614 node_conditions.go:105] duration metric: took 4.173499ms to run NodePressure ...
	I0806 08:38:54.800591   79614 start.go:241] waiting for startup goroutines ...
	I0806 08:38:54.800600   79614 start.go:246] waiting for cluster config update ...
	I0806 08:38:54.800613   79614 start.go:255] writing updated cluster config ...
	I0806 08:38:54.800936   79614 ssh_runner.go:195] Run: rm -f paused
	I0806 08:38:54.850703   79614 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0806 08:38:54.852771   79614 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-990751" cluster and "default" namespace by default
	I0806 08:38:53.469241   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:55.968335   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:58.469143   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:39:00.470785   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:39:02.967983   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:39:05.468164   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:39:07.469643   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:39:09.971774   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:39:12.469590   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:39:14.968935   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:39:17.468202   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:39:19.968322   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:39:21.968455   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:39:24.467291   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:39:26.475005   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:39:28.968164   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:39:31.467876   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:39:31.701723   79927 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0806 08:39:31.702058   79927 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0806 08:39:31.702242   79927 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0806 08:39:33.468293   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:39:34.961921   78922 pod_ready.go:81] duration metric: took 4m0.000413299s for pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace to be "Ready" ...
	E0806 08:39:34.961944   78922 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace to be "Ready" (will not retry!)
	I0806 08:39:34.961967   78922 pod_ready.go:38] duration metric: took 4m12.510868918s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0806 08:39:34.962010   78922 kubeadm.go:597] duration metric: took 4m19.878821661s to restartPrimaryControlPlane
	W0806 08:39:34.962072   78922 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0806 08:39:34.962105   78922 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0806 08:39:36.702773   79927 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0806 08:39:36.703067   79927 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0806 08:39:46.703330   79927 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0806 08:39:46.703594   79927 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0806 08:40:01.383913   78922 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.421778806s)
	I0806 08:40:01.383994   78922 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0806 08:40:01.409726   78922 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0806 08:40:01.432284   78922 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0806 08:40:01.445162   78922 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0806 08:40:01.445187   78922 kubeadm.go:157] found existing configuration files:
	
	I0806 08:40:01.445242   78922 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0806 08:40:01.466287   78922 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0806 08:40:01.466344   78922 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0806 08:40:01.481355   78922 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0806 08:40:01.499933   78922 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0806 08:40:01.500003   78922 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0806 08:40:01.518094   78922 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0806 08:40:01.527345   78922 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0806 08:40:01.527408   78922 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0806 08:40:01.539328   78922 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0806 08:40:01.549752   78922 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0806 08:40:01.549863   78922 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0806 08:40:01.561309   78922 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0806 08:40:01.609660   78922 kubeadm.go:310] W0806 08:40:01.587147    2983 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0806 08:40:01.610476   78922 kubeadm.go:310] W0806 08:40:01.588016    2983 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0806 08:40:01.749335   78922 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0806 08:40:06.704889   79927 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0806 08:40:06.705136   79927 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0806 08:40:10.616781   78922 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0-rc.0
	I0806 08:40:10.616868   78922 kubeadm.go:310] [preflight] Running pre-flight checks
	I0806 08:40:10.616966   78922 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0806 08:40:10.617110   78922 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0806 08:40:10.617234   78922 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0806 08:40:10.617332   78922 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0806 08:40:10.618729   78922 out.go:204]   - Generating certificates and keys ...
	I0806 08:40:10.618805   78922 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0806 08:40:10.618886   78922 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0806 08:40:10.618961   78922 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0806 08:40:10.619025   78922 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0806 08:40:10.619127   78922 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0806 08:40:10.619200   78922 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0806 08:40:10.619289   78922 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0806 08:40:10.619373   78922 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0806 08:40:10.619485   78922 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0806 08:40:10.619596   78922 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0806 08:40:10.619644   78922 kubeadm.go:310] [certs] Using the existing "sa" key
	I0806 08:40:10.619721   78922 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0806 08:40:10.619787   78922 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0806 08:40:10.619871   78922 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0806 08:40:10.619916   78922 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0806 08:40:10.619968   78922 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0806 08:40:10.620028   78922 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0806 08:40:10.620097   78922 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0806 08:40:10.620153   78922 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0806 08:40:10.621538   78922 out.go:204]   - Booting up control plane ...
	I0806 08:40:10.621632   78922 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0806 08:40:10.621739   78922 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0806 08:40:10.621802   78922 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0806 08:40:10.621898   78922 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0806 08:40:10.621979   78922 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0806 08:40:10.622011   78922 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0806 08:40:10.622117   78922 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0806 08:40:10.622225   78922 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0806 08:40:10.622291   78922 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.639662ms
	I0806 08:40:10.622351   78922 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0806 08:40:10.622399   78922 kubeadm.go:310] [api-check] The API server is healthy after 5.503393689s
	I0806 08:40:10.622505   78922 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0806 08:40:10.622639   78922 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0806 08:40:10.622692   78922 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0806 08:40:10.622843   78922 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-703340 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0806 08:40:10.622895   78922 kubeadm.go:310] [bootstrap-token] Using token: 5r8726.myu0qo9tvx4sxjd3
	I0806 08:40:10.624254   78922 out.go:204]   - Configuring RBAC rules ...
	I0806 08:40:10.624381   78922 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0806 08:40:10.624463   78922 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0806 08:40:10.624613   78922 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0806 08:40:10.624813   78922 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0806 08:40:10.624990   78922 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0806 08:40:10.625114   78922 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0806 08:40:10.625253   78922 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0806 08:40:10.625312   78922 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0806 08:40:10.625378   78922 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0806 08:40:10.625387   78922 kubeadm.go:310] 
	I0806 08:40:10.625464   78922 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0806 08:40:10.625472   78922 kubeadm.go:310] 
	I0806 08:40:10.625580   78922 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0806 08:40:10.625595   78922 kubeadm.go:310] 
	I0806 08:40:10.625615   78922 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0806 08:40:10.625668   78922 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0806 08:40:10.625708   78922 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0806 08:40:10.625712   78922 kubeadm.go:310] 
	I0806 08:40:10.625822   78922 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0806 08:40:10.625846   78922 kubeadm.go:310] 
	I0806 08:40:10.625891   78922 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0806 08:40:10.625898   78922 kubeadm.go:310] 
	I0806 08:40:10.625940   78922 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0806 08:40:10.626007   78922 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0806 08:40:10.626068   78922 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0806 08:40:10.626074   78922 kubeadm.go:310] 
	I0806 08:40:10.626173   78922 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0806 08:40:10.626272   78922 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0806 08:40:10.626282   78922 kubeadm.go:310] 
	I0806 08:40:10.626381   78922 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 5r8726.myu0qo9tvx4sxjd3 \
	I0806 08:40:10.626499   78922 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d36e5c1dfe989c7d561c809d7c06e0fc13926ac8f6d664cc5d6865ea5f79931b \
	I0806 08:40:10.626529   78922 kubeadm.go:310] 	--control-plane 
	I0806 08:40:10.626538   78922 kubeadm.go:310] 
	I0806 08:40:10.626639   78922 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0806 08:40:10.626647   78922 kubeadm.go:310] 
	I0806 08:40:10.626745   78922 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 5r8726.myu0qo9tvx4sxjd3 \
	I0806 08:40:10.626893   78922 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d36e5c1dfe989c7d561c809d7c06e0fc13926ac8f6d664cc5d6865ea5f79931b 
	I0806 08:40:10.626920   78922 cni.go:84] Creating CNI manager for ""
	I0806 08:40:10.626929   78922 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0806 08:40:10.628682   78922 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0806 08:40:10.630096   78922 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0806 08:40:10.642492   78922 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
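	The bridge CNI step above only writes a single conflist into /etc/cni/net.d. A quick way to see exactly what landed on the node, phrased in the same style as the harness's own minikube invocations (a manual check sketch, not something the test runs), is:

	  # dump the CNI configs cri-o will load for this profile
	  out/minikube-linux-amd64 -p no-preload-703340 ssh "sudo ls -l /etc/cni/net.d/"
	  out/minikube-linux-amd64 -p no-preload-703340 ssh "sudo cat /etc/cni/net.d/1-k8s.conflist"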
	I0806 08:40:10.667673   78922 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0806 08:40:10.667763   78922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 08:40:10.667833   78922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-703340 minikube.k8s.io/updated_at=2024_08_06T08_40_10_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=e92cb06692f5ea1ba801d10d148e5e92e807f9c8 minikube.k8s.io/name=no-preload-703340 minikube.k8s.io/primary=true
	I0806 08:40:10.705177   78922 ops.go:34] apiserver oom_adj: -16
	I0806 08:40:10.901976   78922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 08:40:11.401990   78922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 08:40:11.902906   78922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 08:40:12.402461   78922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 08:40:12.902909   78922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 08:40:13.402018   78922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 08:40:13.902098   78922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 08:40:14.402869   78922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 08:40:14.902034   78922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 08:40:14.999571   78922 kubeadm.go:1113] duration metric: took 4.331874752s to wait for elevateKubeSystemPrivileges
	I0806 08:40:14.999615   78922 kubeadm.go:394] duration metric: took 4m59.975766448s to StartCluster
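	The block of repeated `kubectl get sa default` runs above is minikube waiting for the "default" ServiceAccount to appear after granting kube-system elevated privileges. As a standalone shell sketch the same wait is simply (a hypothetical equivalent of the Go loop, shown only to make the ~0.5s polling pattern explicit):

	  # poll until the "default" ServiceAccount exists (took ~4.3s in this run)
	  until sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default \
	      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	    sleep 0.5
	  done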
	I0806 08:40:14.999636   78922 settings.go:142] acquiring lock: {Name:mkb0bfbcae3b4205ef43487a9de84cfd8afd2e5e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 08:40:14.999722   78922 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19370-13057/kubeconfig
	I0806 08:40:15.001429   78922 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-13057/kubeconfig: {Name:mk5c066914acf8e36556b29792f75b98076ca34a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 08:40:15.001696   78922 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.126 Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0806 08:40:15.001753   78922 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0806 08:40:15.001864   78922 addons.go:69] Setting storage-provisioner=true in profile "no-preload-703340"
	I0806 08:40:15.001909   78922 addons.go:234] Setting addon storage-provisioner=true in "no-preload-703340"
	W0806 08:40:15.001921   78922 addons.go:243] addon storage-provisioner should already be in state true
	I0806 08:40:15.001919   78922 addons.go:69] Setting default-storageclass=true in profile "no-preload-703340"
	I0806 08:40:15.001945   78922 addons.go:69] Setting metrics-server=true in profile "no-preload-703340"
	I0806 08:40:15.001968   78922 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-703340"
	I0806 08:40:15.001984   78922 addons.go:234] Setting addon metrics-server=true in "no-preload-703340"
	W0806 08:40:15.001997   78922 addons.go:243] addon metrics-server should already be in state true
	I0806 08:40:15.001958   78922 host.go:66] Checking if "no-preload-703340" exists ...
	I0806 08:40:15.002030   78922 host.go:66] Checking if "no-preload-703340" exists ...
	I0806 08:40:15.001918   78922 config.go:182] Loaded profile config "no-preload-703340": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-rc.0
	I0806 08:40:15.002284   78922 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:40:15.002295   78922 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:40:15.002310   78922 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:40:15.002319   78922 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:40:15.002354   78922 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:40:15.002312   78922 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:40:15.003644   78922 out.go:177] * Verifying Kubernetes components...
	I0806 08:40:15.005157   78922 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 08:40:15.018061   78922 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35339
	I0806 08:40:15.018548   78922 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:40:15.018866   78922 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44375
	I0806 08:40:15.019201   78922 main.go:141] libmachine: Using API Version  1
	I0806 08:40:15.019226   78922 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:40:15.019270   78922 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:40:15.019582   78922 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:40:15.019743   78922 main.go:141] libmachine: Using API Version  1
	I0806 08:40:15.019748   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetState
	I0806 08:40:15.019765   78922 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:40:15.020089   78922 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:40:15.020617   78922 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:40:15.020664   78922 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:40:15.020815   78922 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32773
	I0806 08:40:15.021303   78922 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:40:15.021775   78922 main.go:141] libmachine: Using API Version  1
	I0806 08:40:15.021796   78922 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:40:15.022125   78922 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:40:15.022549   78922 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:40:15.022577   78922 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:40:15.023653   78922 addons.go:234] Setting addon default-storageclass=true in "no-preload-703340"
	W0806 08:40:15.023678   78922 addons.go:243] addon default-storageclass should already be in state true
	I0806 08:40:15.023708   78922 host.go:66] Checking if "no-preload-703340" exists ...
	I0806 08:40:15.024066   78922 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:40:15.024120   78922 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:40:15.036138   78922 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44447
	I0806 08:40:15.036586   78922 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:40:15.037132   78922 main.go:141] libmachine: Using API Version  1
	I0806 08:40:15.037153   78922 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:40:15.037609   78922 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:40:15.037921   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetState
	I0806 08:40:15.038640   78922 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42341
	I0806 08:40:15.038777   78922 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34793
	I0806 08:40:15.039003   78922 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:40:15.039098   78922 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:40:15.039473   78922 main.go:141] libmachine: Using API Version  1
	I0806 08:40:15.039489   78922 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:40:15.039607   78922 main.go:141] libmachine: Using API Version  1
	I0806 08:40:15.039623   78922 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:40:15.039820   78922 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:40:15.040041   78922 main.go:141] libmachine: (no-preload-703340) Calling .DriverName
	I0806 08:40:15.040181   78922 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:40:15.040414   78922 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:40:15.040456   78922 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:40:15.040477   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetState
	I0806 08:40:15.042137   78922 main.go:141] libmachine: (no-preload-703340) Calling .DriverName
	I0806 08:40:15.042223   78922 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0806 08:40:15.043522   78922 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0806 08:40:15.043585   78922 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0806 08:40:15.043601   78922 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0806 08:40:15.043619   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHHostname
	I0806 08:40:15.044812   78922 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0806 08:40:15.044829   78922 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0806 08:40:15.044854   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHHostname
	I0806 08:40:15.047376   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:40:15.047978   78922 main.go:141] libmachine: (no-preload-703340) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fa:d3", ip: ""} in network mk-no-preload-703340: {Iface:virbr4 ExpiryTime:2024-08-06 09:34:49 +0000 UTC Type:0 Mac:52:54:00:a1:fa:d3 Iaid: IPaddr:192.168.72.126 Prefix:24 Hostname:no-preload-703340 Clientid:01:52:54:00:a1:fa:d3}
	I0806 08:40:15.048001   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined IP address 192.168.72.126 and MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:40:15.048228   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:40:15.048269   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHPort
	I0806 08:40:15.048485   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHKeyPath
	I0806 08:40:15.048617   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHUsername
	I0806 08:40:15.048667   78922 main.go:141] libmachine: (no-preload-703340) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fa:d3", ip: ""} in network mk-no-preload-703340: {Iface:virbr4 ExpiryTime:2024-08-06 09:34:49 +0000 UTC Type:0 Mac:52:54:00:a1:fa:d3 Iaid: IPaddr:192.168.72.126 Prefix:24 Hostname:no-preload-703340 Clientid:01:52:54:00:a1:fa:d3}
	I0806 08:40:15.048688   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined IP address 192.168.72.126 and MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:40:15.048800   78922 sshutil.go:53] new ssh client: &{IP:192.168.72.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/no-preload-703340/id_rsa Username:docker}
	I0806 08:40:15.049107   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHPort
	I0806 08:40:15.049267   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHKeyPath
	I0806 08:40:15.049403   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHUsername
	I0806 08:40:15.049518   78922 sshutil.go:53] new ssh client: &{IP:192.168.72.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/no-preload-703340/id_rsa Username:docker}
	I0806 08:40:15.059440   78922 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33377
	I0806 08:40:15.059910   78922 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:40:15.060423   78922 main.go:141] libmachine: Using API Version  1
	I0806 08:40:15.060441   78922 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:40:15.060741   78922 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:40:15.060922   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetState
	I0806 08:40:15.062303   78922 main.go:141] libmachine: (no-preload-703340) Calling .DriverName
	I0806 08:40:15.062491   78922 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0806 08:40:15.062503   78922 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0806 08:40:15.062516   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHHostname
	I0806 08:40:15.065588   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:40:15.066008   78922 main.go:141] libmachine: (no-preload-703340) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fa:d3", ip: ""} in network mk-no-preload-703340: {Iface:virbr4 ExpiryTime:2024-08-06 09:34:49 +0000 UTC Type:0 Mac:52:54:00:a1:fa:d3 Iaid: IPaddr:192.168.72.126 Prefix:24 Hostname:no-preload-703340 Clientid:01:52:54:00:a1:fa:d3}
	I0806 08:40:15.066024   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined IP address 192.168.72.126 and MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:40:15.066147   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHPort
	I0806 08:40:15.066302   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHKeyPath
	I0806 08:40:15.066407   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHUsername
	I0806 08:40:15.066500   78922 sshutil.go:53] new ssh client: &{IP:192.168.72.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/no-preload-703340/id_rsa Username:docker}
	I0806 08:40:15.209763   78922 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0806 08:40:15.231626   78922 node_ready.go:35] waiting up to 6m0s for node "no-preload-703340" to be "Ready" ...
	I0806 08:40:15.239924   78922 node_ready.go:49] node "no-preload-703340" has status "Ready":"True"
	I0806 08:40:15.239952   78922 node_ready.go:38] duration metric: took 8.292674ms for node "no-preload-703340" to be "Ready" ...
	I0806 08:40:15.239964   78922 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0806 08:40:15.247043   78922 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6f6b679f8f-9dt4q" in "kube-system" namespace to be "Ready" ...
	I0806 08:40:15.320295   78922 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0806 08:40:15.343811   78922 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0806 08:40:15.343838   78922 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0806 08:40:15.367466   78922 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0806 08:40:15.392170   78922 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0806 08:40:15.392200   78922 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0806 08:40:15.440711   78922 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0806 08:40:15.440741   78922 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0806 08:40:15.483982   78922 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0806 08:40:16.105714   78922 main.go:141] libmachine: Making call to close driver server
	I0806 08:40:16.105739   78922 main.go:141] libmachine: (no-preload-703340) Calling .Close
	I0806 08:40:16.105768   78922 main.go:141] libmachine: Making call to close driver server
	I0806 08:40:16.105789   78922 main.go:141] libmachine: (no-preload-703340) Calling .Close
	I0806 08:40:16.106051   78922 main.go:141] libmachine: (no-preload-703340) DBG | Closing plugin on server side
	I0806 08:40:16.106086   78922 main.go:141] libmachine: (no-preload-703340) DBG | Closing plugin on server side
	I0806 08:40:16.106097   78922 main.go:141] libmachine: Successfully made call to close driver server
	I0806 08:40:16.106113   78922 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 08:40:16.106133   78922 main.go:141] libmachine: Making call to close driver server
	I0806 08:40:16.106144   78922 main.go:141] libmachine: (no-preload-703340) Calling .Close
	I0806 08:40:16.106176   78922 main.go:141] libmachine: Successfully made call to close driver server
	I0806 08:40:16.106254   78922 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 08:40:16.106280   78922 main.go:141] libmachine: Making call to close driver server
	I0806 08:40:16.106287   78922 main.go:141] libmachine: (no-preload-703340) Calling .Close
	I0806 08:40:16.107761   78922 main.go:141] libmachine: Successfully made call to close driver server
	I0806 08:40:16.107779   78922 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 08:40:16.107767   78922 main.go:141] libmachine: (no-preload-703340) DBG | Closing plugin on server side
	I0806 08:40:16.107805   78922 main.go:141] libmachine: Successfully made call to close driver server
	I0806 08:40:16.107820   78922 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 08:40:16.107874   78922 main.go:141] libmachine: (no-preload-703340) DBG | Closing plugin on server side
	I0806 08:40:16.133295   78922 main.go:141] libmachine: Making call to close driver server
	I0806 08:40:16.133318   78922 main.go:141] libmachine: (no-preload-703340) Calling .Close
	I0806 08:40:16.133642   78922 main.go:141] libmachine: Successfully made call to close driver server
	I0806 08:40:16.133660   78922 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 08:40:16.133680   78922 main.go:141] libmachine: (no-preload-703340) DBG | Closing plugin on server side
	I0806 08:40:16.628763   78922 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.144732374s)
	I0806 08:40:16.628864   78922 main.go:141] libmachine: Making call to close driver server
	I0806 08:40:16.628891   78922 main.go:141] libmachine: (no-preload-703340) Calling .Close
	I0806 08:40:16.629348   78922 main.go:141] libmachine: Successfully made call to close driver server
	I0806 08:40:16.629375   78922 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 08:40:16.629385   78922 main.go:141] libmachine: Making call to close driver server
	I0806 08:40:16.629396   78922 main.go:141] libmachine: (no-preload-703340) Calling .Close
	I0806 08:40:16.629406   78922 main.go:141] libmachine: (no-preload-703340) DBG | Closing plugin on server side
	I0806 08:40:16.629673   78922 main.go:141] libmachine: Successfully made call to close driver server
	I0806 08:40:16.629690   78922 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 08:40:16.629702   78922 addons.go:475] Verifying addon metrics-server=true in "no-preload-703340"
	I0806 08:40:16.631683   78922 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0806 08:40:16.632723   78922 addons.go:510] duration metric: took 1.630970284s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
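	With storage-provisioner, default-storageclass and metrics-server applied, a manual spot-check of the metrics-server addon would look like the following hedged sketch; it assumes the kubeconfig context written above is named after the profile, and none of these commands are executed by the test:

	  kubectl --context no-preload-703340 -n kube-system get deploy metrics-server
	  kubectl --context no-preload-703340 get apiservice v1beta1.metrics.k8s.io
	  # only succeeds once the metrics-server pod is Ready and serving metrics
	  kubectl --context no-preload-703340 top nodes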
	I0806 08:40:17.264829   78922 pod_ready.go:102] pod "coredns-6f6b679f8f-9dt4q" in "kube-system" namespace has status "Ready":"False"
	I0806 08:40:19.754652   78922 pod_ready.go:102] pod "coredns-6f6b679f8f-9dt4q" in "kube-system" namespace has status "Ready":"False"
	I0806 08:40:21.753116   78922 pod_ready.go:92] pod "coredns-6f6b679f8f-9dt4q" in "kube-system" namespace has status "Ready":"True"
	I0806 08:40:21.753139   78922 pod_ready.go:81] duration metric: took 6.506065964s for pod "coredns-6f6b679f8f-9dt4q" in "kube-system" namespace to be "Ready" ...
	I0806 08:40:21.753149   78922 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6f6b679f8f-mpfwj" in "kube-system" namespace to be "Ready" ...
	I0806 08:40:21.757521   78922 pod_ready.go:92] pod "coredns-6f6b679f8f-mpfwj" in "kube-system" namespace has status "Ready":"True"
	I0806 08:40:21.757542   78922 pod_ready.go:81] duration metric: took 4.3867ms for pod "coredns-6f6b679f8f-mpfwj" in "kube-system" namespace to be "Ready" ...
	I0806 08:40:21.757553   78922 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-703340" in "kube-system" namespace to be "Ready" ...
	I0806 08:40:21.762161   78922 pod_ready.go:92] pod "etcd-no-preload-703340" in "kube-system" namespace has status "Ready":"True"
	I0806 08:40:21.762179   78922 pod_ready.go:81] duration metric: took 4.619377ms for pod "etcd-no-preload-703340" in "kube-system" namespace to be "Ready" ...
	I0806 08:40:21.762188   78922 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-703340" in "kube-system" namespace to be "Ready" ...
	I0806 08:40:21.766394   78922 pod_ready.go:92] pod "kube-apiserver-no-preload-703340" in "kube-system" namespace has status "Ready":"True"
	I0806 08:40:21.766411   78922 pod_ready.go:81] duration metric: took 4.216869ms for pod "kube-apiserver-no-preload-703340" in "kube-system" namespace to be "Ready" ...
	I0806 08:40:21.766421   78922 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-703340" in "kube-system" namespace to be "Ready" ...
	I0806 08:40:21.770569   78922 pod_ready.go:92] pod "kube-controller-manager-no-preload-703340" in "kube-system" namespace has status "Ready":"True"
	I0806 08:40:21.770592   78922 pod_ready.go:81] duration metric: took 4.160997ms for pod "kube-controller-manager-no-preload-703340" in "kube-system" namespace to be "Ready" ...
	I0806 08:40:21.770600   78922 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zsv9x" in "kube-system" namespace to be "Ready" ...
	I0806 08:40:22.151795   78922 pod_ready.go:92] pod "kube-proxy-zsv9x" in "kube-system" namespace has status "Ready":"True"
	I0806 08:40:22.151819   78922 pod_ready.go:81] duration metric: took 381.212951ms for pod "kube-proxy-zsv9x" in "kube-system" namespace to be "Ready" ...
	I0806 08:40:22.151829   78922 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-703340" in "kube-system" namespace to be "Ready" ...
	I0806 08:40:22.550798   78922 pod_ready.go:92] pod "kube-scheduler-no-preload-703340" in "kube-system" namespace has status "Ready":"True"
	I0806 08:40:22.550821   78922 pod_ready.go:81] duration metric: took 398.985533ms for pod "kube-scheduler-no-preload-703340" in "kube-system" namespace to be "Ready" ...
	I0806 08:40:22.550830   78922 pod_ready.go:38] duration metric: took 7.310852309s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0806 08:40:22.550842   78922 api_server.go:52] waiting for apiserver process to appear ...
	I0806 08:40:22.550888   78922 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:40:22.567410   78922 api_server.go:72] duration metric: took 7.565683505s to wait for apiserver process to appear ...
	I0806 08:40:22.567441   78922 api_server.go:88] waiting for apiserver healthz status ...
	I0806 08:40:22.567464   78922 api_server.go:253] Checking apiserver healthz at https://192.168.72.126:8443/healthz ...
	I0806 08:40:22.572514   78922 api_server.go:279] https://192.168.72.126:8443/healthz returned 200:
	ok
	I0806 08:40:22.573572   78922 api_server.go:141] control plane version: v1.31.0-rc.0
	I0806 08:40:22.573596   78922 api_server.go:131] duration metric: took 6.147703ms to wait for apiserver health ...
	I0806 08:40:22.573606   78922 system_pods.go:43] waiting for kube-system pods to appear ...
	I0806 08:40:22.754910   78922 system_pods.go:59] 9 kube-system pods found
	I0806 08:40:22.754937   78922 system_pods.go:61] "coredns-6f6b679f8f-9dt4q" [a62f9c28-2e06-4072-b459-3a879cba65d6] Running
	I0806 08:40:22.754942   78922 system_pods.go:61] "coredns-6f6b679f8f-mpfwj" [9da40ee8-73c2-4479-a2aa-32a2dce7e21d] Running
	I0806 08:40:22.754945   78922 system_pods.go:61] "etcd-no-preload-703340" [2b89a18a-69f6-429b-8616-b84bf9e666b4] Running
	I0806 08:40:22.754950   78922 system_pods.go:61] "kube-apiserver-no-preload-703340" [8b863a60-03ad-4d18-855a-12d29dc63612] Running
	I0806 08:40:22.754953   78922 system_pods.go:61] "kube-controller-manager-no-preload-703340" [8490ec19-da74-4c39-9a69-17918506ea10] Running
	I0806 08:40:22.754956   78922 system_pods.go:61] "kube-proxy-zsv9x" [2859a7f2-f880-4be4-806e-163eb9caf535] Running
	I0806 08:40:22.754959   78922 system_pods.go:61] "kube-scheduler-no-preload-703340" [e003ccd3-48e7-4e4e-9f05-166003935575] Running
	I0806 08:40:22.754964   78922 system_pods.go:61] "metrics-server-6867b74b74-tqnjm" [4ab2bdd5-bd70-445a-84ac-6a45bdaf06a7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0806 08:40:22.754972   78922 system_pods.go:61] "storage-provisioner" [5497ddbc-8ed5-4b42-80e8-81aa7c1b8fa7] Running
	I0806 08:40:22.754980   78922 system_pods.go:74] duration metric: took 181.367063ms to wait for pod list to return data ...
	I0806 08:40:22.754990   78922 default_sa.go:34] waiting for default service account to be created ...
	I0806 08:40:22.951215   78922 default_sa.go:45] found service account: "default"
	I0806 08:40:22.951242   78922 default_sa.go:55] duration metric: took 196.244443ms for default service account to be created ...
	I0806 08:40:22.951250   78922 system_pods.go:116] waiting for k8s-apps to be running ...
	I0806 08:40:23.154822   78922 system_pods.go:86] 9 kube-system pods found
	I0806 08:40:23.154852   78922 system_pods.go:89] "coredns-6f6b679f8f-9dt4q" [a62f9c28-2e06-4072-b459-3a879cba65d6] Running
	I0806 08:40:23.154858   78922 system_pods.go:89] "coredns-6f6b679f8f-mpfwj" [9da40ee8-73c2-4479-a2aa-32a2dce7e21d] Running
	I0806 08:40:23.154862   78922 system_pods.go:89] "etcd-no-preload-703340" [2b89a18a-69f6-429b-8616-b84bf9e666b4] Running
	I0806 08:40:23.154865   78922 system_pods.go:89] "kube-apiserver-no-preload-703340" [8b863a60-03ad-4d18-855a-12d29dc63612] Running
	I0806 08:40:23.154870   78922 system_pods.go:89] "kube-controller-manager-no-preload-703340" [8490ec19-da74-4c39-9a69-17918506ea10] Running
	I0806 08:40:23.154875   78922 system_pods.go:89] "kube-proxy-zsv9x" [2859a7f2-f880-4be4-806e-163eb9caf535] Running
	I0806 08:40:23.154879   78922 system_pods.go:89] "kube-scheduler-no-preload-703340" [e003ccd3-48e7-4e4e-9f05-166003935575] Running
	I0806 08:40:23.154886   78922 system_pods.go:89] "metrics-server-6867b74b74-tqnjm" [4ab2bdd5-bd70-445a-84ac-6a45bdaf06a7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0806 08:40:23.154890   78922 system_pods.go:89] "storage-provisioner" [5497ddbc-8ed5-4b42-80e8-81aa7c1b8fa7] Running
	I0806 08:40:23.154898   78922 system_pods.go:126] duration metric: took 203.642884ms to wait for k8s-apps to be running ...
	I0806 08:40:23.154905   78922 system_svc.go:44] waiting for kubelet service to be running ....
	I0806 08:40:23.154948   78922 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0806 08:40:23.170577   78922 system_svc.go:56] duration metric: took 15.661926ms WaitForService to wait for kubelet
	I0806 08:40:23.170609   78922 kubeadm.go:582] duration metric: took 8.168885934s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0806 08:40:23.170631   78922 node_conditions.go:102] verifying NodePressure condition ...
	I0806 08:40:23.352727   78922 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0806 08:40:23.352753   78922 node_conditions.go:123] node cpu capacity is 2
	I0806 08:40:23.352763   78922 node_conditions.go:105] duration metric: took 182.12736ms to run NodePressure ...
	I0806 08:40:23.352774   78922 start.go:241] waiting for startup goroutines ...
	I0806 08:40:23.352780   78922 start.go:246] waiting for cluster config update ...
	I0806 08:40:23.352790   78922 start.go:255] writing updated cluster config ...
	I0806 08:40:23.353072   78922 ssh_runner.go:195] Run: rm -f paused
	I0806 08:40:23.401609   78922 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-rc.0 (minor skew: 1)
	I0806 08:40:23.403639   78922 out.go:177] * Done! kubectl is now configured to use "no-preload-703340" cluster and "default" namespace by default
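	With the profile reported as Done, the endpoints recorded earlier in this log can be probed directly. An illustrative sanity check, mirroring the healthz probe minikube already performed rather than any assertion made by the test, would be:

	  # /healthz on a default kubeadm apiserver is readable without credentials
	  curl -sk https://192.168.72.126:8443/healthz
	  kubectl --context no-preload-703340 get nodes -o wide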
	I0806 08:40:46.706828   79927 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0806 08:40:46.707130   79927 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0806 08:40:46.707156   79927 kubeadm.go:310] 
	I0806 08:40:46.707218   79927 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0806 08:40:46.707278   79927 kubeadm.go:310] 		timed out waiting for the condition
	I0806 08:40:46.707304   79927 kubeadm.go:310] 
	I0806 08:40:46.707381   79927 kubeadm.go:310] 	This error is likely caused by:
	I0806 08:40:46.707439   79927 kubeadm.go:310] 		- The kubelet is not running
	I0806 08:40:46.707569   79927 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0806 08:40:46.707580   79927 kubeadm.go:310] 
	I0806 08:40:46.707705   79927 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0806 08:40:46.707756   79927 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0806 08:40:46.707805   79927 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0806 08:40:46.707814   79927 kubeadm.go:310] 
	I0806 08:40:46.707974   79927 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0806 08:40:46.708118   79927 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0806 08:40:46.708130   79927 kubeadm.go:310] 
	I0806 08:40:46.708297   79927 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0806 08:40:46.708448   79927 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0806 08:40:46.708577   79927 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0806 08:40:46.708673   79927 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0806 08:40:46.708684   79927 kubeadm.go:310] 
	I0806 08:40:46.709441   79927 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0806 08:40:46.709593   79927 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0806 08:40:46.709791   79927 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0806 08:40:46.709862   79927 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0806 08:40:46.709928   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0806 08:40:47.169443   79927 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0806 08:40:47.187784   79927 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0806 08:40:47.199239   79927 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0806 08:40:47.199267   79927 kubeadm.go:157] found existing configuration files:
	
	I0806 08:40:47.199325   79927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0806 08:40:47.209294   79927 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0806 08:40:47.209361   79927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0806 08:40:47.219483   79927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0806 08:40:47.229674   79927 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0806 08:40:47.229732   79927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0806 08:40:47.240729   79927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0806 08:40:47.250964   79927 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0806 08:40:47.251018   79927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0806 08:40:47.261322   79927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0806 08:40:47.271411   79927 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0806 08:40:47.271502   79927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
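	The four grep-then-rm pairs above are minikube's stale-config cleanup before retrying init: each kubeconfig fragment is kept only if it already points at the expected control-plane endpoint. Condensed into an equivalent shell sketch (a hypothetical rewrite of the same logic, not the code that produced this log):

	  for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	    sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
	      || sudo rm -f "/etc/kubernetes/$f"
	  done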
	I0806 08:40:47.281986   79927 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0806 08:40:47.514409   79927 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0806 08:42:43.760446   79927 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0806 08:42:43.760586   79927 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0806 08:42:43.762233   79927 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0806 08:42:43.762301   79927 kubeadm.go:310] [preflight] Running pre-flight checks
	I0806 08:42:43.762391   79927 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0806 08:42:43.762504   79927 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0806 08:42:43.762602   79927 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0806 08:42:43.762653   79927 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0806 08:42:43.764435   79927 out.go:204]   - Generating certificates and keys ...
	I0806 08:42:43.764523   79927 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0806 08:42:43.764587   79927 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0806 08:42:43.764684   79927 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0806 08:42:43.764761   79927 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0806 08:42:43.764833   79927 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0806 08:42:43.764905   79927 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0806 08:42:43.765002   79927 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0806 08:42:43.765100   79927 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0806 08:42:43.765202   79927 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0806 08:42:43.765338   79927 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0806 08:42:43.765400   79927 kubeadm.go:310] [certs] Using the existing "sa" key
	I0806 08:42:43.765516   79927 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0806 08:42:43.765601   79927 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0806 08:42:43.765680   79927 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0806 08:42:43.765774   79927 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0806 08:42:43.765854   79927 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0806 08:42:43.766002   79927 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0806 08:42:43.766133   79927 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0806 08:42:43.766194   79927 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0806 08:42:43.766273   79927 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0806 08:42:43.767861   79927 out.go:204]   - Booting up control plane ...
	I0806 08:42:43.767957   79927 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0806 08:42:43.768051   79927 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0806 08:42:43.768136   79927 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0806 08:42:43.768232   79927 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0806 08:42:43.768487   79927 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0806 08:42:43.768559   79927 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0806 08:42:43.768635   79927 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0806 08:42:43.768862   79927 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0806 08:42:43.768945   79927 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0806 08:42:43.769124   79927 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0806 08:42:43.769195   79927 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0806 08:42:43.769379   79927 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0806 08:42:43.769463   79927 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0806 08:42:43.769721   79927 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0806 08:42:43.769823   79927 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0806 08:42:43.770089   79927 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0806 08:42:43.770102   79927 kubeadm.go:310] 
	I0806 08:42:43.770161   79927 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0806 08:42:43.770223   79927 kubeadm.go:310] 		timed out waiting for the condition
	I0806 08:42:43.770234   79927 kubeadm.go:310] 
	I0806 08:42:43.770274   79927 kubeadm.go:310] 	This error is likely caused by:
	I0806 08:42:43.770303   79927 kubeadm.go:310] 		- The kubelet is not running
	I0806 08:42:43.770389   79927 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0806 08:42:43.770399   79927 kubeadm.go:310] 
	I0806 08:42:43.770487   79927 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0806 08:42:43.770519   79927 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0806 08:42:43.770547   79927 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0806 08:42:43.770554   79927 kubeadm.go:310] 
	I0806 08:42:43.770657   79927 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0806 08:42:43.770756   79927 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0806 08:42:43.770769   79927 kubeadm.go:310] 
	I0806 08:42:43.770932   79927 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0806 08:42:43.771079   79927 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0806 08:42:43.771178   79927 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0806 08:42:43.771266   79927 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0806 08:42:43.771315   79927 kubeadm.go:310] 
	I0806 08:42:43.771336   79927 kubeadm.go:394] duration metric: took 7m57.719144545s to StartCluster
	I0806 08:42:43.771381   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:42:43.771451   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:42:43.817981   79927 cri.go:89] found id: ""
	I0806 08:42:43.818010   79927 logs.go:276] 0 containers: []
	W0806 08:42:43.818021   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:42:43.818028   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:42:43.818087   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:42:43.855481   79927 cri.go:89] found id: ""
	I0806 08:42:43.855506   79927 logs.go:276] 0 containers: []
	W0806 08:42:43.855517   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:42:43.855524   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:42:43.855584   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:42:43.893110   79927 cri.go:89] found id: ""
	I0806 08:42:43.893141   79927 logs.go:276] 0 containers: []
	W0806 08:42:43.893152   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:42:43.893172   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:42:43.893255   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:42:43.929785   79927 cri.go:89] found id: ""
	I0806 08:42:43.929844   79927 logs.go:276] 0 containers: []
	W0806 08:42:43.929853   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:42:43.929858   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:42:43.929921   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:42:43.970501   79927 cri.go:89] found id: ""
	I0806 08:42:43.970533   79927 logs.go:276] 0 containers: []
	W0806 08:42:43.970541   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:42:43.970546   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:42:43.970590   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:42:44.007344   79927 cri.go:89] found id: ""
	I0806 08:42:44.007373   79927 logs.go:276] 0 containers: []
	W0806 08:42:44.007382   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:42:44.007388   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:42:44.007434   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:42:44.043724   79927 cri.go:89] found id: ""
	I0806 08:42:44.043747   79927 logs.go:276] 0 containers: []
	W0806 08:42:44.043755   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:42:44.043761   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:42:44.043810   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:42:44.084642   79927 cri.go:89] found id: ""
	I0806 08:42:44.084682   79927 logs.go:276] 0 containers: []
	W0806 08:42:44.084694   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:42:44.084709   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:42:44.084724   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:42:44.157081   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:42:44.157125   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:42:44.175703   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:42:44.175746   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:42:44.275665   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:42:44.275699   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:42:44.275717   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:42:44.385031   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:42:44.385076   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0806 08:42:44.425573   79927 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0806 08:42:44.425616   79927 out.go:239] * 
	W0806 08:42:44.425683   79927 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0806 08:42:44.425713   79927 out.go:239] * 
	W0806 08:42:44.426633   79927 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0806 08:42:44.429961   79927 out.go:177] 
	W0806 08:42:44.431352   79927 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0806 08:42:44.431398   79927 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0806 08:42:44.431414   79927 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0806 08:42:44.432963   79927 out.go:177] 
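	For reference, the short sketch below restates the troubleshooting commands that the kubeadm output above recommends, together with the flag named in the minikube suggestion; it adds nothing beyond what the log already states. "<profile>" and "CONTAINERID" are placeholders for the affected cluster name and a container id, not values taken from this report.
	
	    # Check whether the kubelet is running and why it may have exited
	    sudo systemctl status kubelet
	    sudo journalctl -xeu kubelet
	    # List control-plane containers via CRI-O and inspect a failing one
	    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	    sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID
	    # Flag from the suggestion above, if a cgroup-driver mismatch is suspected
	    minikube start -p <profile> --extra-config=kubelet.cgroup-driver=systemd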
	
	
	==> CRI-O <==
	Aug 06 08:49:25 no-preload-703340 crio[736]: time="2024-08-06 08:49:25.521984913Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722934165521963467,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100728,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=92d26944-62e4-4565-80e8-61e751543974 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 06 08:49:25 no-preload-703340 crio[736]: time="2024-08-06 08:49:25.522649146Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f794aef9-7213-4ddb-8cbe-5b60c8cd24c9 name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 08:49:25 no-preload-703340 crio[736]: time="2024-08-06 08:49:25.522716753Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f794aef9-7213-4ddb-8cbe-5b60c8cd24c9 name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 08:49:25 no-preload-703340 crio[736]: time="2024-08-06 08:49:25.522914415Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bf2aa151d2bbbede89e9f9a6b18d232dc742f0ad4eed4af7b5ea89c55a07e5c3,PodSandboxId:2b37f216b0b7fb7d0e89be845a308953cace561fe743a959fddf0d0b431c9d01,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722933617063651584,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-9dt4q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a62f9c28-2e06-4072-b459-3a879cba65d6,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f6da332003bcdaa2beb8516e28bfe2efc402ea7d4a6a89443466dc4cd736391,PodSandboxId:8f44a08dd23f18c04924921c4899ce5b9fcaeefd99e63c9c2a39ad7eceab3cd4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722933616941294769,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: 5497ddbc-8ed5-4b42-80e8-81aa7c1b8fa7,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:444b1faae9279d26715d6c59922c216fd88e63eae27e052f1fc54d9df1c8e527,PodSandboxId:d0f2cea9718124e2190ee70e5f76291f3c583cffa143980ebad3cbd1c17b0eee,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722933616481159333,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-mpfwj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d
a40ee8-73c2-4479-a2aa-32a2dce7e21d,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5bb6477358a380e3f783c71af8369ac4a7486080f96c787d216d599bd79305e,PodSandboxId:d8468c5168848d0133f92b7c86bce34d6b90c819fb18142728d15df87f81f572,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,State:CONTAINER_RUNNING,CreatedAt:
1722933615374717381,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zsv9x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2859a7f2-f880-4be4-806e-163eb9caf535,},Annotations:map[string]string{io.kubernetes.container.hash: 16faf14d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6d85f228745f04a5dd5a2f2e6364404974369765bf98a1724000807552e8908,PodSandboxId:ea26f246b9ce0ade860b50e92c703bf25bfcd033909ab854efc3ef2d4d554c52,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1722933604266305532,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-703340,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7024dbf08cd3cd6cf045756262139cce,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db34d02027a316f2243ff3c556e335b6a7a27646bdca17424ca44a19186ce1b1,PodSandboxId:b3cb276472800f4d6a9a8ca1ca2ee35e89197a05a4f86b8688b7183ed1e93bb5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,State:CONTAINER_RUNNING,CreatedAt:1722933604308066021,Labels:map[string]string{io.kubernetes.container.nam
e: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-703340,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 901f74e47f1d9dec93c609776ac15986,},Annotations:map[string]string{io.kubernetes.container.hash: 380ae747,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e08881e193ba0b198a7a54b31142c5ab098dd8d790e6497956fed1247f2b42cf,PodSandboxId:3f46aa29d26fafd3f000eacce54ec236bc50b344b499ed9f2c75c305cd64ceca,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,State:CONTAINER_RUNNING,CreatedAt:1722933604236782799,Labels:map[string]string{io.kubernetes.container.name: k
ube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-703340,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 894abcb80f3ffdf09387a41fe5cd0fa5,},Annotations:map[string]string{io.kubernetes.container.hash: e1cacf85,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99d3e8306f574ba7867e4cd33f71afc4878be2a8d207bd0b7386fe63a6aa953a,PodSandboxId:e01e4089952e966eae8624a82c9856c5782be7081af7e47674d10a649c326695,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,State:CONTAINER_RUNNING,CreatedAt:1722933604230051050,Labels:map[string]string{io.kubernetes.container.na
me: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-703340,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1eff7ba90ca6a95ec01f7820a7d1aee0,},Annotations:map[string]string{io.kubernetes.container.hash: f48bc30b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a6840a59bdd44f77956af229841972da859383e685eea8895d93ba43633026e,PodSandboxId:0a16086bf612650c192041fbfc646877dcea2212beb723c7a8ba5da928b0c62e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,State:CONTAINER_EXITED,CreatedAt:1722933317212369718,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-703340,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 901f74e47f1d9dec93c609776ac15986,},Annotations:map[string]string{io.kubernetes.container.hash: 380ae747,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f794aef9-7213-4ddb-8cbe-5b60c8cd24c9 name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 08:49:25 no-preload-703340 crio[736]: time="2024-08-06 08:49:25.559985224Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d205e867-56b4-4341-8116-54611d09c248 name=/runtime.v1.RuntimeService/Version
	Aug 06 08:49:25 no-preload-703340 crio[736]: time="2024-08-06 08:49:25.560078753Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d205e867-56b4-4341-8116-54611d09c248 name=/runtime.v1.RuntimeService/Version
	Aug 06 08:49:25 no-preload-703340 crio[736]: time="2024-08-06 08:49:25.561189986Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=939a7ab1-f7ab-4b96-9238-14117add626e name=/runtime.v1.ImageService/ImageFsInfo
	Aug 06 08:49:25 no-preload-703340 crio[736]: time="2024-08-06 08:49:25.561546003Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722934165561525414,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100728,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=939a7ab1-f7ab-4b96-9238-14117add626e name=/runtime.v1.ImageService/ImageFsInfo
	Aug 06 08:49:25 no-preload-703340 crio[736]: time="2024-08-06 08:49:25.562415919Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=10b9333c-6eda-4b23-8053-4b2c719144c7 name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 08:49:25 no-preload-703340 crio[736]: time="2024-08-06 08:49:25.562481661Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=10b9333c-6eda-4b23-8053-4b2c719144c7 name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 08:49:25 no-preload-703340 crio[736]: time="2024-08-06 08:49:25.562719714Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bf2aa151d2bbbede89e9f9a6b18d232dc742f0ad4eed4af7b5ea89c55a07e5c3,PodSandboxId:2b37f216b0b7fb7d0e89be845a308953cace561fe743a959fddf0d0b431c9d01,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722933617063651584,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-9dt4q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a62f9c28-2e06-4072-b459-3a879cba65d6,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f6da332003bcdaa2beb8516e28bfe2efc402ea7d4a6a89443466dc4cd736391,PodSandboxId:8f44a08dd23f18c04924921c4899ce5b9fcaeefd99e63c9c2a39ad7eceab3cd4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722933616941294769,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: 5497ddbc-8ed5-4b42-80e8-81aa7c1b8fa7,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:444b1faae9279d26715d6c59922c216fd88e63eae27e052f1fc54d9df1c8e527,PodSandboxId:d0f2cea9718124e2190ee70e5f76291f3c583cffa143980ebad3cbd1c17b0eee,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722933616481159333,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-mpfwj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d
a40ee8-73c2-4479-a2aa-32a2dce7e21d,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5bb6477358a380e3f783c71af8369ac4a7486080f96c787d216d599bd79305e,PodSandboxId:d8468c5168848d0133f92b7c86bce34d6b90c819fb18142728d15df87f81f572,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,State:CONTAINER_RUNNING,CreatedAt:
1722933615374717381,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zsv9x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2859a7f2-f880-4be4-806e-163eb9caf535,},Annotations:map[string]string{io.kubernetes.container.hash: 16faf14d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6d85f228745f04a5dd5a2f2e6364404974369765bf98a1724000807552e8908,PodSandboxId:ea26f246b9ce0ade860b50e92c703bf25bfcd033909ab854efc3ef2d4d554c52,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1722933604266305532,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-703340,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7024dbf08cd3cd6cf045756262139cce,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db34d02027a316f2243ff3c556e335b6a7a27646bdca17424ca44a19186ce1b1,PodSandboxId:b3cb276472800f4d6a9a8ca1ca2ee35e89197a05a4f86b8688b7183ed1e93bb5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,State:CONTAINER_RUNNING,CreatedAt:1722933604308066021,Labels:map[string]string{io.kubernetes.container.nam
e: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-703340,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 901f74e47f1d9dec93c609776ac15986,},Annotations:map[string]string{io.kubernetes.container.hash: 380ae747,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e08881e193ba0b198a7a54b31142c5ab098dd8d790e6497956fed1247f2b42cf,PodSandboxId:3f46aa29d26fafd3f000eacce54ec236bc50b344b499ed9f2c75c305cd64ceca,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,State:CONTAINER_RUNNING,CreatedAt:1722933604236782799,Labels:map[string]string{io.kubernetes.container.name: k
ube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-703340,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 894abcb80f3ffdf09387a41fe5cd0fa5,},Annotations:map[string]string{io.kubernetes.container.hash: e1cacf85,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99d3e8306f574ba7867e4cd33f71afc4878be2a8d207bd0b7386fe63a6aa953a,PodSandboxId:e01e4089952e966eae8624a82c9856c5782be7081af7e47674d10a649c326695,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,State:CONTAINER_RUNNING,CreatedAt:1722933604230051050,Labels:map[string]string{io.kubernetes.container.na
me: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-703340,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1eff7ba90ca6a95ec01f7820a7d1aee0,},Annotations:map[string]string{io.kubernetes.container.hash: f48bc30b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a6840a59bdd44f77956af229841972da859383e685eea8895d93ba43633026e,PodSandboxId:0a16086bf612650c192041fbfc646877dcea2212beb723c7a8ba5da928b0c62e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,State:CONTAINER_EXITED,CreatedAt:1722933317212369718,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-703340,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 901f74e47f1d9dec93c609776ac15986,},Annotations:map[string]string{io.kubernetes.container.hash: 380ae747,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=10b9333c-6eda-4b23-8053-4b2c719144c7 name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 08:49:25 no-preload-703340 crio[736]: time="2024-08-06 08:49:25.609069038Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=710d0ca4-4254-46df-8026-c1a0564de126 name=/runtime.v1.RuntimeService/Version
	Aug 06 08:49:25 no-preload-703340 crio[736]: time="2024-08-06 08:49:25.609202894Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=710d0ca4-4254-46df-8026-c1a0564de126 name=/runtime.v1.RuntimeService/Version
	Aug 06 08:49:25 no-preload-703340 crio[736]: time="2024-08-06 08:49:25.610459584Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f1dccd94-8307-49ec-9bb6-7974afe0fc72 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 06 08:49:25 no-preload-703340 crio[736]: time="2024-08-06 08:49:25.611288356Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722934165611163141,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100728,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f1dccd94-8307-49ec-9bb6-7974afe0fc72 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 06 08:49:25 no-preload-703340 crio[736]: time="2024-08-06 08:49:25.612162482Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0da08851-5421-49e6-9914-567be1bf446c name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 08:49:25 no-preload-703340 crio[736]: time="2024-08-06 08:49:25.612261068Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0da08851-5421-49e6-9914-567be1bf446c name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 08:49:25 no-preload-703340 crio[736]: time="2024-08-06 08:49:25.612531300Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bf2aa151d2bbbede89e9f9a6b18d232dc742f0ad4eed4af7b5ea89c55a07e5c3,PodSandboxId:2b37f216b0b7fb7d0e89be845a308953cace561fe743a959fddf0d0b431c9d01,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722933617063651584,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-9dt4q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a62f9c28-2e06-4072-b459-3a879cba65d6,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f6da332003bcdaa2beb8516e28bfe2efc402ea7d4a6a89443466dc4cd736391,PodSandboxId:8f44a08dd23f18c04924921c4899ce5b9fcaeefd99e63c9c2a39ad7eceab3cd4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722933616941294769,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: 5497ddbc-8ed5-4b42-80e8-81aa7c1b8fa7,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:444b1faae9279d26715d6c59922c216fd88e63eae27e052f1fc54d9df1c8e527,PodSandboxId:d0f2cea9718124e2190ee70e5f76291f3c583cffa143980ebad3cbd1c17b0eee,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722933616481159333,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-mpfwj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d
a40ee8-73c2-4479-a2aa-32a2dce7e21d,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5bb6477358a380e3f783c71af8369ac4a7486080f96c787d216d599bd79305e,PodSandboxId:d8468c5168848d0133f92b7c86bce34d6b90c819fb18142728d15df87f81f572,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,State:CONTAINER_RUNNING,CreatedAt:
1722933615374717381,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zsv9x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2859a7f2-f880-4be4-806e-163eb9caf535,},Annotations:map[string]string{io.kubernetes.container.hash: 16faf14d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6d85f228745f04a5dd5a2f2e6364404974369765bf98a1724000807552e8908,PodSandboxId:ea26f246b9ce0ade860b50e92c703bf25bfcd033909ab854efc3ef2d4d554c52,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1722933604266305532,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-703340,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7024dbf08cd3cd6cf045756262139cce,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db34d02027a316f2243ff3c556e335b6a7a27646bdca17424ca44a19186ce1b1,PodSandboxId:b3cb276472800f4d6a9a8ca1ca2ee35e89197a05a4f86b8688b7183ed1e93bb5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,State:CONTAINER_RUNNING,CreatedAt:1722933604308066021,Labels:map[string]string{io.kubernetes.container.nam
e: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-703340,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 901f74e47f1d9dec93c609776ac15986,},Annotations:map[string]string{io.kubernetes.container.hash: 380ae747,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e08881e193ba0b198a7a54b31142c5ab098dd8d790e6497956fed1247f2b42cf,PodSandboxId:3f46aa29d26fafd3f000eacce54ec236bc50b344b499ed9f2c75c305cd64ceca,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,State:CONTAINER_RUNNING,CreatedAt:1722933604236782799,Labels:map[string]string{io.kubernetes.container.name: k
ube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-703340,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 894abcb80f3ffdf09387a41fe5cd0fa5,},Annotations:map[string]string{io.kubernetes.container.hash: e1cacf85,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99d3e8306f574ba7867e4cd33f71afc4878be2a8d207bd0b7386fe63a6aa953a,PodSandboxId:e01e4089952e966eae8624a82c9856c5782be7081af7e47674d10a649c326695,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,State:CONTAINER_RUNNING,CreatedAt:1722933604230051050,Labels:map[string]string{io.kubernetes.container.na
me: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-703340,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1eff7ba90ca6a95ec01f7820a7d1aee0,},Annotations:map[string]string{io.kubernetes.container.hash: f48bc30b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a6840a59bdd44f77956af229841972da859383e685eea8895d93ba43633026e,PodSandboxId:0a16086bf612650c192041fbfc646877dcea2212beb723c7a8ba5da928b0c62e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,State:CONTAINER_EXITED,CreatedAt:1722933317212369718,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-703340,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 901f74e47f1d9dec93c609776ac15986,},Annotations:map[string]string{io.kubernetes.container.hash: 380ae747,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0da08851-5421-49e6-9914-567be1bf446c name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 08:49:25 no-preload-703340 crio[736]: time="2024-08-06 08:49:25.650393363Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6922f658-e8e4-4477-8942-f4af64c9d671 name=/runtime.v1.RuntimeService/Version
	Aug 06 08:49:25 no-preload-703340 crio[736]: time="2024-08-06 08:49:25.650502482Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6922f658-e8e4-4477-8942-f4af64c9d671 name=/runtime.v1.RuntimeService/Version
	Aug 06 08:49:25 no-preload-703340 crio[736]: time="2024-08-06 08:49:25.651862375Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ae1c1d6d-269b-4386-89b8-bc506f3b5104 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 06 08:49:25 no-preload-703340 crio[736]: time="2024-08-06 08:49:25.653243662Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722934165653214752,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100728,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ae1c1d6d-269b-4386-89b8-bc506f3b5104 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 06 08:49:25 no-preload-703340 crio[736]: time="2024-08-06 08:49:25.653970566Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ee2a1e74-c5af-4247-9dec-7d1866a08fad name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 08:49:25 no-preload-703340 crio[736]: time="2024-08-06 08:49:25.654061509Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ee2a1e74-c5af-4247-9dec-7d1866a08fad name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 08:49:25 no-preload-703340 crio[736]: time="2024-08-06 08:49:25.654340193Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bf2aa151d2bbbede89e9f9a6b18d232dc742f0ad4eed4af7b5ea89c55a07e5c3,PodSandboxId:2b37f216b0b7fb7d0e89be845a308953cace561fe743a959fddf0d0b431c9d01,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722933617063651584,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-9dt4q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a62f9c28-2e06-4072-b459-3a879cba65d6,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f6da332003bcdaa2beb8516e28bfe2efc402ea7d4a6a89443466dc4cd736391,PodSandboxId:8f44a08dd23f18c04924921c4899ce5b9fcaeefd99e63c9c2a39ad7eceab3cd4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722933616941294769,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: 5497ddbc-8ed5-4b42-80e8-81aa7c1b8fa7,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:444b1faae9279d26715d6c59922c216fd88e63eae27e052f1fc54d9df1c8e527,PodSandboxId:d0f2cea9718124e2190ee70e5f76291f3c583cffa143980ebad3cbd1c17b0eee,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722933616481159333,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-mpfwj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d
a40ee8-73c2-4479-a2aa-32a2dce7e21d,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5bb6477358a380e3f783c71af8369ac4a7486080f96c787d216d599bd79305e,PodSandboxId:d8468c5168848d0133f92b7c86bce34d6b90c819fb18142728d15df87f81f572,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,State:CONTAINER_RUNNING,CreatedAt:
1722933615374717381,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zsv9x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2859a7f2-f880-4be4-806e-163eb9caf535,},Annotations:map[string]string{io.kubernetes.container.hash: 16faf14d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6d85f228745f04a5dd5a2f2e6364404974369765bf98a1724000807552e8908,PodSandboxId:ea26f246b9ce0ade860b50e92c703bf25bfcd033909ab854efc3ef2d4d554c52,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1722933604266305532,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-703340,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7024dbf08cd3cd6cf045756262139cce,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db34d02027a316f2243ff3c556e335b6a7a27646bdca17424ca44a19186ce1b1,PodSandboxId:b3cb276472800f4d6a9a8ca1ca2ee35e89197a05a4f86b8688b7183ed1e93bb5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,State:CONTAINER_RUNNING,CreatedAt:1722933604308066021,Labels:map[string]string{io.kubernetes.container.nam
e: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-703340,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 901f74e47f1d9dec93c609776ac15986,},Annotations:map[string]string{io.kubernetes.container.hash: 380ae747,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e08881e193ba0b198a7a54b31142c5ab098dd8d790e6497956fed1247f2b42cf,PodSandboxId:3f46aa29d26fafd3f000eacce54ec236bc50b344b499ed9f2c75c305cd64ceca,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,State:CONTAINER_RUNNING,CreatedAt:1722933604236782799,Labels:map[string]string{io.kubernetes.container.name: k
ube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-703340,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 894abcb80f3ffdf09387a41fe5cd0fa5,},Annotations:map[string]string{io.kubernetes.container.hash: e1cacf85,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99d3e8306f574ba7867e4cd33f71afc4878be2a8d207bd0b7386fe63a6aa953a,PodSandboxId:e01e4089952e966eae8624a82c9856c5782be7081af7e47674d10a649c326695,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,State:CONTAINER_RUNNING,CreatedAt:1722933604230051050,Labels:map[string]string{io.kubernetes.container.na
me: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-703340,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1eff7ba90ca6a95ec01f7820a7d1aee0,},Annotations:map[string]string{io.kubernetes.container.hash: f48bc30b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a6840a59bdd44f77956af229841972da859383e685eea8895d93ba43633026e,PodSandboxId:0a16086bf612650c192041fbfc646877dcea2212beb723c7a8ba5da928b0c62e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,State:CONTAINER_EXITED,CreatedAt:1722933317212369718,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-703340,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 901f74e47f1d9dec93c609776ac15986,},Annotations:map[string]string{io.kubernetes.container.hash: 380ae747,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ee2a1e74-c5af-4247-9dec-7d1866a08fad name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	bf2aa151d2bbb       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   2b37f216b0b7f       coredns-6f6b679f8f-9dt4q
	7f6da332003bc       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   8f44a08dd23f1       storage-provisioner
	444b1faae9279       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   d0f2cea971812       coredns-6f6b679f8f-mpfwj
	a5bb6477358a3       41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318   9 minutes ago       Running             kube-proxy                0                   d8468c5168848       kube-proxy-zsv9x
	db34d02027a31       c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0   9 minutes ago       Running             kube-apiserver            2                   b3cb276472800       kube-apiserver-no-preload-703340
	f6d85f228745f       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   9 minutes ago       Running             etcd                      2                   ea26f246b9ce0       etcd-no-preload-703340
	e08881e193ba0       fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c   9 minutes ago       Running             kube-controller-manager   2                   3f46aa29d26fa       kube-controller-manager-no-preload-703340
	99d3e8306f574       0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c   9 minutes ago       Running             kube-scheduler            2                   e01e4089952e9       kube-scheduler-no-preload-703340
	6a6840a59bdd4       c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0   14 minutes ago      Exited              kube-apiserver            1                   0a16086bf6126       kube-apiserver-no-preload-703340
	
	
	==> coredns [444b1faae9279d26715d6c59922c216fd88e63eae27e052f1fc54d9df1c8e527] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [bf2aa151d2bbbede89e9f9a6b18d232dc742f0ad4eed4af7b5ea89c55a07e5c3] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               no-preload-703340
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-703340
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e92cb06692f5ea1ba801d10d148e5e92e807f9c8
	                    minikube.k8s.io/name=no-preload-703340
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_06T08_40_10_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 06 Aug 2024 08:40:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-703340
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 06 Aug 2024 08:49:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 06 Aug 2024 08:45:26 +0000   Tue, 06 Aug 2024 08:40:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 06 Aug 2024 08:45:26 +0000   Tue, 06 Aug 2024 08:40:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 06 Aug 2024 08:45:26 +0000   Tue, 06 Aug 2024 08:40:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 06 Aug 2024 08:45:26 +0000   Tue, 06 Aug 2024 08:40:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.126
	  Hostname:    no-preload-703340
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 e76c8ce62eab460c8d415a9980721def
	  System UUID:                e76c8ce6-2eab-460c-8d41-5a9980721def
	  Boot ID:                    1356c0b6-2011-44d7-a124-f347f2da77e7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0-rc.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6f6b679f8f-9dt4q                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m11s
	  kube-system                 coredns-6f6b679f8f-mpfwj                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m11s
	  kube-system                 etcd-no-preload-703340                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m15s
	  kube-system                 kube-apiserver-no-preload-703340             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m15s
	  kube-system                 kube-controller-manager-no-preload-703340    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m17s
	  kube-system                 kube-proxy-zsv9x                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m11s
	  kube-system                 kube-scheduler-no-preload-703340             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m15s
	  kube-system                 metrics-server-6867b74b74-tqnjm              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m9s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m9s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m10s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  9m22s (x8 over 9m22s)  kubelet          Node no-preload-703340 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m22s (x8 over 9m22s)  kubelet          Node no-preload-703340 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m22s (x7 over 9m22s)  kubelet          Node no-preload-703340 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m22s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 9m16s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m16s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m15s                  kubelet          Node no-preload-703340 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m15s                  kubelet          Node no-preload-703340 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m15s                  kubelet          Node no-preload-703340 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m12s                  node-controller  Node no-preload-703340 event: Registered Node no-preload-703340 in Controller
	
	
	==> dmesg <==
	[  +0.057547] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.043410] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.211948] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.702215] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.620725] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.617040] systemd-fstab-generator[653]: Ignoring "noauto" option for root device
	[  +0.063694] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.062796] systemd-fstab-generator[665]: Ignoring "noauto" option for root device
	[  +0.186700] systemd-fstab-generator[679]: Ignoring "noauto" option for root device
	[  +0.160860] systemd-fstab-generator[691]: Ignoring "noauto" option for root device
	[  +0.287038] systemd-fstab-generator[721]: Ignoring "noauto" option for root device
	[Aug 6 08:35] systemd-fstab-generator[1260]: Ignoring "noauto" option for root device
	[  +0.063898] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.870826] systemd-fstab-generator[1382]: Ignoring "noauto" option for root device
	[  +3.988358] kauditd_printk_skb: 97 callbacks suppressed
	[  +7.201530] kauditd_printk_skb: 88 callbacks suppressed
	[Aug 6 08:40] systemd-fstab-generator[3009]: Ignoring "noauto" option for root device
	[  +0.070693] kauditd_printk_skb: 9 callbacks suppressed
	[  +6.495136] systemd-fstab-generator[3331]: Ignoring "noauto" option for root device
	[  +0.095814] kauditd_printk_skb: 54 callbacks suppressed
	[  +5.332552] systemd-fstab-generator[3446]: Ignoring "noauto" option for root device
	[  +0.096358] kauditd_printk_skb: 12 callbacks suppressed
	[  +6.277628] kauditd_printk_skb: 88 callbacks suppressed
	
	
	==> etcd [f6d85f228745f04a5dd5a2f2e6364404974369765bf98a1724000807552e8908] <==
	{"level":"info","ts":"2024-08-06T08:40:04.663555Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"90d6a9cd2ff29577","local-member-id":"78ee1f817cf53edb","added-peer-id":"78ee1f817cf53edb","added-peer-peer-urls":["https://192.168.72.126:2380"]}
	{"level":"info","ts":"2024-08-06T08:40:04.663148Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.72.126:2380"}
	{"level":"info","ts":"2024-08-06T08:40:04.663728Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.72.126:2380"}
	{"level":"info","ts":"2024-08-06T08:40:04.664855Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"78ee1f817cf53edb","initial-advertise-peer-urls":["https://192.168.72.126:2380"],"listen-peer-urls":["https://192.168.72.126:2380"],"advertise-client-urls":["https://192.168.72.126:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.72.126:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-06T08:40:04.667645Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-06T08:40:05.300630Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"78ee1f817cf53edb is starting a new election at term 1"}
	{"level":"info","ts":"2024-08-06T08:40:05.300792Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"78ee1f817cf53edb became pre-candidate at term 1"}
	{"level":"info","ts":"2024-08-06T08:40:05.300865Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"78ee1f817cf53edb received MsgPreVoteResp from 78ee1f817cf53edb at term 1"}
	{"level":"info","ts":"2024-08-06T08:40:05.300896Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"78ee1f817cf53edb became candidate at term 2"}
	{"level":"info","ts":"2024-08-06T08:40:05.300926Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"78ee1f817cf53edb received MsgVoteResp from 78ee1f817cf53edb at term 2"}
	{"level":"info","ts":"2024-08-06T08:40:05.300957Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"78ee1f817cf53edb became leader at term 2"}
	{"level":"info","ts":"2024-08-06T08:40:05.300983Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 78ee1f817cf53edb elected leader 78ee1f817cf53edb at term 2"}
	{"level":"info","ts":"2024-08-06T08:40:05.305854Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-06T08:40:05.308871Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"78ee1f817cf53edb","local-member-attributes":"{Name:no-preload-703340 ClientURLs:[https://192.168.72.126:2379]}","request-path":"/0/members/78ee1f817cf53edb/attributes","cluster-id":"90d6a9cd2ff29577","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-06T08:40:05.308956Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-06T08:40:05.309497Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-06T08:40:05.311455Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-06T08:40:05.315341Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"90d6a9cd2ff29577","local-member-id":"78ee1f817cf53edb","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-06T08:40:05.317860Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-06T08:40:05.322962Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-06T08:40:05.323493Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-06T08:40:05.318758Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-06T08:40:05.327462Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.126:2379"}
	{"level":"info","ts":"2024-08-06T08:40:05.318817Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-06T08:40:05.368693Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 08:49:26 up 14 min,  0 users,  load average: 0.02, 0.18, 0.18
	Linux no-preload-703340 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [6a6840a59bdd44f77956af229841972da859383e685eea8895d93ba43633026e] <==
	W0806 08:39:57.212757       1 logging.go:55] [core] [Channel #9 SubChannel #10]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 08:39:57.214207       1 logging.go:55] [core] [Channel #49 SubChannel #50]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 08:39:57.215473       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 08:39:57.217895       1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 08:39:57.262702       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 08:39:57.297501       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 08:39:57.306041       1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 08:39:57.326771       1 logging.go:55] [core] [Channel #19 SubChannel #20]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 08:39:57.389782       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 08:39:57.425717       1 logging.go:55] [core] [Channel #55 SubChannel #56]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 08:39:57.434718       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 08:39:57.440272       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 08:39:57.465891       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 08:39:57.466241       1 logging.go:55] [core] [Channel #82 SubChannel #83]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 08:39:57.473340       1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 08:39:57.545736       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 08:39:57.596848       1 logging.go:55] [core] [Channel #97 SubChannel #98]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 08:39:57.604635       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 08:39:57.695061       1 logging.go:55] [core] [Channel #76 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 08:39:57.737879       1 logging.go:55] [core] [Channel #172 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 08:39:57.907355       1 logging.go:55] [core] [Channel #70 SubChannel #71]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 08:39:58.035548       1 logging.go:55] [core] [Channel #181 SubChannel #182]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 08:39:58.043967       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 08:39:58.184470       1 logging.go:55] [core] [Channel #151 SubChannel #152]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 08:39:58.237508       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [db34d02027a316f2243ff3c556e335b6a7a27646bdca17424ca44a19186ce1b1] <==
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0806 08:45:07.972795       1 handler_proxy.go:99] no RequestInfo found in the context
	E0806 08:45:07.972865       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0806 08:45:07.973821       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0806 08:45:07.973906       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0806 08:46:07.974662       1 handler_proxy.go:99] no RequestInfo found in the context
	E0806 08:46:07.974753       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0806 08:46:07.974885       1 handler_proxy.go:99] no RequestInfo found in the context
	E0806 08:46:07.974995       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0806 08:46:07.975953       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0806 08:46:07.977149       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0806 08:48:07.976968       1 handler_proxy.go:99] no RequestInfo found in the context
	E0806 08:48:07.977121       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0806 08:48:07.977350       1 handler_proxy.go:99] no RequestInfo found in the context
	E0806 08:48:07.977458       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0806 08:48:07.978439       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0806 08:48:07.978499       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [e08881e193ba0b198a7a54b31142c5ab098dd8d790e6497956fed1247f2b42cf] <==
	E0806 08:44:13.858805       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0806 08:44:14.333708       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0806 08:44:43.865311       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0806 08:44:44.341744       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0806 08:45:13.873534       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0806 08:45:14.350299       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0806 08:45:26.989927       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-703340"
	E0806 08:45:43.880053       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0806 08:45:44.358225       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0806 08:46:01.942252       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="370.868µs"
	E0806 08:46:13.887765       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0806 08:46:13.942928       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="75.885µs"
	I0806 08:46:14.366766       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0806 08:46:43.894655       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0806 08:46:44.375238       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0806 08:47:13.901212       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0806 08:47:14.385094       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0806 08:47:43.908189       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0806 08:47:44.393845       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0806 08:48:13.916411       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0806 08:48:14.404108       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0806 08:48:43.925236       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0806 08:48:44.412984       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0806 08:49:13.932479       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0806 08:49:14.422252       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [a5bb6477358a380e3f783c71af8369ac4a7486080f96c787d216d599bd79305e] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0806 08:40:15.829827       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0806 08:40:15.849265       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.72.126"]
	E0806 08:40:15.849654       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0806 08:40:15.944391       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0806 08:40:15.944450       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0806 08:40:15.944482       1 server_linux.go:169] "Using iptables Proxier"
	I0806 08:40:15.949856       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0806 08:40:15.950130       1 server.go:483] "Version info" version="v1.31.0-rc.0"
	I0806 08:40:15.950141       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0806 08:40:15.951432       1 config.go:197] "Starting service config controller"
	I0806 08:40:15.951459       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0806 08:40:15.951480       1 config.go:104] "Starting endpoint slice config controller"
	I0806 08:40:15.951483       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0806 08:40:15.952034       1 config.go:326] "Starting node config controller"
	I0806 08:40:15.952040       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0806 08:40:16.053747       1 shared_informer.go:320] Caches are synced for node config
	I0806 08:40:16.053776       1 shared_informer.go:320] Caches are synced for service config
	I0806 08:40:16.053806       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [99d3e8306f574ba7867e4cd33f71afc4878be2a8d207bd0b7386fe63a6aa953a] <==
	W0806 08:40:07.850546       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0806 08:40:07.850652       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0806 08:40:07.864422       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0806 08:40:07.864550       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0806 08:40:07.907798       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0806 08:40:07.908273       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0806 08:40:07.990304       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0806 08:40:07.991144       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0806 08:40:08.067741       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0806 08:40:08.068044       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0806 08:40:08.105250       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0806 08:40:08.105302       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0806 08:40:08.117501       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0806 08:40:08.117631       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0806 08:40:08.227644       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0806 08:40:08.227693       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0806 08:40:08.267765       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0806 08:40:08.268503       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0806 08:40:08.284351       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0806 08:40:08.284543       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0806 08:40:08.336833       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0806 08:40:08.336956       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0806 08:40:08.406718       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0806 08:40:08.407105       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0806 08:40:09.813958       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
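	The forbidden errors above are typically a startup race: the scheduler's informers begin listing resources before its RBAC bindings are visible, and the messages stop once the "Caches are synced" line appears. If comparable errors persisted past startup, one way to check the scheduler's effective permissions would be impersonation with kubectl auth can-i (a diagnostic sketch, not part of this run; the resource names are taken from the log lines above):

	kubectl auth can-i list storageclasses.storage.k8s.io --as=system:kube-scheduler
	kubectl auth can-i list configmaps --as=system:kube-scheduler -n kube-system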
	
	
	==> kubelet <==
	Aug 06 08:48:20 no-preload-703340 kubelet[3338]: E0806 08:48:20.105234    3338 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722934100104343522,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100728,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 06 08:48:20 no-preload-703340 kubelet[3338]: E0806 08:48:20.105607    3338 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722934100104343522,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100728,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 06 08:48:22 no-preload-703340 kubelet[3338]: E0806 08:48:22.924071    3338 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-tqnjm" podUID="4ab2bdd5-bd70-445a-84ac-6a45bdaf06a7"
	Aug 06 08:48:30 no-preload-703340 kubelet[3338]: E0806 08:48:30.109017    3338 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722934110107360820,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100728,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 06 08:48:30 no-preload-703340 kubelet[3338]: E0806 08:48:30.109105    3338 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722934110107360820,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100728,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 06 08:48:36 no-preload-703340 kubelet[3338]: E0806 08:48:36.924213    3338 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-tqnjm" podUID="4ab2bdd5-bd70-445a-84ac-6a45bdaf06a7"
	Aug 06 08:48:40 no-preload-703340 kubelet[3338]: E0806 08:48:40.110284    3338 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722934120109888479,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100728,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 06 08:48:40 no-preload-703340 kubelet[3338]: E0806 08:48:40.110684    3338 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722934120109888479,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100728,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 06 08:48:48 no-preload-703340 kubelet[3338]: E0806 08:48:48.923779    3338 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-tqnjm" podUID="4ab2bdd5-bd70-445a-84ac-6a45bdaf06a7"
	Aug 06 08:48:50 no-preload-703340 kubelet[3338]: E0806 08:48:50.113679    3338 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722934130112741156,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100728,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 06 08:48:50 no-preload-703340 kubelet[3338]: E0806 08:48:50.113948    3338 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722934130112741156,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100728,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 06 08:48:59 no-preload-703340 kubelet[3338]: E0806 08:48:59.924906    3338 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-tqnjm" podUID="4ab2bdd5-bd70-445a-84ac-6a45bdaf06a7"
	Aug 06 08:49:00 no-preload-703340 kubelet[3338]: E0806 08:49:00.115701    3338 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722934140115363896,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100728,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 06 08:49:00 no-preload-703340 kubelet[3338]: E0806 08:49:00.115738    3338 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722934140115363896,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100728,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 06 08:49:09 no-preload-703340 kubelet[3338]: E0806 08:49:09.940773    3338 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 06 08:49:09 no-preload-703340 kubelet[3338]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 06 08:49:09 no-preload-703340 kubelet[3338]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 06 08:49:09 no-preload-703340 kubelet[3338]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 06 08:49:09 no-preload-703340 kubelet[3338]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 06 08:49:10 no-preload-703340 kubelet[3338]: E0806 08:49:10.118127    3338 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722934150117671131,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100728,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 06 08:49:10 no-preload-703340 kubelet[3338]: E0806 08:49:10.118168    3338 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722934150117671131,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100728,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 06 08:49:11 no-preload-703340 kubelet[3338]: E0806 08:49:11.924023    3338 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-tqnjm" podUID="4ab2bdd5-bd70-445a-84ac-6a45bdaf06a7"
	Aug 06 08:49:20 no-preload-703340 kubelet[3338]: E0806 08:49:20.119411    3338 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722934160119163712,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100728,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 06 08:49:20 no-preload-703340 kubelet[3338]: E0806 08:49:20.119997    3338 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722934160119163712,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100728,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 06 08:49:22 no-preload-703340 kubelet[3338]: E0806 08:49:22.924411    3338 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-tqnjm" podUID="4ab2bdd5-bd70-445a-84ac-6a45bdaf06a7"
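	Two issues repeat in the kubelet log above: metrics-server stays in ImagePullBackOff because its container image points at the unreachable fake.domain/registry.k8s.io/echoserver:1.4, and the eviction manager keeps failing to get image-filesystem stats from the runtime. A minimal sketch for confirming the bad image reference on this profile (it assumes the pod belongs to a Deployment named metrics-server, which is not shown in the log):

	kubectl --context no-preload-703340 -n kube-system get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[0].image}'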
	
	
	==> storage-provisioner [7f6da332003bcdaa2beb8516e28bfe2efc402ea7d4a6a89443466dc4cd736391] <==
	I0806 08:40:17.188365       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0806 08:40:17.217614       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0806 08:40:17.217847       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0806 08:40:17.242948       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0806 08:40:17.245439       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-703340_2c16686c-c303-4327-91b0-e0d7bdf0d7e7!
	I0806 08:40:17.245286       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3469128c-ad60-4354-895e-64f1e00e6347", APIVersion:"v1", ResourceVersion:"438", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-703340_2c16686c-c303-4327-91b0-e0d7bdf0d7e7 became leader
	I0806 08:40:17.346662       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-703340_2c16686c-c303-4327-91b0-e0d7bdf0d7e7!
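	The storage-provisioner log shows a clean start: it wins the kube-system/k8s.io-minikube-hostpath leader election and starts the hostpath provisioner controller. If the election ever needed checking, the Endpoints object named in the event above could be inspected directly; the current holder is usually recorded in its control-plane.alpha.kubernetes.io/leader annotation (an assumption about this provisioner's lock type, sketched here rather than taken from the report):

	kubectl --context no-preload-703340 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml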
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-703340 -n no-preload-703340
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-703340 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-tqnjm
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-703340 describe pod metrics-server-6867b74b74-tqnjm
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-703340 describe pod metrics-server-6867b74b74-tqnjm: exit status 1 (64.800672ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-tqnjm" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-703340 describe pod metrics-server-6867b74b74-tqnjm: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.45s)
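The post-mortem describe most likely failed with NotFound because the helper invoked kubectl describe without a namespace, so it looked in default while the metrics-server pod listed as non-running above lives in kube-system. A namespaced describe would be expected to find it (a hypothetical follow-up command, not executed in this report):

	kubectl --context no-preload-703340 -n kube-system describe pod metrics-server-6867b74b74-tqnjm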

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.66s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.158:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.158:8443: connect: connection refused
E0806 08:43:23.040569   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/calico-405163/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.158:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.158:8443: connect: connection refused
E0806 08:43:45.265801   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/custom-flannel-405163/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.158:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.158:8443: connect: connection refused
E0806 08:44:13.203494   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/addons-505457/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.158:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.158:8443: connect: connection refused
E0806 08:44:26.450160   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/enable-default-cni-405163/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.158:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.158:8443: connect: connection refused
E0806 08:44:46.086231   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/calico-405163/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.158:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.158:8443: connect: connection refused
[previous message repeated 15 more times]
E0806 08:45:01.379628   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/flannel-405163/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.158:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.158:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.158:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.158:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.158:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.158:8443: connect: connection refused
E0806 08:45:04.748467   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/bridge-405163/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.158:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.158:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.158:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.158:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.158:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.158:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.158:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.158:8443: connect: connection refused
E0806 08:45:08.313053   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/custom-flannel-405163/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.158:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.158:8443: connect: connection refused
[previous message repeated 11 more times]
E0806 08:45:20.597860   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/auto-405163/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.158:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.158:8443: connect: connection refused
[previous message repeated 28 more times]
E0806 08:45:49.495455   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/enable-default-cni-405163/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.158:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.158:8443: connect: connection refused
[previous message repeated 7 more times]
E0806 08:45:57.500666   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/kindnet-405163/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.158:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.158:8443: connect: connection refused
[previous message repeated 23 more times]
E0806 08:46:21.824492   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/functional-811691/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.158:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.158:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.158:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.158:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.158:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.158:8443: connect: connection refused
E0806 08:46:24.425510   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/flannel-405163/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.158:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.158:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.158:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.158:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.158:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.158:8443: connect: connection refused
E0806 08:46:27.793123   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/bridge-405163/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.158:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.158:8443: connect: connection refused
[previous message repeated 62 more times]
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.158:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.158:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.158:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.158:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.158:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.158:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.158:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.158:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.158:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.158:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.158:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.158:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.158:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.158:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.158:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.158:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.158:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.158:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.158:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.158:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.158:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.158:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.158:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.158:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.158:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.158:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.158:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.158:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.158:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.158:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.158:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.158:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.158:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.158:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.158:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.158:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.158:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.158:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.158:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.158:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.158:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.158:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.158:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.158:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.158:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.158:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.158:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.158:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.158:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.158:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.158:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.158:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.158:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.158:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.158:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.158:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.158:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.158:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.158:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.158:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.158:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.158:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.158:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.158:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.158:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.158:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.158:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.158:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.158:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.158:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.158:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.158:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.158:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.158:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.158:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.158:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.158:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.158:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.158:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.158:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.158:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.158:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.158:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.158:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.158:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.158:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.158:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.158:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.158:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.158:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.158:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.158:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.158:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.158:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.158:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.158:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.158:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.158:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.158:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.158:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.158:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.158:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.158:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.158:8443: connect: connection refused
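The warning above is the test helper repeatedly listing pods in the kubernetes-dashboard namespace and getting connection refused from the apiserver at 192.168.61.158:8443. For reference, a minimal client-go sketch of an equivalent poll is shown below; the kubeconfig path, intervals, and timeout are illustrative assumptions, not the helper's actual code.

// Hypothetical sketch: issue the same pod-list GET that fails in the log above
// and warn on every error. Paths and timings are assumptions for illustration.
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig location; minikube profiles keep their own copies.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		// Equivalent of GET /api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard
		pods, err := client.CoreV1().Pods("kubernetes-dashboard").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "k8s-app=kubernetes-dashboard"})
		if err != nil {
			fmt.Printf("WARNING: pod list for \"kubernetes-dashboard\" returned: %v\n", err)
			time.Sleep(5 * time.Second)
			continue
		}
		fmt.Printf("found %d dashboard pods\n", len(pods.Items))
		return
	}
	fmt.Println("timed out waiting for kubernetes-dashboard pods")
}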
E0806 08:48:23.040785   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/calico-405163/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.158:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.158:8443: connect: connection refused
[previous warning repeated 22 more times]
E0806 08:48:45.265936   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/custom-flannel-405163/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.158:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.158:8443: connect: connection refused
[previous warning repeated 27 more times]
E0806 08:49:13.202886   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/addons-505457/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.158:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.158:8443: connect: connection refused
[previous warning repeated 47 more times]
E0806 08:50:01.380117   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/flannel-405163/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.158:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.158:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.158:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.158:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.158:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.158:8443: connect: connection refused
E0806 08:50:04.748255   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/bridge-405163/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.158:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.158:8443: connect: connection refused
	[identical warning repeated 16 times]
E0806 08:50:20.597827   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/auto-405163/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.158:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.158:8443: connect: connection refused
	[identical warning repeated 37 times]
E0806 08:50:57.501215   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/kindnet-405163/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.158:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.158:8443: connect: connection refused
	[identical warning repeated 24 times]
E0806 08:51:21.824769   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/functional-811691/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.158:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.158:8443: connect: connection refused
	[identical warning repeated 26 times]
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-409903 -n old-k8s-version-409903
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-409903 -n old-k8s-version-409903: exit status 2 (239.19309ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:274: status error: exit status 2 (may be ok)
start_stop_delete_test.go:274: "old-k8s-version-409903" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
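For reference, the "connection refused" warnings above come from the helper repeatedly listing dashboard pods against the apiserver at 192.168.61.158:8443. A minimal manual equivalent of that poll (illustrative only, not part of the captured run, and assuming the kubeconfig context for this profile is named old-k8s-version-409903):

	kubectl --context old-k8s-version-409903 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard

While kube-apiserver is down this command fails with the same "connection refused" error, so the 9m0s wait can only succeed once the apiserver becomes reachable again.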
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-409903 -n old-k8s-version-409903
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-409903 -n old-k8s-version-409903: exit status 2 (228.596647ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
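The host is reported as Running while the apiserver is Stopped, i.e. the VM is up but kube-apiserver is not serving on 192.168.61.158:8443. A possible next diagnostic step, sketched here under the assumption that the profile uses the CRI-O runtime configured above (illustrative, not part of the captured run), is to inspect the control-plane containers on the node:

	out/minikube-linux-amd64 ssh -p old-k8s-version-409903 sudo crictl ps -a --name kube-apiserver

This shows whether the kube-apiserver container is crash-looping or was never recreated after the stop/start cycle.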
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-409903 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-409903 logs -n 25: (1.723213784s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p flannel-405163 sudo cat                             | flannel-405163               | jenkins | v1.33.1 | 06 Aug 24 08:25 UTC | 06 Aug 24 08:25 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p flannel-405163 sudo                                 | flannel-405163               | jenkins | v1.33.1 | 06 Aug 24 08:25 UTC | 06 Aug 24 08:25 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p flannel-405163 sudo                                 | flannel-405163               | jenkins | v1.33.1 | 06 Aug 24 08:25 UTC | 06 Aug 24 08:25 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p flannel-405163 sudo                                 | flannel-405163               | jenkins | v1.33.1 | 06 Aug 24 08:25 UTC | 06 Aug 24 08:25 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p flannel-405163 sudo find                            | flannel-405163               | jenkins | v1.33.1 | 06 Aug 24 08:25 UTC | 06 Aug 24 08:25 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p flannel-405163 sudo crio                            | flannel-405163               | jenkins | v1.33.1 | 06 Aug 24 08:25 UTC | 06 Aug 24 08:25 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p flannel-405163                                      | flannel-405163               | jenkins | v1.33.1 | 06 Aug 24 08:25 UTC | 06 Aug 24 08:25 UTC |
	| delete  | -p                                                     | disable-driver-mounts-607762 | jenkins | v1.33.1 | 06 Aug 24 08:25 UTC | 06 Aug 24 08:25 UTC |
	|         | disable-driver-mounts-607762                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-990751 | jenkins | v1.33.1 | 06 Aug 24 08:25 UTC | 06 Aug 24 08:27 UTC |
	|         | default-k8s-diff-port-990751                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-703340             | no-preload-703340            | jenkins | v1.33.1 | 06 Aug 24 08:26 UTC | 06 Aug 24 08:26 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-703340                                   | no-preload-703340            | jenkins | v1.33.1 | 06 Aug 24 08:26 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-324808            | embed-certs-324808           | jenkins | v1.33.1 | 06 Aug 24 08:26 UTC | 06 Aug 24 08:26 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-324808                                  | embed-certs-324808           | jenkins | v1.33.1 | 06 Aug 24 08:26 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-990751  | default-k8s-diff-port-990751 | jenkins | v1.33.1 | 06 Aug 24 08:27 UTC | 06 Aug 24 08:27 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-990751 | jenkins | v1.33.1 | 06 Aug 24 08:27 UTC |                     |
	|         | default-k8s-diff-port-990751                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-703340                  | no-preload-703340            | jenkins | v1.33.1 | 06 Aug 24 08:28 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-703340                                   | no-preload-703340            | jenkins | v1.33.1 | 06 Aug 24 08:28 UTC | 06 Aug 24 08:40 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0                      |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-409903        | old-k8s-version-409903       | jenkins | v1.33.1 | 06 Aug 24 08:28 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-324808                 | embed-certs-324808           | jenkins | v1.33.1 | 06 Aug 24 08:29 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-324808                                  | embed-certs-324808           | jenkins | v1.33.1 | 06 Aug 24 08:29 UTC | 06 Aug 24 08:38 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-990751       | default-k8s-diff-port-990751 | jenkins | v1.33.1 | 06 Aug 24 08:30 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-990751 | jenkins | v1.33.1 | 06 Aug 24 08:30 UTC | 06 Aug 24 08:38 UTC |
	|         | default-k8s-diff-port-990751                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-409903                              | old-k8s-version-409903       | jenkins | v1.33.1 | 06 Aug 24 08:30 UTC | 06 Aug 24 08:30 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-409903             | old-k8s-version-409903       | jenkins | v1.33.1 | 06 Aug 24 08:30 UTC | 06 Aug 24 08:30 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-409903                              | old-k8s-version-409903       | jenkins | v1.33.1 | 06 Aug 24 08:30 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/06 08:30:58
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0806 08:30:58.757517   79927 out.go:291] Setting OutFile to fd 1 ...
	I0806 08:30:58.757772   79927 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 08:30:58.757780   79927 out.go:304] Setting ErrFile to fd 2...
	I0806 08:30:58.757784   79927 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 08:30:58.757992   79927 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19370-13057/.minikube/bin
	I0806 08:30:58.758527   79927 out.go:298] Setting JSON to false
	I0806 08:30:58.759405   79927 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":8005,"bootTime":1722925054,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0806 08:30:58.759463   79927 start.go:139] virtualization: kvm guest
	I0806 08:30:58.762200   79927 out.go:177] * [old-k8s-version-409903] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0806 08:30:58.763461   79927 out.go:177]   - MINIKUBE_LOCATION=19370
	I0806 08:30:58.763514   79927 notify.go:220] Checking for updates...
	I0806 08:30:58.766017   79927 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0806 08:30:58.767180   79927 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19370-13057/kubeconfig
	I0806 08:30:58.768270   79927 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19370-13057/.minikube
	I0806 08:30:58.769303   79927 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0806 08:30:58.770385   79927 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0806 08:30:58.771931   79927 config.go:182] Loaded profile config "old-k8s-version-409903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0806 08:30:58.772319   79927 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:30:58.772407   79927 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:30:58.787845   79927 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39563
	I0806 08:30:58.788270   79927 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:30:58.788863   79927 main.go:141] libmachine: Using API Version  1
	I0806 08:30:58.788879   79927 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:30:58.789169   79927 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:30:58.789314   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .DriverName
	I0806 08:30:58.791190   79927 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0806 08:30:58.792487   79927 driver.go:392] Setting default libvirt URI to qemu:///system
	I0806 08:30:58.792776   79927 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:30:58.792811   79927 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:30:58.807347   79927 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33093
	I0806 08:30:58.807788   79927 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:30:58.808260   79927 main.go:141] libmachine: Using API Version  1
	I0806 08:30:58.808276   79927 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:30:58.808633   79927 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:30:58.808849   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .DriverName
	I0806 08:30:58.845305   79927 out.go:177] * Using the kvm2 driver based on existing profile
	I0806 08:30:58.846560   79927 start.go:297] selected driver: kvm2
	I0806 08:30:58.846580   79927 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-409903 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-409903 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.158 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 08:30:58.846684   79927 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0806 08:30:58.847335   79927 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 08:30:58.847395   79927 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19370-13057/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0806 08:30:58.862013   79927 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0806 08:30:58.862398   79927 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0806 08:30:58.862431   79927 cni.go:84] Creating CNI manager for ""
	I0806 08:30:58.862441   79927 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0806 08:30:58.862493   79927 start.go:340] cluster config:
	{Name:old-k8s-version-409903 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-409903 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.158 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 08:30:58.862609   79927 iso.go:125] acquiring lock: {Name:mke58d71de28babe305c5eb0c31c274e9713bb3f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 08:30:58.864484   79927 out.go:177] * Starting "old-k8s-version-409903" primary control-plane node in "old-k8s-version-409903" cluster
	I0806 08:30:58.865763   79927 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0806 08:30:58.865796   79927 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19370-13057/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0806 08:30:58.865803   79927 cache.go:56] Caching tarball of preloaded images
	I0806 08:30:58.865912   79927 preload.go:172] Found /home/jenkins/minikube-integration/19370-13057/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0806 08:30:58.865928   79927 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0806 08:30:58.866013   79927 profile.go:143] Saving config to /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/old-k8s-version-409903/config.json ...
	I0806 08:30:58.866228   79927 start.go:360] acquireMachinesLock for old-k8s-version-409903: {Name:mkc602ac9c8cc3360dcc0fcb8104303837aaab61 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0806 08:31:01.700664   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:31:04.772608   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:31:10.852634   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:31:13.924605   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:31:20.004661   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:31:23.076602   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:31:29.156621   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:31:32.228616   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:31:38.308684   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:31:41.380622   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:31:47.460632   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:31:50.532638   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:31:56.612610   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:31:59.684696   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:32:05.764642   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:32:08.836560   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:32:14.916653   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:32:17.988720   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:32:24.068597   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:32:27.140686   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:32:33.220630   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:32:36.292609   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:32:42.372654   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:32:45.444619   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:32:51.524637   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:32:54.596677   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:33:00.676664   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:33:03.748647   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:33:09.828643   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:33:12.900674   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:33:18.980642   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:33:22.052616   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:33:28.132639   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:33:31.204678   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
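	(The repeated "no route to host" errors above mean the VM for this profile was unreachable on its SSH port while libmachine kept retrying. When reproducing such a run by hand, the usual first checks are along these lines; this is only a sketch, assuming the libvirt domain name no-preload-703340 taken from later lines and the IP printed above:

		virsh -c qemu:///system list --all                  # is the no-preload-703340 domain actually running?
		virsh -c qemu:///system domifaddr no-preload-703340  # does it hold a DHCP lease on 192.168.72.x?
		nc -vz -w 3 192.168.72.126 22                        # is sshd answering on the expected address?
	)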
	I0806 08:33:34.209491   79219 start.go:364] duration metric: took 4m19.788239383s to acquireMachinesLock for "embed-certs-324808"
	I0806 08:33:34.209554   79219 start.go:96] Skipping create...Using existing machine configuration
	I0806 08:33:34.209563   79219 fix.go:54] fixHost starting: 
	I0806 08:33:34.209914   79219 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:33:34.209945   79219 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:33:34.225180   79219 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44037
	I0806 08:33:34.225607   79219 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:33:34.226014   79219 main.go:141] libmachine: Using API Version  1
	I0806 08:33:34.226033   79219 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:33:34.226323   79219 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:33:34.226522   79219 main.go:141] libmachine: (embed-certs-324808) Calling .DriverName
	I0806 08:33:34.226684   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetState
	I0806 08:33:34.228172   79219 fix.go:112] recreateIfNeeded on embed-certs-324808: state=Stopped err=<nil>
	I0806 08:33:34.228200   79219 main.go:141] libmachine: (embed-certs-324808) Calling .DriverName
	W0806 08:33:34.228355   79219 fix.go:138] unexpected machine state, will restart: <nil>
	I0806 08:33:34.231261   79219 out.go:177] * Restarting existing kvm2 VM for "embed-certs-324808" ...
	I0806 08:33:34.232543   79219 main.go:141] libmachine: (embed-certs-324808) Calling .Start
	I0806 08:33:34.232728   79219 main.go:141] libmachine: (embed-certs-324808) Ensuring networks are active...
	I0806 08:33:34.233475   79219 main.go:141] libmachine: (embed-certs-324808) Ensuring network default is active
	I0806 08:33:34.233819   79219 main.go:141] libmachine: (embed-certs-324808) Ensuring network mk-embed-certs-324808 is active
	I0806 08:33:34.234291   79219 main.go:141] libmachine: (embed-certs-324808) Getting domain xml...
	I0806 08:33:34.235129   79219 main.go:141] libmachine: (embed-certs-324808) Creating domain...
	I0806 08:33:34.206591   78922 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0806 08:33:34.206636   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetMachineName
	I0806 08:33:34.206950   78922 buildroot.go:166] provisioning hostname "no-preload-703340"
	I0806 08:33:34.206975   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetMachineName
	I0806 08:33:34.207187   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHHostname
	I0806 08:33:34.209379   78922 machine.go:97] duration metric: took 4m37.416826471s to provisionDockerMachine
	I0806 08:33:34.209416   78922 fix.go:56] duration metric: took 4m37.443014145s for fixHost
	I0806 08:33:34.209424   78922 start.go:83] releasing machines lock for "no-preload-703340", held for 4m37.443037097s
	W0806 08:33:34.209442   78922 start.go:714] error starting host: provision: host is not running
	W0806 08:33:34.209538   78922 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0806 08:33:34.209545   78922 start.go:729] Will try again in 5 seconds ...
	I0806 08:33:35.434516   79219 main.go:141] libmachine: (embed-certs-324808) Waiting to get IP...
	I0806 08:33:35.435546   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:35.435973   79219 main.go:141] libmachine: (embed-certs-324808) DBG | unable to find current IP address of domain embed-certs-324808 in network mk-embed-certs-324808
	I0806 08:33:35.436045   79219 main.go:141] libmachine: (embed-certs-324808) DBG | I0806 08:33:35.435966   80472 retry.go:31] will retry after 190.247547ms: waiting for machine to come up
	I0806 08:33:35.627362   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:35.627768   79219 main.go:141] libmachine: (embed-certs-324808) DBG | unable to find current IP address of domain embed-certs-324808 in network mk-embed-certs-324808
	I0806 08:33:35.627803   79219 main.go:141] libmachine: (embed-certs-324808) DBG | I0806 08:33:35.627742   80472 retry.go:31] will retry after 306.927129ms: waiting for machine to come up
	I0806 08:33:35.936242   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:35.936714   79219 main.go:141] libmachine: (embed-certs-324808) DBG | unable to find current IP address of domain embed-certs-324808 in network mk-embed-certs-324808
	I0806 08:33:35.936738   79219 main.go:141] libmachine: (embed-certs-324808) DBG | I0806 08:33:35.936661   80472 retry.go:31] will retry after 363.399025ms: waiting for machine to come up
	I0806 08:33:36.301454   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:36.301934   79219 main.go:141] libmachine: (embed-certs-324808) DBG | unable to find current IP address of domain embed-certs-324808 in network mk-embed-certs-324808
	I0806 08:33:36.301967   79219 main.go:141] libmachine: (embed-certs-324808) DBG | I0806 08:33:36.301879   80472 retry.go:31] will retry after 495.07719ms: waiting for machine to come up
	I0806 08:33:36.798489   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:36.798930   79219 main.go:141] libmachine: (embed-certs-324808) DBG | unable to find current IP address of domain embed-certs-324808 in network mk-embed-certs-324808
	I0806 08:33:36.798958   79219 main.go:141] libmachine: (embed-certs-324808) DBG | I0806 08:33:36.798870   80472 retry.go:31] will retry after 755.286107ms: waiting for machine to come up
	I0806 08:33:37.555799   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:37.556155   79219 main.go:141] libmachine: (embed-certs-324808) DBG | unable to find current IP address of domain embed-certs-324808 in network mk-embed-certs-324808
	I0806 08:33:37.556187   79219 main.go:141] libmachine: (embed-certs-324808) DBG | I0806 08:33:37.556094   80472 retry.go:31] will retry after 600.530399ms: waiting for machine to come up
	I0806 08:33:38.157817   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:38.158303   79219 main.go:141] libmachine: (embed-certs-324808) DBG | unable to find current IP address of domain embed-certs-324808 in network mk-embed-certs-324808
	I0806 08:33:38.158334   79219 main.go:141] libmachine: (embed-certs-324808) DBG | I0806 08:33:38.158241   80472 retry.go:31] will retry after 943.262354ms: waiting for machine to come up
	I0806 08:33:39.103226   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:39.103615   79219 main.go:141] libmachine: (embed-certs-324808) DBG | unable to find current IP address of domain embed-certs-324808 in network mk-embed-certs-324808
	I0806 08:33:39.103645   79219 main.go:141] libmachine: (embed-certs-324808) DBG | I0806 08:33:39.103563   80472 retry.go:31] will retry after 1.100862399s: waiting for machine to come up
	I0806 08:33:39.211204   78922 start.go:360] acquireMachinesLock for no-preload-703340: {Name:mkc602ac9c8cc3360dcc0fcb8104303837aaab61 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0806 08:33:40.205796   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:40.206130   79219 main.go:141] libmachine: (embed-certs-324808) DBG | unable to find current IP address of domain embed-certs-324808 in network mk-embed-certs-324808
	I0806 08:33:40.206160   79219 main.go:141] libmachine: (embed-certs-324808) DBG | I0806 08:33:40.206077   80472 retry.go:31] will retry after 1.405550913s: waiting for machine to come up
	I0806 08:33:41.613629   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:41.614060   79219 main.go:141] libmachine: (embed-certs-324808) DBG | unable to find current IP address of domain embed-certs-324808 in network mk-embed-certs-324808
	I0806 08:33:41.614092   79219 main.go:141] libmachine: (embed-certs-324808) DBG | I0806 08:33:41.614001   80472 retry.go:31] will retry after 2.190867783s: waiting for machine to come up
	I0806 08:33:43.807417   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:43.808017   79219 main.go:141] libmachine: (embed-certs-324808) DBG | unable to find current IP address of domain embed-certs-324808 in network mk-embed-certs-324808
	I0806 08:33:43.808049   79219 main.go:141] libmachine: (embed-certs-324808) DBG | I0806 08:33:43.807958   80472 retry.go:31] will retry after 2.233558066s: waiting for machine to come up
	I0806 08:33:46.043427   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:46.043803   79219 main.go:141] libmachine: (embed-certs-324808) DBG | unable to find current IP address of domain embed-certs-324808 in network mk-embed-certs-324808
	I0806 08:33:46.043830   79219 main.go:141] libmachine: (embed-certs-324808) DBG | I0806 08:33:46.043754   80472 retry.go:31] will retry after 3.171710196s: waiting for machine to come up
	I0806 08:33:49.218967   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:49.219319   79219 main.go:141] libmachine: (embed-certs-324808) DBG | unable to find current IP address of domain embed-certs-324808 in network mk-embed-certs-324808
	I0806 08:33:49.219349   79219 main.go:141] libmachine: (embed-certs-324808) DBG | I0806 08:33:49.219266   80472 retry.go:31] will retry after 4.141473633s: waiting for machine to come up
	I0806 08:33:53.361887   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:53.362300   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has current primary IP address 192.168.39.190 and MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:53.362315   79219 main.go:141] libmachine: (embed-certs-324808) Found IP for machine: 192.168.39.190
	I0806 08:33:53.362324   79219 main.go:141] libmachine: (embed-certs-324808) Reserving static IP address...
	I0806 08:33:53.362762   79219 main.go:141] libmachine: (embed-certs-324808) Reserved static IP address: 192.168.39.190
	I0806 08:33:53.362812   79219 main.go:141] libmachine: (embed-certs-324808) DBG | found host DHCP lease matching {name: "embed-certs-324808", mac: "52:54:00:b9:df:8c", ip: "192.168.39.190"} in network mk-embed-certs-324808: {Iface:virbr2 ExpiryTime:2024-08-06 09:33:45 +0000 UTC Type:0 Mac:52:54:00:b9:df:8c Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-324808 Clientid:01:52:54:00:b9:df:8c}
	I0806 08:33:53.362826   79219 main.go:141] libmachine: (embed-certs-324808) Waiting for SSH to be available...
	I0806 08:33:53.362858   79219 main.go:141] libmachine: (embed-certs-324808) DBG | skip adding static IP to network mk-embed-certs-324808 - found existing host DHCP lease matching {name: "embed-certs-324808", mac: "52:54:00:b9:df:8c", ip: "192.168.39.190"}
	I0806 08:33:53.362871   79219 main.go:141] libmachine: (embed-certs-324808) DBG | Getting to WaitForSSH function...
	I0806 08:33:53.364761   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:53.365170   79219 main.go:141] libmachine: (embed-certs-324808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:df:8c", ip: ""} in network mk-embed-certs-324808: {Iface:virbr2 ExpiryTime:2024-08-06 09:33:45 +0000 UTC Type:0 Mac:52:54:00:b9:df:8c Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-324808 Clientid:01:52:54:00:b9:df:8c}
	I0806 08:33:53.365204   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined IP address 192.168.39.190 and MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:53.365301   79219 main.go:141] libmachine: (embed-certs-324808) DBG | Using SSH client type: external
	I0806 08:33:53.365333   79219 main.go:141] libmachine: (embed-certs-324808) DBG | Using SSH private key: /home/jenkins/minikube-integration/19370-13057/.minikube/machines/embed-certs-324808/id_rsa (-rw-------)
	I0806 08:33:53.365366   79219 main.go:141] libmachine: (embed-certs-324808) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.190 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19370-13057/.minikube/machines/embed-certs-324808/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0806 08:33:53.365378   79219 main.go:141] libmachine: (embed-certs-324808) DBG | About to run SSH command:
	I0806 08:33:53.365386   79219 main.go:141] libmachine: (embed-certs-324808) DBG | exit 0
	I0806 08:33:53.492385   79219 main.go:141] libmachine: (embed-certs-324808) DBG | SSH cmd err, output: <nil>: 
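	(The WaitForSSH probe above is equivalent to invoking the system ssh client directly with the machine key, using the options printed a few lines earlier, roughly:

		ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes \
		  -i /home/jenkins/minikube-integration/19370-13057/.minikube/machines/embed-certs-324808/id_rsa \
		  -p 22 docker@192.168.39.190 'exit 0'

	A zero exit status, reflected in the "SSH cmd err, output: <nil>" line, is what allows provisioning to continue.)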
	I0806 08:33:53.492739   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetConfigRaw
	I0806 08:33:53.493342   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetIP
	I0806 08:33:53.495371   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:53.495663   79219 main.go:141] libmachine: (embed-certs-324808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:df:8c", ip: ""} in network mk-embed-certs-324808: {Iface:virbr2 ExpiryTime:2024-08-06 09:33:45 +0000 UTC Type:0 Mac:52:54:00:b9:df:8c Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-324808 Clientid:01:52:54:00:b9:df:8c}
	I0806 08:33:53.495693   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined IP address 192.168.39.190 and MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:53.495982   79219 profile.go:143] Saving config to /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/embed-certs-324808/config.json ...
	I0806 08:33:53.496215   79219 machine.go:94] provisionDockerMachine start ...
	I0806 08:33:53.496238   79219 main.go:141] libmachine: (embed-certs-324808) Calling .DriverName
	I0806 08:33:53.496460   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHHostname
	I0806 08:33:53.498625   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:53.498924   79219 main.go:141] libmachine: (embed-certs-324808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:df:8c", ip: ""} in network mk-embed-certs-324808: {Iface:virbr2 ExpiryTime:2024-08-06 09:33:45 +0000 UTC Type:0 Mac:52:54:00:b9:df:8c Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-324808 Clientid:01:52:54:00:b9:df:8c}
	I0806 08:33:53.498945   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined IP address 192.168.39.190 and MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:53.499096   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHPort
	I0806 08:33:53.499265   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHKeyPath
	I0806 08:33:53.499394   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHKeyPath
	I0806 08:33:53.499593   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHUsername
	I0806 08:33:53.499815   79219 main.go:141] libmachine: Using SSH client type: native
	I0806 08:33:53.500050   79219 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.190 22 <nil> <nil>}
	I0806 08:33:53.500063   79219 main.go:141] libmachine: About to run SSH command:
	hostname
	I0806 08:33:53.612698   79219 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0806 08:33:53.612725   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetMachineName
	I0806 08:33:53.612979   79219 buildroot.go:166] provisioning hostname "embed-certs-324808"
	I0806 08:33:53.613002   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetMachineName
	I0806 08:33:53.613164   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHHostname
	I0806 08:33:53.615731   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:53.616123   79219 main.go:141] libmachine: (embed-certs-324808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:df:8c", ip: ""} in network mk-embed-certs-324808: {Iface:virbr2 ExpiryTime:2024-08-06 09:33:45 +0000 UTC Type:0 Mac:52:54:00:b9:df:8c Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-324808 Clientid:01:52:54:00:b9:df:8c}
	I0806 08:33:53.616157   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined IP address 192.168.39.190 and MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:53.616316   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHPort
	I0806 08:33:53.616530   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHKeyPath
	I0806 08:33:53.616676   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHKeyPath
	I0806 08:33:53.616810   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHUsername
	I0806 08:33:53.616961   79219 main.go:141] libmachine: Using SSH client type: native
	I0806 08:33:53.617124   79219 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.190 22 <nil> <nil>}
	I0806 08:33:53.617138   79219 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-324808 && echo "embed-certs-324808" | sudo tee /etc/hostname
	I0806 08:33:53.742677   79219 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-324808
	
	I0806 08:33:53.742722   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHHostname
	I0806 08:33:53.745670   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:53.745984   79219 main.go:141] libmachine: (embed-certs-324808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:df:8c", ip: ""} in network mk-embed-certs-324808: {Iface:virbr2 ExpiryTime:2024-08-06 09:33:45 +0000 UTC Type:0 Mac:52:54:00:b9:df:8c Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-324808 Clientid:01:52:54:00:b9:df:8c}
	I0806 08:33:53.746015   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined IP address 192.168.39.190 and MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:53.746195   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHPort
	I0806 08:33:53.746380   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHKeyPath
	I0806 08:33:53.746545   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHKeyPath
	I0806 08:33:53.746676   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHUsername
	I0806 08:33:53.746867   79219 main.go:141] libmachine: Using SSH client type: native
	I0806 08:33:53.747033   79219 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.190 22 <nil> <nil>}
	I0806 08:33:53.747048   79219 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-324808' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-324808/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-324808' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0806 08:33:53.869596   79219 main.go:141] libmachine: SSH cmd err, output: <nil>: 
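	(Taken together, the two SSH commands above set the guest hostname and make sure /etc/hosts resolves it; on the guest they amount to roughly the following, a sketch of what the logged script does:

		sudo hostname embed-certs-324808
		echo "embed-certs-324808" | sudo tee /etc/hostname
		grep -q 'embed-certs-324808' /etc/hosts || \
		  echo '127.0.1.1 embed-certs-324808' | sudo tee -a /etc/hosts
	)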
	I0806 08:33:53.869629   79219 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19370-13057/.minikube CaCertPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19370-13057/.minikube}
	I0806 08:33:53.869652   79219 buildroot.go:174] setting up certificates
	I0806 08:33:53.869663   79219 provision.go:84] configureAuth start
	I0806 08:33:53.869680   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetMachineName
	I0806 08:33:53.869953   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetIP
	I0806 08:33:53.872701   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:53.873071   79219 main.go:141] libmachine: (embed-certs-324808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:df:8c", ip: ""} in network mk-embed-certs-324808: {Iface:virbr2 ExpiryTime:2024-08-06 09:33:45 +0000 UTC Type:0 Mac:52:54:00:b9:df:8c Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-324808 Clientid:01:52:54:00:b9:df:8c}
	I0806 08:33:53.873105   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined IP address 192.168.39.190 and MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:53.873246   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHHostname
	I0806 08:33:53.875521   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:53.875928   79219 main.go:141] libmachine: (embed-certs-324808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:df:8c", ip: ""} in network mk-embed-certs-324808: {Iface:virbr2 ExpiryTime:2024-08-06 09:33:45 +0000 UTC Type:0 Mac:52:54:00:b9:df:8c Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-324808 Clientid:01:52:54:00:b9:df:8c}
	I0806 08:33:53.875959   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined IP address 192.168.39.190 and MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:53.876009   79219 provision.go:143] copyHostCerts
	I0806 08:33:53.876077   79219 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-13057/.minikube/cert.pem, removing ...
	I0806 08:33:53.876089   79219 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-13057/.minikube/cert.pem
	I0806 08:33:53.876170   79219 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19370-13057/.minikube/cert.pem (1123 bytes)
	I0806 08:33:53.876287   79219 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-13057/.minikube/key.pem, removing ...
	I0806 08:33:53.876297   79219 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-13057/.minikube/key.pem
	I0806 08:33:53.876337   79219 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19370-13057/.minikube/key.pem (1675 bytes)
	I0806 08:33:53.876450   79219 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-13057/.minikube/ca.pem, removing ...
	I0806 08:33:53.876460   79219 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-13057/.minikube/ca.pem
	I0806 08:33:53.876499   79219 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19370-13057/.minikube/ca.pem (1078 bytes)
	I0806 08:33:53.876578   79219 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19370-13057/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca-key.pem org=jenkins.embed-certs-324808 san=[127.0.0.1 192.168.39.190 embed-certs-324808 localhost minikube]
	I0806 08:33:53.961275   79219 provision.go:177] copyRemoteCerts
	I0806 08:33:53.961344   79219 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0806 08:33:53.961375   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHHostname
	I0806 08:33:53.964004   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:53.964356   79219 main.go:141] libmachine: (embed-certs-324808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:df:8c", ip: ""} in network mk-embed-certs-324808: {Iface:virbr2 ExpiryTime:2024-08-06 09:33:45 +0000 UTC Type:0 Mac:52:54:00:b9:df:8c Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-324808 Clientid:01:52:54:00:b9:df:8c}
	I0806 08:33:53.964404   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined IP address 192.168.39.190 and MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:53.964596   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHPort
	I0806 08:33:53.964829   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHKeyPath
	I0806 08:33:53.964983   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHUsername
	I0806 08:33:53.965115   79219 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/embed-certs-324808/id_rsa Username:docker}
	I0806 08:33:54.050335   79219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0806 08:33:54.074977   79219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0806 08:33:54.100201   79219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
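	(The three scp operations above install the freshly generated server certificate, its key, and the CA into /etc/docker on the guest. One way to confirm the certificate carries the SANs requested a few lines earlier (127.0.0.1, 192.168.39.190, embed-certs-324808, localhost, minikube) would be, for example:

		sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'
	)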
	I0806 08:33:54.123356   79219 provision.go:87] duration metric: took 253.6775ms to configureAuth
	I0806 08:33:54.123386   79219 buildroot.go:189] setting minikube options for container-runtime
	I0806 08:33:54.123556   79219 config.go:182] Loaded profile config "embed-certs-324808": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0806 08:33:54.123634   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHHostname
	I0806 08:33:54.126279   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:54.126721   79219 main.go:141] libmachine: (embed-certs-324808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:df:8c", ip: ""} in network mk-embed-certs-324808: {Iface:virbr2 ExpiryTime:2024-08-06 09:33:45 +0000 UTC Type:0 Mac:52:54:00:b9:df:8c Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-324808 Clientid:01:52:54:00:b9:df:8c}
	I0806 08:33:54.126746   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined IP address 192.168.39.190 and MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:54.126941   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHPort
	I0806 08:33:54.127143   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHKeyPath
	I0806 08:33:54.127309   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHKeyPath
	I0806 08:33:54.127444   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHUsername
	I0806 08:33:54.127611   79219 main.go:141] libmachine: Using SSH client type: native
	I0806 08:33:54.127785   79219 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.190 22 <nil> <nil>}
	I0806 08:33:54.127800   79219 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0806 08:33:54.653395   79614 start.go:364] duration metric: took 3m35.719137455s to acquireMachinesLock for "default-k8s-diff-port-990751"
	I0806 08:33:54.653444   79614 start.go:96] Skipping create...Using existing machine configuration
	I0806 08:33:54.653452   79614 fix.go:54] fixHost starting: 
	I0806 08:33:54.653812   79614 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:33:54.653844   79614 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:33:54.673686   79614 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35867
	I0806 08:33:54.674123   79614 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:33:54.674581   79614 main.go:141] libmachine: Using API Version  1
	I0806 08:33:54.674604   79614 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:33:54.674950   79614 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:33:54.675108   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .DriverName
	I0806 08:33:54.675246   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetState
	I0806 08:33:54.676908   79614 fix.go:112] recreateIfNeeded on default-k8s-diff-port-990751: state=Stopped err=<nil>
	I0806 08:33:54.676939   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .DriverName
	W0806 08:33:54.677097   79614 fix.go:138] unexpected machine state, will restart: <nil>
	I0806 08:33:54.678912   79614 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-990751" ...
	I0806 08:33:54.403758   79219 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0806 08:33:54.403785   79219 machine.go:97] duration metric: took 907.554493ms to provisionDockerMachine
	I0806 08:33:54.403799   79219 start.go:293] postStartSetup for "embed-certs-324808" (driver="kvm2")
	I0806 08:33:54.403814   79219 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0806 08:33:54.403854   79219 main.go:141] libmachine: (embed-certs-324808) Calling .DriverName
	I0806 08:33:54.404139   79219 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0806 08:33:54.404168   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHHostname
	I0806 08:33:54.406768   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:54.407114   79219 main.go:141] libmachine: (embed-certs-324808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:df:8c", ip: ""} in network mk-embed-certs-324808: {Iface:virbr2 ExpiryTime:2024-08-06 09:33:45 +0000 UTC Type:0 Mac:52:54:00:b9:df:8c Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-324808 Clientid:01:52:54:00:b9:df:8c}
	I0806 08:33:54.407142   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined IP address 192.168.39.190 and MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:54.407328   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHPort
	I0806 08:33:54.407510   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHKeyPath
	I0806 08:33:54.407647   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHUsername
	I0806 08:33:54.407765   79219 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/embed-certs-324808/id_rsa Username:docker}
	I0806 08:33:54.495482   79219 ssh_runner.go:195] Run: cat /etc/os-release
	I0806 08:33:54.499691   79219 info.go:137] Remote host: Buildroot 2023.02.9
	I0806 08:33:54.499725   79219 filesync.go:126] Scanning /home/jenkins/minikube-integration/19370-13057/.minikube/addons for local assets ...
	I0806 08:33:54.499915   79219 filesync.go:126] Scanning /home/jenkins/minikube-integration/19370-13057/.minikube/files for local assets ...
	I0806 08:33:54.500104   79219 filesync.go:149] local asset: /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem -> 202462.pem in /etc/ssl/certs
	I0806 08:33:54.500243   79219 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0806 08:33:54.509690   79219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem --> /etc/ssl/certs/202462.pem (1708 bytes)
	I0806 08:33:54.534233   79219 start.go:296] duration metric: took 130.420444ms for postStartSetup
	I0806 08:33:54.534282   79219 fix.go:56] duration metric: took 20.324719602s for fixHost
	I0806 08:33:54.534307   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHHostname
	I0806 08:33:54.536871   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:54.537202   79219 main.go:141] libmachine: (embed-certs-324808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:df:8c", ip: ""} in network mk-embed-certs-324808: {Iface:virbr2 ExpiryTime:2024-08-06 09:33:45 +0000 UTC Type:0 Mac:52:54:00:b9:df:8c Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-324808 Clientid:01:52:54:00:b9:df:8c}
	I0806 08:33:54.537234   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined IP address 192.168.39.190 and MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:54.537393   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHPort
	I0806 08:33:54.537562   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHKeyPath
	I0806 08:33:54.537746   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHKeyPath
	I0806 08:33:54.537911   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHUsername
	I0806 08:33:54.538105   79219 main.go:141] libmachine: Using SSH client type: native
	I0806 08:33:54.538285   79219 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.190 22 <nil> <nil>}
	I0806 08:33:54.538295   79219 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0806 08:33:54.653264   79219 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722933234.630593565
	
	I0806 08:33:54.653288   79219 fix.go:216] guest clock: 1722933234.630593565
	I0806 08:33:54.653298   79219 fix.go:229] Guest: 2024-08-06 08:33:54.630593565 +0000 UTC Remote: 2024-08-06 08:33:54.534287484 +0000 UTC m=+280.251105524 (delta=96.306081ms)
	I0806 08:33:54.653327   79219 fix.go:200] guest clock delta is within tolerance: 96.306081ms
	I0806 08:33:54.653337   79219 start.go:83] releasing machines lock for "embed-certs-324808", held for 20.443802529s
	I0806 08:33:54.653361   79219 main.go:141] libmachine: (embed-certs-324808) Calling .DriverName
	I0806 08:33:54.653640   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetIP
	I0806 08:33:54.656332   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:54.656741   79219 main.go:141] libmachine: (embed-certs-324808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:df:8c", ip: ""} in network mk-embed-certs-324808: {Iface:virbr2 ExpiryTime:2024-08-06 09:33:45 +0000 UTC Type:0 Mac:52:54:00:b9:df:8c Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-324808 Clientid:01:52:54:00:b9:df:8c}
	I0806 08:33:54.656766   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined IP address 192.168.39.190 and MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:54.656933   79219 main.go:141] libmachine: (embed-certs-324808) Calling .DriverName
	I0806 08:33:54.657443   79219 main.go:141] libmachine: (embed-certs-324808) Calling .DriverName
	I0806 08:33:54.657623   79219 main.go:141] libmachine: (embed-certs-324808) Calling .DriverName
	I0806 08:33:54.657705   79219 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0806 08:33:54.657748   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHHostname
	I0806 08:33:54.657857   79219 ssh_runner.go:195] Run: cat /version.json
	I0806 08:33:54.657877   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHHostname
	I0806 08:33:54.660302   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:54.660482   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:54.660694   79219 main.go:141] libmachine: (embed-certs-324808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:df:8c", ip: ""} in network mk-embed-certs-324808: {Iface:virbr2 ExpiryTime:2024-08-06 09:33:45 +0000 UTC Type:0 Mac:52:54:00:b9:df:8c Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-324808 Clientid:01:52:54:00:b9:df:8c}
	I0806 08:33:54.660729   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined IP address 192.168.39.190 and MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:54.660823   79219 main.go:141] libmachine: (embed-certs-324808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:df:8c", ip: ""} in network mk-embed-certs-324808: {Iface:virbr2 ExpiryTime:2024-08-06 09:33:45 +0000 UTC Type:0 Mac:52:54:00:b9:df:8c Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-324808 Clientid:01:52:54:00:b9:df:8c}
	I0806 08:33:54.660831   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHPort
	I0806 08:33:54.660844   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined IP address 192.168.39.190 and MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:54.661011   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHPort
	I0806 08:33:54.661094   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHKeyPath
	I0806 08:33:54.661165   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHKeyPath
	I0806 08:33:54.661223   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHUsername
	I0806 08:33:54.661280   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHUsername
	I0806 08:33:54.661338   79219 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/embed-certs-324808/id_rsa Username:docker}
	I0806 08:33:54.661386   79219 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/embed-certs-324808/id_rsa Username:docker}
	I0806 08:33:54.770935   79219 ssh_runner.go:195] Run: systemctl --version
	I0806 08:33:54.777166   79219 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0806 08:33:54.920252   79219 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0806 08:33:54.927740   79219 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0806 08:33:54.927829   79219 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0806 08:33:54.949691   79219 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
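	The step above moves any stock bridge/podman CNI configs out of CRI-O's search path so that minikube's own CNI choice is the only one loaded. A minimal stand-alone equivalent (same paths as the log, with quoting added for an interactive shell) might be:
	    sudo find /etc/cni/net.d -maxdepth 1 -type f \
	      \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
	      -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;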
	I0806 08:33:54.949720   79219 start.go:495] detecting cgroup driver to use...
	I0806 08:33:54.949794   79219 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0806 08:33:54.967575   79219 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0806 08:33:54.982019   79219 docker.go:217] disabling cri-docker service (if available) ...
	I0806 08:33:54.982094   79219 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0806 08:33:55.001629   79219 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0806 08:33:55.018332   79219 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0806 08:33:55.139207   79219 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0806 08:33:55.302225   79219 docker.go:233] disabling docker service ...
	I0806 08:33:55.302300   79219 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0806 08:33:55.321470   79219 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0806 08:33:55.334925   79219 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0806 08:33:55.456388   79219 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0806 08:33:55.579725   79219 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0806 08:33:55.594362   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0806 08:33:55.613084   79219 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0806 08:33:55.613165   79219 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:33:55.623414   79219 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0806 08:33:55.623487   79219 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:33:55.633837   79219 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:33:55.645893   79219 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:33:55.658623   79219 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0806 08:33:55.670016   79219 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:33:55.681227   79219 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:33:55.700047   79219 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:33:55.711409   79219 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0806 08:33:55.721076   79219 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0806 08:33:55.721138   79219 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0806 08:33:55.734470   79219 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0806 08:33:55.750919   79219 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 08:33:55.880306   79219 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0806 08:33:56.031323   79219 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0806 08:33:56.031401   79219 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0806 08:33:56.036456   79219 start.go:563] Will wait 60s for crictl version
	I0806 08:33:56.036522   79219 ssh_runner.go:195] Run: which crictl
	I0806 08:33:56.040542   79219 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0806 08:33:56.081166   79219 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0806 08:33:56.081258   79219 ssh_runner.go:195] Run: crio --version
	I0806 08:33:56.109963   79219 ssh_runner.go:195] Run: crio --version
	I0806 08:33:56.141118   79219 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
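	The preceding block rewrites CRI-O's drop-in config (pause image, cgroupfs driver, conmon cgroup, unprivileged-port sysctl), loads br_netfilter, restarts the service, and verifies it over the CRI socket. A condensed sketch of the same sequence, using the paths and values shown in the log:
	    # drop-in edits
	    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf
	    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	    # kernel prerequisites for bridged pod traffic
	    sudo modprobe br_netfilter
	    echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward
	    # apply and verify
	    sudo systemctl daemon-reload && sudo systemctl restart crio
	    sudo crictl version    # expect RuntimeName: cri-o, RuntimeVersion: 1.29.1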
	I0806 08:33:54.680267   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .Start
	I0806 08:33:54.680448   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Ensuring networks are active...
	I0806 08:33:54.681411   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Ensuring network default is active
	I0806 08:33:54.681778   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Ensuring network mk-default-k8s-diff-port-990751 is active
	I0806 08:33:54.682205   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Getting domain xml...
	I0806 08:33:54.683028   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Creating domain...
	I0806 08:33:55.964313   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Waiting to get IP...
	I0806 08:33:55.965483   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:33:55.965965   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | unable to find current IP address of domain default-k8s-diff-port-990751 in network mk-default-k8s-diff-port-990751
	I0806 08:33:55.966053   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | I0806 08:33:55.965953   80608 retry.go:31] will retry after 294.177981ms: waiting for machine to come up
	I0806 08:33:56.261404   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:33:56.262032   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | unable to find current IP address of domain default-k8s-diff-port-990751 in network mk-default-k8s-diff-port-990751
	I0806 08:33:56.262059   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | I0806 08:33:56.261969   80608 retry.go:31] will retry after 375.14393ms: waiting for machine to come up
	I0806 08:33:56.638714   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:33:56.639150   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | unable to find current IP address of domain default-k8s-diff-port-990751 in network mk-default-k8s-diff-port-990751
	I0806 08:33:56.639178   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | I0806 08:33:56.639118   80608 retry.go:31] will retry after 439.582122ms: waiting for machine to come up
	I0806 08:33:57.080785   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:33:57.081335   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | unable to find current IP address of domain default-k8s-diff-port-990751 in network mk-default-k8s-diff-port-990751
	I0806 08:33:57.081373   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | I0806 08:33:57.081299   80608 retry.go:31] will retry after 523.875054ms: waiting for machine to come up
	I0806 08:33:57.606731   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:33:57.607220   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | unable to find current IP address of domain default-k8s-diff-port-990751 in network mk-default-k8s-diff-port-990751
	I0806 08:33:57.607250   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | I0806 08:33:57.607178   80608 retry.go:31] will retry after 727.363483ms: waiting for machine to come up
	I0806 08:33:58.336352   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:33:58.336852   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | unable to find current IP address of domain default-k8s-diff-port-990751 in network mk-default-k8s-diff-port-990751
	I0806 08:33:58.336885   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | I0806 08:33:58.336782   80608 retry.go:31] will retry after 662.304027ms: waiting for machine to come up
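	The retry loop above is libmachine polling libvirt's DHCP leases until the new domain's MAC address acquires an IP. Assuming virsh access to the same qemu:///system URI, the equivalent manual check would be:
	    virsh --connect qemu:///system net-dhcp-leases mk-default-k8s-diff-port-990751 | grep 52:54:00:fb:51:bd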
	I0806 08:33:56.142523   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetIP
	I0806 08:33:56.145641   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:56.145997   79219 main.go:141] libmachine: (embed-certs-324808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:df:8c", ip: ""} in network mk-embed-certs-324808: {Iface:virbr2 ExpiryTime:2024-08-06 09:33:45 +0000 UTC Type:0 Mac:52:54:00:b9:df:8c Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-324808 Clientid:01:52:54:00:b9:df:8c}
	I0806 08:33:56.146023   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined IP address 192.168.39.190 and MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:56.146239   79219 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0806 08:33:56.150763   79219 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0806 08:33:56.165617   79219 kubeadm.go:883] updating cluster {Name:embed-certs-324808 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.3 ClusterName:embed-certs-324808 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.190 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0806 08:33:56.165775   79219 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0806 08:33:56.165837   79219 ssh_runner.go:195] Run: sudo crictl images --output json
	I0806 08:33:56.205302   79219 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0806 08:33:56.205393   79219 ssh_runner.go:195] Run: which lz4
	I0806 08:33:56.209791   79219 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0806 08:33:56.214338   79219 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0806 08:33:56.214369   79219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0806 08:33:57.735973   79219 crio.go:462] duration metric: took 1.526221921s to copy over tarball
	I0806 08:33:57.736049   79219 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
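	Preload handling above is a check-copy-extract sequence: inspect the runtime's image store, push the cached tarball to the guest, and unpack it into /var. Reproduced by hand (assuming the tarball has already been copied to /preloaded.tar.lz4, as in the log):
	    sudo crictl images --output json   # missing images => preload needed
	    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	    sudo rm /preloaded.tar.lz4         # the tarball is removed once extracted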
	I0806 08:33:59.000478   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:33:59.000905   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | unable to find current IP address of domain default-k8s-diff-port-990751 in network mk-default-k8s-diff-port-990751
	I0806 08:33:59.000971   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | I0806 08:33:59.000829   80608 retry.go:31] will retry after 1.110230284s: waiting for machine to come up
	I0806 08:34:00.112658   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:00.113097   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | unable to find current IP address of domain default-k8s-diff-port-990751 in network mk-default-k8s-diff-port-990751
	I0806 08:34:00.113124   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | I0806 08:34:00.113053   80608 retry.go:31] will retry after 1.081213279s: waiting for machine to come up
	I0806 08:34:01.195646   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:01.196062   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | unable to find current IP address of domain default-k8s-diff-port-990751 in network mk-default-k8s-diff-port-990751
	I0806 08:34:01.196097   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | I0806 08:34:01.196014   80608 retry.go:31] will retry after 1.334553019s: waiting for machine to come up
	I0806 08:34:02.532009   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:02.532492   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | unable to find current IP address of domain default-k8s-diff-port-990751 in network mk-default-k8s-diff-port-990751
	I0806 08:34:02.532520   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | I0806 08:34:02.532449   80608 retry.go:31] will retry after 1.42743583s: waiting for machine to come up
	I0806 08:33:59.991340   79219 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.255258522s)
	I0806 08:33:59.991368   79219 crio.go:469] duration metric: took 2.255367027s to extract the tarball
	I0806 08:33:59.991378   79219 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0806 08:34:00.030261   79219 ssh_runner.go:195] Run: sudo crictl images --output json
	I0806 08:34:00.075217   79219 crio.go:514] all images are preloaded for cri-o runtime.
	I0806 08:34:00.075243   79219 cache_images.go:84] Images are preloaded, skipping loading
	I0806 08:34:00.075253   79219 kubeadm.go:934] updating node { 192.168.39.190 8443 v1.30.3 crio true true} ...
	I0806 08:34:00.075396   79219 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-324808 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.190
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:embed-certs-324808 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0806 08:34:00.075477   79219 ssh_runner.go:195] Run: crio config
	I0806 08:34:00.124134   79219 cni.go:84] Creating CNI manager for ""
	I0806 08:34:00.124160   79219 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0806 08:34:00.124174   79219 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0806 08:34:00.124195   79219 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.190 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-324808 NodeName:embed-certs-324808 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.190"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.190 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0806 08:34:00.124348   79219 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.190
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-324808"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.190
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.190"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0806 08:34:00.124446   79219 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0806 08:34:00.134782   79219 binaries.go:44] Found k8s binaries, skipping transfer
	I0806 08:34:00.134874   79219 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0806 08:34:00.144892   79219 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0806 08:34:00.164614   79219 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0806 08:34:00.183911   79219 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0806 08:34:00.203821   79219 ssh_runner.go:195] Run: grep 192.168.39.190	control-plane.minikube.internal$ /etc/hosts
	I0806 08:34:00.207980   79219 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.190	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0806 08:34:00.220566   79219 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 08:34:00.339428   79219 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0806 08:34:00.357441   79219 certs.go:68] Setting up /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/embed-certs-324808 for IP: 192.168.39.190
	I0806 08:34:00.357470   79219 certs.go:194] generating shared ca certs ...
	I0806 08:34:00.357486   79219 certs.go:226] acquiring lock for ca certs: {Name:mkf5c5aed91f4108054d9f2c742cad66c86afbed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 08:34:00.357648   79219 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19370-13057/.minikube/ca.key
	I0806 08:34:00.357694   79219 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.key
	I0806 08:34:00.357702   79219 certs.go:256] generating profile certs ...
	I0806 08:34:00.357788   79219 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/embed-certs-324808/client.key
	I0806 08:34:00.357870   79219 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/embed-certs-324808/apiserver.key.078f53b0
	I0806 08:34:00.357907   79219 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/embed-certs-324808/proxy-client.key
	I0806 08:34:00.358033   79219 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/20246.pem (1338 bytes)
	W0806 08:34:00.358070   79219 certs.go:480] ignoring /home/jenkins/minikube-integration/19370-13057/.minikube/certs/20246_empty.pem, impossibly tiny 0 bytes
	I0806 08:34:00.358085   79219 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca-key.pem (1675 bytes)
	I0806 08:34:00.358122   79219 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem (1078 bytes)
	I0806 08:34:00.358151   79219 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/cert.pem (1123 bytes)
	I0806 08:34:00.358174   79219 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/key.pem (1675 bytes)
	I0806 08:34:00.358231   79219 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem (1708 bytes)
	I0806 08:34:00.359196   79219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0806 08:34:00.397544   79219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0806 08:34:00.436824   79219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0806 08:34:00.478633   79219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0806 08:34:00.525641   79219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/embed-certs-324808/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0806 08:34:00.571003   79219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/embed-certs-324808/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0806 08:34:00.596423   79219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/embed-certs-324808/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0806 08:34:00.621129   79219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/embed-certs-324808/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0806 08:34:00.646297   79219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem --> /usr/share/ca-certificates/202462.pem (1708 bytes)
	I0806 08:34:00.672396   79219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0806 08:34:00.698468   79219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/certs/20246.pem --> /usr/share/ca-certificates/20246.pem (1338 bytes)
	I0806 08:34:00.724115   79219 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0806 08:34:00.742200   79219 ssh_runner.go:195] Run: openssl version
	I0806 08:34:00.748418   79219 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202462.pem && ln -fs /usr/share/ca-certificates/202462.pem /etc/ssl/certs/202462.pem"
	I0806 08:34:00.760234   79219 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202462.pem
	I0806 08:34:00.764947   79219 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  6 07:17 /usr/share/ca-certificates/202462.pem
	I0806 08:34:00.765013   79219 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202462.pem
	I0806 08:34:00.771065   79219 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202462.pem /etc/ssl/certs/3ec20f2e.0"
	I0806 08:34:00.782995   79219 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0806 08:34:00.795092   79219 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0806 08:34:00.800011   79219 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  6 07:07 /usr/share/ca-certificates/minikubeCA.pem
	I0806 08:34:00.800079   79219 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0806 08:34:00.806038   79219 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0806 08:34:00.817660   79219 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20246.pem && ln -fs /usr/share/ca-certificates/20246.pem /etc/ssl/certs/20246.pem"
	I0806 08:34:00.829045   79219 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20246.pem
	I0806 08:34:00.833640   79219 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  6 07:17 /usr/share/ca-certificates/20246.pem
	I0806 08:34:00.833704   79219 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20246.pem
	I0806 08:34:00.839618   79219 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20246.pem /etc/ssl/certs/51391683.0"
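	The certificate installation above follows OpenSSL's hashed-symlink convention: each CA lands in /usr/share/ca-certificates and gets an /etc/ssl/certs/<subject-hash>.0 link (b5213941 for minikubeCA, 3ec20f2e and 51391683 for the test certs). The generic recipe is:
	    pem=/usr/share/ca-certificates/minikubeCA.pem
	    sudo ln -fs "$pem" /etc/ssl/certs/$(openssl x509 -hash -noout -in "$pem").0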
	I0806 08:34:00.850938   79219 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0806 08:34:00.855577   79219 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0806 08:34:00.861960   79219 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0806 08:34:00.868626   79219 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0806 08:34:00.875291   79219 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0806 08:34:00.881553   79219 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0806 08:34:00.887650   79219 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
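	Each of the openssl calls above uses -checkend 86400, which exits non-zero if the certificate expires within the next 86400 seconds; minikube treats that as a signal to regenerate before starting the control plane. For example:
	    openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
	      && echo "still valid for >24h" || echo "expires within 24h"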
	I0806 08:34:00.894134   79219 kubeadm.go:392] StartCluster: {Name:embed-certs-324808 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.3 ClusterName:embed-certs-324808 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.190 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 08:34:00.894223   79219 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0806 08:34:00.894272   79219 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0806 08:34:00.938980   79219 cri.go:89] found id: ""
	I0806 08:34:00.939046   79219 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0806 08:34:00.951339   79219 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0806 08:34:00.951369   79219 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0806 08:34:00.951430   79219 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0806 08:34:00.962789   79219 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0806 08:34:00.964165   79219 kubeconfig.go:125] found "embed-certs-324808" server: "https://192.168.39.190:8443"
	I0806 08:34:00.967221   79219 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0806 08:34:00.977500   79219 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.190
	I0806 08:34:00.977535   79219 kubeadm.go:1160] stopping kube-system containers ...
	I0806 08:34:00.977547   79219 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0806 08:34:00.977603   79219 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0806 08:34:01.017218   79219 cri.go:89] found id: ""
	I0806 08:34:01.017315   79219 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0806 08:34:01.035305   79219 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0806 08:34:01.046210   79219 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0806 08:34:01.046233   79219 kubeadm.go:157] found existing configuration files:
	
	I0806 08:34:01.046290   79219 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0806 08:34:01.056904   79219 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0806 08:34:01.056974   79219 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0806 08:34:01.066640   79219 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0806 08:34:01.076042   79219 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0806 08:34:01.076114   79219 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0806 08:34:01.085735   79219 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0806 08:34:01.094602   79219 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0806 08:34:01.094661   79219 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0806 08:34:01.104160   79219 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0806 08:34:01.113176   79219 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0806 08:34:01.113249   79219 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0806 08:34:01.122679   79219 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0806 08:34:01.133032   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0806 08:34:01.259256   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0806 08:34:02.252011   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0806 08:34:02.481629   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0806 08:34:02.570215   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
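	The restart path regenerates the control plane by running individual kubeadm init phases against the generated config rather than a full kubeadm init. The same sequence can be run by hand (binary and config paths as in the log):
	    K=/var/lib/minikube/binaries/v1.30.3
	    for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
	      sudo env PATH="$K:$PATH" kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml
	    done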
	I0806 08:34:02.710597   79219 api_server.go:52] waiting for apiserver process to appear ...
	I0806 08:34:02.710700   79219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:03.210803   79219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:03.711111   79219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:03.738971   79219 api_server.go:72] duration metric: took 1.028375323s to wait for apiserver process to appear ...
	I0806 08:34:03.739003   79219 api_server.go:88] waiting for apiserver healthz status ...
	I0806 08:34:03.739025   79219 api_server.go:253] Checking apiserver healthz at https://192.168.39.190:8443/healthz ...
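	The health loop that follows simply polls the apiserver's /healthz endpoint; the [+]/[-] listings in the responses below name the individual checks. Probed manually (with -k because the apiserver certificate is not in the local trust store):
	    curl -k "https://192.168.39.190:8443/healthz?verbose"
	    # 403 for system:anonymous: TLS is up but the RBAC bootstrap roles that allow
	    #   anonymous health probes have not been created yet
	    # 500 with "[-]poststarthook/... failed": the server is still initializing
	    # 200 "ok": the apiserver is ready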
	I0806 08:34:03.962232   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:03.962678   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | unable to find current IP address of domain default-k8s-diff-port-990751 in network mk-default-k8s-diff-port-990751
	I0806 08:34:03.962709   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | I0806 08:34:03.962638   80608 retry.go:31] will retry after 2.113116057s: waiting for machine to come up
	I0806 08:34:06.078010   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:06.078537   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | unable to find current IP address of domain default-k8s-diff-port-990751 in network mk-default-k8s-diff-port-990751
	I0806 08:34:06.078570   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | I0806 08:34:06.078503   80608 retry.go:31] will retry after 2.580076274s: waiting for machine to come up
	I0806 08:34:08.662313   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:08.662651   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | unable to find current IP address of domain default-k8s-diff-port-990751 in network mk-default-k8s-diff-port-990751
	I0806 08:34:08.662673   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | I0806 08:34:08.662609   80608 retry.go:31] will retry after 4.365982677s: waiting for machine to come up
	I0806 08:34:06.791233   79219 api_server.go:279] https://192.168.39.190:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0806 08:34:06.791272   79219 api_server.go:103] status: https://192.168.39.190:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0806 08:34:06.791311   79219 api_server.go:253] Checking apiserver healthz at https://192.168.39.190:8443/healthz ...
	I0806 08:34:06.804038   79219 api_server.go:279] https://192.168.39.190:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0806 08:34:06.804074   79219 api_server.go:103] status: https://192.168.39.190:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0806 08:34:07.239192   79219 api_server.go:253] Checking apiserver healthz at https://192.168.39.190:8443/healthz ...
	I0806 08:34:07.244973   79219 api_server.go:279] https://192.168.39.190:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0806 08:34:07.245005   79219 api_server.go:103] status: https://192.168.39.190:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0806 08:34:07.739556   79219 api_server.go:253] Checking apiserver healthz at https://192.168.39.190:8443/healthz ...
	I0806 08:34:07.754354   79219 api_server.go:279] https://192.168.39.190:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0806 08:34:07.754382   79219 api_server.go:103] status: https://192.168.39.190:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0806 08:34:08.240114   79219 api_server.go:253] Checking apiserver healthz at https://192.168.39.190:8443/healthz ...
	I0806 08:34:08.244799   79219 api_server.go:279] https://192.168.39.190:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0806 08:34:08.244829   79219 api_server.go:103] status: https://192.168.39.190:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0806 08:34:08.739407   79219 api_server.go:253] Checking apiserver healthz at https://192.168.39.190:8443/healthz ...
	I0806 08:34:08.743770   79219 api_server.go:279] https://192.168.39.190:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0806 08:34:08.743804   79219 api_server.go:103] status: https://192.168.39.190:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0806 08:34:09.239855   79219 api_server.go:253] Checking apiserver healthz at https://192.168.39.190:8443/healthz ...
	I0806 08:34:09.244078   79219 api_server.go:279] https://192.168.39.190:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0806 08:34:09.244135   79219 api_server.go:103] status: https://192.168.39.190:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0806 08:34:09.739859   79219 api_server.go:253] Checking apiserver healthz at https://192.168.39.190:8443/healthz ...
	I0806 08:34:09.744236   79219 api_server.go:279] https://192.168.39.190:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0806 08:34:09.744272   79219 api_server.go:103] status: https://192.168.39.190:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0806 08:34:10.240036   79219 api_server.go:253] Checking apiserver healthz at https://192.168.39.190:8443/healthz ...
	I0806 08:34:10.244217   79219 api_server.go:279] https://192.168.39.190:8443/healthz returned 200:
	ok
	I0806 08:34:10.250333   79219 api_server.go:141] control plane version: v1.30.3
	I0806 08:34:10.250361   79219 api_server.go:131] duration metric: took 6.511350709s to wait for apiserver health ...
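
The long run of 500 responses above is minikube polling the apiserver's /healthz endpoint until the aggregator's apiservice-discovery-controller post-start hook completes, at which point the probe flips to 200 "ok". A minimal standalone sketch of that polling loop is shown below for reference; the address and port come from the log, while the skip-verify TLS config, 500 ms interval, and 2-minute deadline are illustrative assumptions and not minikube's actual api_server.go implementation.

```go
// healthz_poll.go - illustrative sketch of the readiness polling seen in the log above.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Assumption: skip cert verification instead of using minikube's client certs.
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(2 * time.Minute) // assumed deadline, not minikube's
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.39.190:8443/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy:", string(body)) // prints "ok"
				return
			}
			// Mirrors the "returned 500" dumps above: print the per-check breakdown.
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond) // assumed poll interval
	}
	fmt.Println("timed out waiting for apiserver health")
}
```
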
	I0806 08:34:10.250370   79219 cni.go:84] Creating CNI manager for ""
	I0806 08:34:10.250376   79219 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0806 08:34:10.252472   79219 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0806 08:34:10.253779   79219 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0806 08:34:10.264772   79219 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0806 08:34:10.282904   79219 system_pods.go:43] waiting for kube-system pods to appear ...
	I0806 08:34:10.292343   79219 system_pods.go:59] 8 kube-system pods found
	I0806 08:34:10.292397   79219 system_pods.go:61] "coredns-7db6d8ff4d-4xxwm" [eeb47dea-32e9-458f-9ad9-2af6d4e68cc0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0806 08:34:10.292408   79219 system_pods.go:61] "etcd-embed-certs-324808" [1e0a82cd-6704-4da3-8992-15c68a7aaf70] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0806 08:34:10.292419   79219 system_pods.go:61] "kube-apiserver-embed-certs-324808" [a33c54d9-1fc5-4243-8fe9-d535c40908c3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0806 08:34:10.292426   79219 system_pods.go:61] "kube-controller-manager-embed-certs-324808" [b1c519b8-9c96-4c87-90f3-cc277cfe7763] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0806 08:34:10.292432   79219 system_pods.go:61] "kube-proxy-8lbzv" [0c45e6c0-9b7f-4c48-bff5-38d4a5949bec] Running
	I0806 08:34:10.292438   79219 system_pods.go:61] "kube-scheduler-embed-certs-324808" [78c2827f-341a-4441-bf7e-eff947fc3ea8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0806 08:34:10.292445   79219 system_pods.go:61] "metrics-server-569cc877fc-8xb9k" [7a5fb326-11a7-48fa-bcde-a22026a1dccb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0806 08:34:10.292452   79219 system_pods.go:61] "storage-provisioner" [15c7d89a-e994-4cca-931c-f646157a15e8] Running
	I0806 08:34:10.292465   79219 system_pods.go:74] duration metric: took 9.536183ms to wait for pod list to return data ...
	I0806 08:34:10.292474   79219 node_conditions.go:102] verifying NodePressure condition ...
	I0806 08:34:10.295584   79219 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0806 08:34:10.295616   79219 node_conditions.go:123] node cpu capacity is 2
	I0806 08:34:10.295632   79219 node_conditions.go:105] duration metric: took 3.152403ms to run NodePressure ...
	I0806 08:34:10.295654   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0806 08:34:10.566724   79219 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0806 08:34:10.571777   79219 kubeadm.go:739] kubelet initialised
	I0806 08:34:10.571801   79219 kubeadm.go:740] duration metric: took 5.042307ms waiting for restarted kubelet to initialise ...
	I0806 08:34:10.571812   79219 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0806 08:34:10.576738   79219 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-4xxwm" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:10.581943   79219 pod_ready.go:97] node "embed-certs-324808" hosting pod "coredns-7db6d8ff4d-4xxwm" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-324808" has status "Ready":"False"
	I0806 08:34:10.581976   79219 pod_ready.go:81] duration metric: took 5.207743ms for pod "coredns-7db6d8ff4d-4xxwm" in "kube-system" namespace to be "Ready" ...
	E0806 08:34:10.581989   79219 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-324808" hosting pod "coredns-7db6d8ff4d-4xxwm" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-324808" has status "Ready":"False"
	I0806 08:34:10.581997   79219 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-324808" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:10.586983   79219 pod_ready.go:97] node "embed-certs-324808" hosting pod "etcd-embed-certs-324808" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-324808" has status "Ready":"False"
	I0806 08:34:10.587007   79219 pod_ready.go:81] duration metric: took 5.000306ms for pod "etcd-embed-certs-324808" in "kube-system" namespace to be "Ready" ...
	E0806 08:34:10.587016   79219 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-324808" hosting pod "etcd-embed-certs-324808" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-324808" has status "Ready":"False"
	I0806 08:34:10.587022   79219 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-324808" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:10.592042   79219 pod_ready.go:97] node "embed-certs-324808" hosting pod "kube-apiserver-embed-certs-324808" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-324808" has status "Ready":"False"
	I0806 08:34:10.592066   79219 pod_ready.go:81] duration metric: took 5.035031ms for pod "kube-apiserver-embed-certs-324808" in "kube-system" namespace to be "Ready" ...
	E0806 08:34:10.592074   79219 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-324808" hosting pod "kube-apiserver-embed-certs-324808" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-324808" has status "Ready":"False"
	I0806 08:34:10.592080   79219 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-324808" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:10.687156   79219 pod_ready.go:97] node "embed-certs-324808" hosting pod "kube-controller-manager-embed-certs-324808" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-324808" has status "Ready":"False"
	I0806 08:34:10.687193   79219 pod_ready.go:81] duration metric: took 95.100916ms for pod "kube-controller-manager-embed-certs-324808" in "kube-system" namespace to be "Ready" ...
	E0806 08:34:10.687207   79219 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-324808" hosting pod "kube-controller-manager-embed-certs-324808" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-324808" has status "Ready":"False"
	I0806 08:34:10.687217   79219 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-8lbzv" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:11.086180   79219 pod_ready.go:97] node "embed-certs-324808" hosting pod "kube-proxy-8lbzv" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-324808" has status "Ready":"False"
	I0806 08:34:11.086213   79219 pod_ready.go:81] duration metric: took 398.984678ms for pod "kube-proxy-8lbzv" in "kube-system" namespace to be "Ready" ...
	E0806 08:34:11.086224   79219 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-324808" hosting pod "kube-proxy-8lbzv" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-324808" has status "Ready":"False"
	I0806 08:34:11.086230   79219 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-324808" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:11.486971   79219 pod_ready.go:97] node "embed-certs-324808" hosting pod "kube-scheduler-embed-certs-324808" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-324808" has status "Ready":"False"
	I0806 08:34:11.486995   79219 pod_ready.go:81] duration metric: took 400.759449ms for pod "kube-scheduler-embed-certs-324808" in "kube-system" namespace to be "Ready" ...
	E0806 08:34:11.487005   79219 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-324808" hosting pod "kube-scheduler-embed-certs-324808" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-324808" has status "Ready":"False"
	I0806 08:34:11.487011   79219 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:11.887030   79219 pod_ready.go:97] node "embed-certs-324808" hosting pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-324808" has status "Ready":"False"
	I0806 08:34:11.887052   79219 pod_ready.go:81] duration metric: took 400.033763ms for pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace to be "Ready" ...
	E0806 08:34:11.887061   79219 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-324808" hosting pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-324808" has status "Ready":"False"
	I0806 08:34:11.887068   79219 pod_ready.go:38] duration metric: took 1.315245982s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
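
Each "(skipping!)" entry above is gated on the node's Ready condition rather than on the pod itself: embed-certs-324808 still reports Ready=False, so every per-pod wait is short-circuited. A hedged client-go sketch of that kind of node-Ready check follows; the kubeconfig path and node name are taken from the log, and the rest (clientset wiring, error handling) is assumed for illustration rather than copied from pod_ready.go.

```go
// node_ready_check.go - illustrative only; not the test harness's own code.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19370-13057/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), "embed-certs-324808", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// While this prints Ready=False, per-pod readiness waits are skipped,
	// which is exactly the behaviour logged above.
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			fmt.Printf("node %s Ready=%s\n", node.Name, c.Status)
		}
	}
}
```
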
	I0806 08:34:11.887084   79219 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0806 08:34:11.899633   79219 ops.go:34] apiserver oom_adj: -16
	I0806 08:34:11.899649   79219 kubeadm.go:597] duration metric: took 10.948272769s to restartPrimaryControlPlane
	I0806 08:34:11.899657   79219 kubeadm.go:394] duration metric: took 11.005530169s to StartCluster
	I0806 08:34:11.899676   79219 settings.go:142] acquiring lock: {Name:mkb0bfbcae3b4205ef43487a9de84cfd8afd2e5e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 08:34:11.899745   79219 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19370-13057/kubeconfig
	I0806 08:34:11.901341   79219 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-13057/kubeconfig: {Name:mk5c066914acf8e36556b29792f75b98076ca34a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 08:34:11.901603   79219 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.190 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0806 08:34:11.901676   79219 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0806 08:34:11.901753   79219 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-324808"
	I0806 08:34:11.901785   79219 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-324808"
	W0806 08:34:11.901796   79219 addons.go:243] addon storage-provisioner should already be in state true
	I0806 08:34:11.901788   79219 addons.go:69] Setting default-storageclass=true in profile "embed-certs-324808"
	I0806 08:34:11.901834   79219 host.go:66] Checking if "embed-certs-324808" exists ...
	I0806 08:34:11.901851   79219 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-324808"
	I0806 08:34:11.901854   79219 config.go:182] Loaded profile config "embed-certs-324808": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0806 08:34:11.901842   79219 addons.go:69] Setting metrics-server=true in profile "embed-certs-324808"
	I0806 08:34:11.901921   79219 addons.go:234] Setting addon metrics-server=true in "embed-certs-324808"
	W0806 08:34:11.901940   79219 addons.go:243] addon metrics-server should already be in state true
	I0806 08:34:11.901998   79219 host.go:66] Checking if "embed-certs-324808" exists ...
	I0806 08:34:11.902216   79219 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:34:11.902252   79219 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:34:11.902280   79219 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:34:11.902308   79219 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:34:11.902383   79219 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:34:11.902411   79219 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:34:11.904343   79219 out.go:177] * Verifying Kubernetes components...
	I0806 08:34:11.905934   79219 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 08:34:11.917271   79219 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32971
	I0806 08:34:11.917375   79219 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38975
	I0806 08:34:11.917833   79219 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:34:11.917874   79219 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:34:11.917981   79219 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33569
	I0806 08:34:11.918394   79219 main.go:141] libmachine: Using API Version  1
	I0806 08:34:11.918416   79219 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:34:11.918453   79219 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:34:11.918530   79219 main.go:141] libmachine: Using API Version  1
	I0806 08:34:11.918552   79219 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:34:11.918761   79219 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:34:11.918819   79219 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:34:11.918913   79219 main.go:141] libmachine: Using API Version  1
	I0806 08:34:11.918939   79219 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:34:11.918965   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetState
	I0806 08:34:11.919280   79219 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:34:11.919322   79219 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:34:11.919367   79219 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:34:11.919873   79219 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:34:11.919901   79219 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:34:11.921815   79219 addons.go:234] Setting addon default-storageclass=true in "embed-certs-324808"
	W0806 08:34:11.921834   79219 addons.go:243] addon default-storageclass should already be in state true
	I0806 08:34:11.921856   79219 host.go:66] Checking if "embed-certs-324808" exists ...
	I0806 08:34:11.922132   79219 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:34:11.922168   79219 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:34:11.936191   79219 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40029
	I0806 08:34:11.936678   79219 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:34:11.937209   79219 main.go:141] libmachine: Using API Version  1
	I0806 08:34:11.937234   79219 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:34:11.940428   79219 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35087
	I0806 08:34:11.940508   79219 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38421
	I0806 08:34:11.940845   79219 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:34:11.941345   79219 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:34:11.941346   79219 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:34:11.941707   79219 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:34:11.941752   79219 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:34:11.941831   79219 main.go:141] libmachine: Using API Version  1
	I0806 08:34:11.941872   79219 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:34:11.941904   79219 main.go:141] libmachine: Using API Version  1
	I0806 08:34:11.941917   79219 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:34:11.942273   79219 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:34:11.942359   79219 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:34:11.942514   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetState
	I0806 08:34:11.942532   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetState
	I0806 08:34:11.944317   79219 main.go:141] libmachine: (embed-certs-324808) Calling .DriverName
	I0806 08:34:11.944503   79219 main.go:141] libmachine: (embed-certs-324808) Calling .DriverName
	I0806 08:34:11.946511   79219 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0806 08:34:11.946525   79219 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0806 08:34:11.947929   79219 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0806 08:34:11.947948   79219 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0806 08:34:11.947976   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHHostname
	I0806 08:34:11.948044   79219 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0806 08:34:11.948066   79219 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0806 08:34:11.948085   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHHostname
	I0806 08:34:11.951758   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:34:11.951988   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:34:11.952181   79219 main.go:141] libmachine: (embed-certs-324808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:df:8c", ip: ""} in network mk-embed-certs-324808: {Iface:virbr2 ExpiryTime:2024-08-06 09:33:45 +0000 UTC Type:0 Mac:52:54:00:b9:df:8c Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-324808 Clientid:01:52:54:00:b9:df:8c}
	I0806 08:34:11.952205   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined IP address 192.168.39.190 and MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:34:11.952530   79219 main.go:141] libmachine: (embed-certs-324808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:df:8c", ip: ""} in network mk-embed-certs-324808: {Iface:virbr2 ExpiryTime:2024-08-06 09:33:45 +0000 UTC Type:0 Mac:52:54:00:b9:df:8c Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-324808 Clientid:01:52:54:00:b9:df:8c}
	I0806 08:34:11.952565   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHPort
	I0806 08:34:11.952566   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined IP address 192.168.39.190 and MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:34:11.952693   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHPort
	I0806 08:34:11.952819   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHKeyPath
	I0806 08:34:11.952858   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHKeyPath
	I0806 08:34:11.952970   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHUsername
	I0806 08:34:11.952974   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHUsername
	I0806 08:34:11.953143   79219 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/embed-certs-324808/id_rsa Username:docker}
	I0806 08:34:11.953207   79219 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/embed-certs-324808/id_rsa Username:docker}
	I0806 08:34:11.959949   79219 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35275
	I0806 08:34:11.960353   79219 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:34:11.960900   79219 main.go:141] libmachine: Using API Version  1
	I0806 08:34:11.960929   79219 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:34:11.961241   79219 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:34:11.961451   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetState
	I0806 08:34:11.963031   79219 main.go:141] libmachine: (embed-certs-324808) Calling .DriverName
	I0806 08:34:11.963333   79219 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0806 08:34:11.963349   79219 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0806 08:34:11.963366   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHHostname
	I0806 08:34:11.966079   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:34:11.966448   79219 main.go:141] libmachine: (embed-certs-324808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:df:8c", ip: ""} in network mk-embed-certs-324808: {Iface:virbr2 ExpiryTime:2024-08-06 09:33:45 +0000 UTC Type:0 Mac:52:54:00:b9:df:8c Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-324808 Clientid:01:52:54:00:b9:df:8c}
	I0806 08:34:11.966481   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined IP address 192.168.39.190 and MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:34:11.966559   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHPort
	I0806 08:34:11.966737   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHKeyPath
	I0806 08:34:11.966881   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHUsername
	I0806 08:34:11.967013   79219 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/embed-certs-324808/id_rsa Username:docker}
	I0806 08:34:12.081433   79219 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0806 08:34:12.100701   79219 node_ready.go:35] waiting up to 6m0s for node "embed-certs-324808" to be "Ready" ...
	I0806 08:34:12.196131   79219 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0806 08:34:12.196151   79219 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0806 08:34:12.201603   79219 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0806 08:34:12.217590   79219 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0806 08:34:12.217626   79219 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0806 08:34:12.239354   79219 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0806 08:34:12.239377   79219 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0806 08:34:12.254418   79219 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0806 08:34:12.266126   79219 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0806 08:34:13.441725   79219 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.187269206s)
	I0806 08:34:13.441780   79219 main.go:141] libmachine: Making call to close driver server
	I0806 08:34:13.441791   79219 main.go:141] libmachine: (embed-certs-324808) Calling .Close
	I0806 08:34:13.441856   79219 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.240222324s)
	I0806 08:34:13.441895   79219 main.go:141] libmachine: Making call to close driver server
	I0806 08:34:13.441910   79219 main.go:141] libmachine: (embed-certs-324808) Calling .Close
	I0806 08:34:13.442096   79219 main.go:141] libmachine: (embed-certs-324808) DBG | Closing plugin on server side
	I0806 08:34:13.442116   79219 main.go:141] libmachine: Successfully made call to close driver server
	I0806 08:34:13.442128   79219 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 08:34:13.442136   79219 main.go:141] libmachine: Making call to close driver server
	I0806 08:34:13.442147   79219 main.go:141] libmachine: (embed-certs-324808) Calling .Close
	I0806 08:34:13.442211   79219 main.go:141] libmachine: Successfully made call to close driver server
	I0806 08:34:13.442225   79219 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 08:34:13.442235   79219 main.go:141] libmachine: Making call to close driver server
	I0806 08:34:13.442245   79219 main.go:141] libmachine: (embed-certs-324808) Calling .Close
	I0806 08:34:13.442414   79219 main.go:141] libmachine: Successfully made call to close driver server
	I0806 08:34:13.442424   79219 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 08:34:13.443716   79219 main.go:141] libmachine: (embed-certs-324808) DBG | Closing plugin on server side
	I0806 08:34:13.443727   79219 main.go:141] libmachine: Successfully made call to close driver server
	I0806 08:34:13.443748   79219 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 08:34:13.448919   79219 main.go:141] libmachine: Making call to close driver server
	I0806 08:34:13.448945   79219 main.go:141] libmachine: (embed-certs-324808) Calling .Close
	I0806 08:34:13.449225   79219 main.go:141] libmachine: Successfully made call to close driver server
	I0806 08:34:13.449242   79219 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 08:34:13.521481   79219 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.255292404s)
	I0806 08:34:13.521542   79219 main.go:141] libmachine: Making call to close driver server
	I0806 08:34:13.521561   79219 main.go:141] libmachine: (embed-certs-324808) Calling .Close
	I0806 08:34:13.521875   79219 main.go:141] libmachine: Successfully made call to close driver server
	I0806 08:34:13.521898   79219 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 08:34:13.521899   79219 main.go:141] libmachine: (embed-certs-324808) DBG | Closing plugin on server side
	I0806 08:34:13.521916   79219 main.go:141] libmachine: Making call to close driver server
	I0806 08:34:13.521927   79219 main.go:141] libmachine: (embed-certs-324808) Calling .Close
	I0806 08:34:13.522147   79219 main.go:141] libmachine: Successfully made call to close driver server
	I0806 08:34:13.522165   79219 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 08:34:13.522175   79219 addons.go:475] Verifying addon metrics-server=true in "embed-certs-324808"
	I0806 08:34:13.522185   79219 main.go:141] libmachine: (embed-certs-324808) DBG | Closing plugin on server side
	I0806 08:34:13.524209   79219 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
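
After the addon manifests are applied, the metrics-server Deployment should exist in kube-system (its pod is already listed above as metrics-server-569cc877fc-8xb9k). The sketch below shows one way to confirm that with client-go; it is not part of the test harness, the deployment name is inferred from the pod name in the log, and the kubeconfig path is the test host's.

```go
// metrics_server_check.go - illustrative verification, not minikube's addon code.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19370-13057/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Assumption: the Deployment is named "metrics-server", matching the
	// metrics-server-569cc877fc ReplicaSet seen in the pod list above.
	dep, err := cs.AppsV1().Deployments("kube-system").Get(context.TODO(), "metrics-server", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("metrics-server: %d/%d replicas available\n",
		dep.Status.AvailableReplicas, dep.Status.Replicas)
}
```
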
	I0806 08:34:13.032813   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:13.033366   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Found IP for machine: 192.168.50.235
	I0806 08:34:13.033394   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Reserving static IP address...
	I0806 08:34:13.033414   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has current primary IP address 192.168.50.235 and MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:13.033956   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-990751", mac: "52:54:00:fb:51:bd", ip: "192.168.50.235"} in network mk-default-k8s-diff-port-990751: {Iface:virbr3 ExpiryTime:2024-08-06 09:34:06 +0000 UTC Type:0 Mac:52:54:00:fb:51:bd Iaid: IPaddr:192.168.50.235 Prefix:24 Hostname:default-k8s-diff-port-990751 Clientid:01:52:54:00:fb:51:bd}
	I0806 08:34:13.033992   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Reserved static IP address: 192.168.50.235
	I0806 08:34:13.034012   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | skip adding static IP to network mk-default-k8s-diff-port-990751 - found existing host DHCP lease matching {name: "default-k8s-diff-port-990751", mac: "52:54:00:fb:51:bd", ip: "192.168.50.235"}
	I0806 08:34:13.034031   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | Getting to WaitForSSH function...
	I0806 08:34:13.034046   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Waiting for SSH to be available...
	I0806 08:34:13.036483   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:13.036843   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:51:bd", ip: ""} in network mk-default-k8s-diff-port-990751: {Iface:virbr3 ExpiryTime:2024-08-06 09:34:06 +0000 UTC Type:0 Mac:52:54:00:fb:51:bd Iaid: IPaddr:192.168.50.235 Prefix:24 Hostname:default-k8s-diff-port-990751 Clientid:01:52:54:00:fb:51:bd}
	I0806 08:34:13.036874   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined IP address 192.168.50.235 and MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:13.037013   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | Using SSH client type: external
	I0806 08:34:13.037042   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | Using SSH private key: /home/jenkins/minikube-integration/19370-13057/.minikube/machines/default-k8s-diff-port-990751/id_rsa (-rw-------)
	I0806 08:34:13.037066   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.235 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19370-13057/.minikube/machines/default-k8s-diff-port-990751/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0806 08:34:13.037078   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | About to run SSH command:
	I0806 08:34:13.037119   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | exit 0
	I0806 08:34:13.164932   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | SSH cmd err, output: <nil>: 
	I0806 08:34:13.165301   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetConfigRaw
	I0806 08:34:13.165892   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetIP
	I0806 08:34:13.169003   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:13.169408   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:51:bd", ip: ""} in network mk-default-k8s-diff-port-990751: {Iface:virbr3 ExpiryTime:2024-08-06 09:34:06 +0000 UTC Type:0 Mac:52:54:00:fb:51:bd Iaid: IPaddr:192.168.50.235 Prefix:24 Hostname:default-k8s-diff-port-990751 Clientid:01:52:54:00:fb:51:bd}
	I0806 08:34:13.169449   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined IP address 192.168.50.235 and MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:13.169783   79614 profile.go:143] Saving config to /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/default-k8s-diff-port-990751/config.json ...
	I0806 08:34:13.170431   79614 machine.go:94] provisionDockerMachine start ...
	I0806 08:34:13.170452   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .DriverName
	I0806 08:34:13.170703   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHHostname
	I0806 08:34:13.173536   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:13.173994   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:51:bd", ip: ""} in network mk-default-k8s-diff-port-990751: {Iface:virbr3 ExpiryTime:2024-08-06 09:34:06 +0000 UTC Type:0 Mac:52:54:00:fb:51:bd Iaid: IPaddr:192.168.50.235 Prefix:24 Hostname:default-k8s-diff-port-990751 Clientid:01:52:54:00:fb:51:bd}
	I0806 08:34:13.174022   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined IP address 192.168.50.235 and MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:13.174241   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHPort
	I0806 08:34:13.174456   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHKeyPath
	I0806 08:34:13.174632   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHKeyPath
	I0806 08:34:13.174789   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHUsername
	I0806 08:34:13.175056   79614 main.go:141] libmachine: Using SSH client type: native
	I0806 08:34:13.175329   79614 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.235 22 <nil> <nil>}
	I0806 08:34:13.175344   79614 main.go:141] libmachine: About to run SSH command:
	hostname
	I0806 08:34:13.281288   79614 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0806 08:34:13.281322   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetMachineName
	I0806 08:34:13.281573   79614 buildroot.go:166] provisioning hostname "default-k8s-diff-port-990751"
	I0806 08:34:13.281606   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetMachineName
	I0806 08:34:13.281808   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHHostname
	I0806 08:34:13.284463   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:13.284825   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:51:bd", ip: ""} in network mk-default-k8s-diff-port-990751: {Iface:virbr3 ExpiryTime:2024-08-06 09:34:06 +0000 UTC Type:0 Mac:52:54:00:fb:51:bd Iaid: IPaddr:192.168.50.235 Prefix:24 Hostname:default-k8s-diff-port-990751 Clientid:01:52:54:00:fb:51:bd}
	I0806 08:34:13.284860   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined IP address 192.168.50.235 and MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:13.284988   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHPort
	I0806 08:34:13.285178   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHKeyPath
	I0806 08:34:13.285350   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHKeyPath
	I0806 08:34:13.285468   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHUsername
	I0806 08:34:13.285614   79614 main.go:141] libmachine: Using SSH client type: native
	I0806 08:34:13.285807   79614 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.235 22 <nil> <nil>}
	I0806 08:34:13.285821   79614 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-990751 && echo "default-k8s-diff-port-990751" | sudo tee /etc/hostname
	I0806 08:34:13.413970   79614 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-990751
	
	I0806 08:34:13.414005   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHHostname
	I0806 08:34:13.417279   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:13.417708   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:51:bd", ip: ""} in network mk-default-k8s-diff-port-990751: {Iface:virbr3 ExpiryTime:2024-08-06 09:34:06 +0000 UTC Type:0 Mac:52:54:00:fb:51:bd Iaid: IPaddr:192.168.50.235 Prefix:24 Hostname:default-k8s-diff-port-990751 Clientid:01:52:54:00:fb:51:bd}
	I0806 08:34:13.417757   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined IP address 192.168.50.235 and MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:13.417947   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHPort
	I0806 08:34:13.418153   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHKeyPath
	I0806 08:34:13.418337   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHKeyPath
	I0806 08:34:13.418495   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHUsername
	I0806 08:34:13.418683   79614 main.go:141] libmachine: Using SSH client type: native
	I0806 08:34:13.418935   79614 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.235 22 <nil> <nil>}
	I0806 08:34:13.418970   79614 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-990751' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-990751/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-990751' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0806 08:34:13.534550   79614 main.go:141] libmachine: SSH cmd err, output: <nil>: 
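
provisionDockerMachine drives every step above over SSH: running hostname, rewriting /etc/hosts, and later copying certs. A minimal sketch of that run-a-command-over-SSH pattern follows; the user, key path, and address come from the log, while the host-key handling and error handling are assumptions for the example and not minikube's machine/ssh_runner code.

```go
// ssh_run.go - illustrative sketch of the provisioning SSH calls logged above.
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/19370-13057/.minikube/machines/default-k8s-diff-port-990751/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User: "docker",
		Auth: []ssh.AuthMethod{ssh.PublicKeys(signer)},
		// The log disables host key checking (StrictHostKeyChecking=no); mirrored here.
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
	}
	client, err := ssh.Dial("tcp", "192.168.50.235:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()
	out, err := session.CombinedOutput("hostname")
	fmt.Printf("SSH cmd err, output: %v: %s", err, out)
}
```
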
	I0806 08:34:13.534578   79614 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19370-13057/.minikube CaCertPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19370-13057/.minikube}
	I0806 08:34:13.534630   79614 buildroot.go:174] setting up certificates
	I0806 08:34:13.534647   79614 provision.go:84] configureAuth start
	I0806 08:34:13.534664   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetMachineName
	I0806 08:34:13.534962   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetIP
	I0806 08:34:13.537582   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:13.538144   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:51:bd", ip: ""} in network mk-default-k8s-diff-port-990751: {Iface:virbr3 ExpiryTime:2024-08-06 09:34:06 +0000 UTC Type:0 Mac:52:54:00:fb:51:bd Iaid: IPaddr:192.168.50.235 Prefix:24 Hostname:default-k8s-diff-port-990751 Clientid:01:52:54:00:fb:51:bd}
	I0806 08:34:13.538173   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined IP address 192.168.50.235 and MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:13.538329   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHHostname
	I0806 08:34:13.541132   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:13.541422   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:51:bd", ip: ""} in network mk-default-k8s-diff-port-990751: {Iface:virbr3 ExpiryTime:2024-08-06 09:34:06 +0000 UTC Type:0 Mac:52:54:00:fb:51:bd Iaid: IPaddr:192.168.50.235 Prefix:24 Hostname:default-k8s-diff-port-990751 Clientid:01:52:54:00:fb:51:bd}
	I0806 08:34:13.541447   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined IP address 192.168.50.235 and MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:13.541638   79614 provision.go:143] copyHostCerts
	I0806 08:34:13.541693   79614 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-13057/.minikube/ca.pem, removing ...
	I0806 08:34:13.541703   79614 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-13057/.minikube/ca.pem
	I0806 08:34:13.541761   79614 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19370-13057/.minikube/ca.pem (1078 bytes)
	I0806 08:34:13.541862   79614 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-13057/.minikube/cert.pem, removing ...
	I0806 08:34:13.541870   79614 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-13057/.minikube/cert.pem
	I0806 08:34:13.541891   79614 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19370-13057/.minikube/cert.pem (1123 bytes)
	I0806 08:34:13.541943   79614 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-13057/.minikube/key.pem, removing ...
	I0806 08:34:13.541950   79614 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-13057/.minikube/key.pem
	I0806 08:34:13.541966   79614 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19370-13057/.minikube/key.pem (1675 bytes)
	I0806 08:34:13.542013   79614 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19370-13057/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-990751 san=[127.0.0.1 192.168.50.235 default-k8s-diff-port-990751 localhost minikube]
	I0806 08:34:13.632290   79614 provision.go:177] copyRemoteCerts
	I0806 08:34:13.632345   79614 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0806 08:34:13.632389   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHHostname
	I0806 08:34:13.635332   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:13.635619   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:51:bd", ip: ""} in network mk-default-k8s-diff-port-990751: {Iface:virbr3 ExpiryTime:2024-08-06 09:34:06 +0000 UTC Type:0 Mac:52:54:00:fb:51:bd Iaid: IPaddr:192.168.50.235 Prefix:24 Hostname:default-k8s-diff-port-990751 Clientid:01:52:54:00:fb:51:bd}
	I0806 08:34:13.635644   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined IP address 192.168.50.235 and MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:13.635868   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHPort
	I0806 08:34:13.636069   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHKeyPath
	I0806 08:34:13.636208   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHUsername
	I0806 08:34:13.636381   79614 sshutil.go:53] new ssh client: &{IP:192.168.50.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/default-k8s-diff-port-990751/id_rsa Username:docker}
	I0806 08:34:13.719415   79614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0806 08:34:13.745981   79614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0806 08:34:13.775906   79614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0806 08:34:13.806948   79614 provision.go:87] duration metric: took 272.283009ms to configureAuth
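	The configureAuth step above generates a server certificate signed by the minikube CA for the SANs listed at 08:34:13.542013 and copies it to /etc/docker on the guest. A condensed Go sketch of issuing such a certificate, using a freshly generated self-signed CA in place of the on-disk ca.pem/ca-key.pem (names and lifetimes are illustrative, not minikube's provision code; error handling is omitted for brevity):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Illustrative CA key pair; minikube would load ca.pem/ca-key.pem instead.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate with the SANs shown in the log above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-990751"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"default-k8s-diff-port-990751", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.235")},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}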
	I0806 08:34:13.806986   79614 buildroot.go:189] setting minikube options for container-runtime
	I0806 08:34:13.807234   79614 config.go:182] Loaded profile config "default-k8s-diff-port-990751": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0806 08:34:13.807328   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHHostname
	I0806 08:34:13.810500   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:13.810895   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:51:bd", ip: ""} in network mk-default-k8s-diff-port-990751: {Iface:virbr3 ExpiryTime:2024-08-06 09:34:06 +0000 UTC Type:0 Mac:52:54:00:fb:51:bd Iaid: IPaddr:192.168.50.235 Prefix:24 Hostname:default-k8s-diff-port-990751 Clientid:01:52:54:00:fb:51:bd}
	I0806 08:34:13.810928   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined IP address 192.168.50.235 and MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:13.811074   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHPort
	I0806 08:34:13.811302   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHKeyPath
	I0806 08:34:13.811497   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHKeyPath
	I0806 08:34:13.811627   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHUsername
	I0806 08:34:13.811896   79614 main.go:141] libmachine: Using SSH client type: native
	I0806 08:34:13.812124   79614 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.235 22 <nil> <nil>}
	I0806 08:34:13.812148   79614 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0806 08:34:13.525425   79219 addons.go:510] duration metric: took 1.623751418s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0806 08:34:14.104863   79219 node_ready.go:53] node "embed-certs-324808" has status "Ready":"False"
	I0806 08:34:14.353523   79927 start.go:364] duration metric: took 3m15.487259208s to acquireMachinesLock for "old-k8s-version-409903"
	I0806 08:34:14.353614   79927 start.go:96] Skipping create...Using existing machine configuration
	I0806 08:34:14.353629   79927 fix.go:54] fixHost starting: 
	I0806 08:34:14.354059   79927 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:34:14.354096   79927 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:34:14.373696   79927 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33985
	I0806 08:34:14.374148   79927 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:34:14.374646   79927 main.go:141] libmachine: Using API Version  1
	I0806 08:34:14.374672   79927 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:34:14.374995   79927 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:34:14.375277   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .DriverName
	I0806 08:34:14.375472   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetState
	I0806 08:34:14.377209   79927 fix.go:112] recreateIfNeeded on old-k8s-version-409903: state=Stopped err=<nil>
	I0806 08:34:14.377251   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .DriverName
	W0806 08:34:14.377387   79927 fix.go:138] unexpected machine state, will restart: <nil>
	I0806 08:34:14.379029   79927 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-409903" ...
	I0806 08:34:14.113926   79614 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0806 08:34:14.113953   79614 machine.go:97] duration metric: took 943.50747ms to provisionDockerMachine
	I0806 08:34:14.113967   79614 start.go:293] postStartSetup for "default-k8s-diff-port-990751" (driver="kvm2")
	I0806 08:34:14.113982   79614 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0806 08:34:14.114004   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .DriverName
	I0806 08:34:14.114313   79614 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0806 08:34:14.114339   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHHostname
	I0806 08:34:14.116618   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:14.117028   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:51:bd", ip: ""} in network mk-default-k8s-diff-port-990751: {Iface:virbr3 ExpiryTime:2024-08-06 09:34:06 +0000 UTC Type:0 Mac:52:54:00:fb:51:bd Iaid: IPaddr:192.168.50.235 Prefix:24 Hostname:default-k8s-diff-port-990751 Clientid:01:52:54:00:fb:51:bd}
	I0806 08:34:14.117067   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined IP address 192.168.50.235 and MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:14.117162   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHPort
	I0806 08:34:14.117371   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHKeyPath
	I0806 08:34:14.117545   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHUsername
	I0806 08:34:14.117738   79614 sshutil.go:53] new ssh client: &{IP:192.168.50.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/default-k8s-diff-port-990751/id_rsa Username:docker}
	I0806 08:34:14.199514   79614 ssh_runner.go:195] Run: cat /etc/os-release
	I0806 08:34:14.204063   79614 info.go:137] Remote host: Buildroot 2023.02.9
	I0806 08:34:14.204084   79614 filesync.go:126] Scanning /home/jenkins/minikube-integration/19370-13057/.minikube/addons for local assets ...
	I0806 08:34:14.204139   79614 filesync.go:126] Scanning /home/jenkins/minikube-integration/19370-13057/.minikube/files for local assets ...
	I0806 08:34:14.204219   79614 filesync.go:149] local asset: /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem -> 202462.pem in /etc/ssl/certs
	I0806 08:34:14.204308   79614 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0806 08:34:14.213847   79614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem --> /etc/ssl/certs/202462.pem (1708 bytes)
	I0806 08:34:14.241524   79614 start.go:296] duration metric: took 127.543182ms for postStartSetup
	I0806 08:34:14.241570   79614 fix.go:56] duration metric: took 19.588116399s for fixHost
	I0806 08:34:14.241590   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHHostname
	I0806 08:34:14.244333   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:14.244702   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:51:bd", ip: ""} in network mk-default-k8s-diff-port-990751: {Iface:virbr3 ExpiryTime:2024-08-06 09:34:06 +0000 UTC Type:0 Mac:52:54:00:fb:51:bd Iaid: IPaddr:192.168.50.235 Prefix:24 Hostname:default-k8s-diff-port-990751 Clientid:01:52:54:00:fb:51:bd}
	I0806 08:34:14.244733   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined IP address 192.168.50.235 and MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:14.244936   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHPort
	I0806 08:34:14.245178   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHKeyPath
	I0806 08:34:14.245338   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHKeyPath
	I0806 08:34:14.245497   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHUsername
	I0806 08:34:14.245637   79614 main.go:141] libmachine: Using SSH client type: native
	I0806 08:34:14.245783   79614 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.235 22 <nil> <nil>}
	I0806 08:34:14.245793   79614 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0806 08:34:14.353301   79614 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722933254.326161993
	
	I0806 08:34:14.353325   79614 fix.go:216] guest clock: 1722933254.326161993
	I0806 08:34:14.353339   79614 fix.go:229] Guest: 2024-08-06 08:34:14.326161993 +0000 UTC Remote: 2024-08-06 08:34:14.2415744 +0000 UTC m=+235.446489438 (delta=84.587593ms)
	I0806 08:34:14.353389   79614 fix.go:200] guest clock delta is within tolerance: 84.587593ms
	I0806 08:34:14.353398   79614 start.go:83] releasing machines lock for "default-k8s-diff-port-990751", held for 19.699973637s
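	The fix.go lines above compare the guest's `date +%s.%N` output against the host clock and accept the drift when it stays inside a tolerance. A minimal Go sketch of that comparison, assuming a one-second tolerance (the exact tolerance fix.go uses is not shown in this log):

package main

import (
	"fmt"
	"time"
)

// clockDeltaOK reports whether the drift between guest and host clocks
// is within the given tolerance. The guest timestamp would normally be
// parsed from `date +%s.%N`; the host side is simply time.Now().
func clockDeltaOK(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	host := time.Now()
	guest := host.Add(84 * time.Millisecond) // drift similar to the 84.587593ms seen above
	delta, ok := clockDeltaOK(guest, host, time.Second)
	fmt.Printf("guest clock delta %v, within tolerance: %v\n", delta, ok)
}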
	I0806 08:34:14.353432   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .DriverName
	I0806 08:34:14.353751   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetIP
	I0806 08:34:14.356590   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:14.357031   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:51:bd", ip: ""} in network mk-default-k8s-diff-port-990751: {Iface:virbr3 ExpiryTime:2024-08-06 09:34:06 +0000 UTC Type:0 Mac:52:54:00:fb:51:bd Iaid: IPaddr:192.168.50.235 Prefix:24 Hostname:default-k8s-diff-port-990751 Clientid:01:52:54:00:fb:51:bd}
	I0806 08:34:14.357065   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined IP address 192.168.50.235 and MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:14.357349   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .DriverName
	I0806 08:34:14.357896   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .DriverName
	I0806 08:34:14.358125   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .DriverName
	I0806 08:34:14.358222   79614 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0806 08:34:14.358265   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHHostname
	I0806 08:34:14.358387   79614 ssh_runner.go:195] Run: cat /version.json
	I0806 08:34:14.358415   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHHostname
	I0806 08:34:14.361131   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:14.361444   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:14.361590   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:51:bd", ip: ""} in network mk-default-k8s-diff-port-990751: {Iface:virbr3 ExpiryTime:2024-08-06 09:34:06 +0000 UTC Type:0 Mac:52:54:00:fb:51:bd Iaid: IPaddr:192.168.50.235 Prefix:24 Hostname:default-k8s-diff-port-990751 Clientid:01:52:54:00:fb:51:bd}
	I0806 08:34:14.361629   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined IP address 192.168.50.235 and MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:14.361801   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHPort
	I0806 08:34:14.361905   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:51:bd", ip: ""} in network mk-default-k8s-diff-port-990751: {Iface:virbr3 ExpiryTime:2024-08-06 09:34:06 +0000 UTC Type:0 Mac:52:54:00:fb:51:bd Iaid: IPaddr:192.168.50.235 Prefix:24 Hostname:default-k8s-diff-port-990751 Clientid:01:52:54:00:fb:51:bd}
	I0806 08:34:14.361940   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined IP address 192.168.50.235 and MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:14.362025   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHKeyPath
	I0806 08:34:14.362228   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHUsername
	I0806 08:34:14.362242   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHPort
	I0806 08:34:14.362420   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHKeyPath
	I0806 08:34:14.362424   79614 sshutil.go:53] new ssh client: &{IP:192.168.50.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/default-k8s-diff-port-990751/id_rsa Username:docker}
	I0806 08:34:14.362600   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHUsername
	I0806 08:34:14.362717   79614 sshutil.go:53] new ssh client: &{IP:192.168.50.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/default-k8s-diff-port-990751/id_rsa Username:docker}
	I0806 08:34:14.469232   79614 ssh_runner.go:195] Run: systemctl --version
	I0806 08:34:14.476929   79614 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0806 08:34:14.629487   79614 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0806 08:34:14.637276   79614 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0806 08:34:14.637349   79614 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0806 08:34:14.654612   79614 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0806 08:34:14.654634   79614 start.go:495] detecting cgroup driver to use...
	I0806 08:34:14.654699   79614 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0806 08:34:14.672255   79614 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0806 08:34:14.689135   79614 docker.go:217] disabling cri-docker service (if available) ...
	I0806 08:34:14.689203   79614 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0806 08:34:14.705745   79614 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0806 08:34:14.721267   79614 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0806 08:34:14.841773   79614 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0806 08:34:14.982757   79614 docker.go:233] disabling docker service ...
	I0806 08:34:14.982837   79614 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0806 08:34:14.997637   79614 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0806 08:34:15.011404   79614 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0806 08:34:15.167108   79614 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0806 08:34:15.286749   79614 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0806 08:34:15.301818   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0806 08:34:15.320848   79614 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0806 08:34:15.320920   79614 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:34:15.331331   79614 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0806 08:34:15.331414   79614 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:34:15.342763   79614 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:34:15.353534   79614 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:34:15.365593   79614 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0806 08:34:15.379804   79614 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:34:15.390200   79614 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:34:15.410545   79614 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:34:15.423050   79614 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0806 08:34:15.432810   79614 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0806 08:34:15.432878   79614 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0806 08:34:15.447298   79614 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0806 08:34:15.457992   79614 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 08:34:15.606288   79614 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0806 08:34:15.792322   79614 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0806 08:34:15.792411   79614 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0806 08:34:15.798556   79614 start.go:563] Will wait 60s for crictl version
	I0806 08:34:15.798615   79614 ssh_runner.go:195] Run: which crictl
	I0806 08:34:15.802732   79614 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0806 08:34:15.856880   79614 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0806 08:34:15.856963   79614 ssh_runner.go:195] Run: crio --version
	I0806 08:34:15.890496   79614 ssh_runner.go:195] Run: crio --version
	I0806 08:34:15.924486   79614 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
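	Before probing `crictl version`, the log above shows a 60s wait for the CRI-O socket at /var/run/crio/crio.sock. A rough Go sketch of that polling loop, with the poll interval chosen arbitrarily for illustration:

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls for a socket path until it exists or the timeout
// elapses, mirroring the "Will wait 60s for socket path" step above.
func waitForSocket(path string, timeout, interval time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second, time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("socket is ready")
}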
	I0806 08:34:14.380306   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .Start
	I0806 08:34:14.380486   79927 main.go:141] libmachine: (old-k8s-version-409903) Ensuring networks are active...
	I0806 08:34:14.381155   79927 main.go:141] libmachine: (old-k8s-version-409903) Ensuring network default is active
	I0806 08:34:14.381514   79927 main.go:141] libmachine: (old-k8s-version-409903) Ensuring network mk-old-k8s-version-409903 is active
	I0806 08:34:14.381820   79927 main.go:141] libmachine: (old-k8s-version-409903) Getting domain xml...
	I0806 08:34:14.382679   79927 main.go:141] libmachine: (old-k8s-version-409903) Creating domain...
	I0806 08:34:15.725416   79927 main.go:141] libmachine: (old-k8s-version-409903) Waiting to get IP...
	I0806 08:34:15.726492   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:15.727078   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | unable to find current IP address of domain old-k8s-version-409903 in network mk-old-k8s-version-409903
	I0806 08:34:15.727276   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | I0806 08:34:15.727179   80849 retry.go:31] will retry after 303.235731ms: waiting for machine to come up
	I0806 08:34:16.031996   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:16.032592   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | unable to find current IP address of domain old-k8s-version-409903 in network mk-old-k8s-version-409903
	I0806 08:34:16.032621   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | I0806 08:34:16.032542   80849 retry.go:31] will retry after 355.466914ms: waiting for machine to come up
	I0806 08:34:16.390212   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:16.390736   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | unable to find current IP address of domain old-k8s-version-409903 in network mk-old-k8s-version-409903
	I0806 08:34:16.390768   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | I0806 08:34:16.390674   80849 retry.go:31] will retry after 432.942387ms: waiting for machine to come up
	I0806 08:34:16.825739   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:16.826366   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | unable to find current IP address of domain old-k8s-version-409903 in network mk-old-k8s-version-409903
	I0806 08:34:16.826390   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | I0806 08:34:16.826317   80849 retry.go:31] will retry after 481.17687ms: waiting for machine to come up
	I0806 08:34:17.309005   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:17.309768   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | unable to find current IP address of domain old-k8s-version-409903 in network mk-old-k8s-version-409903
	I0806 08:34:17.309799   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | I0806 08:34:17.309745   80849 retry.go:31] will retry after 631.184181ms: waiting for machine to come up
	I0806 08:34:17.942220   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:17.942742   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | unable to find current IP address of domain old-k8s-version-409903 in network mk-old-k8s-version-409903
	I0806 08:34:17.942766   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | I0806 08:34:17.942701   80849 retry.go:31] will retry after 602.253362ms: waiting for machine to come up
	I0806 08:34:18.546333   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:18.546918   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | unable to find current IP address of domain old-k8s-version-409903 in network mk-old-k8s-version-409903
	I0806 08:34:18.546944   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | I0806 08:34:18.546855   80849 retry.go:31] will retry after 1.061939923s: waiting for machine to come up
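	The old-k8s-version-409903 restart keeps retrying the IP lookup with a growing delay, as the retry.go lines above show. A small Go sketch of that retry-with-backoff pattern; lookupIP is a hypothetical stand-in for the libvirt DHCP-lease query and the returned address is made up:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var attempts int

// lookupIP is purely illustrative: it fails a few times before
// succeeding, the way a freshly booted VM has no DHCP lease yet.
func lookupIP() (string, error) {
	attempts++
	if attempts < 4 {
		return "", errors.New("unable to find current IP address")
	}
	return "192.168.61.10", nil // hypothetical address
}

func main() {
	delay := 300 * time.Millisecond
	for i := 0; i < 10; i++ {
		ip, err := lookupIP()
		if err == nil {
			fmt.Println("machine came up with IP", ip)
			return
		}
		// Jittered, slowly growing wait, roughly like the log above.
		wait := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("retry %d: will retry after %v: %v\n", i+1, wait, err)
		time.Sleep(wait)
		delay = delay * 3 / 2
	}
}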
	I0806 08:34:15.925709   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetIP
	I0806 08:34:15.929420   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:15.929901   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:51:bd", ip: ""} in network mk-default-k8s-diff-port-990751: {Iface:virbr3 ExpiryTime:2024-08-06 09:34:06 +0000 UTC Type:0 Mac:52:54:00:fb:51:bd Iaid: IPaddr:192.168.50.235 Prefix:24 Hostname:default-k8s-diff-port-990751 Clientid:01:52:54:00:fb:51:bd}
	I0806 08:34:15.929931   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined IP address 192.168.50.235 and MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:15.930159   79614 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0806 08:34:15.934916   79614 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0806 08:34:15.949815   79614 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-990751 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.30.3 ClusterName:default-k8s-diff-port-990751 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.235 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirat
ion:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0806 08:34:15.949971   79614 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0806 08:34:15.950054   79614 ssh_runner.go:195] Run: sudo crictl images --output json
	I0806 08:34:16.004551   79614 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0806 08:34:16.004631   79614 ssh_runner.go:195] Run: which lz4
	I0806 08:34:16.011388   79614 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0806 08:34:16.017475   79614 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0806 08:34:16.017511   79614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0806 08:34:17.564750   79614 crio.go:462] duration metric: took 1.553401521s to copy over tarball
	I0806 08:34:17.564840   79614 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0806 08:34:16.106922   79219 node_ready.go:53] node "embed-certs-324808" has status "Ready":"False"
	I0806 08:34:17.104735   79219 node_ready.go:49] node "embed-certs-324808" has status "Ready":"True"
	I0806 08:34:17.104778   79219 node_ready.go:38] duration metric: took 5.004030652s for node "embed-certs-324808" to be "Ready" ...
	I0806 08:34:17.104790   79219 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0806 08:34:17.113668   79219 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-4xxwm" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:17.120777   79219 pod_ready.go:92] pod "coredns-7db6d8ff4d-4xxwm" in "kube-system" namespace has status "Ready":"True"
	I0806 08:34:17.120801   79219 pod_ready.go:81] duration metric: took 7.097875ms for pod "coredns-7db6d8ff4d-4xxwm" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:17.120811   79219 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-324808" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:17.126581   79219 pod_ready.go:92] pod "etcd-embed-certs-324808" in "kube-system" namespace has status "Ready":"True"
	I0806 08:34:17.126606   79219 pod_ready.go:81] duration metric: took 5.788426ms for pod "etcd-embed-certs-324808" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:17.126619   79219 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-324808" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:18.136739   79219 pod_ready.go:92] pod "kube-apiserver-embed-certs-324808" in "kube-system" namespace has status "Ready":"True"
	I0806 08:34:18.136767   79219 pod_ready.go:81] duration metric: took 1.010139212s for pod "kube-apiserver-embed-certs-324808" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:18.136781   79219 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-324808" in "kube-system" namespace to be "Ready" ...
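	The pod_ready waits above poll each system pod until its Ready condition reports True. A sketch of the same check using client-go (this assumes the k8s.io/client-go module is available; the kubeconfig path and pod name are placeholders):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True, which is
// the condition the pod_ready waits above are polling for.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)
	for {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(),
			"kube-controller-manager-embed-certs-324808", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
}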
	I0806 08:34:19.610165   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:19.610567   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | unable to find current IP address of domain old-k8s-version-409903 in network mk-old-k8s-version-409903
	I0806 08:34:19.610598   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | I0806 08:34:19.610542   80849 retry.go:31] will retry after 1.339464672s: waiting for machine to come up
	I0806 08:34:20.951505   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:20.952049   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | unable to find current IP address of domain old-k8s-version-409903 in network mk-old-k8s-version-409903
	I0806 08:34:20.952080   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | I0806 08:34:20.951975   80849 retry.go:31] will retry after 1.622574331s: waiting for machine to come up
	I0806 08:34:22.576150   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:22.576559   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | unable to find current IP address of domain old-k8s-version-409903 in network mk-old-k8s-version-409903
	I0806 08:34:22.576582   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | I0806 08:34:22.576508   80849 retry.go:31] will retry after 1.408827574s: waiting for machine to come up
	I0806 08:34:19.920581   79614 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.355705846s)
	I0806 08:34:19.920609   79614 crio.go:469] duration metric: took 2.355825462s to extract the tarball
	I0806 08:34:19.920619   79614 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0806 08:34:19.959118   79614 ssh_runner.go:195] Run: sudo crictl images --output json
	I0806 08:34:20.002413   79614 crio.go:514] all images are preloaded for cri-o runtime.
	I0806 08:34:20.002449   79614 cache_images.go:84] Images are preloaded, skipping loading
	I0806 08:34:20.002461   79614 kubeadm.go:934] updating node { 192.168.50.235 8444 v1.30.3 crio true true} ...
	I0806 08:34:20.002588   79614 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-990751 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.235
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-990751 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0806 08:34:20.002655   79614 ssh_runner.go:195] Run: crio config
	I0806 08:34:20.058943   79614 cni.go:84] Creating CNI manager for ""
	I0806 08:34:20.058970   79614 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0806 08:34:20.058981   79614 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0806 08:34:20.059010   79614 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.235 APIServerPort:8444 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-990751 NodeName:default-k8s-diff-port-990751 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.235"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.235 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0806 08:34:20.059247   79614 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.235
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-990751"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.235
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.235"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0806 08:34:20.059337   79614 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0806 08:34:20.072766   79614 binaries.go:44] Found k8s binaries, skipping transfer
	I0806 08:34:20.072876   79614 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0806 08:34:20.085730   79614 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0806 08:34:20.105230   79614 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0806 08:34:20.124800   79614 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0806 08:34:20.146547   79614 ssh_runner.go:195] Run: grep 192.168.50.235	control-plane.minikube.internal$ /etc/hosts
	I0806 08:34:20.150692   79614 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.235	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0806 08:34:20.164006   79614 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 08:34:20.286820   79614 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0806 08:34:20.304141   79614 certs.go:68] Setting up /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/default-k8s-diff-port-990751 for IP: 192.168.50.235
	I0806 08:34:20.304166   79614 certs.go:194] generating shared ca certs ...
	I0806 08:34:20.304187   79614 certs.go:226] acquiring lock for ca certs: {Name:mkf5c5aed91f4108054d9f2c742cad66c86afbed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 08:34:20.304386   79614 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19370-13057/.minikube/ca.key
	I0806 08:34:20.304448   79614 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.key
	I0806 08:34:20.304464   79614 certs.go:256] generating profile certs ...
	I0806 08:34:20.304567   79614 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/default-k8s-diff-port-990751/client.key
	I0806 08:34:20.304648   79614 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/default-k8s-diff-port-990751/apiserver.key.aa7eccb7
	I0806 08:34:20.304715   79614 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/default-k8s-diff-port-990751/proxy-client.key
	I0806 08:34:20.304872   79614 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/20246.pem (1338 bytes)
	W0806 08:34:20.304919   79614 certs.go:480] ignoring /home/jenkins/minikube-integration/19370-13057/.minikube/certs/20246_empty.pem, impossibly tiny 0 bytes
	I0806 08:34:20.304929   79614 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca-key.pem (1675 bytes)
	I0806 08:34:20.304960   79614 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem (1078 bytes)
	I0806 08:34:20.304989   79614 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/cert.pem (1123 bytes)
	I0806 08:34:20.305019   79614 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/key.pem (1675 bytes)
	I0806 08:34:20.305089   79614 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem (1708 bytes)
	I0806 08:34:20.305821   79614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0806 08:34:20.341201   79614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0806 08:34:20.375406   79614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0806 08:34:20.407409   79614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0806 08:34:20.449555   79614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/default-k8s-diff-port-990751/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0806 08:34:20.479479   79614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/default-k8s-diff-port-990751/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0806 08:34:20.518719   79614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/default-k8s-diff-port-990751/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0806 08:34:20.544212   79614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/default-k8s-diff-port-990751/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0806 08:34:20.569413   79614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0806 08:34:20.594573   79614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/certs/20246.pem --> /usr/share/ca-certificates/20246.pem (1338 bytes)
	I0806 08:34:20.620183   79614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem --> /usr/share/ca-certificates/202462.pem (1708 bytes)
	I0806 08:34:20.648311   79614 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0806 08:34:20.670845   79614 ssh_runner.go:195] Run: openssl version
	I0806 08:34:20.677437   79614 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202462.pem && ln -fs /usr/share/ca-certificates/202462.pem /etc/ssl/certs/202462.pem"
	I0806 08:34:20.688990   79614 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202462.pem
	I0806 08:34:20.694512   79614 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  6 07:17 /usr/share/ca-certificates/202462.pem
	I0806 08:34:20.694580   79614 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202462.pem
	I0806 08:34:20.701362   79614 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202462.pem /etc/ssl/certs/3ec20f2e.0"
	I0806 08:34:20.712966   79614 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0806 08:34:20.724455   79614 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0806 08:34:20.729111   79614 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  6 07:07 /usr/share/ca-certificates/minikubeCA.pem
	I0806 08:34:20.729165   79614 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0806 08:34:20.734973   79614 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0806 08:34:20.746387   79614 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20246.pem && ln -fs /usr/share/ca-certificates/20246.pem /etc/ssl/certs/20246.pem"
	I0806 08:34:20.757763   79614 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20246.pem
	I0806 08:34:20.762396   79614 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  6 07:17 /usr/share/ca-certificates/20246.pem
	I0806 08:34:20.762454   79614 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20246.pem
	I0806 08:34:20.768261   79614 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20246.pem /etc/ssl/certs/51391683.0"
	I0806 08:34:20.779553   79614 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0806 08:34:20.785854   79614 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0806 08:34:20.794054   79614 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0806 08:34:20.802652   79614 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0806 08:34:20.809929   79614 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0806 08:34:20.816886   79614 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0806 08:34:20.823667   79614 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
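	The openssl invocations above verify that each control-plane certificate remains valid for at least 86400 seconds (24 hours). An equivalent check written in Go with crypto/x509, reading a PEM certificate path from the command line (illustrative only; minikube shells out to openssl as shown):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// Checks whether the certificate at os.Args[1] is still valid for at
// least 24h, the same condition `openssl x509 -checkend 86400` tests.
func main() {
	data, err := os.ReadFile(os.Args[1])
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if time.Until(cert.NotAfter) < 24*time.Hour {
		fmt.Println("certificate will expire within 24h")
		os.Exit(1)
	}
	fmt.Println("certificate is valid for at least another 24h")
}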
	I0806 08:34:20.830355   79614 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-990751 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.30.3 ClusterName:default-k8s-diff-port-990751 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.235 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 08:34:20.830456   79614 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0806 08:34:20.830527   79614 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0806 08:34:20.878368   79614 cri.go:89] found id: ""
	I0806 08:34:20.878455   79614 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0806 08:34:20.889175   79614 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0806 08:34:20.889196   79614 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0806 08:34:20.889250   79614 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0806 08:34:20.899842   79614 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0806 08:34:20.901037   79614 kubeconfig.go:125] found "default-k8s-diff-port-990751" server: "https://192.168.50.235:8444"
	I0806 08:34:20.902781   79614 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0806 08:34:20.914810   79614 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.235
	I0806 08:34:20.914847   79614 kubeadm.go:1160] stopping kube-system containers ...
	I0806 08:34:20.914859   79614 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0806 08:34:20.914905   79614 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0806 08:34:20.956719   79614 cri.go:89] found id: ""
	I0806 08:34:20.956772   79614 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0806 08:34:20.977367   79614 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0806 08:34:20.989491   79614 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0806 08:34:20.989513   79614 kubeadm.go:157] found existing configuration files:
	
	I0806 08:34:20.989565   79614 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0806 08:34:21.000759   79614 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0806 08:34:21.000820   79614 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0806 08:34:21.012404   79614 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0806 08:34:21.023449   79614 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0806 08:34:21.023536   79614 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0806 08:34:21.035354   79614 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0806 08:34:21.045285   79614 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0806 08:34:21.045347   79614 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0806 08:34:21.057182   79614 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0806 08:34:21.067431   79614 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0806 08:34:21.067497   79614 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0806 08:34:21.077955   79614 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0806 08:34:21.088824   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0806 08:34:21.209627   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0806 08:34:21.772624   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0806 08:34:22.022615   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0806 08:34:22.107829   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0806 08:34:22.235446   79614 api_server.go:52] waiting for apiserver process to appear ...
	I0806 08:34:22.235552   79614 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:22.735843   79614 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:23.236385   79614 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:23.314994   79614 api_server.go:72] duration metric: took 1.079547638s to wait for apiserver process to appear ...
	I0806 08:34:23.315025   79614 api_server.go:88] waiting for apiserver healthz status ...
	I0806 08:34:23.315071   79614 api_server.go:253] Checking apiserver healthz at https://192.168.50.235:8444/healthz ...
	I0806 08:34:23.315581   79614 api_server.go:269] stopped: https://192.168.50.235:8444/healthz: Get "https://192.168.50.235:8444/healthz": dial tcp 192.168.50.235:8444: connect: connection refused
	I0806 08:34:23.815428   79614 api_server.go:253] Checking apiserver healthz at https://192.168.50.235:8444/healthz ...
	I0806 08:34:20.145118   79219 pod_ready.go:102] pod "kube-controller-manager-embed-certs-324808" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:23.051556   79219 pod_ready.go:102] pod "kube-controller-manager-embed-certs-324808" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:23.077556   79219 pod_ready.go:92] pod "kube-controller-manager-embed-certs-324808" in "kube-system" namespace has status "Ready":"True"
	I0806 08:34:23.077581   79219 pod_ready.go:81] duration metric: took 4.94079164s for pod "kube-controller-manager-embed-certs-324808" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:23.077594   79219 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8lbzv" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:23.091745   79219 pod_ready.go:92] pod "kube-proxy-8lbzv" in "kube-system" namespace has status "Ready":"True"
	I0806 08:34:23.091773   79219 pod_ready.go:81] duration metric: took 14.170363ms for pod "kube-proxy-8lbzv" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:23.091789   79219 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-324808" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:23.105567   79219 pod_ready.go:92] pod "kube-scheduler-embed-certs-324808" in "kube-system" namespace has status "Ready":"True"
	I0806 08:34:23.105597   79219 pod_ready.go:81] duration metric: took 13.798762ms for pod "kube-scheduler-embed-certs-324808" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:23.105612   79219 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:23.987256   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:23.987658   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | unable to find current IP address of domain old-k8s-version-409903 in network mk-old-k8s-version-409903
	I0806 08:34:23.987683   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | I0806 08:34:23.987617   80849 retry.go:31] will retry after 2.000071561s: waiting for machine to come up
	I0806 08:34:25.989501   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:25.989880   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | unable to find current IP address of domain old-k8s-version-409903 in network mk-old-k8s-version-409903
	I0806 08:34:25.989906   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | I0806 08:34:25.989838   80849 retry.go:31] will retry after 2.508269501s: waiting for machine to come up
	I0806 08:34:28.501449   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:28.501842   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | unable to find current IP address of domain old-k8s-version-409903 in network mk-old-k8s-version-409903
	I0806 08:34:28.501874   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | I0806 08:34:28.501802   80849 retry.go:31] will retry after 2.7583106s: waiting for machine to come up
	I0806 08:34:26.696273   79614 api_server.go:279] https://192.168.50.235:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0806 08:34:26.696325   79614 api_server.go:103] status: https://192.168.50.235:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0806 08:34:26.696342   79614 api_server.go:253] Checking apiserver healthz at https://192.168.50.235:8444/healthz ...
	I0806 08:34:26.704047   79614 api_server.go:279] https://192.168.50.235:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0806 08:34:26.704078   79614 api_server.go:103] status: https://192.168.50.235:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0806 08:34:26.815341   79614 api_server.go:253] Checking apiserver healthz at https://192.168.50.235:8444/healthz ...
	I0806 08:34:26.819686   79614 api_server.go:279] https://192.168.50.235:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0806 08:34:26.819718   79614 api_server.go:103] status: https://192.168.50.235:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0806 08:34:27.315242   79614 api_server.go:253] Checking apiserver healthz at https://192.168.50.235:8444/healthz ...
	I0806 08:34:27.320753   79614 api_server.go:279] https://192.168.50.235:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0806 08:34:27.320789   79614 api_server.go:103] status: https://192.168.50.235:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0806 08:34:27.815164   79614 api_server.go:253] Checking apiserver healthz at https://192.168.50.235:8444/healthz ...
	I0806 08:34:27.821943   79614 api_server.go:279] https://192.168.50.235:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0806 08:34:27.821967   79614 api_server.go:103] status: https://192.168.50.235:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0806 08:34:28.315169   79614 api_server.go:253] Checking apiserver healthz at https://192.168.50.235:8444/healthz ...
	I0806 08:34:28.321921   79614 api_server.go:279] https://192.168.50.235:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0806 08:34:28.321945   79614 api_server.go:103] status: https://192.168.50.235:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0806 08:34:28.815717   79614 api_server.go:253] Checking apiserver healthz at https://192.168.50.235:8444/healthz ...
	I0806 08:34:28.819946   79614 api_server.go:279] https://192.168.50.235:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0806 08:34:28.819973   79614 api_server.go:103] status: https://192.168.50.235:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0806 08:34:25.112333   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:27.112488   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:29.315643   79614 api_server.go:253] Checking apiserver healthz at https://192.168.50.235:8444/healthz ...
	I0806 08:34:29.319784   79614 api_server.go:279] https://192.168.50.235:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0806 08:34:29.319813   79614 api_server.go:103] status: https://192.168.50.235:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0806 08:34:29.815945   79614 api_server.go:253] Checking apiserver healthz at https://192.168.50.235:8444/healthz ...
	I0806 08:34:29.821441   79614 api_server.go:279] https://192.168.50.235:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0806 08:34:29.821476   79614 api_server.go:103] status: https://192.168.50.235:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0806 08:34:30.315997   79614 api_server.go:253] Checking apiserver healthz at https://192.168.50.235:8444/healthz ...
	I0806 08:34:30.321179   79614 api_server.go:279] https://192.168.50.235:8444/healthz returned 200:
	ok
	I0806 08:34:30.327083   79614 api_server.go:141] control plane version: v1.30.3
	I0806 08:34:30.327113   79614 api_server.go:131] duration metric: took 7.012080775s to wait for apiserver health ...
	I0806 08:34:30.327122   79614 cni.go:84] Creating CNI manager for ""
	I0806 08:34:30.327128   79614 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0806 08:34:30.328860   79614 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0806 08:34:30.330232   79614 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0806 08:34:30.341069   79614 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
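	The CNI step above generates a bridge conflist in memory (496 bytes, contents not reproduced in the log) and copies it to /etc/cni/net.d/1-k8s.conflist on the guest. As a rough illustration of what a bridge CNI configuration of that shape typically contains (the subnet, bridge name and plugin options here are assumptions, not the file minikube actually wrote):

	    package main

	    import (
	    	"log"
	    	"os"
	    )

	    // An example bridge CNI configuration; minikube's generated payload may differ.
	    const conflist = `{
	      "cniVersion": "0.3.1",
	      "name": "bridge",
	      "plugins": [
	        {
	          "type": "bridge",
	          "bridge": "bridge",
	          "isDefaultGateway": true,
	          "ipMasq": true,
	          "hairpinMode": true,
	          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	        },
	        { "type": "portmap", "capabilities": { "portMappings": true } }
	      ]
	    }
	    `

	    func main() {
	    	// minikube places the file under /etc/cni/net.d/ (root required); write locally here.
	    	if err := os.WriteFile("1-k8s.conflist", []byte(conflist), 0o644); err != nil {
	    		log.Fatal(err)
	    	}
	    }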
	I0806 08:34:30.359470   79614 system_pods.go:43] waiting for kube-system pods to appear ...
	I0806 08:34:30.369165   79614 system_pods.go:59] 8 kube-system pods found
	I0806 08:34:30.369209   79614 system_pods.go:61] "coredns-7db6d8ff4d-gx8tt" [48f83ba8-a238-4baf-94af-88370fefcb02] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0806 08:34:30.369227   79614 system_pods.go:61] "etcd-default-k8s-diff-port-990751" [bf342ba4-1e11-40bf-80fd-02b402043ecf] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0806 08:34:30.369246   79614 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-990751" [11ee42c4-c9e9-41cf-ace2-deac0f30153a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0806 08:34:30.369256   79614 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-990751" [f17fca30-0afc-4ed8-a8cc-cb64e69723f3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0806 08:34:30.369265   79614 system_pods.go:61] "kube-proxy-lpf88" [866c10f0-2c44-4c03-b460-ff2194bd1ec6] Running
	I0806 08:34:30.369272   79614 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-990751" [2e591ea7-07bd-464f-bf0e-bc24bfcca016] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0806 08:34:30.369282   79614 system_pods.go:61] "metrics-server-569cc877fc-hnxd4" [f369ac70-49b7-448a-9852-89232fb66da9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0806 08:34:30.369290   79614 system_pods.go:61] "storage-provisioner" [0faff3fc-d34b-418e-972b-95a2e36b577a] Running
	I0806 08:34:30.369299   79614 system_pods.go:74] duration metric: took 9.80217ms to wait for pod list to return data ...
	I0806 08:34:30.369311   79614 node_conditions.go:102] verifying NodePressure condition ...
	I0806 08:34:30.372222   79614 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0806 08:34:30.372250   79614 node_conditions.go:123] node cpu capacity is 2
	I0806 08:34:30.372261   79614 node_conditions.go:105] duration metric: took 2.944552ms to run NodePressure ...
	I0806 08:34:30.372279   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0806 08:34:30.639677   79614 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0806 08:34:30.643923   79614 kubeadm.go:739] kubelet initialised
	I0806 08:34:30.643952   79614 kubeadm.go:740] duration metric: took 4.248408ms waiting for restarted kubelet to initialise ...
	I0806 08:34:30.643962   79614 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0806 08:34:30.648766   79614 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-gx8tt" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:30.653672   79614 pod_ready.go:97] node "default-k8s-diff-port-990751" hosting pod "coredns-7db6d8ff4d-gx8tt" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-990751" has status "Ready":"False"
	I0806 08:34:30.653703   79614 pod_ready.go:81] duration metric: took 4.90961ms for pod "coredns-7db6d8ff4d-gx8tt" in "kube-system" namespace to be "Ready" ...
	E0806 08:34:30.653714   79614 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-990751" hosting pod "coredns-7db6d8ff4d-gx8tt" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-990751" has status "Ready":"False"
	I0806 08:34:30.653723   79614 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-990751" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:30.658080   79614 pod_ready.go:97] node "default-k8s-diff-port-990751" hosting pod "etcd-default-k8s-diff-port-990751" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-990751" has status "Ready":"False"
	I0806 08:34:30.658106   79614 pod_ready.go:81] duration metric: took 4.370083ms for pod "etcd-default-k8s-diff-port-990751" in "kube-system" namespace to be "Ready" ...
	E0806 08:34:30.658119   79614 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-990751" hosting pod "etcd-default-k8s-diff-port-990751" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-990751" has status "Ready":"False"
	I0806 08:34:30.658125   79614 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-990751" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:30.662246   79614 pod_ready.go:97] node "default-k8s-diff-port-990751" hosting pod "kube-apiserver-default-k8s-diff-port-990751" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-990751" has status "Ready":"False"
	I0806 08:34:30.662271   79614 pod_ready.go:81] duration metric: took 4.139553ms for pod "kube-apiserver-default-k8s-diff-port-990751" in "kube-system" namespace to be "Ready" ...
	E0806 08:34:30.662280   79614 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-990751" hosting pod "kube-apiserver-default-k8s-diff-port-990751" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-990751" has status "Ready":"False"
	I0806 08:34:30.662286   79614 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-990751" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:30.763246   79614 pod_ready.go:97] node "default-k8s-diff-port-990751" hosting pod "kube-controller-manager-default-k8s-diff-port-990751" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-990751" has status "Ready":"False"
	I0806 08:34:30.763286   79614 pod_ready.go:81] duration metric: took 100.991212ms for pod "kube-controller-manager-default-k8s-diff-port-990751" in "kube-system" namespace to be "Ready" ...
	E0806 08:34:30.763302   79614 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-990751" hosting pod "kube-controller-manager-default-k8s-diff-port-990751" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-990751" has status "Ready":"False"
	I0806 08:34:30.763311   79614 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-lpf88" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:31.162972   79614 pod_ready.go:97] node "default-k8s-diff-port-990751" hosting pod "kube-proxy-lpf88" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-990751" has status "Ready":"False"
	I0806 08:34:31.162998   79614 pod_ready.go:81] duration metric: took 399.678401ms for pod "kube-proxy-lpf88" in "kube-system" namespace to be "Ready" ...
	E0806 08:34:31.163006   79614 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-990751" hosting pod "kube-proxy-lpf88" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-990751" has status "Ready":"False"
	I0806 08:34:31.163012   79614 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-990751" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:31.562538   79614 pod_ready.go:97] node "default-k8s-diff-port-990751" hosting pod "kube-scheduler-default-k8s-diff-port-990751" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-990751" has status "Ready":"False"
	I0806 08:34:31.562564   79614 pod_ready.go:81] duration metric: took 399.544055ms for pod "kube-scheduler-default-k8s-diff-port-990751" in "kube-system" namespace to be "Ready" ...
	E0806 08:34:31.562575   79614 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-990751" hosting pod "kube-scheduler-default-k8s-diff-port-990751" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-990751" has status "Ready":"False"
	I0806 08:34:31.562582   79614 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:31.963410   79614 pod_ready.go:97] node "default-k8s-diff-port-990751" hosting pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-990751" has status "Ready":"False"
	I0806 08:34:31.963438   79614 pod_ready.go:81] duration metric: took 400.849931ms for pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace to be "Ready" ...
	E0806 08:34:31.963449   79614 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-990751" hosting pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-990751" has status "Ready":"False"
	I0806 08:34:31.963456   79614 pod_ready.go:38] duration metric: took 1.319483278s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
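	The pod_ready.go lines above repeatedly inspect each system-critical pod's Ready condition, and skip (with an error) pods whose hosting node is itself not yet Ready. A minimal client-go sketch of that style of readiness wait (illustrative only, not minikube's pod_ready.go; the kubeconfig path, namespace, pod name and timeouts are assumptions, with the pod name borrowed from the log):

	    package main

	    import (
	    	"context"
	    	"fmt"
	    	"time"

	    	corev1 "k8s.io/api/core/v1"
	    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	    	"k8s.io/apimachinery/pkg/util/wait"
	    	"k8s.io/client-go/kubernetes"
	    	"k8s.io/client-go/tools/clientcmd"
	    )

	    // isReady reports whether the pod's Ready condition is True.
	    func isReady(pod *corev1.Pod) bool {
	    	for _, c := range pod.Status.Conditions {
	    		if c.Type == corev1.PodReady {
	    			return c.Status == corev1.ConditionTrue
	    		}
	    	}
	    	return false
	    }

	    func main() {
	    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // hypothetical path
	    	if err != nil {
	    		panic(err)
	    	}
	    	cs, err := kubernetes.NewForConfig(cfg)
	    	if err != nil {
	    		panic(err)
	    	}
	    	podName := "coredns-7db6d8ff4d-gx8tt" // taken from the log above
	    	err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 4*time.Minute, true,
	    		func(ctx context.Context) (bool, error) {
	    			pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, podName, metav1.GetOptions{})
	    			if err != nil {
	    				return false, nil // transient errors: keep polling
	    			}
	    			return isReady(pod), nil
	    		})
	    	fmt.Println("ready wait finished, err =", err)
	    }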
	I0806 08:34:31.963472   79614 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0806 08:34:31.975982   79614 ops.go:34] apiserver oom_adj: -16
	I0806 08:34:31.976006   79614 kubeadm.go:597] duration metric: took 11.086801084s to restartPrimaryControlPlane
	I0806 08:34:31.976016   79614 kubeadm.go:394] duration metric: took 11.145668347s to StartCluster
	I0806 08:34:31.976035   79614 settings.go:142] acquiring lock: {Name:mkb0bfbcae3b4205ef43487a9de84cfd8afd2e5e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 08:34:31.976131   79614 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19370-13057/kubeconfig
	I0806 08:34:31.977626   79614 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-13057/kubeconfig: {Name:mk5c066914acf8e36556b29792f75b98076ca34a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 08:34:31.977871   79614 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.235 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0806 08:34:31.977927   79614 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0806 08:34:31.978004   79614 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-990751"
	I0806 08:34:31.978058   79614 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-990751"
	W0806 08:34:31.978070   79614 addons.go:243] addon storage-provisioner should already be in state true
	I0806 08:34:31.978060   79614 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-990751"
	I0806 08:34:31.978092   79614 config.go:182] Loaded profile config "default-k8s-diff-port-990751": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0806 08:34:31.978076   79614 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-990751"
	I0806 08:34:31.978124   79614 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-990751"
	I0806 08:34:31.978154   79614 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-990751"
	I0806 08:34:31.978111   79614 host.go:66] Checking if "default-k8s-diff-port-990751" exists ...
	W0806 08:34:31.978172   79614 addons.go:243] addon metrics-server should already be in state true
	I0806 08:34:31.978250   79614 host.go:66] Checking if "default-k8s-diff-port-990751" exists ...
	I0806 08:34:31.978595   79614 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:34:31.978651   79614 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:34:31.978602   79614 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:34:31.978691   79614 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:34:31.978650   79614 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:34:31.978806   79614 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:34:31.979749   79614 out.go:177] * Verifying Kubernetes components...
	I0806 08:34:31.981485   79614 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 08:34:31.994362   79614 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36289
	I0806 08:34:31.995043   79614 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:34:31.995594   79614 main.go:141] libmachine: Using API Version  1
	I0806 08:34:31.995610   79614 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:34:31.995940   79614 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:34:31.996152   79614 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36601
	I0806 08:34:31.996485   79614 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:34:31.996569   79614 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:34:31.996613   79614 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:34:31.996785   79614 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42273
	I0806 08:34:31.997043   79614 main.go:141] libmachine: Using API Version  1
	I0806 08:34:31.997062   79614 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:34:31.997335   79614 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:34:31.997424   79614 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:34:31.997601   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetState
	I0806 08:34:31.997759   79614 main.go:141] libmachine: Using API Version  1
	I0806 08:34:31.997775   79614 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:34:31.998047   79614 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:34:31.998511   79614 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:34:31.998546   79614 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:34:32.001381   79614 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-990751"
	W0806 08:34:32.001397   79614 addons.go:243] addon default-storageclass should already be in state true
	I0806 08:34:32.001427   79614 host.go:66] Checking if "default-k8s-diff-port-990751" exists ...
	I0806 08:34:32.001690   79614 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:34:32.001721   79614 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:34:32.015095   79614 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41633
	I0806 08:34:32.015597   79614 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:34:32.016197   79614 main.go:141] libmachine: Using API Version  1
	I0806 08:34:32.016220   79614 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:34:32.016245   79614 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33731
	I0806 08:34:32.016951   79614 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:34:32.016953   79614 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:34:32.017491   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetState
	I0806 08:34:32.017572   79614 main.go:141] libmachine: Using API Version  1
	I0806 08:34:32.017601   79614 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:34:32.017951   79614 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:34:32.018222   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetState
	I0806 08:34:32.020339   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .DriverName
	I0806 08:34:32.020347   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .DriverName
	I0806 08:34:32.021417   79614 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40949
	I0806 08:34:32.021814   79614 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:34:32.022232   79614 main.go:141] libmachine: Using API Version  1
	I0806 08:34:32.022254   79614 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:34:32.022609   79614 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:34:32.022690   79614 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0806 08:34:32.022690   79614 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0806 08:34:32.023365   79614 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:34:32.023410   79614 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:34:32.024304   79614 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0806 08:34:32.024326   79614 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0806 08:34:32.024343   79614 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0806 08:34:32.024358   79614 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0806 08:34:32.024387   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHHostname
	I0806 08:34:32.024346   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHHostname
	I0806 08:34:32.029046   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:32.029292   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:32.029490   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:51:bd", ip: ""} in network mk-default-k8s-diff-port-990751: {Iface:virbr3 ExpiryTime:2024-08-06 09:34:06 +0000 UTC Type:0 Mac:52:54:00:fb:51:bd Iaid: IPaddr:192.168.50.235 Prefix:24 Hostname:default-k8s-diff-port-990751 Clientid:01:52:54:00:fb:51:bd}
	I0806 08:34:32.029516   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined IP address 192.168.50.235 and MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:32.029672   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:51:bd", ip: ""} in network mk-default-k8s-diff-port-990751: {Iface:virbr3 ExpiryTime:2024-08-06 09:34:06 +0000 UTC Type:0 Mac:52:54:00:fb:51:bd Iaid: IPaddr:192.168.50.235 Prefix:24 Hostname:default-k8s-diff-port-990751 Clientid:01:52:54:00:fb:51:bd}
	I0806 08:34:32.029676   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHPort
	I0806 08:34:32.029694   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined IP address 192.168.50.235 and MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:32.029840   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHPort
	I0806 08:34:32.029866   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHKeyPath
	I0806 08:34:32.030000   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHKeyPath
	I0806 08:34:32.030122   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHUsername
	I0806 08:34:32.030124   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHUsername
	I0806 08:34:32.030265   79614 sshutil.go:53] new ssh client: &{IP:192.168.50.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/default-k8s-diff-port-990751/id_rsa Username:docker}
	I0806 08:34:32.030265   79614 sshutil.go:53] new ssh client: &{IP:192.168.50.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/default-k8s-diff-port-990751/id_rsa Username:docker}
	I0806 08:34:32.043523   79614 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40021
	I0806 08:34:32.043915   79614 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:34:32.044560   79614 main.go:141] libmachine: Using API Version  1
	I0806 08:34:32.044583   79614 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:34:32.044969   79614 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:34:32.045214   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetState
	I0806 08:34:32.046980   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .DriverName
	I0806 08:34:32.047261   79614 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0806 08:34:32.047277   79614 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0806 08:34:32.047296   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHHostname
	I0806 08:34:32.050654   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:32.050991   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:51:bd", ip: ""} in network mk-default-k8s-diff-port-990751: {Iface:virbr3 ExpiryTime:2024-08-06 09:34:06 +0000 UTC Type:0 Mac:52:54:00:fb:51:bd Iaid: IPaddr:192.168.50.235 Prefix:24 Hostname:default-k8s-diff-port-990751 Clientid:01:52:54:00:fb:51:bd}
	I0806 08:34:32.051020   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined IP address 192.168.50.235 and MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:32.051152   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHPort
	I0806 08:34:32.051329   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHKeyPath
	I0806 08:34:32.051495   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHUsername
	I0806 08:34:32.051613   79614 sshutil.go:53] new ssh client: &{IP:192.168.50.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/default-k8s-diff-port-990751/id_rsa Username:docker}
	I0806 08:34:32.192056   79614 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0806 08:34:32.231428   79614 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-990751" to be "Ready" ...
	I0806 08:34:32.278049   79614 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0806 08:34:32.378290   79614 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0806 08:34:32.378317   79614 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0806 08:34:32.399728   79614 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0806 08:34:32.419412   79614 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0806 08:34:32.419441   79614 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0806 08:34:32.470691   79614 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0806 08:34:32.470726   79614 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0806 08:34:32.514191   79614 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0806 08:34:32.590737   79614 main.go:141] libmachine: Making call to close driver server
	I0806 08:34:32.590772   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .Close
	I0806 08:34:32.591110   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | Closing plugin on server side
	I0806 08:34:32.591177   79614 main.go:141] libmachine: Successfully made call to close driver server
	I0806 08:34:32.591190   79614 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 08:34:32.591202   79614 main.go:141] libmachine: Making call to close driver server
	I0806 08:34:32.591212   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .Close
	I0806 08:34:32.591465   79614 main.go:141] libmachine: Successfully made call to close driver server
	I0806 08:34:32.591492   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | Closing plugin on server side
	I0806 08:34:32.591505   79614 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 08:34:32.597276   79614 main.go:141] libmachine: Making call to close driver server
	I0806 08:34:32.597297   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .Close
	I0806 08:34:32.597547   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | Closing plugin on server side
	I0806 08:34:32.597564   79614 main.go:141] libmachine: Successfully made call to close driver server
	I0806 08:34:32.597579   79614 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 08:34:33.401000   79614 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.001229332s)
	I0806 08:34:33.401045   79614 main.go:141] libmachine: Making call to close driver server
	I0806 08:34:33.401062   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .Close
	I0806 08:34:33.401346   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | Closing plugin on server side
	I0806 08:34:33.401402   79614 main.go:141] libmachine: Successfully made call to close driver server
	I0806 08:34:33.401426   79614 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 08:34:33.401443   79614 main.go:141] libmachine: Making call to close driver server
	I0806 08:34:33.401452   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .Close
	I0806 08:34:33.401692   79614 main.go:141] libmachine: Successfully made call to close driver server
	I0806 08:34:33.401720   79614 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 08:34:33.455633   79614 main.go:141] libmachine: Making call to close driver server
	I0806 08:34:33.455663   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .Close
	I0806 08:34:33.455947   79614 main.go:141] libmachine: Successfully made call to close driver server
	I0806 08:34:33.455964   79614 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 08:34:33.455964   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | Closing plugin on server side
	I0806 08:34:33.455972   79614 main.go:141] libmachine: Making call to close driver server
	I0806 08:34:33.455980   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .Close
	I0806 08:34:33.456288   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | Closing plugin on server side
	I0806 08:34:33.456286   79614 main.go:141] libmachine: Successfully made call to close driver server
	I0806 08:34:33.456328   79614 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 08:34:33.456347   79614 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-990751"
	I0806 08:34:33.459411   79614 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0806 08:34:31.262960   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:31.263282   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | unable to find current IP address of domain old-k8s-version-409903 in network mk-old-k8s-version-409903
	I0806 08:34:31.263304   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | I0806 08:34:31.263238   80849 retry.go:31] will retry after 4.76596973s: waiting for machine to come up
	I0806 08:34:33.460780   79614 addons.go:510] duration metric: took 1.482853001s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0806 08:34:29.611714   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:31.612385   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:34.111936   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:37.353538   78922 start.go:364] duration metric: took 58.142260904s to acquireMachinesLock for "no-preload-703340"
	I0806 08:34:37.353587   78922 start.go:96] Skipping create...Using existing machine configuration
	I0806 08:34:37.353598   78922 fix.go:54] fixHost starting: 
	I0806 08:34:37.354016   78922 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:34:37.354050   78922 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:34:37.373838   78922 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37209
	I0806 08:34:37.374283   78922 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:34:37.374874   78922 main.go:141] libmachine: Using API Version  1
	I0806 08:34:37.374910   78922 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:34:37.375250   78922 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:34:37.375448   78922 main.go:141] libmachine: (no-preload-703340) Calling .DriverName
	I0806 08:34:37.375584   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetState
	I0806 08:34:37.377251   78922 fix.go:112] recreateIfNeeded on no-preload-703340: state=Stopped err=<nil>
	I0806 08:34:37.377275   78922 main.go:141] libmachine: (no-preload-703340) Calling .DriverName
	W0806 08:34:37.377421   78922 fix.go:138] unexpected machine state, will restart: <nil>
	I0806 08:34:37.379343   78922 out.go:177] * Restarting existing kvm2 VM for "no-preload-703340" ...
	I0806 08:34:36.030512   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:36.030950   79927 main.go:141] libmachine: (old-k8s-version-409903) Found IP for machine: 192.168.61.158
	I0806 08:34:36.030972   79927 main.go:141] libmachine: (old-k8s-version-409903) Reserving static IP address...
	I0806 08:34:36.030989   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has current primary IP address 192.168.61.158 and MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:36.031349   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | found host DHCP lease matching {name: "old-k8s-version-409903", mac: "52:54:00:12:30:88", ip: "192.168.61.158"} in network mk-old-k8s-version-409903: {Iface:virbr1 ExpiryTime:2024-08-06 09:34:26 +0000 UTC Type:0 Mac:52:54:00:12:30:88 Iaid: IPaddr:192.168.61.158 Prefix:24 Hostname:old-k8s-version-409903 Clientid:01:52:54:00:12:30:88}
	I0806 08:34:36.031376   79927 main.go:141] libmachine: (old-k8s-version-409903) Reserved static IP address: 192.168.61.158
	I0806 08:34:36.031398   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | skip adding static IP to network mk-old-k8s-version-409903 - found existing host DHCP lease matching {name: "old-k8s-version-409903", mac: "52:54:00:12:30:88", ip: "192.168.61.158"}
	I0806 08:34:36.031416   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | Getting to WaitForSSH function...
	I0806 08:34:36.031428   79927 main.go:141] libmachine: (old-k8s-version-409903) Waiting for SSH to be available...
	I0806 08:34:36.033684   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:36.034094   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:30:88", ip: ""} in network mk-old-k8s-version-409903: {Iface:virbr1 ExpiryTime:2024-08-06 09:34:26 +0000 UTC Type:0 Mac:52:54:00:12:30:88 Iaid: IPaddr:192.168.61.158 Prefix:24 Hostname:old-k8s-version-409903 Clientid:01:52:54:00:12:30:88}
	I0806 08:34:36.034124   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined IP address 192.168.61.158 and MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:36.034327   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | Using SSH client type: external
	I0806 08:34:36.034366   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | Using SSH private key: /home/jenkins/minikube-integration/19370-13057/.minikube/machines/old-k8s-version-409903/id_rsa (-rw-------)
	I0806 08:34:36.034395   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.158 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19370-13057/.minikube/machines/old-k8s-version-409903/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0806 08:34:36.034407   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | About to run SSH command:
	I0806 08:34:36.034415   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | exit 0
	I0806 08:34:36.164605   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | SSH cmd err, output: <nil>: 
	I0806 08:34:36.164952   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetConfigRaw
	I0806 08:34:36.165509   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetIP
	I0806 08:34:36.168135   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:36.168471   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:30:88", ip: ""} in network mk-old-k8s-version-409903: {Iface:virbr1 ExpiryTime:2024-08-06 09:34:26 +0000 UTC Type:0 Mac:52:54:00:12:30:88 Iaid: IPaddr:192.168.61.158 Prefix:24 Hostname:old-k8s-version-409903 Clientid:01:52:54:00:12:30:88}
	I0806 08:34:36.168499   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined IP address 192.168.61.158 and MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:36.168756   79927 profile.go:143] Saving config to /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/old-k8s-version-409903/config.json ...
	I0806 08:34:36.168956   79927 machine.go:94] provisionDockerMachine start ...
	I0806 08:34:36.168974   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .DriverName
	I0806 08:34:36.169186   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHHostname
	I0806 08:34:36.171367   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:36.171702   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:30:88", ip: ""} in network mk-old-k8s-version-409903: {Iface:virbr1 ExpiryTime:2024-08-06 09:34:26 +0000 UTC Type:0 Mac:52:54:00:12:30:88 Iaid: IPaddr:192.168.61.158 Prefix:24 Hostname:old-k8s-version-409903 Clientid:01:52:54:00:12:30:88}
	I0806 08:34:36.171723   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined IP address 192.168.61.158 and MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:36.171902   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHPort
	I0806 08:34:36.172080   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHKeyPath
	I0806 08:34:36.172240   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHKeyPath
	I0806 08:34:36.172400   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHUsername
	I0806 08:34:36.172569   79927 main.go:141] libmachine: Using SSH client type: native
	I0806 08:34:36.172798   79927 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.158 22 <nil> <nil>}
	I0806 08:34:36.172812   79927 main.go:141] libmachine: About to run SSH command:
	hostname
	I0806 08:34:36.288738   79927 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0806 08:34:36.288770   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetMachineName
	I0806 08:34:36.289073   79927 buildroot.go:166] provisioning hostname "old-k8s-version-409903"
	I0806 08:34:36.289088   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetMachineName
	I0806 08:34:36.289272   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHHostname
	I0806 08:34:36.291958   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:36.292326   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:30:88", ip: ""} in network mk-old-k8s-version-409903: {Iface:virbr1 ExpiryTime:2024-08-06 09:34:26 +0000 UTC Type:0 Mac:52:54:00:12:30:88 Iaid: IPaddr:192.168.61.158 Prefix:24 Hostname:old-k8s-version-409903 Clientid:01:52:54:00:12:30:88}
	I0806 08:34:36.292354   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined IP address 192.168.61.158 and MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:36.292462   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHPort
	I0806 08:34:36.292662   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHKeyPath
	I0806 08:34:36.292807   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHKeyPath
	I0806 08:34:36.292953   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHUsername
	I0806 08:34:36.293122   79927 main.go:141] libmachine: Using SSH client type: native
	I0806 08:34:36.293287   79927 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.158 22 <nil> <nil>}
	I0806 08:34:36.293300   79927 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-409903 && echo "old-k8s-version-409903" | sudo tee /etc/hostname
	I0806 08:34:36.427591   79927 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-409903
	
	I0806 08:34:36.427620   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHHostname
	I0806 08:34:36.430395   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:36.430724   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:30:88", ip: ""} in network mk-old-k8s-version-409903: {Iface:virbr1 ExpiryTime:2024-08-06 09:34:26 +0000 UTC Type:0 Mac:52:54:00:12:30:88 Iaid: IPaddr:192.168.61.158 Prefix:24 Hostname:old-k8s-version-409903 Clientid:01:52:54:00:12:30:88}
	I0806 08:34:36.430758   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined IP address 192.168.61.158 and MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:36.430900   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHPort
	I0806 08:34:36.431133   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHKeyPath
	I0806 08:34:36.431311   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHKeyPath
	I0806 08:34:36.431476   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHUsername
	I0806 08:34:36.431662   79927 main.go:141] libmachine: Using SSH client type: native
	I0806 08:34:36.431876   79927 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.158 22 <nil> <nil>}
	I0806 08:34:36.431895   79927 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-409903' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-409903/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-409903' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0806 08:34:36.554542   79927 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0806 08:34:36.554574   79927 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19370-13057/.minikube CaCertPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19370-13057/.minikube}
	I0806 08:34:36.554613   79927 buildroot.go:174] setting up certificates
	I0806 08:34:36.554629   79927 provision.go:84] configureAuth start
	I0806 08:34:36.554646   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetMachineName
	I0806 08:34:36.554932   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetIP
	I0806 08:34:36.557899   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:36.558351   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:30:88", ip: ""} in network mk-old-k8s-version-409903: {Iface:virbr1 ExpiryTime:2024-08-06 09:34:26 +0000 UTC Type:0 Mac:52:54:00:12:30:88 Iaid: IPaddr:192.168.61.158 Prefix:24 Hostname:old-k8s-version-409903 Clientid:01:52:54:00:12:30:88}
	I0806 08:34:36.558373   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined IP address 192.168.61.158 and MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:36.558530   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHHostname
	I0806 08:34:36.560878   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:36.561153   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:30:88", ip: ""} in network mk-old-k8s-version-409903: {Iface:virbr1 ExpiryTime:2024-08-06 09:34:26 +0000 UTC Type:0 Mac:52:54:00:12:30:88 Iaid: IPaddr:192.168.61.158 Prefix:24 Hostname:old-k8s-version-409903 Clientid:01:52:54:00:12:30:88}
	I0806 08:34:36.561182   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined IP address 192.168.61.158 and MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:36.561339   79927 provision.go:143] copyHostCerts
	I0806 08:34:36.561400   79927 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-13057/.minikube/ca.pem, removing ...
	I0806 08:34:36.561410   79927 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-13057/.minikube/ca.pem
	I0806 08:34:36.561462   79927 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19370-13057/.minikube/ca.pem (1078 bytes)
	I0806 08:34:36.561563   79927 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-13057/.minikube/cert.pem, removing ...
	I0806 08:34:36.561572   79927 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-13057/.minikube/cert.pem
	I0806 08:34:36.561594   79927 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19370-13057/.minikube/cert.pem (1123 bytes)
	I0806 08:34:36.561647   79927 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-13057/.minikube/key.pem, removing ...
	I0806 08:34:36.561653   79927 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-13057/.minikube/key.pem
	I0806 08:34:36.561670   79927 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19370-13057/.minikube/key.pem (1675 bytes)
	I0806 08:34:36.561752   79927 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19370-13057/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-409903 san=[127.0.0.1 192.168.61.158 localhost minikube old-k8s-version-409903]
	I0806 08:34:36.646180   79927 provision.go:177] copyRemoteCerts
	I0806 08:34:36.646233   79927 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0806 08:34:36.646257   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHHostname
	I0806 08:34:36.649020   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:36.649380   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:30:88", ip: ""} in network mk-old-k8s-version-409903: {Iface:virbr1 ExpiryTime:2024-08-06 09:34:26 +0000 UTC Type:0 Mac:52:54:00:12:30:88 Iaid: IPaddr:192.168.61.158 Prefix:24 Hostname:old-k8s-version-409903 Clientid:01:52:54:00:12:30:88}
	I0806 08:34:36.649405   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined IP address 192.168.61.158 and MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:36.649564   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHPort
	I0806 08:34:36.649791   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHKeyPath
	I0806 08:34:36.649962   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHUsername
	I0806 08:34:36.650092   79927 sshutil.go:53] new ssh client: &{IP:192.168.61.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/old-k8s-version-409903/id_rsa Username:docker}
	I0806 08:34:36.738988   79927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0806 08:34:36.765104   79927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0806 08:34:36.791946   79927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0806 08:34:36.817128   79927 provision.go:87] duration metric: took 262.482885ms to configureAuth
	I0806 08:34:36.817157   79927 buildroot.go:189] setting minikube options for container-runtime
	I0806 08:34:36.817352   79927 config.go:182] Loaded profile config "old-k8s-version-409903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0806 08:34:36.817435   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHHostname
	I0806 08:34:36.820142   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:36.820480   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:30:88", ip: ""} in network mk-old-k8s-version-409903: {Iface:virbr1 ExpiryTime:2024-08-06 09:34:26 +0000 UTC Type:0 Mac:52:54:00:12:30:88 Iaid: IPaddr:192.168.61.158 Prefix:24 Hostname:old-k8s-version-409903 Clientid:01:52:54:00:12:30:88}
	I0806 08:34:36.820520   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined IP address 192.168.61.158 and MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:36.820659   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHPort
	I0806 08:34:36.820892   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHKeyPath
	I0806 08:34:36.821095   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHKeyPath
	I0806 08:34:36.821289   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHUsername
	I0806 08:34:36.821459   79927 main.go:141] libmachine: Using SSH client type: native
	I0806 08:34:36.821703   79927 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.158 22 <nil> <nil>}
	I0806 08:34:36.821722   79927 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0806 08:34:37.099949   79927 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0806 08:34:37.099979   79927 machine.go:97] duration metric: took 931.010538ms to provisionDockerMachine
	I0806 08:34:37.099993   79927 start.go:293] postStartSetup for "old-k8s-version-409903" (driver="kvm2")
	I0806 08:34:37.100007   79927 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0806 08:34:37.100028   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .DriverName
	I0806 08:34:37.100379   79927 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0806 08:34:37.100414   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHHostname
	I0806 08:34:37.103297   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:37.103691   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:30:88", ip: ""} in network mk-old-k8s-version-409903: {Iface:virbr1 ExpiryTime:2024-08-06 09:34:26 +0000 UTC Type:0 Mac:52:54:00:12:30:88 Iaid: IPaddr:192.168.61.158 Prefix:24 Hostname:old-k8s-version-409903 Clientid:01:52:54:00:12:30:88}
	I0806 08:34:37.103717   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined IP address 192.168.61.158 and MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:37.103887   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHPort
	I0806 08:34:37.104106   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHKeyPath
	I0806 08:34:37.104263   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHUsername
	I0806 08:34:37.104402   79927 sshutil.go:53] new ssh client: &{IP:192.168.61.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/old-k8s-version-409903/id_rsa Username:docker}
	I0806 08:34:37.192035   79927 ssh_runner.go:195] Run: cat /etc/os-release
	I0806 08:34:37.196693   79927 info.go:137] Remote host: Buildroot 2023.02.9
	I0806 08:34:37.196712   79927 filesync.go:126] Scanning /home/jenkins/minikube-integration/19370-13057/.minikube/addons for local assets ...
	I0806 08:34:37.196776   79927 filesync.go:126] Scanning /home/jenkins/minikube-integration/19370-13057/.minikube/files for local assets ...
	I0806 08:34:37.196861   79927 filesync.go:149] local asset: /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem -> 202462.pem in /etc/ssl/certs
	I0806 08:34:37.196957   79927 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0806 08:34:37.206906   79927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem --> /etc/ssl/certs/202462.pem (1708 bytes)
	I0806 08:34:37.232270   79927 start.go:296] duration metric: took 132.262362ms for postStartSetup
	I0806 08:34:37.232311   79927 fix.go:56] duration metric: took 22.878683419s for fixHost
	I0806 08:34:37.232332   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHHostname
	I0806 08:34:37.235278   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:37.235721   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:30:88", ip: ""} in network mk-old-k8s-version-409903: {Iface:virbr1 ExpiryTime:2024-08-06 09:34:26 +0000 UTC Type:0 Mac:52:54:00:12:30:88 Iaid: IPaddr:192.168.61.158 Prefix:24 Hostname:old-k8s-version-409903 Clientid:01:52:54:00:12:30:88}
	I0806 08:34:37.235751   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined IP address 192.168.61.158 and MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:37.235946   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHPort
	I0806 08:34:37.236168   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHKeyPath
	I0806 08:34:37.236337   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHKeyPath
	I0806 08:34:37.236491   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHUsername
	I0806 08:34:37.236640   79927 main.go:141] libmachine: Using SSH client type: native
	I0806 08:34:37.236793   79927 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.158 22 <nil> <nil>}
	I0806 08:34:37.236803   79927 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0806 08:34:37.353370   79927 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722933277.327882532
	
	I0806 08:34:37.353397   79927 fix.go:216] guest clock: 1722933277.327882532
	I0806 08:34:37.353408   79927 fix.go:229] Guest: 2024-08-06 08:34:37.327882532 +0000 UTC Remote: 2024-08-06 08:34:37.232315481 +0000 UTC m=+218.506631326 (delta=95.567051ms)
	I0806 08:34:37.353439   79927 fix.go:200] guest clock delta is within tolerance: 95.567051ms
	I0806 08:34:37.353451   79927 start.go:83] releasing machines lock for "old-k8s-version-409903", held for 22.999876259s
	I0806 08:34:37.353490   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .DriverName
	I0806 08:34:37.353775   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetIP
	I0806 08:34:37.356788   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:37.357197   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:30:88", ip: ""} in network mk-old-k8s-version-409903: {Iface:virbr1 ExpiryTime:2024-08-06 09:34:26 +0000 UTC Type:0 Mac:52:54:00:12:30:88 Iaid: IPaddr:192.168.61.158 Prefix:24 Hostname:old-k8s-version-409903 Clientid:01:52:54:00:12:30:88}
	I0806 08:34:37.357224   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined IP address 192.168.61.158 and MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:37.357434   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .DriverName
	I0806 08:34:37.358035   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .DriverName
	I0806 08:34:37.358238   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .DriverName
	I0806 08:34:37.358320   79927 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0806 08:34:37.358360   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHHostname
	I0806 08:34:37.358478   79927 ssh_runner.go:195] Run: cat /version.json
	I0806 08:34:37.358505   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHHostname
	I0806 08:34:37.361125   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:37.361429   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:37.361464   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:30:88", ip: ""} in network mk-old-k8s-version-409903: {Iface:virbr1 ExpiryTime:2024-08-06 09:34:26 +0000 UTC Type:0 Mac:52:54:00:12:30:88 Iaid: IPaddr:192.168.61.158 Prefix:24 Hostname:old-k8s-version-409903 Clientid:01:52:54:00:12:30:88}
	I0806 08:34:37.361490   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined IP address 192.168.61.158 and MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:37.361681   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHPort
	I0806 08:34:37.361846   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHKeyPath
	I0806 08:34:37.361889   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:30:88", ip: ""} in network mk-old-k8s-version-409903: {Iface:virbr1 ExpiryTime:2024-08-06 09:34:26 +0000 UTC Type:0 Mac:52:54:00:12:30:88 Iaid: IPaddr:192.168.61.158 Prefix:24 Hostname:old-k8s-version-409903 Clientid:01:52:54:00:12:30:88}
	I0806 08:34:37.361913   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined IP address 192.168.61.158 and MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:37.362018   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHUsername
	I0806 08:34:37.362078   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHPort
	I0806 08:34:37.362188   79927 sshutil.go:53] new ssh client: &{IP:192.168.61.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/old-k8s-version-409903/id_rsa Username:docker}
	I0806 08:34:37.362319   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHKeyPath
	I0806 08:34:37.362452   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHUsername
	I0806 08:34:37.362596   79927 sshutil.go:53] new ssh client: &{IP:192.168.61.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/old-k8s-version-409903/id_rsa Username:docker}
	I0806 08:34:37.468465   79927 ssh_runner.go:195] Run: systemctl --version
	I0806 08:34:37.475436   79927 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0806 08:34:37.627725   79927 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0806 08:34:37.635346   79927 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0806 08:34:37.635432   79927 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0806 08:34:37.652546   79927 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0806 08:34:37.652571   79927 start.go:495] detecting cgroup driver to use...
	I0806 08:34:37.652642   79927 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0806 08:34:37.671386   79927 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0806 08:34:37.686408   79927 docker.go:217] disabling cri-docker service (if available) ...
	I0806 08:34:37.686473   79927 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0806 08:34:37.701731   79927 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0806 08:34:37.717528   79927 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0806 08:34:37.845160   79927 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0806 08:34:37.999278   79927 docker.go:233] disabling docker service ...
	I0806 08:34:37.999361   79927 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0806 08:34:38.017101   79927 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0806 08:34:38.033189   79927 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0806 08:34:38.213620   79927 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0806 08:34:38.374790   79927 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0806 08:34:38.393394   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0806 08:34:38.419807   79927 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0806 08:34:38.419878   79927 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:34:38.434870   79927 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0806 08:34:38.434946   79927 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:34:38.449029   79927 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:34:38.460440   79927 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:34:38.473654   79927 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0806 08:34:38.486032   79927 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0806 08:34:38.495861   79927 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0806 08:34:38.495927   79927 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0806 08:34:38.511021   79927 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0806 08:34:38.521985   79927 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 08:34:38.651919   79927 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0806 08:34:34.235598   79614 node_ready.go:53] node "default-k8s-diff-port-990751" has status "Ready":"False"
	I0806 08:34:36.734741   79614 node_ready.go:53] node "default-k8s-diff-port-990751" has status "Ready":"False"
	I0806 08:34:37.235469   79614 node_ready.go:49] node "default-k8s-diff-port-990751" has status "Ready":"True"
	I0806 08:34:37.235496   79614 node_ready.go:38] duration metric: took 5.004029432s for node "default-k8s-diff-port-990751" to be "Ready" ...
	I0806 08:34:37.235506   79614 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0806 08:34:37.242299   79614 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-gx8tt" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:37.247917   79614 pod_ready.go:92] pod "coredns-7db6d8ff4d-gx8tt" in "kube-system" namespace has status "Ready":"True"
	I0806 08:34:37.247936   79614 pod_ready.go:81] duration metric: took 5.613775ms for pod "coredns-7db6d8ff4d-gx8tt" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:37.247944   79614 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-990751" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:38.822367   79927 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0806 08:34:38.822444   79927 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0806 08:34:38.828148   79927 start.go:563] Will wait 60s for crictl version
	I0806 08:34:38.828211   79927 ssh_runner.go:195] Run: which crictl
	I0806 08:34:38.832084   79927 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0806 08:34:38.872571   79927 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0806 08:34:38.872658   79927 ssh_runner.go:195] Run: crio --version
	I0806 08:34:38.903507   79927 ssh_runner.go:195] Run: crio --version
	I0806 08:34:38.934955   79927 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0806 08:34:36.612552   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:38.614644   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:37.380808   78922 main.go:141] libmachine: (no-preload-703340) Calling .Start
	I0806 08:34:37.381005   78922 main.go:141] libmachine: (no-preload-703340) Ensuring networks are active...
	I0806 08:34:37.381749   78922 main.go:141] libmachine: (no-preload-703340) Ensuring network default is active
	I0806 08:34:37.382052   78922 main.go:141] libmachine: (no-preload-703340) Ensuring network mk-no-preload-703340 is active
	I0806 08:34:37.382556   78922 main.go:141] libmachine: (no-preload-703340) Getting domain xml...
	I0806 08:34:37.383386   78922 main.go:141] libmachine: (no-preload-703340) Creating domain...
	I0806 08:34:38.779397   78922 main.go:141] libmachine: (no-preload-703340) Waiting to get IP...
	I0806 08:34:38.780262   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:38.780737   78922 main.go:141] libmachine: (no-preload-703340) DBG | unable to find current IP address of domain no-preload-703340 in network mk-no-preload-703340
	I0806 08:34:38.780809   78922 main.go:141] libmachine: (no-preload-703340) DBG | I0806 08:34:38.780723   81046 retry.go:31] will retry after 194.059215ms: waiting for machine to come up
	I0806 08:34:38.976254   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:38.976734   78922 main.go:141] libmachine: (no-preload-703340) DBG | unable to find current IP address of domain no-preload-703340 in network mk-no-preload-703340
	I0806 08:34:38.976769   78922 main.go:141] libmachine: (no-preload-703340) DBG | I0806 08:34:38.976653   81046 retry.go:31] will retry after 288.140112ms: waiting for machine to come up
	I0806 08:34:39.266297   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:39.267031   78922 main.go:141] libmachine: (no-preload-703340) DBG | unable to find current IP address of domain no-preload-703340 in network mk-no-preload-703340
	I0806 08:34:39.267060   78922 main.go:141] libmachine: (no-preload-703340) DBG | I0806 08:34:39.266948   81046 retry.go:31] will retry after 354.804756ms: waiting for machine to come up
	I0806 08:34:39.623548   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:39.624083   78922 main.go:141] libmachine: (no-preload-703340) DBG | unable to find current IP address of domain no-preload-703340 in network mk-no-preload-703340
	I0806 08:34:39.624111   78922 main.go:141] libmachine: (no-preload-703340) DBG | I0806 08:34:39.623994   81046 retry.go:31] will retry after 418.851823ms: waiting for machine to come up
	I0806 08:34:40.044734   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:40.045142   78922 main.go:141] libmachine: (no-preload-703340) DBG | unable to find current IP address of domain no-preload-703340 in network mk-no-preload-703340
	I0806 08:34:40.045165   78922 main.go:141] libmachine: (no-preload-703340) DBG | I0806 08:34:40.045103   81046 retry.go:31] will retry after 525.29917ms: waiting for machine to come up
	I0806 08:34:40.571928   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:40.572395   78922 main.go:141] libmachine: (no-preload-703340) DBG | unable to find current IP address of domain no-preload-703340 in network mk-no-preload-703340
	I0806 08:34:40.572431   78922 main.go:141] libmachine: (no-preload-703340) DBG | I0806 08:34:40.572332   81046 retry.go:31] will retry after 609.965809ms: waiting for machine to come up
	I0806 08:34:41.184209   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:41.184793   78922 main.go:141] libmachine: (no-preload-703340) DBG | unable to find current IP address of domain no-preload-703340 in network mk-no-preload-703340
	I0806 08:34:41.184820   78922 main.go:141] libmachine: (no-preload-703340) DBG | I0806 08:34:41.184739   81046 retry.go:31] will retry after 1.120596s: waiting for machine to come up
	I0806 08:34:38.936206   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetIP
	I0806 08:34:38.939581   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:38.940049   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:30:88", ip: ""} in network mk-old-k8s-version-409903: {Iface:virbr1 ExpiryTime:2024-08-06 09:34:26 +0000 UTC Type:0 Mac:52:54:00:12:30:88 Iaid: IPaddr:192.168.61.158 Prefix:24 Hostname:old-k8s-version-409903 Clientid:01:52:54:00:12:30:88}
	I0806 08:34:38.940076   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined IP address 192.168.61.158 and MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:38.940336   79927 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0806 08:34:38.944801   79927 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0806 08:34:38.959454   79927 kubeadm.go:883] updating cluster {Name:old-k8s-version-409903 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-409903 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.158 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0806 08:34:38.959581   79927 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0806 08:34:38.959647   79927 ssh_runner.go:195] Run: sudo crictl images --output json
	I0806 08:34:39.014798   79927 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0806 08:34:39.014908   79927 ssh_runner.go:195] Run: which lz4
	I0806 08:34:39.019642   79927 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0806 08:34:39.025125   79927 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0806 08:34:39.025165   79927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0806 08:34:40.772902   79927 crio.go:462] duration metric: took 1.753273519s to copy over tarball
	I0806 08:34:40.772994   79927 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0806 08:34:39.256627   79614 pod_ready.go:102] pod "etcd-default-k8s-diff-port-990751" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:41.257158   79614 pod_ready.go:102] pod "etcd-default-k8s-diff-port-990751" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:42.754699   79614 pod_ready.go:92] pod "etcd-default-k8s-diff-port-990751" in "kube-system" namespace has status "Ready":"True"
	I0806 08:34:42.754729   79614 pod_ready.go:81] duration metric: took 5.506777292s for pod "etcd-default-k8s-diff-port-990751" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:42.754742   79614 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-990751" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:42.761260   79614 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-990751" in "kube-system" namespace has status "Ready":"True"
	I0806 08:34:42.761282   79614 pod_ready.go:81] duration metric: took 6.531767ms for pod "kube-apiserver-default-k8s-diff-port-990751" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:42.761292   79614 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-990751" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:42.769860   79614 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-990751" in "kube-system" namespace has status "Ready":"True"
	I0806 08:34:42.769884   79614 pod_ready.go:81] duration metric: took 8.585216ms for pod "kube-controller-manager-default-k8s-diff-port-990751" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:42.769898   79614 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-lpf88" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:42.775420   79614 pod_ready.go:92] pod "kube-proxy-lpf88" in "kube-system" namespace has status "Ready":"True"
	I0806 08:34:42.775445   79614 pod_ready.go:81] duration metric: took 5.540587ms for pod "kube-proxy-lpf88" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:42.775453   79614 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-990751" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:42.781317   79614 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-990751" in "kube-system" namespace has status "Ready":"True"
	I0806 08:34:42.781341   79614 pod_ready.go:81] duration metric: took 5.88019ms for pod "kube-scheduler-default-k8s-diff-port-990751" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:42.781353   79614 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:40.614686   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:43.114736   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:42.306869   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:42.307378   78922 main.go:141] libmachine: (no-preload-703340) DBG | unable to find current IP address of domain no-preload-703340 in network mk-no-preload-703340
	I0806 08:34:42.307408   78922 main.go:141] libmachine: (no-preload-703340) DBG | I0806 08:34:42.307341   81046 retry.go:31] will retry after 1.26489185s: waiting for machine to come up
	I0806 08:34:43.574072   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:43.574494   78922 main.go:141] libmachine: (no-preload-703340) DBG | unable to find current IP address of domain no-preload-703340 in network mk-no-preload-703340
	I0806 08:34:43.574517   78922 main.go:141] libmachine: (no-preload-703340) DBG | I0806 08:34:43.574440   81046 retry.go:31] will retry after 1.547737608s: waiting for machine to come up
	I0806 08:34:45.124236   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:45.124794   78922 main.go:141] libmachine: (no-preload-703340) DBG | unable to find current IP address of domain no-preload-703340 in network mk-no-preload-703340
	I0806 08:34:45.124822   78922 main.go:141] libmachine: (no-preload-703340) DBG | I0806 08:34:45.124745   81046 retry.go:31] will retry after 1.491615644s: waiting for machine to come up
	I0806 08:34:46.617663   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:46.618159   78922 main.go:141] libmachine: (no-preload-703340) DBG | unable to find current IP address of domain no-preload-703340 in network mk-no-preload-703340
	I0806 08:34:46.618183   78922 main.go:141] libmachine: (no-preload-703340) DBG | I0806 08:34:46.618117   81046 retry.go:31] will retry after 2.284217057s: waiting for machine to come up
	I0806 08:34:43.977375   79927 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.204351244s)
	I0806 08:34:43.977404   79927 crio.go:469] duration metric: took 3.204457213s to extract the tarball
	I0806 08:34:43.977412   79927 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0806 08:34:44.023819   79927 ssh_runner.go:195] Run: sudo crictl images --output json
	I0806 08:34:44.070991   79927 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0806 08:34:44.071016   79927 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0806 08:34:44.071089   79927 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0806 08:34:44.071129   79927 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0806 08:34:44.071138   79927 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0806 08:34:44.071090   79927 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0806 08:34:44.071184   79927 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0806 08:34:44.071192   79927 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0806 08:34:44.071210   79927 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0806 08:34:44.071255   79927 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0806 08:34:44.072641   79927 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0806 08:34:44.072728   79927 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0806 08:34:44.072743   79927 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0806 08:34:44.072642   79927 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0806 08:34:44.072749   79927 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0806 08:34:44.072811   79927 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0806 08:34:44.073085   79927 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0806 08:34:44.073298   79927 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0806 08:34:44.239872   79927 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0806 08:34:44.287412   79927 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0806 08:34:44.287458   79927 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0806 08:34:44.287504   79927 ssh_runner.go:195] Run: which crictl
	I0806 08:34:44.292050   79927 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0806 08:34:44.309229   79927 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0806 08:34:44.336611   79927 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0806 08:34:44.371389   79927 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0806 08:34:44.371447   79927 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0806 08:34:44.371500   79927 ssh_runner.go:195] Run: which crictl
	I0806 08:34:44.375821   79927 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0806 08:34:44.412139   79927 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0806 08:34:44.415159   79927 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0806 08:34:44.415339   79927 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0806 08:34:44.418296   79927 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0806 08:34:44.423718   79927 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0806 08:34:44.437866   79927 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0806 08:34:44.494716   79927 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0806 08:34:44.494765   79927 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0806 08:34:44.494819   79927 ssh_runner.go:195] Run: which crictl
	I0806 08:34:44.526849   79927 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0806 08:34:44.526898   79927 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0806 08:34:44.526961   79927 ssh_runner.go:195] Run: which crictl
	I0806 08:34:44.555530   79927 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0806 08:34:44.555587   79927 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0806 08:34:44.555620   79927 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0806 08:34:44.555634   79927 ssh_runner.go:195] Run: which crictl
	I0806 08:34:44.555655   79927 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0806 08:34:44.555692   79927 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0806 08:34:44.555707   79927 ssh_runner.go:195] Run: which crictl
	I0806 08:34:44.555714   79927 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0806 08:34:44.555743   79927 ssh_runner.go:195] Run: which crictl
	I0806 08:34:44.555748   79927 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0806 08:34:44.555792   79927 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0806 08:34:44.561072   79927 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0806 08:34:44.624216   79927 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0806 08:34:44.639877   79927 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0806 08:34:44.639895   79927 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0806 08:34:44.639979   79927 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0806 08:34:44.653684   79927 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0806 08:34:44.711774   79927 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0806 08:34:44.711804   79927 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0806 08:34:44.964733   79927 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0806 08:34:45.112727   79927 cache_images.go:92] duration metric: took 1.041682516s to LoadCachedImages
	W0806 08:34:45.112813   79927 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I0806 08:34:45.112837   79927 kubeadm.go:934] updating node { 192.168.61.158 8443 v1.20.0 crio true true} ...
	I0806 08:34:45.112964   79927 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-409903 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.158
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-409903 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0806 08:34:45.113047   79927 ssh_runner.go:195] Run: crio config
	I0806 08:34:45.165383   79927 cni.go:84] Creating CNI manager for ""
	I0806 08:34:45.165413   79927 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0806 08:34:45.165429   79927 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0806 08:34:45.165455   79927 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.158 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-409903 NodeName:old-k8s-version-409903 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.158"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.158 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0806 08:34:45.165687   79927 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.158
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-409903"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.158
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.158"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0806 08:34:45.165760   79927 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0806 08:34:45.177296   79927 binaries.go:44] Found k8s binaries, skipping transfer
	I0806 08:34:45.177371   79927 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0806 08:34:45.187299   79927 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0806 08:34:45.205986   79927 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0806 08:34:45.225072   79927 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0806 08:34:45.244222   79927 ssh_runner.go:195] Run: grep 192.168.61.158	control-plane.minikube.internal$ /etc/hosts
	I0806 08:34:45.248728   79927 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.158	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0806 08:34:45.263094   79927 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 08:34:45.399106   79927 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0806 08:34:45.417589   79927 certs.go:68] Setting up /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/old-k8s-version-409903 for IP: 192.168.61.158
	I0806 08:34:45.417611   79927 certs.go:194] generating shared ca certs ...
	I0806 08:34:45.417630   79927 certs.go:226] acquiring lock for ca certs: {Name:mkf5c5aed91f4108054d9f2c742cad66c86afbed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 08:34:45.417779   79927 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19370-13057/.minikube/ca.key
	I0806 08:34:45.417820   79927 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.key
	I0806 08:34:45.417829   79927 certs.go:256] generating profile certs ...
	I0806 08:34:45.417913   79927 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/old-k8s-version-409903/client.key
	I0806 08:34:45.417970   79927 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/old-k8s-version-409903/apiserver.key.1ffd6785
	I0806 08:34:45.418060   79927 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/old-k8s-version-409903/proxy-client.key
	I0806 08:34:45.418242   79927 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/20246.pem (1338 bytes)
	W0806 08:34:45.418273   79927 certs.go:480] ignoring /home/jenkins/minikube-integration/19370-13057/.minikube/certs/20246_empty.pem, impossibly tiny 0 bytes
	I0806 08:34:45.418282   79927 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca-key.pem (1675 bytes)
	I0806 08:34:45.418302   79927 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem (1078 bytes)
	I0806 08:34:45.418322   79927 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/cert.pem (1123 bytes)
	I0806 08:34:45.418350   79927 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/key.pem (1675 bytes)
	I0806 08:34:45.418389   79927 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem (1708 bytes)
	I0806 08:34:45.419018   79927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0806 08:34:45.464550   79927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0806 08:34:45.500764   79927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0806 08:34:45.549225   79927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0806 08:34:45.597480   79927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/old-k8s-version-409903/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0806 08:34:45.639834   79927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/old-k8s-version-409903/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0806 08:34:45.680789   79927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/old-k8s-version-409903/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0806 08:34:45.737986   79927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/old-k8s-version-409903/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0806 08:34:45.766471   79927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem --> /usr/share/ca-certificates/202462.pem (1708 bytes)
	I0806 08:34:45.797047   79927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0806 08:34:45.825789   79927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/certs/20246.pem --> /usr/share/ca-certificates/20246.pem (1338 bytes)
	I0806 08:34:45.860217   79927 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0806 08:34:45.883841   79927 ssh_runner.go:195] Run: openssl version
	I0806 08:34:45.891103   79927 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202462.pem && ln -fs /usr/share/ca-certificates/202462.pem /etc/ssl/certs/202462.pem"
	I0806 08:34:45.905260   79927 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202462.pem
	I0806 08:34:45.910759   79927 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  6 07:17 /usr/share/ca-certificates/202462.pem
	I0806 08:34:45.910837   79927 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202462.pem
	I0806 08:34:45.918440   79927 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202462.pem /etc/ssl/certs/3ec20f2e.0"
	I0806 08:34:45.932479   79927 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0806 08:34:45.945678   79927 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0806 08:34:45.951612   79927 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  6 07:07 /usr/share/ca-certificates/minikubeCA.pem
	I0806 08:34:45.951691   79927 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0806 08:34:45.957749   79927 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0806 08:34:45.969987   79927 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20246.pem && ln -fs /usr/share/ca-certificates/20246.pem /etc/ssl/certs/20246.pem"
	I0806 08:34:45.982878   79927 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20246.pem
	I0806 08:34:45.987953   79927 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  6 07:17 /usr/share/ca-certificates/20246.pem
	I0806 08:34:45.988032   79927 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20246.pem
	I0806 08:34:45.994219   79927 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20246.pem /etc/ssl/certs/51391683.0"
	I0806 08:34:46.006666   79927 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0806 08:34:46.011624   79927 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0806 08:34:46.018416   79927 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0806 08:34:46.025220   79927 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0806 08:34:46.031854   79927 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0806 08:34:46.038570   79927 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0806 08:34:46.045142   79927 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0806 08:34:46.052198   79927 kubeadm.go:392] StartCluster: {Name:old-k8s-version-409903 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-409903 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.158 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 08:34:46.052313   79927 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0806 08:34:46.052372   79927 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0806 08:34:46.098382   79927 cri.go:89] found id: ""
	I0806 08:34:46.098454   79927 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0806 08:34:46.110861   79927 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0806 08:34:46.110888   79927 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0806 08:34:46.110956   79927 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0806 08:34:46.123420   79927 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0806 08:34:46.124617   79927 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-409903" does not appear in /home/jenkins/minikube-integration/19370-13057/kubeconfig
	I0806 08:34:46.125562   79927 kubeconfig.go:62] /home/jenkins/minikube-integration/19370-13057/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-409903" cluster setting kubeconfig missing "old-k8s-version-409903" context setting]
	I0806 08:34:46.127017   79927 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-13057/kubeconfig: {Name:mk5c066914acf8e36556b29792f75b98076ca34a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 08:34:46.173767   79927 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0806 08:34:46.186472   79927 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.158
	I0806 08:34:46.186517   79927 kubeadm.go:1160] stopping kube-system containers ...
	I0806 08:34:46.186532   79927 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0806 08:34:46.186592   79927 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0806 08:34:46.227817   79927 cri.go:89] found id: ""
	I0806 08:34:46.227914   79927 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0806 08:34:46.248248   79927 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0806 08:34:46.259555   79927 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0806 08:34:46.259578   79927 kubeadm.go:157] found existing configuration files:
	
	I0806 08:34:46.259647   79927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0806 08:34:46.271246   79927 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0806 08:34:46.271311   79927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0806 08:34:46.283559   79927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0806 08:34:46.295584   79927 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0806 08:34:46.295660   79927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0806 08:34:46.307223   79927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0806 08:34:46.318046   79927 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0806 08:34:46.318136   79927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0806 08:34:46.328827   79927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0806 08:34:46.339577   79927 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0806 08:34:46.339665   79927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0806 08:34:46.352107   79927 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0806 08:34:46.369776   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0806 08:34:46.505082   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0806 08:34:47.036627   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0806 08:34:47.285779   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0806 08:34:47.397777   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0806 08:34:47.517913   79927 api_server.go:52] waiting for apiserver process to appear ...
	I0806 08:34:47.518028   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:48.018399   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:48.518708   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:44.788404   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:46.789746   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:45.115046   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:47.611343   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:48.905056   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:48.905522   78922 main.go:141] libmachine: (no-preload-703340) DBG | unable to find current IP address of domain no-preload-703340 in network mk-no-preload-703340
	I0806 08:34:48.905548   78922 main.go:141] libmachine: (no-preload-703340) DBG | I0806 08:34:48.905484   81046 retry.go:31] will retry after 2.647234658s: waiting for machine to come up
	I0806 08:34:51.556304   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:51.556689   78922 main.go:141] libmachine: (no-preload-703340) DBG | unable to find current IP address of domain no-preload-703340 in network mk-no-preload-703340
	I0806 08:34:51.556712   78922 main.go:141] libmachine: (no-preload-703340) DBG | I0806 08:34:51.556663   81046 retry.go:31] will retry after 4.275379711s: waiting for machine to come up
	I0806 08:34:49.018572   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:49.518975   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:50.018355   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:50.518866   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:51.018156   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:51.518855   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:52.018270   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:52.518584   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:53.018606   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:53.518118   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:49.288691   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:51.787435   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:53.787824   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:49.612621   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:52.112274   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:55.834423   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:55.834961   78922 main.go:141] libmachine: (no-preload-703340) Found IP for machine: 192.168.72.126
	I0806 08:34:55.834980   78922 main.go:141] libmachine: (no-preload-703340) Reserving static IP address...
	I0806 08:34:55.834991   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has current primary IP address 192.168.72.126 and MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:55.835314   78922 main.go:141] libmachine: (no-preload-703340) DBG | found host DHCP lease matching {name: "no-preload-703340", mac: "52:54:00:a1:fa:d3", ip: "192.168.72.126"} in network mk-no-preload-703340: {Iface:virbr4 ExpiryTime:2024-08-06 09:34:49 +0000 UTC Type:0 Mac:52:54:00:a1:fa:d3 Iaid: IPaddr:192.168.72.126 Prefix:24 Hostname:no-preload-703340 Clientid:01:52:54:00:a1:fa:d3}
	I0806 08:34:55.835348   78922 main.go:141] libmachine: (no-preload-703340) Reserved static IP address: 192.168.72.126
	I0806 08:34:55.835363   78922 main.go:141] libmachine: (no-preload-703340) DBG | skip adding static IP to network mk-no-preload-703340 - found existing host DHCP lease matching {name: "no-preload-703340", mac: "52:54:00:a1:fa:d3", ip: "192.168.72.126"}
	I0806 08:34:55.835376   78922 main.go:141] libmachine: (no-preload-703340) DBG | Getting to WaitForSSH function...
	I0806 08:34:55.835388   78922 main.go:141] libmachine: (no-preload-703340) Waiting for SSH to be available...
	I0806 08:34:55.837645   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:55.838001   78922 main.go:141] libmachine: (no-preload-703340) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fa:d3", ip: ""} in network mk-no-preload-703340: {Iface:virbr4 ExpiryTime:2024-08-06 09:34:49 +0000 UTC Type:0 Mac:52:54:00:a1:fa:d3 Iaid: IPaddr:192.168.72.126 Prefix:24 Hostname:no-preload-703340 Clientid:01:52:54:00:a1:fa:d3}
	I0806 08:34:55.838030   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined IP address 192.168.72.126 and MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:55.838167   78922 main.go:141] libmachine: (no-preload-703340) DBG | Using SSH client type: external
	I0806 08:34:55.838193   78922 main.go:141] libmachine: (no-preload-703340) DBG | Using SSH private key: /home/jenkins/minikube-integration/19370-13057/.minikube/machines/no-preload-703340/id_rsa (-rw-------)
	I0806 08:34:55.838234   78922 main.go:141] libmachine: (no-preload-703340) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.126 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19370-13057/.minikube/machines/no-preload-703340/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0806 08:34:55.838256   78922 main.go:141] libmachine: (no-preload-703340) DBG | About to run SSH command:
	I0806 08:34:55.838272   78922 main.go:141] libmachine: (no-preload-703340) DBG | exit 0
	I0806 08:34:55.968557   78922 main.go:141] libmachine: (no-preload-703340) DBG | SSH cmd err, output: <nil>: 
	I0806 08:34:55.968893   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetConfigRaw
	I0806 08:34:55.969485   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetIP
	I0806 08:34:55.971998   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:55.972457   78922 main.go:141] libmachine: (no-preload-703340) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fa:d3", ip: ""} in network mk-no-preload-703340: {Iface:virbr4 ExpiryTime:2024-08-06 09:34:49 +0000 UTC Type:0 Mac:52:54:00:a1:fa:d3 Iaid: IPaddr:192.168.72.126 Prefix:24 Hostname:no-preload-703340 Clientid:01:52:54:00:a1:fa:d3}
	I0806 08:34:55.972487   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined IP address 192.168.72.126 and MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:55.972694   78922 profile.go:143] Saving config to /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/no-preload-703340/config.json ...
	I0806 08:34:55.972889   78922 machine.go:94] provisionDockerMachine start ...
	I0806 08:34:55.972908   78922 main.go:141] libmachine: (no-preload-703340) Calling .DriverName
	I0806 08:34:55.973115   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHHostname
	I0806 08:34:55.975361   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:55.975734   78922 main.go:141] libmachine: (no-preload-703340) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fa:d3", ip: ""} in network mk-no-preload-703340: {Iface:virbr4 ExpiryTime:2024-08-06 09:34:49 +0000 UTC Type:0 Mac:52:54:00:a1:fa:d3 Iaid: IPaddr:192.168.72.126 Prefix:24 Hostname:no-preload-703340 Clientid:01:52:54:00:a1:fa:d3}
	I0806 08:34:55.975753   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined IP address 192.168.72.126 and MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:55.975905   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHPort
	I0806 08:34:55.976051   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHKeyPath
	I0806 08:34:55.976265   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHKeyPath
	I0806 08:34:55.976424   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHUsername
	I0806 08:34:55.976635   78922 main.go:141] libmachine: Using SSH client type: native
	I0806 08:34:55.976891   78922 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.126 22 <nil> <nil>}
	I0806 08:34:55.976905   78922 main.go:141] libmachine: About to run SSH command:
	hostname
	I0806 08:34:56.084844   78922 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0806 08:34:56.084905   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetMachineName
	I0806 08:34:56.085226   78922 buildroot.go:166] provisioning hostname "no-preload-703340"
	I0806 08:34:56.085253   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetMachineName
	I0806 08:34:56.085477   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHHostname
	I0806 08:34:56.088358   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:56.088775   78922 main.go:141] libmachine: (no-preload-703340) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fa:d3", ip: ""} in network mk-no-preload-703340: {Iface:virbr4 ExpiryTime:2024-08-06 09:34:49 +0000 UTC Type:0 Mac:52:54:00:a1:fa:d3 Iaid: IPaddr:192.168.72.126 Prefix:24 Hostname:no-preload-703340 Clientid:01:52:54:00:a1:fa:d3}
	I0806 08:34:56.088806   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined IP address 192.168.72.126 and MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:56.088904   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHPort
	I0806 08:34:56.089070   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHKeyPath
	I0806 08:34:56.089262   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHKeyPath
	I0806 08:34:56.089417   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHUsername
	I0806 08:34:56.089600   78922 main.go:141] libmachine: Using SSH client type: native
	I0806 08:34:56.089810   78922 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.126 22 <nil> <nil>}
	I0806 08:34:56.089840   78922 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-703340 && echo "no-preload-703340" | sudo tee /etc/hostname
	I0806 08:34:56.216217   78922 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-703340
	
	I0806 08:34:56.216257   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHHostname
	I0806 08:34:56.218949   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:56.219274   78922 main.go:141] libmachine: (no-preload-703340) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fa:d3", ip: ""} in network mk-no-preload-703340: {Iface:virbr4 ExpiryTime:2024-08-06 09:34:49 +0000 UTC Type:0 Mac:52:54:00:a1:fa:d3 Iaid: IPaddr:192.168.72.126 Prefix:24 Hostname:no-preload-703340 Clientid:01:52:54:00:a1:fa:d3}
	I0806 08:34:56.219309   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined IP address 192.168.72.126 and MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:56.219488   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHPort
	I0806 08:34:56.219699   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHKeyPath
	I0806 08:34:56.219899   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHKeyPath
	I0806 08:34:56.220054   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHUsername
	I0806 08:34:56.220231   78922 main.go:141] libmachine: Using SSH client type: native
	I0806 08:34:56.220513   78922 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.126 22 <nil> <nil>}
	I0806 08:34:56.220544   78922 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-703340' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-703340/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-703340' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0806 08:34:56.337357   78922 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0806 08:34:56.337390   78922 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19370-13057/.minikube CaCertPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19370-13057/.minikube}
	I0806 08:34:56.337428   78922 buildroot.go:174] setting up certificates
	I0806 08:34:56.337439   78922 provision.go:84] configureAuth start
	I0806 08:34:56.337454   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetMachineName
	I0806 08:34:56.337736   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetIP
	I0806 08:34:56.340297   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:56.340655   78922 main.go:141] libmachine: (no-preload-703340) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fa:d3", ip: ""} in network mk-no-preload-703340: {Iface:virbr4 ExpiryTime:2024-08-06 09:34:49 +0000 UTC Type:0 Mac:52:54:00:a1:fa:d3 Iaid: IPaddr:192.168.72.126 Prefix:24 Hostname:no-preload-703340 Clientid:01:52:54:00:a1:fa:d3}
	I0806 08:34:56.340680   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined IP address 192.168.72.126 and MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:56.340898   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHHostname
	I0806 08:34:56.343013   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:56.343356   78922 main.go:141] libmachine: (no-preload-703340) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fa:d3", ip: ""} in network mk-no-preload-703340: {Iface:virbr4 ExpiryTime:2024-08-06 09:34:49 +0000 UTC Type:0 Mac:52:54:00:a1:fa:d3 Iaid: IPaddr:192.168.72.126 Prefix:24 Hostname:no-preload-703340 Clientid:01:52:54:00:a1:fa:d3}
	I0806 08:34:56.343395   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined IP address 192.168.72.126 and MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:56.343544   78922 provision.go:143] copyHostCerts
	I0806 08:34:56.343613   78922 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-13057/.minikube/ca.pem, removing ...
	I0806 08:34:56.343626   78922 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-13057/.minikube/ca.pem
	I0806 08:34:56.343692   78922 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19370-13057/.minikube/ca.pem (1078 bytes)
	I0806 08:34:56.343806   78922 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-13057/.minikube/cert.pem, removing ...
	I0806 08:34:56.343815   78922 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-13057/.minikube/cert.pem
	I0806 08:34:56.343838   78922 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19370-13057/.minikube/cert.pem (1123 bytes)
	I0806 08:34:56.343915   78922 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-13057/.minikube/key.pem, removing ...
	I0806 08:34:56.343929   78922 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-13057/.minikube/key.pem
	I0806 08:34:56.343948   78922 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19370-13057/.minikube/key.pem (1675 bytes)
	I0806 08:34:56.344044   78922 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19370-13057/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca-key.pem org=jenkins.no-preload-703340 san=[127.0.0.1 192.168.72.126 localhost minikube no-preload-703340]
	I0806 08:34:56.441531   78922 provision.go:177] copyRemoteCerts
	I0806 08:34:56.441588   78922 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0806 08:34:56.441610   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHHostname
	I0806 08:34:56.444202   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:56.444549   78922 main.go:141] libmachine: (no-preload-703340) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fa:d3", ip: ""} in network mk-no-preload-703340: {Iface:virbr4 ExpiryTime:2024-08-06 09:34:49 +0000 UTC Type:0 Mac:52:54:00:a1:fa:d3 Iaid: IPaddr:192.168.72.126 Prefix:24 Hostname:no-preload-703340 Clientid:01:52:54:00:a1:fa:d3}
	I0806 08:34:56.444574   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined IP address 192.168.72.126 and MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:56.444786   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHPort
	I0806 08:34:56.444977   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHKeyPath
	I0806 08:34:56.445125   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHUsername
	I0806 08:34:56.445327   78922 sshutil.go:53] new ssh client: &{IP:192.168.72.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/no-preload-703340/id_rsa Username:docker}
	I0806 08:34:56.530980   78922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0806 08:34:56.557880   78922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0806 08:34:56.584108   78922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0806 08:34:56.608415   78922 provision.go:87] duration metric: took 270.961494ms to configureAuth
	I0806 08:34:56.608449   78922 buildroot.go:189] setting minikube options for container-runtime
	I0806 08:34:56.608675   78922 config.go:182] Loaded profile config "no-preload-703340": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-rc.0
	I0806 08:34:56.608794   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHHostname
	I0806 08:34:56.611937   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:56.612383   78922 main.go:141] libmachine: (no-preload-703340) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fa:d3", ip: ""} in network mk-no-preload-703340: {Iface:virbr4 ExpiryTime:2024-08-06 09:34:49 +0000 UTC Type:0 Mac:52:54:00:a1:fa:d3 Iaid: IPaddr:192.168.72.126 Prefix:24 Hostname:no-preload-703340 Clientid:01:52:54:00:a1:fa:d3}
	I0806 08:34:56.612409   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined IP address 192.168.72.126 and MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:56.612585   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHPort
	I0806 08:34:56.612774   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHKeyPath
	I0806 08:34:56.612953   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHKeyPath
	I0806 08:34:56.613134   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHUsername
	I0806 08:34:56.613282   78922 main.go:141] libmachine: Using SSH client type: native
	I0806 08:34:56.613436   78922 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.126 22 <nil> <nil>}
	I0806 08:34:56.613450   78922 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0806 08:34:56.904446   78922 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0806 08:34:56.904474   78922 machine.go:97] duration metric: took 931.572346ms to provisionDockerMachine
	I0806 08:34:56.904484   78922 start.go:293] postStartSetup for "no-preload-703340" (driver="kvm2")
	I0806 08:34:56.904496   78922 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0806 08:34:56.904517   78922 main.go:141] libmachine: (no-preload-703340) Calling .DriverName
	I0806 08:34:56.905043   78922 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0806 08:34:56.905076   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHHostname
	I0806 08:34:56.907707   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:56.908101   78922 main.go:141] libmachine: (no-preload-703340) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fa:d3", ip: ""} in network mk-no-preload-703340: {Iface:virbr4 ExpiryTime:2024-08-06 09:34:49 +0000 UTC Type:0 Mac:52:54:00:a1:fa:d3 Iaid: IPaddr:192.168.72.126 Prefix:24 Hostname:no-preload-703340 Clientid:01:52:54:00:a1:fa:d3}
	I0806 08:34:56.908124   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined IP address 192.168.72.126 and MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:56.908356   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHPort
	I0806 08:34:56.908586   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHKeyPath
	I0806 08:34:56.908753   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHUsername
	I0806 08:34:56.908924   78922 sshutil.go:53] new ssh client: &{IP:192.168.72.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/no-preload-703340/id_rsa Username:docker}
	I0806 08:34:56.995555   78922 ssh_runner.go:195] Run: cat /etc/os-release
	I0806 08:34:56.999971   78922 info.go:137] Remote host: Buildroot 2023.02.9
	I0806 08:34:57.000004   78922 filesync.go:126] Scanning /home/jenkins/minikube-integration/19370-13057/.minikube/addons for local assets ...
	I0806 08:34:57.000079   78922 filesync.go:126] Scanning /home/jenkins/minikube-integration/19370-13057/.minikube/files for local assets ...
	I0806 08:34:57.000150   78922 filesync.go:149] local asset: /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem -> 202462.pem in /etc/ssl/certs
	I0806 08:34:57.000243   78922 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0806 08:34:57.010178   78922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem --> /etc/ssl/certs/202462.pem (1708 bytes)
	I0806 08:34:57.035704   78922 start.go:296] duration metric: took 131.206204ms for postStartSetup
	I0806 08:34:57.035751   78922 fix.go:56] duration metric: took 19.682153204s for fixHost
	I0806 08:34:57.035775   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHHostname
	I0806 08:34:57.038230   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:57.038565   78922 main.go:141] libmachine: (no-preload-703340) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fa:d3", ip: ""} in network mk-no-preload-703340: {Iface:virbr4 ExpiryTime:2024-08-06 09:34:49 +0000 UTC Type:0 Mac:52:54:00:a1:fa:d3 Iaid: IPaddr:192.168.72.126 Prefix:24 Hostname:no-preload-703340 Clientid:01:52:54:00:a1:fa:d3}
	I0806 08:34:57.038596   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined IP address 192.168.72.126 and MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:57.038809   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHPort
	I0806 08:34:57.039016   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHKeyPath
	I0806 08:34:57.039197   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHKeyPath
	I0806 08:34:57.039308   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHUsername
	I0806 08:34:57.039486   78922 main.go:141] libmachine: Using SSH client type: native
	I0806 08:34:57.039697   78922 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.126 22 <nil> <nil>}
	I0806 08:34:57.039712   78922 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0806 08:34:57.149272   78922 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722933297.121300330
	
	I0806 08:34:57.149291   78922 fix.go:216] guest clock: 1722933297.121300330
	I0806 08:34:57.149298   78922 fix.go:229] Guest: 2024-08-06 08:34:57.12130033 +0000 UTC Remote: 2024-08-06 08:34:57.035756139 +0000 UTC m=+360.415626547 (delta=85.544191ms)
	I0806 08:34:57.149319   78922 fix.go:200] guest clock delta is within tolerance: 85.544191ms
	I0806 08:34:57.149326   78922 start.go:83] releasing machines lock for "no-preload-703340", held for 19.795759576s
	I0806 08:34:57.149348   78922 main.go:141] libmachine: (no-preload-703340) Calling .DriverName
	I0806 08:34:57.149624   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetIP
	I0806 08:34:57.152286   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:57.152699   78922 main.go:141] libmachine: (no-preload-703340) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fa:d3", ip: ""} in network mk-no-preload-703340: {Iface:virbr4 ExpiryTime:2024-08-06 09:34:49 +0000 UTC Type:0 Mac:52:54:00:a1:fa:d3 Iaid: IPaddr:192.168.72.126 Prefix:24 Hostname:no-preload-703340 Clientid:01:52:54:00:a1:fa:d3}
	I0806 08:34:57.152731   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined IP address 192.168.72.126 and MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:57.152905   78922 main.go:141] libmachine: (no-preload-703340) Calling .DriverName
	I0806 08:34:57.153488   78922 main.go:141] libmachine: (no-preload-703340) Calling .DriverName
	I0806 08:34:57.153680   78922 main.go:141] libmachine: (no-preload-703340) Calling .DriverName
	I0806 08:34:57.153764   78922 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0806 08:34:57.153803   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHHostname
	I0806 08:34:57.154068   78922 ssh_runner.go:195] Run: cat /version.json
	I0806 08:34:57.154090   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHHostname
	I0806 08:34:57.156678   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:57.157026   78922 main.go:141] libmachine: (no-preload-703340) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fa:d3", ip: ""} in network mk-no-preload-703340: {Iface:virbr4 ExpiryTime:2024-08-06 09:34:49 +0000 UTC Type:0 Mac:52:54:00:a1:fa:d3 Iaid: IPaddr:192.168.72.126 Prefix:24 Hostname:no-preload-703340 Clientid:01:52:54:00:a1:fa:d3}
	I0806 08:34:57.157053   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined IP address 192.168.72.126 and MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:57.157123   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:57.157327   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHPort
	I0806 08:34:57.157492   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHKeyPath
	I0806 08:34:57.157580   78922 main.go:141] libmachine: (no-preload-703340) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fa:d3", ip: ""} in network mk-no-preload-703340: {Iface:virbr4 ExpiryTime:2024-08-06 09:34:49 +0000 UTC Type:0 Mac:52:54:00:a1:fa:d3 Iaid: IPaddr:192.168.72.126 Prefix:24 Hostname:no-preload-703340 Clientid:01:52:54:00:a1:fa:d3}
	I0806 08:34:57.157621   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHUsername
	I0806 08:34:57.157602   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined IP address 192.168.72.126 and MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:57.157701   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHPort
	I0806 08:34:57.157919   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHKeyPath
	I0806 08:34:57.157933   78922 sshutil.go:53] new ssh client: &{IP:192.168.72.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/no-preload-703340/id_rsa Username:docker}
	I0806 08:34:57.158089   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHUsername
	I0806 08:34:57.158243   78922 sshutil.go:53] new ssh client: &{IP:192.168.72.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/no-preload-703340/id_rsa Username:docker}
	I0806 08:34:57.237574   78922 ssh_runner.go:195] Run: systemctl --version
	I0806 08:34:57.260498   78922 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0806 08:34:57.403619   78922 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0806 08:34:57.410198   78922 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0806 08:34:57.410267   78922 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0806 08:34:57.426321   78922 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0806 08:34:57.426345   78922 start.go:495] detecting cgroup driver to use...
	I0806 08:34:57.426414   78922 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0806 08:34:57.442340   78922 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0806 08:34:57.458008   78922 docker.go:217] disabling cri-docker service (if available) ...
	I0806 08:34:57.458061   78922 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0806 08:34:57.471516   78922 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0806 08:34:57.484996   78922 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0806 08:34:57.604605   78922 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0806 08:34:57.753590   78922 docker.go:233] disabling docker service ...
	I0806 08:34:57.753651   78922 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0806 08:34:57.768534   78922 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0806 08:34:57.782187   78922 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0806 08:34:57.936046   78922 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0806 08:34:58.076402   78922 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0806 08:34:58.091386   78922 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0806 08:34:58.112810   78922 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0806 08:34:58.112873   78922 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:34:58.123376   78922 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0806 08:34:58.123438   78922 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:34:58.133981   78922 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:34:58.144891   78922 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:34:58.156276   78922 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0806 08:34:58.166927   78922 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:34:58.176956   78922 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:34:58.193969   78922 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:34:58.204122   78922 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0806 08:34:58.213745   78922 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0806 08:34:58.213859   78922 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0806 08:34:58.226556   78922 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0806 08:34:58.236413   78922 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 08:34:58.359776   78922 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0806 08:34:58.505947   78922 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0806 08:34:58.506016   78922 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0806 08:34:58.511591   78922 start.go:563] Will wait 60s for crictl version
	I0806 08:34:58.511649   78922 ssh_runner.go:195] Run: which crictl
	I0806 08:34:58.516259   78922 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0806 08:34:58.568580   78922 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0806 08:34:58.568658   78922 ssh_runner.go:195] Run: crio --version
	I0806 08:34:58.607964   78922 ssh_runner.go:195] Run: crio --version
	I0806 08:34:58.642564   78922 out.go:177] * Preparing Kubernetes v1.31.0-rc.0 on CRI-O 1.29.1 ...
	I0806 08:34:54.018818   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:54.518987   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:55.018584   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:55.518848   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:56.018094   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:56.518052   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:57.018765   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:57.518128   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:58.018977   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:58.518233   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:55.789183   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:58.288863   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:54.612520   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:56.614634   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:59.115966   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:58.643802   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetIP
	I0806 08:34:58.646395   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:58.646714   78922 main.go:141] libmachine: (no-preload-703340) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fa:d3", ip: ""} in network mk-no-preload-703340: {Iface:virbr4 ExpiryTime:2024-08-06 09:34:49 +0000 UTC Type:0 Mac:52:54:00:a1:fa:d3 Iaid: IPaddr:192.168.72.126 Prefix:24 Hostname:no-preload-703340 Clientid:01:52:54:00:a1:fa:d3}
	I0806 08:34:58.646744   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined IP address 192.168.72.126 and MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:58.646948   78922 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0806 08:34:58.651136   78922 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0806 08:34:58.664347   78922 kubeadm.go:883] updating cluster {Name:no-preload-703340 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0-rc.0 ClusterName:no-preload-703340 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.126 Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m
0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0806 08:34:58.664479   78922 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime crio
	I0806 08:34:58.664511   78922 ssh_runner.go:195] Run: sudo crictl images --output json
	I0806 08:34:58.705043   78922 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-rc.0". assuming images are not preloaded.
	I0806 08:34:58.705080   78922 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0-rc.0 registry.k8s.io/kube-controller-manager:v1.31.0-rc.0 registry.k8s.io/kube-scheduler:v1.31.0-rc.0 registry.k8s.io/kube-proxy:v1.31.0-rc.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0806 08:34:58.705157   78922 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0806 08:34:58.705170   78922 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0806 08:34:58.705177   78922 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-rc.0
	I0806 08:34:58.705245   78922 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-rc.0
	I0806 08:34:58.705272   78922 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-rc.0
	I0806 08:34:58.705281   78922 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0806 08:34:58.705160   78922 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-rc.0
	I0806 08:34:58.705469   78922 image.go:134] retrieving image: registry.k8s.io/pause:3.10
	I0806 08:34:58.706962   78922 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-rc.0
	I0806 08:34:58.706993   78922 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0806 08:34:58.706986   78922 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-rc.0
	I0806 08:34:58.707018   78922 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-rc.0
	I0806 08:34:58.706968   78922 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0806 08:34:58.706989   78922 image.go:177] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0806 08:34:58.707066   78922 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0806 08:34:58.707315   78922 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-rc.0
	I0806 08:34:58.835957   78922 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0-rc.0
	I0806 08:34:58.839572   78922 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0806 08:34:58.846955   78922 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0806 08:34:58.865707   78922 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0-rc.0
	I0806 08:34:58.872729   78922 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0806 08:34:58.878484   78922 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0-rc.0
	I0806 08:34:58.898935   78922 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0-rc.0
	I0806 08:34:58.919621   78922 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0-rc.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0-rc.0" does not exist at hash "fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c" in container runtime
	I0806 08:34:58.919654   78922 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0-rc.0
	I0806 08:34:58.919691   78922 ssh_runner.go:195] Run: which crictl
	I0806 08:34:58.949493   78922 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0806 08:34:58.949544   78922 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0806 08:34:58.949597   78922 ssh_runner.go:195] Run: which crictl
	I0806 08:34:58.971429   78922 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0806 08:34:58.971476   78922 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0806 08:34:58.971522   78922 ssh_runner.go:195] Run: which crictl
	I0806 08:34:59.015321   78922 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0-rc.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0-rc.0" does not exist at hash "0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c" in container runtime
	I0806 08:34:59.015359   78922 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0-rc.0
	I0806 08:34:59.015397   78922 ssh_runner.go:195] Run: which crictl
	I0806 08:34:59.107718   78922 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0-rc.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0-rc.0" does not exist at hash "41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318" in container runtime
	I0806 08:34:59.107761   78922 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0-rc.0
	I0806 08:34:59.107797   78922 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0-rc.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0-rc.0" does not exist at hash "c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0" in container runtime
	I0806 08:34:59.107867   78922 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0-rc.0
	I0806 08:34:59.107904   78922 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0806 08:34:59.107987   78922 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0-rc.0
	I0806 08:34:59.107809   78922 ssh_runner.go:195] Run: which crictl
	I0806 08:34:59.108037   78922 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-rc.0
	I0806 08:34:59.107994   78922 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0806 08:34:59.108037   78922 ssh_runner.go:195] Run: which crictl
	I0806 08:34:59.120323   78922 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-rc.0
	I0806 08:34:59.218864   78922 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-rc.0
	I0806 08:34:59.218982   78922 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.31.0-rc.0
	I0806 08:34:59.222696   78922 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0-rc.0
	I0806 08:34:59.222735   78922 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0806 08:34:59.222794   78922 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-rc.0
	I0806 08:34:59.222820   78922 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0806 08:34:59.222854   78922 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0806 08:34:59.222882   78922 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.31.0-rc.0
	I0806 08:34:59.222900   78922 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-rc.0
	I0806 08:34:59.222933   78922 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.15-0
	I0806 08:34:59.222964   78922 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.31.0-rc.0
	I0806 08:34:59.227260   78922 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0-rc.0 (exists)
	I0806 08:34:59.227280   78922 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0-rc.0
	I0806 08:34:59.227319   78922 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-rc.0
	I0806 08:34:59.268220   78922 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0-rc.0 (exists)
	I0806 08:34:59.268240   78922 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-rc.0
	I0806 08:34:59.268291   78922 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0806 08:34:59.268325   78922 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0806 08:34:59.268332   78922 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.31.0-rc.0
	I0806 08:34:59.268349   78922 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0-rc.0 (exists)
	I0806 08:34:59.613734   78922 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0806 08:35:01.623115   78922 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-rc.0: (2.395772878s)
	I0806 08:35:01.623140   78922 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.31.0-rc.0: (2.354795538s)
	I0806 08:35:01.623152   78922 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-rc.0 from cache
	I0806 08:35:01.623161   78922 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0-rc.0 (exists)
	I0806 08:35:01.623168   78922 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0-rc.0
	I0806 08:35:01.623217   78922 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-rc.0
	I0806 08:35:01.623218   78922 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.009459709s)
	I0806 08:35:01.623310   78922 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0806 08:35:01.623340   78922 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0806 08:35:01.623366   78922 ssh_runner.go:195] Run: which crictl
	I0806 08:34:59.019117   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:59.518121   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:00.018102   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:00.518402   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:01.019125   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:01.518141   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:02.018063   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:02.518834   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:03.018074   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:03.518904   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:00.789181   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:02.790704   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:01.612449   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:03.655402   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:03.781865   78922 ssh_runner.go:235] Completed: which crictl: (2.158478864s)
	I0806 08:35:03.781942   78922 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0806 08:35:03.782003   78922 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-rc.0: (2.158736373s)
	I0806 08:35:03.782023   78922 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-rc.0 from cache
	I0806 08:35:03.782042   78922 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0-rc.0
	I0806 08:35:03.782112   78922 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-rc.0
	I0806 08:35:05.247976   78922 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.466007397s)
	I0806 08:35:05.248028   78922 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-rc.0: (1.465889339s)
	I0806 08:35:05.248049   78922 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0806 08:35:05.248054   78922 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-rc.0 from cache
	I0806 08:35:05.248065   78922 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0806 08:35:05.248117   78922 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0806 08:35:05.248166   78922 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0806 08:35:04.018047   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:04.518891   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:05.018447   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:05.518506   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:06.019133   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:06.518070   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:07.018362   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:07.518320   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:08.019034   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:08.518740   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:05.288137   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:07.288969   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:06.113473   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:08.612266   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:09.135551   78922 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.887407478s)
	I0806 08:35:09.135591   78922 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0806 08:35:09.135592   78922 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (3.887405091s)
	I0806 08:35:09.135605   78922 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0806 08:35:09.135618   78922 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0806 08:35:09.135645   78922 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0806 08:35:11.131984   78922 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.996319459s)
	I0806 08:35:11.132013   78922 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0806 08:35:11.132024   78922 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0-rc.0
	I0806 08:35:11.132070   78922 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-rc.0
	I0806 08:35:09.018349   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:09.518189   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:10.018824   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:10.518260   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:11.018914   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:11.518709   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:12.018349   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:12.518719   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:13.019022   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:13.518318   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:09.787576   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:11.789165   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:10.614185   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:13.112391   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:13.395512   78922 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-rc.0: (2.263417409s)
	I0806 08:35:13.395548   78922 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-rc.0 from cache
	I0806 08:35:13.395560   78922 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0806 08:35:13.395608   78922 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0806 08:35:14.150250   78922 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0806 08:35:14.150301   78922 cache_images.go:123] Successfully loaded all cached images
	I0806 08:35:14.150309   78922 cache_images.go:92] duration metric: took 15.445214072s to LoadCachedImages
	I0806 08:35:14.150322   78922 kubeadm.go:934] updating node { 192.168.72.126 8443 v1.31.0-rc.0 crio true true} ...
	I0806 08:35:14.150540   78922 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-rc.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-703340 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.126
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-rc.0 ClusterName:no-preload-703340 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0806 08:35:14.150640   78922 ssh_runner.go:195] Run: crio config
	I0806 08:35:14.210763   78922 cni.go:84] Creating CNI manager for ""
	I0806 08:35:14.210794   78922 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0806 08:35:14.210806   78922 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0806 08:35:14.210842   78922 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.126 APIServerPort:8443 KubernetesVersion:v1.31.0-rc.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-703340 NodeName:no-preload-703340 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.126"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.126 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticP
odPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0806 08:35:14.210979   78922 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.126
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-703340"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.126
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.126"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-rc.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0806 08:35:14.211036   78922 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-rc.0
	I0806 08:35:14.222106   78922 binaries.go:44] Found k8s binaries, skipping transfer
	I0806 08:35:14.222189   78922 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0806 08:35:14.231944   78922 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I0806 08:35:14.250491   78922 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0806 08:35:14.268955   78922 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2166 bytes)
	I0806 08:35:14.289661   78922 ssh_runner.go:195] Run: grep 192.168.72.126	control-plane.minikube.internal$ /etc/hosts
	I0806 08:35:14.294005   78922 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.126	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0806 08:35:14.307700   78922 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 08:35:14.451462   78922 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0806 08:35:14.478155   78922 certs.go:68] Setting up /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/no-preload-703340 for IP: 192.168.72.126
	I0806 08:35:14.478177   78922 certs.go:194] generating shared ca certs ...
	I0806 08:35:14.478191   78922 certs.go:226] acquiring lock for ca certs: {Name:mkf5c5aed91f4108054d9f2c742cad66c86afbed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 08:35:14.478365   78922 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19370-13057/.minikube/ca.key
	I0806 08:35:14.478420   78922 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.key
	I0806 08:35:14.478434   78922 certs.go:256] generating profile certs ...
	I0806 08:35:14.478529   78922 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/no-preload-703340/client.key
	I0806 08:35:14.478615   78922 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/no-preload-703340/apiserver.key.0b170e14
	I0806 08:35:14.478665   78922 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/no-preload-703340/proxy-client.key
	I0806 08:35:14.478822   78922 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/20246.pem (1338 bytes)
	W0806 08:35:14.478871   78922 certs.go:480] ignoring /home/jenkins/minikube-integration/19370-13057/.minikube/certs/20246_empty.pem, impossibly tiny 0 bytes
	I0806 08:35:14.478885   78922 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca-key.pem (1675 bytes)
	I0806 08:35:14.478941   78922 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem (1078 bytes)
	I0806 08:35:14.478978   78922 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/cert.pem (1123 bytes)
	I0806 08:35:14.479014   78922 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/key.pem (1675 bytes)
	I0806 08:35:14.479078   78922 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem (1708 bytes)
	I0806 08:35:14.479773   78922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0806 08:35:14.527838   78922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0806 08:35:14.579788   78922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0806 08:35:14.614739   78922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0806 08:35:14.657559   78922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/no-preload-703340/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0806 08:35:14.699354   78922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/no-preload-703340/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0806 08:35:14.724875   78922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/no-preload-703340/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0806 08:35:14.751020   78922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/no-preload-703340/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0806 08:35:14.778300   78922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem --> /usr/share/ca-certificates/202462.pem (1708 bytes)
	I0806 08:35:14.805018   78922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0806 08:35:14.830791   78922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/certs/20246.pem --> /usr/share/ca-certificates/20246.pem (1338 bytes)
	I0806 08:35:14.856120   78922 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0806 08:35:14.874909   78922 ssh_runner.go:195] Run: openssl version
	I0806 08:35:14.880931   78922 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202462.pem && ln -fs /usr/share/ca-certificates/202462.pem /etc/ssl/certs/202462.pem"
	I0806 08:35:14.892741   78922 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202462.pem
	I0806 08:35:14.897430   78922 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  6 07:17 /usr/share/ca-certificates/202462.pem
	I0806 08:35:14.897489   78922 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202462.pem
	I0806 08:35:14.903324   78922 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202462.pem /etc/ssl/certs/3ec20f2e.0"
	I0806 08:35:14.914505   78922 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0806 08:35:14.925465   78922 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0806 08:35:14.930490   78922 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  6 07:07 /usr/share/ca-certificates/minikubeCA.pem
	I0806 08:35:14.930559   78922 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0806 08:35:14.936384   78922 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0806 08:35:14.947836   78922 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20246.pem && ln -fs /usr/share/ca-certificates/20246.pem /etc/ssl/certs/20246.pem"
	I0806 08:35:14.959376   78922 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20246.pem
	I0806 08:35:14.964118   78922 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  6 07:17 /usr/share/ca-certificates/20246.pem
	I0806 08:35:14.964168   78922 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20246.pem
	I0806 08:35:14.970266   78922 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20246.pem /etc/ssl/certs/51391683.0"
	I0806 08:35:14.982244   78922 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0806 08:35:14.987367   78922 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0806 08:35:14.993508   78922 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0806 08:35:14.999400   78922 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0806 08:35:15.005751   78922 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0806 08:35:15.011588   78922 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0806 08:35:15.017766   78922 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0806 08:35:15.023853   78922 kubeadm.go:392] StartCluster: {Name:no-preload-703340 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:no-preload-703340 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.126 Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 08:35:15.024060   78922 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0806 08:35:15.024113   78922 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0806 08:35:15.072318   78922 cri.go:89] found id: ""
	I0806 08:35:15.072415   78922 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0806 08:35:15.083158   78922 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0806 08:35:15.083182   78922 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0806 08:35:15.083225   78922 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0806 08:35:15.093301   78922 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0806 08:35:15.094369   78922 kubeconfig.go:125] found "no-preload-703340" server: "https://192.168.72.126:8443"
	I0806 08:35:15.096545   78922 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0806 08:35:15.106190   78922 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.126
	I0806 08:35:15.106226   78922 kubeadm.go:1160] stopping kube-system containers ...
	I0806 08:35:15.106237   78922 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0806 08:35:15.106288   78922 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0806 08:35:15.145974   78922 cri.go:89] found id: ""
	I0806 08:35:15.146060   78922 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0806 08:35:15.163821   78922 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0806 08:35:15.174473   78922 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0806 08:35:15.174498   78922 kubeadm.go:157] found existing configuration files:
	
	I0806 08:35:15.174562   78922 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0806 08:35:15.185227   78922 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0806 08:35:15.185309   78922 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0806 08:35:15.195614   78922 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0806 08:35:15.205197   78922 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0806 08:35:15.205261   78922 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0806 08:35:15.215511   78922 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0806 08:35:15.225354   78922 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0806 08:35:15.225412   78922 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0806 08:35:15.234912   78922 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0806 08:35:15.244638   78922 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0806 08:35:15.244703   78922 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0806 08:35:15.255080   78922 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0806 08:35:15.265861   78922 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0806 08:35:15.399192   78922 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0806 08:35:16.182816   78922 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0806 08:35:16.397386   78922 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0806 08:35:16.463980   78922 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0806 08:35:16.550229   78922 api_server.go:52] waiting for apiserver process to appear ...
	I0806 08:35:16.550347   78922 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:14.018930   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:14.518251   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:15.018981   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:15.518764   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:16.018114   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:16.518157   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:17.018124   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:17.518604   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:18.018742   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:18.518755   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:14.290050   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:16.788472   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:18.788526   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:15.112664   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:17.113214   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:17.050447   78922 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:17.551220   78922 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:17.603538   78922 api_server.go:72] duration metric: took 1.053308623s to wait for apiserver process to appear ...
	I0806 08:35:17.603574   78922 api_server.go:88] waiting for apiserver healthz status ...
	I0806 08:35:17.603595   78922 api_server.go:253] Checking apiserver healthz at https://192.168.72.126:8443/healthz ...
	I0806 08:35:17.604064   78922 api_server.go:269] stopped: https://192.168.72.126:8443/healthz: Get "https://192.168.72.126:8443/healthz": dial tcp 192.168.72.126:8443: connect: connection refused
	I0806 08:35:18.104696   78922 api_server.go:253] Checking apiserver healthz at https://192.168.72.126:8443/healthz ...
	I0806 08:35:20.458841   78922 api_server.go:279] https://192.168.72.126:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0806 08:35:20.458876   78922 api_server.go:103] status: https://192.168.72.126:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0806 08:35:20.458893   78922 api_server.go:253] Checking apiserver healthz at https://192.168.72.126:8443/healthz ...
	I0806 08:35:20.495570   78922 api_server.go:279] https://192.168.72.126:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0806 08:35:20.495605   78922 api_server.go:103] status: https://192.168.72.126:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0806 08:35:20.603728   78922 api_server.go:253] Checking apiserver healthz at https://192.168.72.126:8443/healthz ...
	I0806 08:35:20.628351   78922 api_server.go:279] https://192.168.72.126:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0806 08:35:20.628409   78922 api_server.go:103] status: https://192.168.72.126:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0806 08:35:21.103717   78922 api_server.go:253] Checking apiserver healthz at https://192.168.72.126:8443/healthz ...
	I0806 08:35:21.109031   78922 api_server.go:279] https://192.168.72.126:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0806 08:35:21.109064   78922 api_server.go:103] status: https://192.168.72.126:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0806 08:35:21.604400   78922 api_server.go:253] Checking apiserver healthz at https://192.168.72.126:8443/healthz ...
	I0806 08:35:21.613667   78922 api_server.go:279] https://192.168.72.126:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0806 08:35:21.613706   78922 api_server.go:103] status: https://192.168.72.126:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0806 08:35:22.103921   78922 api_server.go:253] Checking apiserver healthz at https://192.168.72.126:8443/healthz ...
	I0806 08:35:22.111933   78922 api_server.go:279] https://192.168.72.126:8443/healthz returned 200:
	ok
	I0806 08:35:22.118406   78922 api_server.go:141] control plane version: v1.31.0-rc.0
	I0806 08:35:22.118430   78922 api_server.go:131] duration metric: took 4.514848902s to wait for apiserver health ...
	I0806 08:35:22.118438   78922 cni.go:84] Creating CNI manager for ""
	I0806 08:35:22.118444   78922 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0806 08:35:22.119689   78922 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0806 08:35:19.018302   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:19.518693   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:20.018804   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:20.518878   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:21.018974   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:21.518747   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:22.018530   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:22.519082   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:23.019088   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:23.518932   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:20.788766   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:23.288382   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:19.614878   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:22.113841   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:22.121321   78922 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0806 08:35:22.132760   78922 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0806 08:35:22.151770   78922 system_pods.go:43] waiting for kube-system pods to appear ...
	I0806 08:35:22.161775   78922 system_pods.go:59] 8 kube-system pods found
	I0806 08:35:22.161806   78922 system_pods.go:61] "coredns-6f6b679f8f-ft9vg" [0f9ea981-764b-4777-bf78-8798570b20d8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0806 08:35:22.161818   78922 system_pods.go:61] "etcd-no-preload-703340" [6adb80b2-0d33-4d53-a868-f83226cc7c26] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0806 08:35:22.161826   78922 system_pods.go:61] "kube-apiserver-no-preload-703340" [4a415305-2f94-42c5-901d-49f0b0114cf6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0806 08:35:22.161832   78922 system_pods.go:61] "kube-controller-manager-no-preload-703340" [0351f8f4-5046-4068-ada3-103394ee9ae2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0806 08:35:22.161837   78922 system_pods.go:61] "kube-proxy-7t2sd" [27f3aa37-eca2-450b-a98d-b3246a7e48ac] Running
	I0806 08:35:22.161841   78922 system_pods.go:61] "kube-scheduler-no-preload-703340" [ad4f6239-f5e1-4a76-91ea-8da5478e8992] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0806 08:35:22.161845   78922 system_pods.go:61] "metrics-server-6867b74b74-qz4vx" [800676b3-4940-4ae2-ad37-7adbd67ea8fa] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0806 08:35:22.161849   78922 system_pods.go:61] "storage-provisioner" [8fa414bf-3923-4c20-acd7-a48fba6f4044] Running
	I0806 08:35:22.161855   78922 system_pods.go:74] duration metric: took 10.062633ms to wait for pod list to return data ...
	I0806 08:35:22.161865   78922 node_conditions.go:102] verifying NodePressure condition ...
	I0806 08:35:22.165604   78922 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0806 08:35:22.165626   78922 node_conditions.go:123] node cpu capacity is 2
	I0806 08:35:22.165637   78922 node_conditions.go:105] duration metric: took 3.767899ms to run NodePressure ...
	I0806 08:35:22.165656   78922 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0806 08:35:22.445769   78922 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0806 08:35:22.451056   78922 kubeadm.go:739] kubelet initialised
	I0806 08:35:22.451078   78922 kubeadm.go:740] duration metric: took 5.279835ms waiting for restarted kubelet to initialise ...
	I0806 08:35:22.451085   78922 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0806 08:35:22.458199   78922 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6f6b679f8f-ft9vg" in "kube-system" namespace to be "Ready" ...
	I0806 08:35:22.463913   78922 pod_ready.go:97] node "no-preload-703340" hosting pod "coredns-6f6b679f8f-ft9vg" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-703340" has status "Ready":"False"
	I0806 08:35:22.463936   78922 pod_ready.go:81] duration metric: took 5.712258ms for pod "coredns-6f6b679f8f-ft9vg" in "kube-system" namespace to be "Ready" ...
	E0806 08:35:22.463945   78922 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-703340" hosting pod "coredns-6f6b679f8f-ft9vg" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-703340" has status "Ready":"False"
	I0806 08:35:22.463952   78922 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-703340" in "kube-system" namespace to be "Ready" ...
	I0806 08:35:22.471896   78922 pod_ready.go:97] node "no-preload-703340" hosting pod "etcd-no-preload-703340" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-703340" has status "Ready":"False"
	I0806 08:35:22.471921   78922 pod_ready.go:81] duration metric: took 7.962236ms for pod "etcd-no-preload-703340" in "kube-system" namespace to be "Ready" ...
	E0806 08:35:22.471930   78922 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-703340" hosting pod "etcd-no-preload-703340" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-703340" has status "Ready":"False"
	I0806 08:35:22.471936   78922 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-703340" in "kube-system" namespace to be "Ready" ...
	I0806 08:35:22.478365   78922 pod_ready.go:97] node "no-preload-703340" hosting pod "kube-apiserver-no-preload-703340" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-703340" has status "Ready":"False"
	I0806 08:35:22.478394   78922 pod_ready.go:81] duration metric: took 6.451038ms for pod "kube-apiserver-no-preload-703340" in "kube-system" namespace to be "Ready" ...
	E0806 08:35:22.478405   78922 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-703340" hosting pod "kube-apiserver-no-preload-703340" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-703340" has status "Ready":"False"
	I0806 08:35:22.478414   78922 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-703340" in "kube-system" namespace to be "Ready" ...
	I0806 08:35:22.559022   78922 pod_ready.go:97] node "no-preload-703340" hosting pod "kube-controller-manager-no-preload-703340" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-703340" has status "Ready":"False"
	I0806 08:35:22.559058   78922 pod_ready.go:81] duration metric: took 80.632686ms for pod "kube-controller-manager-no-preload-703340" in "kube-system" namespace to be "Ready" ...
	E0806 08:35:22.559071   78922 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-703340" hosting pod "kube-controller-manager-no-preload-703340" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-703340" has status "Ready":"False"
	I0806 08:35:22.559080   78922 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-7t2sd" in "kube-system" namespace to be "Ready" ...
	I0806 08:35:22.955121   78922 pod_ready.go:92] pod "kube-proxy-7t2sd" in "kube-system" namespace has status "Ready":"True"
	I0806 08:35:22.955142   78922 pod_ready.go:81] duration metric: took 396.053517ms for pod "kube-proxy-7t2sd" in "kube-system" namespace to be "Ready" ...
	I0806 08:35:22.955152   78922 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-703340" in "kube-system" namespace to be "Ready" ...
	I0806 08:35:24.961266   78922 pod_ready.go:102] pod "kube-scheduler-no-preload-703340" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:24.018764   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:24.519010   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:25.019090   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:25.518260   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:26.018784   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:26.518903   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:27.019011   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:27.518740   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:28.018299   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:28.518391   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:25.787771   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:27.788719   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:24.612790   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:26.612992   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:29.112518   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:26.962028   78922 pod_ready.go:102] pod "kube-scheduler-no-preload-703340" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:29.461941   78922 pod_ready.go:102] pod "kube-scheduler-no-preload-703340" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:29.018449   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:29.518446   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:30.018179   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:30.518270   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:31.018913   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:31.518140   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:32.018621   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:32.518573   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:33.018512   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:33.518604   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:29.790220   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:32.287668   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:31.112680   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:33.612484   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:31.961716   78922 pod_ready.go:102] pod "kube-scheduler-no-preload-703340" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:34.461368   78922 pod_ready.go:102] pod "kube-scheduler-no-preload-703340" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:34.961460   78922 pod_ready.go:92] pod "kube-scheduler-no-preload-703340" in "kube-system" namespace has status "Ready":"True"
	I0806 08:35:34.961483   78922 pod_ready.go:81] duration metric: took 12.006324097s for pod "kube-scheduler-no-preload-703340" in "kube-system" namespace to be "Ready" ...
	I0806 08:35:34.961495   78922 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace to be "Ready" ...
	I0806 08:35:34.018132   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:34.518476   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:35.018797   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:35.518507   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:36.018976   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:36.518062   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:37.018463   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:37.518648   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:38.018861   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:38.518236   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:34.288032   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:36.787530   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:38.789235   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:35.612530   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:38.114833   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:36.967646   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:38.969106   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:41.469056   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:39.018424   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:39.518065   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:40.018381   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:40.518979   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:41.018310   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:41.518277   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:42.018881   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:42.518157   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:43.018166   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:43.518740   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:41.287534   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:43.288666   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:40.115187   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:42.611895   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:43.967648   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:45.968277   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:44.018075   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:44.518545   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:45.018342   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:45.519066   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:46.018796   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:46.519031   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:47.018876   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:47.518649   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:35:47.518746   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:35:47.559141   79927 cri.go:89] found id: ""
	I0806 08:35:47.559175   79927 logs.go:276] 0 containers: []
	W0806 08:35:47.559188   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:35:47.559195   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:35:47.559253   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:35:47.596840   79927 cri.go:89] found id: ""
	I0806 08:35:47.596900   79927 logs.go:276] 0 containers: []
	W0806 08:35:47.596913   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:35:47.596921   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:35:47.596995   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:35:47.634932   79927 cri.go:89] found id: ""
	I0806 08:35:47.634961   79927 logs.go:276] 0 containers: []
	W0806 08:35:47.634969   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:35:47.634975   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:35:47.635031   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:35:47.672170   79927 cri.go:89] found id: ""
	I0806 08:35:47.672202   79927 logs.go:276] 0 containers: []
	W0806 08:35:47.672213   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:35:47.672220   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:35:47.672284   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:35:47.709643   79927 cri.go:89] found id: ""
	I0806 08:35:47.709672   79927 logs.go:276] 0 containers: []
	W0806 08:35:47.709683   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:35:47.709690   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:35:47.709752   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:35:47.750036   79927 cri.go:89] found id: ""
	I0806 08:35:47.750067   79927 logs.go:276] 0 containers: []
	W0806 08:35:47.750076   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:35:47.750082   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:35:47.750149   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:35:47.791837   79927 cri.go:89] found id: ""
	I0806 08:35:47.791862   79927 logs.go:276] 0 containers: []
	W0806 08:35:47.791872   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:35:47.791879   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:35:47.791938   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:35:47.831972   79927 cri.go:89] found id: ""
	I0806 08:35:47.831997   79927 logs.go:276] 0 containers: []
	W0806 08:35:47.832005   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:35:47.832014   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:35:47.832028   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:35:47.962228   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:35:47.962252   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:35:47.962267   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:35:48.031276   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:35:48.031314   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:35:48.075869   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:35:48.075912   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:35:48.131817   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:35:48.131859   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:35:45.788042   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:47.789501   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:44.612796   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:47.112009   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:48.468289   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:50.469101   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:50.647351   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:50.661916   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:35:50.662002   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:35:50.707294   79927 cri.go:89] found id: ""
	I0806 08:35:50.707323   79927 logs.go:276] 0 containers: []
	W0806 08:35:50.707335   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:35:50.707342   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:35:50.707396   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:35:50.744111   79927 cri.go:89] found id: ""
	I0806 08:35:50.744133   79927 logs.go:276] 0 containers: []
	W0806 08:35:50.744149   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:35:50.744160   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:35:50.744205   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:35:50.781482   79927 cri.go:89] found id: ""
	I0806 08:35:50.781513   79927 logs.go:276] 0 containers: []
	W0806 08:35:50.781524   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:35:50.781531   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:35:50.781591   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:35:50.828788   79927 cri.go:89] found id: ""
	I0806 08:35:50.828817   79927 logs.go:276] 0 containers: []
	W0806 08:35:50.828827   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:35:50.828834   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:35:50.828896   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:35:50.865168   79927 cri.go:89] found id: ""
	I0806 08:35:50.865199   79927 logs.go:276] 0 containers: []
	W0806 08:35:50.865211   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:35:50.865219   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:35:50.865273   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:35:50.902814   79927 cri.go:89] found id: ""
	I0806 08:35:50.902842   79927 logs.go:276] 0 containers: []
	W0806 08:35:50.902851   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:35:50.902856   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:35:50.902907   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:35:50.939861   79927 cri.go:89] found id: ""
	I0806 08:35:50.939891   79927 logs.go:276] 0 containers: []
	W0806 08:35:50.939899   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:35:50.939905   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:35:50.939961   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:35:50.984743   79927 cri.go:89] found id: ""
	I0806 08:35:50.984773   79927 logs.go:276] 0 containers: []
	W0806 08:35:50.984782   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:35:50.984793   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:35:50.984809   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:35:50.998476   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:35:50.998515   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:35:51.081100   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:35:51.081126   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:35:51.081139   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:35:51.157634   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:35:51.157679   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:35:51.210685   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:35:51.210716   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:35:50.288343   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:52.804477   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:49.612661   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:52.113950   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:54.114200   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:52.969251   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:55.468041   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:53.778620   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:53.793090   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:35:53.793154   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:35:53.837558   79927 cri.go:89] found id: ""
	I0806 08:35:53.837584   79927 logs.go:276] 0 containers: []
	W0806 08:35:53.837594   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:35:53.837599   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:35:53.837670   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:35:53.878855   79927 cri.go:89] found id: ""
	I0806 08:35:53.878884   79927 logs.go:276] 0 containers: []
	W0806 08:35:53.878893   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:35:53.878898   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:35:53.878945   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:35:53.918330   79927 cri.go:89] found id: ""
	I0806 08:35:53.918357   79927 logs.go:276] 0 containers: []
	W0806 08:35:53.918366   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:35:53.918371   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:35:53.918422   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:35:53.959156   79927 cri.go:89] found id: ""
	I0806 08:35:53.959186   79927 logs.go:276] 0 containers: []
	W0806 08:35:53.959196   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:35:53.959204   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:35:53.959309   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:35:53.996775   79927 cri.go:89] found id: ""
	I0806 08:35:53.996802   79927 logs.go:276] 0 containers: []
	W0806 08:35:53.996810   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:35:53.996816   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:35:53.996874   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:35:54.033047   79927 cri.go:89] found id: ""
	I0806 08:35:54.033072   79927 logs.go:276] 0 containers: []
	W0806 08:35:54.033082   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:35:54.033089   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:35:54.033152   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:35:54.069426   79927 cri.go:89] found id: ""
	I0806 08:35:54.069458   79927 logs.go:276] 0 containers: []
	W0806 08:35:54.069468   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:35:54.069475   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:35:54.069536   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:35:54.109114   79927 cri.go:89] found id: ""
	I0806 08:35:54.109143   79927 logs.go:276] 0 containers: []
	W0806 08:35:54.109153   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:35:54.109164   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:35:54.109179   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:35:54.161264   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:35:54.161298   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:35:54.175216   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:35:54.175247   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:35:54.254970   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:35:54.254999   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:35:54.255014   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:35:54.331561   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:35:54.331596   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:35:56.873110   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:56.889302   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:35:56.889378   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:35:56.931875   79927 cri.go:89] found id: ""
	I0806 08:35:56.931904   79927 logs.go:276] 0 containers: []
	W0806 08:35:56.931913   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:35:56.931919   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:35:56.931975   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:35:56.995549   79927 cri.go:89] found id: ""
	I0806 08:35:56.995587   79927 logs.go:276] 0 containers: []
	W0806 08:35:56.995601   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:35:56.995608   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:35:56.995675   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:35:57.042708   79927 cri.go:89] found id: ""
	I0806 08:35:57.042735   79927 logs.go:276] 0 containers: []
	W0806 08:35:57.042743   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:35:57.042749   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:35:57.042833   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:35:57.079160   79927 cri.go:89] found id: ""
	I0806 08:35:57.079187   79927 logs.go:276] 0 containers: []
	W0806 08:35:57.079197   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:35:57.079203   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:35:57.079260   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:35:57.116358   79927 cri.go:89] found id: ""
	I0806 08:35:57.116409   79927 logs.go:276] 0 containers: []
	W0806 08:35:57.116421   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:35:57.116429   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:35:57.116489   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:35:57.151051   79927 cri.go:89] found id: ""
	I0806 08:35:57.151079   79927 logs.go:276] 0 containers: []
	W0806 08:35:57.151087   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:35:57.151096   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:35:57.151148   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:35:57.189747   79927 cri.go:89] found id: ""
	I0806 08:35:57.189791   79927 logs.go:276] 0 containers: []
	W0806 08:35:57.189800   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:35:57.189806   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:35:57.189871   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:35:57.225251   79927 cri.go:89] found id: ""
	I0806 08:35:57.225277   79927 logs.go:276] 0 containers: []
	W0806 08:35:57.225287   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:35:57.225298   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:35:57.225314   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:35:57.238746   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:35:57.238773   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:35:57.312931   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:35:57.312959   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:35:57.312976   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:35:57.394104   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:35:57.394139   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:35:57.435993   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:35:57.436028   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:35:55.288712   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:57.789512   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:56.612875   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:58.613571   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:57.468429   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:59.969618   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:59.989556   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:36:00.004129   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:36:00.004205   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:36:00.042192   79927 cri.go:89] found id: ""
	I0806 08:36:00.042215   79927 logs.go:276] 0 containers: []
	W0806 08:36:00.042223   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:36:00.042228   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:36:00.042273   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:36:00.080921   79927 cri.go:89] found id: ""
	I0806 08:36:00.080956   79927 logs.go:276] 0 containers: []
	W0806 08:36:00.080966   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:36:00.080973   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:36:00.081033   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:36:00.123186   79927 cri.go:89] found id: ""
	I0806 08:36:00.123206   79927 logs.go:276] 0 containers: []
	W0806 08:36:00.123216   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:36:00.123223   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:36:00.123274   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:36:00.165252   79927 cri.go:89] found id: ""
	I0806 08:36:00.165281   79927 logs.go:276] 0 containers: []
	W0806 08:36:00.165289   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:36:00.165294   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:36:00.165340   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:36:00.201601   79927 cri.go:89] found id: ""
	I0806 08:36:00.201639   79927 logs.go:276] 0 containers: []
	W0806 08:36:00.201653   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:36:00.201662   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:36:00.201730   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:36:00.236933   79927 cri.go:89] found id: ""
	I0806 08:36:00.236964   79927 logs.go:276] 0 containers: []
	W0806 08:36:00.236975   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:36:00.236983   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:36:00.237043   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:36:00.273215   79927 cri.go:89] found id: ""
	I0806 08:36:00.273247   79927 logs.go:276] 0 containers: []
	W0806 08:36:00.273256   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:36:00.273261   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:36:00.273319   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:36:00.308075   79927 cri.go:89] found id: ""
	I0806 08:36:00.308105   79927 logs.go:276] 0 containers: []
	W0806 08:36:00.308116   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:36:00.308126   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:36:00.308141   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:36:00.360096   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:36:00.360146   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:36:00.374484   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:36:00.374521   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:36:00.454297   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:36:00.454318   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:36:00.454334   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:36:00.542665   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:36:00.542702   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:36:03.084297   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:36:03.098661   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:36:03.098737   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:36:03.134751   79927 cri.go:89] found id: ""
	I0806 08:36:03.134786   79927 logs.go:276] 0 containers: []
	W0806 08:36:03.134797   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:36:03.134805   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:36:03.134873   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:36:03.177412   79927 cri.go:89] found id: ""
	I0806 08:36:03.177442   79927 logs.go:276] 0 containers: []
	W0806 08:36:03.177453   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:36:03.177461   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:36:03.177516   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:36:03.213177   79927 cri.go:89] found id: ""
	I0806 08:36:03.213209   79927 logs.go:276] 0 containers: []
	W0806 08:36:03.213221   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:36:03.213229   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:36:03.213291   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:36:03.254569   79927 cri.go:89] found id: ""
	I0806 08:36:03.254594   79927 logs.go:276] 0 containers: []
	W0806 08:36:03.254603   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:36:03.254608   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:36:03.254656   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:36:03.292564   79927 cri.go:89] found id: ""
	I0806 08:36:03.292590   79927 logs.go:276] 0 containers: []
	W0806 08:36:03.292597   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:36:03.292610   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:36:03.292671   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:36:03.331572   79927 cri.go:89] found id: ""
	I0806 08:36:03.331596   79927 logs.go:276] 0 containers: []
	W0806 08:36:03.331605   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:36:03.331610   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:36:03.331658   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:36:03.366368   79927 cri.go:89] found id: ""
	I0806 08:36:03.366398   79927 logs.go:276] 0 containers: []
	W0806 08:36:03.366409   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:36:03.366417   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:36:03.366469   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:36:03.404345   79927 cri.go:89] found id: ""
	I0806 08:36:03.404387   79927 logs.go:276] 0 containers: []
	W0806 08:36:03.404399   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:36:03.404411   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:36:03.404425   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:36:03.454825   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:36:03.454864   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:36:03.469437   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:36:03.469466   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:36:03.553247   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:36:03.553274   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:36:03.553290   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:36:03.635461   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:36:03.635509   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:36:00.287469   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:02.787634   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:01.111573   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:03.613750   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:02.468084   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:04.468169   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:06.469036   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:06.175216   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:36:06.190083   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:36:06.190158   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:36:06.226428   79927 cri.go:89] found id: ""
	I0806 08:36:06.226459   79927 logs.go:276] 0 containers: []
	W0806 08:36:06.226468   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:36:06.226473   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:36:06.226529   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:36:06.263781   79927 cri.go:89] found id: ""
	I0806 08:36:06.263805   79927 logs.go:276] 0 containers: []
	W0806 08:36:06.263813   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:36:06.263819   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:36:06.263895   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:36:06.304335   79927 cri.go:89] found id: ""
	I0806 08:36:06.304386   79927 logs.go:276] 0 containers: []
	W0806 08:36:06.304397   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:36:06.304404   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:36:06.304466   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:36:06.341138   79927 cri.go:89] found id: ""
	I0806 08:36:06.341166   79927 logs.go:276] 0 containers: []
	W0806 08:36:06.341177   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:36:06.341184   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:36:06.341248   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:36:06.381547   79927 cri.go:89] found id: ""
	I0806 08:36:06.381577   79927 logs.go:276] 0 containers: []
	W0806 08:36:06.381587   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:36:06.381594   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:36:06.381662   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:36:06.417606   79927 cri.go:89] found id: ""
	I0806 08:36:06.417630   79927 logs.go:276] 0 containers: []
	W0806 08:36:06.417637   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:36:06.417643   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:36:06.417700   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:36:06.456258   79927 cri.go:89] found id: ""
	I0806 08:36:06.456287   79927 logs.go:276] 0 containers: []
	W0806 08:36:06.456298   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:36:06.456307   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:36:06.456404   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:36:06.494268   79927 cri.go:89] found id: ""
	I0806 08:36:06.494309   79927 logs.go:276] 0 containers: []
	W0806 08:36:06.494321   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:36:06.494332   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:36:06.494346   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:36:06.547670   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:36:06.547710   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:36:06.562020   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:36:06.562049   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:36:06.632691   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:36:06.632717   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:36:06.632729   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:36:06.721737   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:36:06.721780   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:36:04.787931   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:07.287463   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:06.111545   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:08.113637   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:08.968767   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:11.468847   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:09.265154   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:36:09.278664   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:36:09.278743   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:36:09.313464   79927 cri.go:89] found id: ""
	I0806 08:36:09.313494   79927 logs.go:276] 0 containers: []
	W0806 08:36:09.313506   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:36:09.313513   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:36:09.313561   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:36:09.352107   79927 cri.go:89] found id: ""
	I0806 08:36:09.352133   79927 logs.go:276] 0 containers: []
	W0806 08:36:09.352146   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:36:09.352153   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:36:09.352215   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:36:09.387547   79927 cri.go:89] found id: ""
	I0806 08:36:09.387581   79927 logs.go:276] 0 containers: []
	W0806 08:36:09.387594   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:36:09.387601   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:36:09.387684   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:36:09.422157   79927 cri.go:89] found id: ""
	I0806 08:36:09.422193   79927 logs.go:276] 0 containers: []
	W0806 08:36:09.422202   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:36:09.422207   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:36:09.422275   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:36:09.463065   79927 cri.go:89] found id: ""
	I0806 08:36:09.463106   79927 logs.go:276] 0 containers: []
	W0806 08:36:09.463122   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:36:09.463145   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:36:09.463209   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:36:09.499450   79927 cri.go:89] found id: ""
	I0806 08:36:09.499474   79927 logs.go:276] 0 containers: []
	W0806 08:36:09.499484   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:36:09.499491   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:36:09.499550   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:36:09.536411   79927 cri.go:89] found id: ""
	I0806 08:36:09.536442   79927 logs.go:276] 0 containers: []
	W0806 08:36:09.536453   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:36:09.536460   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:36:09.536533   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:36:09.572044   79927 cri.go:89] found id: ""
	I0806 08:36:09.572070   79927 logs.go:276] 0 containers: []
	W0806 08:36:09.572079   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:36:09.572087   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:36:09.572138   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:36:09.612385   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:36:09.612418   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:36:09.666790   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:36:09.666852   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:36:09.680707   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:36:09.680753   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:36:09.753446   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:36:09.753473   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:36:09.753491   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:36:12.336824   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:36:12.352545   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:36:12.352626   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:36:12.393158   79927 cri.go:89] found id: ""
	I0806 08:36:12.393188   79927 logs.go:276] 0 containers: []
	W0806 08:36:12.393199   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:36:12.393206   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:36:12.393255   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:36:12.428501   79927 cri.go:89] found id: ""
	I0806 08:36:12.428525   79927 logs.go:276] 0 containers: []
	W0806 08:36:12.428534   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:36:12.428539   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:36:12.428596   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:36:12.463033   79927 cri.go:89] found id: ""
	I0806 08:36:12.463062   79927 logs.go:276] 0 containers: []
	W0806 08:36:12.463072   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:36:12.463079   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:36:12.463136   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:36:12.500350   79927 cri.go:89] found id: ""
	I0806 08:36:12.500402   79927 logs.go:276] 0 containers: []
	W0806 08:36:12.500413   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:36:12.500421   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:36:12.500477   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:36:12.537323   79927 cri.go:89] found id: ""
	I0806 08:36:12.537350   79927 logs.go:276] 0 containers: []
	W0806 08:36:12.537360   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:36:12.537366   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:36:12.537414   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:36:12.575525   79927 cri.go:89] found id: ""
	I0806 08:36:12.575551   79927 logs.go:276] 0 containers: []
	W0806 08:36:12.575561   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:36:12.575568   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:36:12.575629   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:36:12.615375   79927 cri.go:89] found id: ""
	I0806 08:36:12.615436   79927 logs.go:276] 0 containers: []
	W0806 08:36:12.615451   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:36:12.615459   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:36:12.615531   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:36:12.655269   79927 cri.go:89] found id: ""
	I0806 08:36:12.655300   79927 logs.go:276] 0 containers: []
	W0806 08:36:12.655311   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:36:12.655324   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:36:12.655342   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:36:12.709097   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:36:12.709134   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:36:12.724330   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:36:12.724380   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:36:12.801075   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:36:12.801099   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:36:12.801111   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:36:12.885670   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:36:12.885704   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:36:09.288192   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:11.789794   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:10.613121   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:12.613812   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:13.969367   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:16.468276   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:15.427222   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:36:15.442092   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:36:15.442206   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:36:15.485634   79927 cri.go:89] found id: ""
	I0806 08:36:15.485657   79927 logs.go:276] 0 containers: []
	W0806 08:36:15.485665   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:36:15.485671   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:36:15.485715   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:36:15.520791   79927 cri.go:89] found id: ""
	I0806 08:36:15.520820   79927 logs.go:276] 0 containers: []
	W0806 08:36:15.520830   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:36:15.520837   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:36:15.520897   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:36:15.554041   79927 cri.go:89] found id: ""
	I0806 08:36:15.554065   79927 logs.go:276] 0 containers: []
	W0806 08:36:15.554073   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:36:15.554079   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:36:15.554143   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:36:15.587550   79927 cri.go:89] found id: ""
	I0806 08:36:15.587579   79927 logs.go:276] 0 containers: []
	W0806 08:36:15.587591   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:36:15.587598   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:36:15.587659   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:36:15.628657   79927 cri.go:89] found id: ""
	I0806 08:36:15.628681   79927 logs.go:276] 0 containers: []
	W0806 08:36:15.628689   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:36:15.628695   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:36:15.628750   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:36:15.665212   79927 cri.go:89] found id: ""
	I0806 08:36:15.665239   79927 logs.go:276] 0 containers: []
	W0806 08:36:15.665247   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:36:15.665252   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:36:15.665298   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:36:15.705675   79927 cri.go:89] found id: ""
	I0806 08:36:15.705701   79927 logs.go:276] 0 containers: []
	W0806 08:36:15.705709   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:36:15.705714   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:36:15.705762   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:36:15.739835   79927 cri.go:89] found id: ""
	I0806 08:36:15.739862   79927 logs.go:276] 0 containers: []
	W0806 08:36:15.739871   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:36:15.739879   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:36:15.739892   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:36:15.795719   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:36:15.795749   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:36:15.809940   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:36:15.809967   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:36:15.888710   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:36:15.888737   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:36:15.888753   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:36:15.972728   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:36:15.972772   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:36:18.513565   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:36:18.527480   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:36:18.527538   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:36:18.562783   79927 cri.go:89] found id: ""
	I0806 08:36:18.562812   79927 logs.go:276] 0 containers: []
	W0806 08:36:18.562824   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:36:18.562832   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:36:18.562896   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:36:18.600613   79927 cri.go:89] found id: ""
	I0806 08:36:18.600645   79927 logs.go:276] 0 containers: []
	W0806 08:36:18.600659   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:36:18.600668   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:36:18.600735   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:36:18.638074   79927 cri.go:89] found id: ""
	I0806 08:36:18.638104   79927 logs.go:276] 0 containers: []
	W0806 08:36:18.638114   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:36:18.638119   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:36:18.638181   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:36:18.679795   79927 cri.go:89] found id: ""
	I0806 08:36:18.679825   79927 logs.go:276] 0 containers: []
	W0806 08:36:18.679836   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:36:18.679844   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:36:18.679907   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:36:18.715535   79927 cri.go:89] found id: ""
	I0806 08:36:18.715568   79927 logs.go:276] 0 containers: []
	W0806 08:36:18.715577   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:36:18.715585   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:36:18.715650   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:36:18.752705   79927 cri.go:89] found id: ""
	I0806 08:36:18.752736   79927 logs.go:276] 0 containers: []
	W0806 08:36:18.752748   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:36:18.752759   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:36:18.752821   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:36:14.288048   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:16.288422   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:18.788928   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:15.112857   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:17.114585   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:18.468697   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:20.967985   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:18.791653   79927 cri.go:89] found id: ""
	I0806 08:36:18.791678   79927 logs.go:276] 0 containers: []
	W0806 08:36:18.791689   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:36:18.791696   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:36:18.791763   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:36:18.828328   79927 cri.go:89] found id: ""
	I0806 08:36:18.828357   79927 logs.go:276] 0 containers: []
	W0806 08:36:18.828378   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:36:18.828389   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:36:18.828408   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:36:18.888111   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:36:18.888148   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:36:18.902718   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:36:18.902750   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:36:18.986327   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:36:18.986352   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:36:18.986365   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:36:19.072858   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:36:19.072899   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:36:21.615497   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:36:21.631434   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:36:21.631492   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:36:21.671446   79927 cri.go:89] found id: ""
	I0806 08:36:21.671471   79927 logs.go:276] 0 containers: []
	W0806 08:36:21.671479   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:36:21.671484   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:36:21.671540   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:36:21.707598   79927 cri.go:89] found id: ""
	I0806 08:36:21.707624   79927 logs.go:276] 0 containers: []
	W0806 08:36:21.707634   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:36:21.707640   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:36:21.707686   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:36:21.753647   79927 cri.go:89] found id: ""
	I0806 08:36:21.753675   79927 logs.go:276] 0 containers: []
	W0806 08:36:21.753686   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:36:21.753692   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:36:21.753749   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:36:21.795284   79927 cri.go:89] found id: ""
	I0806 08:36:21.795311   79927 logs.go:276] 0 containers: []
	W0806 08:36:21.795321   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:36:21.795329   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:36:21.795388   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:36:21.832436   79927 cri.go:89] found id: ""
	I0806 08:36:21.832466   79927 logs.go:276] 0 containers: []
	W0806 08:36:21.832476   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:36:21.832483   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:36:21.832550   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:36:21.870295   79927 cri.go:89] found id: ""
	I0806 08:36:21.870327   79927 logs.go:276] 0 containers: []
	W0806 08:36:21.870338   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:36:21.870345   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:36:21.870412   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:36:21.908210   79927 cri.go:89] found id: ""
	I0806 08:36:21.908242   79927 logs.go:276] 0 containers: []
	W0806 08:36:21.908253   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:36:21.908261   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:36:21.908322   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:36:21.944055   79927 cri.go:89] found id: ""
	I0806 08:36:21.944084   79927 logs.go:276] 0 containers: []
	W0806 08:36:21.944095   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:36:21.944104   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:36:21.944125   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:36:22.000467   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:36:22.000506   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:36:22.014590   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:36:22.014629   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:36:22.083873   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:36:22.083907   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:36:22.083927   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:36:22.175993   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:36:22.176056   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:36:21.288435   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:23.788424   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:19.612021   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:21.612092   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:23.612557   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:22.968059   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:24.968111   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:24.720476   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:36:24.734469   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:36:24.734551   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:36:24.770663   79927 cri.go:89] found id: ""
	I0806 08:36:24.770692   79927 logs.go:276] 0 containers: []
	W0806 08:36:24.770705   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:36:24.770712   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:36:24.770768   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:36:24.807861   79927 cri.go:89] found id: ""
	I0806 08:36:24.807889   79927 logs.go:276] 0 containers: []
	W0806 08:36:24.807897   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:36:24.807908   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:36:24.807957   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:36:24.849827   79927 cri.go:89] found id: ""
	I0806 08:36:24.849857   79927 logs.go:276] 0 containers: []
	W0806 08:36:24.849869   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:36:24.849877   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:36:24.849937   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:36:24.885260   79927 cri.go:89] found id: ""
	I0806 08:36:24.885291   79927 logs.go:276] 0 containers: []
	W0806 08:36:24.885303   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:36:24.885315   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:36:24.885381   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:36:24.919918   79927 cri.go:89] found id: ""
	I0806 08:36:24.919949   79927 logs.go:276] 0 containers: []
	W0806 08:36:24.919957   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:36:24.919964   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:36:24.920026   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:36:24.956010   79927 cri.go:89] found id: ""
	I0806 08:36:24.956036   79927 logs.go:276] 0 containers: []
	W0806 08:36:24.956045   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:36:24.956050   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:36:24.956109   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:36:24.993126   79927 cri.go:89] found id: ""
	I0806 08:36:24.993154   79927 logs.go:276] 0 containers: []
	W0806 08:36:24.993165   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:36:24.993173   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:36:24.993240   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:36:25.026732   79927 cri.go:89] found id: ""
	I0806 08:36:25.026756   79927 logs.go:276] 0 containers: []
	W0806 08:36:25.026763   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:36:25.026771   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:36:25.026781   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:36:25.064076   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:36:25.064113   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:36:25.118820   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:36:25.118856   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:36:25.133640   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:36:25.133671   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:36:25.201686   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:36:25.201714   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:36:25.201729   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:36:27.801947   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:36:27.815472   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:36:27.815539   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:36:27.851073   79927 cri.go:89] found id: ""
	I0806 08:36:27.851103   79927 logs.go:276] 0 containers: []
	W0806 08:36:27.851119   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:36:27.851127   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:36:27.851184   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:36:27.886121   79927 cri.go:89] found id: ""
	I0806 08:36:27.886151   79927 logs.go:276] 0 containers: []
	W0806 08:36:27.886162   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:36:27.886169   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:36:27.886231   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:36:27.920950   79927 cri.go:89] found id: ""
	I0806 08:36:27.920979   79927 logs.go:276] 0 containers: []
	W0806 08:36:27.920990   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:36:27.920997   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:36:27.921055   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:36:27.956529   79927 cri.go:89] found id: ""
	I0806 08:36:27.956562   79927 logs.go:276] 0 containers: []
	W0806 08:36:27.956574   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:36:27.956581   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:36:27.956631   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:36:27.992272   79927 cri.go:89] found id: ""
	I0806 08:36:27.992298   79927 logs.go:276] 0 containers: []
	W0806 08:36:27.992306   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:36:27.992322   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:36:27.992405   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:36:28.027242   79927 cri.go:89] found id: ""
	I0806 08:36:28.027269   79927 logs.go:276] 0 containers: []
	W0806 08:36:28.027277   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:36:28.027283   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:36:28.027335   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:36:28.063043   79927 cri.go:89] found id: ""
	I0806 08:36:28.063068   79927 logs.go:276] 0 containers: []
	W0806 08:36:28.063076   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:36:28.063083   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:36:28.063148   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:36:28.099743   79927 cri.go:89] found id: ""
	I0806 08:36:28.099769   79927 logs.go:276] 0 containers: []
	W0806 08:36:28.099780   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:36:28.099791   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:36:28.099806   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:36:28.151354   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:36:28.151392   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:36:28.166704   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:36:28.166731   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:36:28.242621   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:36:28.242642   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:36:28.242656   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:36:28.324352   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:36:28.324405   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:36:25.789745   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:28.290656   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:26.112264   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:28.112769   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:27.466228   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:29.469998   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:30.869063   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:36:30.882309   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:36:30.882377   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:36:30.916264   79927 cri.go:89] found id: ""
	I0806 08:36:30.916296   79927 logs.go:276] 0 containers: []
	W0806 08:36:30.916306   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:36:30.916313   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:36:30.916390   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:36:30.951995   79927 cri.go:89] found id: ""
	I0806 08:36:30.952021   79927 logs.go:276] 0 containers: []
	W0806 08:36:30.952028   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:36:30.952034   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:36:30.952111   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:36:31.005229   79927 cri.go:89] found id: ""
	I0806 08:36:31.005270   79927 logs.go:276] 0 containers: []
	W0806 08:36:31.005281   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:36:31.005288   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:36:31.005350   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:36:31.040439   79927 cri.go:89] found id: ""
	I0806 08:36:31.040465   79927 logs.go:276] 0 containers: []
	W0806 08:36:31.040476   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:36:31.040482   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:36:31.040534   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:36:31.078880   79927 cri.go:89] found id: ""
	I0806 08:36:31.078910   79927 logs.go:276] 0 containers: []
	W0806 08:36:31.078920   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:36:31.078934   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:36:31.078996   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:36:31.115520   79927 cri.go:89] found id: ""
	I0806 08:36:31.115551   79927 logs.go:276] 0 containers: []
	W0806 08:36:31.115562   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:36:31.115569   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:36:31.115615   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:36:31.150450   79927 cri.go:89] found id: ""
	I0806 08:36:31.150483   79927 logs.go:276] 0 containers: []
	W0806 08:36:31.150497   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:36:31.150506   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:36:31.150565   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:36:31.187449   79927 cri.go:89] found id: ""
	I0806 08:36:31.187476   79927 logs.go:276] 0 containers: []
	W0806 08:36:31.187485   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:36:31.187493   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:36:31.187507   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:36:31.246232   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:36:31.246272   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:36:31.260691   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:36:31.260720   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:36:31.342356   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:36:31.342383   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:36:31.342401   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:36:31.426820   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:36:31.426865   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:36:30.787370   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:32.787711   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:30.611466   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:32.611550   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:31.968048   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:34.469062   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:33.974768   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:36:33.988377   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:36:33.988442   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:36:34.023284   79927 cri.go:89] found id: ""
	I0806 08:36:34.023316   79927 logs.go:276] 0 containers: []
	W0806 08:36:34.023328   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:36:34.023336   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:36:34.023397   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:36:34.058633   79927 cri.go:89] found id: ""
	I0806 08:36:34.058676   79927 logs.go:276] 0 containers: []
	W0806 08:36:34.058695   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:36:34.058704   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:36:34.058767   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:36:34.098343   79927 cri.go:89] found id: ""
	I0806 08:36:34.098379   79927 logs.go:276] 0 containers: []
	W0806 08:36:34.098390   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:36:34.098398   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:36:34.098471   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:36:34.134638   79927 cri.go:89] found id: ""
	I0806 08:36:34.134667   79927 logs.go:276] 0 containers: []
	W0806 08:36:34.134677   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:36:34.134683   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:36:34.134729   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:36:34.170118   79927 cri.go:89] found id: ""
	I0806 08:36:34.170144   79927 logs.go:276] 0 containers: []
	W0806 08:36:34.170153   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:36:34.170159   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:36:34.170207   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:36:34.204789   79927 cri.go:89] found id: ""
	I0806 08:36:34.204815   79927 logs.go:276] 0 containers: []
	W0806 08:36:34.204823   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:36:34.204829   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:36:34.204877   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:36:34.245170   79927 cri.go:89] found id: ""
	I0806 08:36:34.245194   79927 logs.go:276] 0 containers: []
	W0806 08:36:34.245203   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:36:34.245208   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:36:34.245264   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:36:34.280026   79927 cri.go:89] found id: ""
	I0806 08:36:34.280063   79927 logs.go:276] 0 containers: []
	W0806 08:36:34.280076   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:36:34.280086   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:36:34.280101   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:36:34.335363   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:36:34.335408   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:36:34.350201   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:36:34.350226   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:36:34.425205   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:36:34.425231   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:36:34.425244   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:36:34.506385   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:36:34.506419   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:36:37.047263   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:36:37.060179   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:36:37.060254   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:36:37.093168   79927 cri.go:89] found id: ""
	I0806 08:36:37.093201   79927 logs.go:276] 0 containers: []
	W0806 08:36:37.093209   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:36:37.093216   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:36:37.093264   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:36:37.131315   79927 cri.go:89] found id: ""
	I0806 08:36:37.131343   79927 logs.go:276] 0 containers: []
	W0806 08:36:37.131353   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:36:37.131360   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:36:37.131426   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:36:37.169799   79927 cri.go:89] found id: ""
	I0806 08:36:37.169829   79927 logs.go:276] 0 containers: []
	W0806 08:36:37.169839   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:36:37.169845   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:36:37.169898   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:36:37.205852   79927 cri.go:89] found id: ""
	I0806 08:36:37.205878   79927 logs.go:276] 0 containers: []
	W0806 08:36:37.205885   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:36:37.205891   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:36:37.205938   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:36:37.242469   79927 cri.go:89] found id: ""
	I0806 08:36:37.242493   79927 logs.go:276] 0 containers: []
	W0806 08:36:37.242501   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:36:37.242507   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:36:37.242554   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:36:37.278515   79927 cri.go:89] found id: ""
	I0806 08:36:37.278545   79927 logs.go:276] 0 containers: []
	W0806 08:36:37.278556   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:36:37.278563   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:36:37.278627   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:36:37.316157   79927 cri.go:89] found id: ""
	I0806 08:36:37.316181   79927 logs.go:276] 0 containers: []
	W0806 08:36:37.316189   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:36:37.316195   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:36:37.316249   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:36:37.353752   79927 cri.go:89] found id: ""
	I0806 08:36:37.353778   79927 logs.go:276] 0 containers: []
	W0806 08:36:37.353785   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:36:37.353796   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:36:37.353807   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:36:37.439435   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:36:37.439471   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:36:37.488866   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:36:37.488903   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:36:37.540031   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:36:37.540072   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:36:37.556109   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:36:37.556140   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:36:37.629023   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:36:35.288565   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:37.788336   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:35.113182   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:37.615523   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:36.967665   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:38.973884   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:41.467445   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:40.129579   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:36:40.143971   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:36:40.144050   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:36:40.180490   79927 cri.go:89] found id: ""
	I0806 08:36:40.180515   79927 logs.go:276] 0 containers: []
	W0806 08:36:40.180524   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:36:40.180530   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:36:40.180586   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:36:40.219812   79927 cri.go:89] found id: ""
	I0806 08:36:40.219860   79927 logs.go:276] 0 containers: []
	W0806 08:36:40.219869   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:36:40.219877   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:36:40.219939   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:36:40.256048   79927 cri.go:89] found id: ""
	I0806 08:36:40.256076   79927 logs.go:276] 0 containers: []
	W0806 08:36:40.256088   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:36:40.256106   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:36:40.256166   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:36:40.294693   79927 cri.go:89] found id: ""
	I0806 08:36:40.294720   79927 logs.go:276] 0 containers: []
	W0806 08:36:40.294729   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:36:40.294734   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:36:40.294792   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:36:40.331246   79927 cri.go:89] found id: ""
	I0806 08:36:40.331276   79927 logs.go:276] 0 containers: []
	W0806 08:36:40.331287   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:36:40.331295   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:36:40.331351   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:36:40.373039   79927 cri.go:89] found id: ""
	I0806 08:36:40.373068   79927 logs.go:276] 0 containers: []
	W0806 08:36:40.373095   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:36:40.373103   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:36:40.373163   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:36:40.408137   79927 cri.go:89] found id: ""
	I0806 08:36:40.408167   79927 logs.go:276] 0 containers: []
	W0806 08:36:40.408177   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:36:40.408184   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:36:40.408249   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:36:40.442996   79927 cri.go:89] found id: ""
	I0806 08:36:40.443035   79927 logs.go:276] 0 containers: []
	W0806 08:36:40.443043   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:36:40.443051   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:36:40.443063   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:36:40.499559   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:36:40.499599   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:36:40.514687   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:36:40.514716   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:36:40.586034   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:36:40.586059   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:36:40.586071   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:36:40.669756   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:36:40.669796   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:36:43.213555   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:36:43.229683   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:36:43.229751   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:36:43.267107   79927 cri.go:89] found id: ""
	I0806 08:36:43.267140   79927 logs.go:276] 0 containers: []
	W0806 08:36:43.267153   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:36:43.267161   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:36:43.267235   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:36:43.303405   79927 cri.go:89] found id: ""
	I0806 08:36:43.303435   79927 logs.go:276] 0 containers: []
	W0806 08:36:43.303443   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:36:43.303449   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:36:43.303509   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:36:43.338583   79927 cri.go:89] found id: ""
	I0806 08:36:43.338610   79927 logs.go:276] 0 containers: []
	W0806 08:36:43.338619   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:36:43.338628   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:36:43.338692   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:36:43.377413   79927 cri.go:89] found id: ""
	I0806 08:36:43.377441   79927 logs.go:276] 0 containers: []
	W0806 08:36:43.377449   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:36:43.377454   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:36:43.377501   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:36:43.413036   79927 cri.go:89] found id: ""
	I0806 08:36:43.413133   79927 logs.go:276] 0 containers: []
	W0806 08:36:43.413152   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:36:43.413160   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:36:43.413221   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:36:43.450216   79927 cri.go:89] found id: ""
	I0806 08:36:43.450245   79927 logs.go:276] 0 containers: []
	W0806 08:36:43.450253   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:36:43.450259   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:36:43.450305   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:36:43.485811   79927 cri.go:89] found id: ""
	I0806 08:36:43.485856   79927 logs.go:276] 0 containers: []
	W0806 08:36:43.485864   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:36:43.485870   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:36:43.485919   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:36:43.521586   79927 cri.go:89] found id: ""
	I0806 08:36:43.521617   79927 logs.go:276] 0 containers: []
	W0806 08:36:43.521628   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:36:43.521650   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:36:43.521673   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:36:43.561842   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:36:43.561889   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:36:43.617652   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:36:43.617683   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:36:43.631239   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:36:43.631267   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:36:43.707297   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:36:43.707317   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:36:43.707330   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:36:39.789019   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:42.287817   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:40.111420   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:42.112503   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:43.468602   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:45.974699   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:46.287309   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:36:46.301917   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:36:46.301991   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:36:46.335040   79927 cri.go:89] found id: ""
	I0806 08:36:46.335069   79927 logs.go:276] 0 containers: []
	W0806 08:36:46.335080   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:36:46.335088   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:36:46.335177   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:36:46.371902   79927 cri.go:89] found id: ""
	I0806 08:36:46.371930   79927 logs.go:276] 0 containers: []
	W0806 08:36:46.371939   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:36:46.371944   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:36:46.372000   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:36:46.411945   79927 cri.go:89] found id: ""
	I0806 08:36:46.412119   79927 logs.go:276] 0 containers: []
	W0806 08:36:46.412134   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:36:46.412145   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:36:46.412248   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:36:46.448578   79927 cri.go:89] found id: ""
	I0806 08:36:46.448612   79927 logs.go:276] 0 containers: []
	W0806 08:36:46.448624   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:36:46.448631   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:36:46.448691   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:36:46.492231   79927 cri.go:89] found id: ""
	I0806 08:36:46.492263   79927 logs.go:276] 0 containers: []
	W0806 08:36:46.492276   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:36:46.492283   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:36:46.492338   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:36:46.538502   79927 cri.go:89] found id: ""
	I0806 08:36:46.538528   79927 logs.go:276] 0 containers: []
	W0806 08:36:46.538547   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:36:46.538555   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:36:46.538610   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:36:46.577750   79927 cri.go:89] found id: ""
	I0806 08:36:46.577778   79927 logs.go:276] 0 containers: []
	W0806 08:36:46.577785   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:36:46.577790   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:36:46.577872   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:36:46.611717   79927 cri.go:89] found id: ""
	I0806 08:36:46.611742   79927 logs.go:276] 0 containers: []
	W0806 08:36:46.611750   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:36:46.611758   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:36:46.611770   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:36:46.629144   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:36:46.629178   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:36:46.714703   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:36:46.714723   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:36:46.714736   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:36:46.798978   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:36:46.799025   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:36:46.837373   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:36:46.837407   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:36:44.288387   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:46.288470   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:48.788762   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:44.611739   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:46.615083   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:49.113041   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:48.469024   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:50.967906   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:49.393231   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:36:49.407372   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:36:49.407436   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:36:49.444210   79927 cri.go:89] found id: ""
	I0806 08:36:49.444240   79927 logs.go:276] 0 containers: []
	W0806 08:36:49.444249   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:36:49.444254   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:36:49.444313   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:36:49.484165   79927 cri.go:89] found id: ""
	I0806 08:36:49.484190   79927 logs.go:276] 0 containers: []
	W0806 08:36:49.484200   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:36:49.484206   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:36:49.484278   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:36:49.519880   79927 cri.go:89] found id: ""
	I0806 08:36:49.519921   79927 logs.go:276] 0 containers: []
	W0806 08:36:49.519932   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:36:49.519939   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:36:49.519999   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:36:49.554954   79927 cri.go:89] found id: ""
	I0806 08:36:49.554979   79927 logs.go:276] 0 containers: []
	W0806 08:36:49.554988   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:36:49.554995   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:36:49.555045   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:36:49.593332   79927 cri.go:89] found id: ""
	I0806 08:36:49.593359   79927 logs.go:276] 0 containers: []
	W0806 08:36:49.593367   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:36:49.593373   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:36:49.593432   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:36:49.630736   79927 cri.go:89] found id: ""
	I0806 08:36:49.630766   79927 logs.go:276] 0 containers: []
	W0806 08:36:49.630774   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:36:49.630780   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:36:49.630839   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:36:49.667460   79927 cri.go:89] found id: ""
	I0806 08:36:49.667487   79927 logs.go:276] 0 containers: []
	W0806 08:36:49.667499   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:36:49.667505   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:36:49.667566   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:36:49.700161   79927 cri.go:89] found id: ""
	I0806 08:36:49.700190   79927 logs.go:276] 0 containers: []
	W0806 08:36:49.700199   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:36:49.700207   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:36:49.700218   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:36:49.786656   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:36:49.786704   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:36:49.825711   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:36:49.825745   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:36:49.879696   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:36:49.879731   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:36:49.893336   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:36:49.893369   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:36:49.969462   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:36:52.469939   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:36:52.483906   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:36:52.483974   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:36:52.523575   79927 cri.go:89] found id: ""
	I0806 08:36:52.523603   79927 logs.go:276] 0 containers: []
	W0806 08:36:52.523615   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:36:52.523622   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:36:52.523684   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:36:52.561590   79927 cri.go:89] found id: ""
	I0806 08:36:52.561620   79927 logs.go:276] 0 containers: []
	W0806 08:36:52.561630   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:36:52.561639   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:36:52.561698   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:36:52.598204   79927 cri.go:89] found id: ""
	I0806 08:36:52.598227   79927 logs.go:276] 0 containers: []
	W0806 08:36:52.598236   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:36:52.598241   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:36:52.598302   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:36:52.636993   79927 cri.go:89] found id: ""
	I0806 08:36:52.637019   79927 logs.go:276] 0 containers: []
	W0806 08:36:52.637027   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:36:52.637034   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:36:52.637085   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:36:52.675090   79927 cri.go:89] found id: ""
	I0806 08:36:52.675118   79927 logs.go:276] 0 containers: []
	W0806 08:36:52.675130   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:36:52.675138   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:36:52.675199   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:36:52.711045   79927 cri.go:89] found id: ""
	I0806 08:36:52.711072   79927 logs.go:276] 0 containers: []
	W0806 08:36:52.711081   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:36:52.711086   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:36:52.711151   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:36:52.749001   79927 cri.go:89] found id: ""
	I0806 08:36:52.749029   79927 logs.go:276] 0 containers: []
	W0806 08:36:52.749039   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:36:52.749046   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:36:52.749113   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:36:52.783692   79927 cri.go:89] found id: ""
	I0806 08:36:52.783720   79927 logs.go:276] 0 containers: []
	W0806 08:36:52.783737   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:36:52.783748   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:36:52.783762   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:36:52.863240   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:36:52.863281   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:36:52.902471   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:36:52.902496   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:36:52.953299   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:36:52.953341   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:36:52.968786   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:36:52.968818   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:36:53.045904   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:36:51.288280   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:53.289276   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:51.612945   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:54.112574   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:52.968486   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:55.468458   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:55.546407   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:36:55.560166   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:36:55.560243   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:36:55.592961   79927 cri.go:89] found id: ""
	I0806 08:36:55.592992   79927 logs.go:276] 0 containers: []
	W0806 08:36:55.593004   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:36:55.593012   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:36:55.593075   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:36:55.629546   79927 cri.go:89] found id: ""
	I0806 08:36:55.629568   79927 logs.go:276] 0 containers: []
	W0806 08:36:55.629578   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:36:55.629585   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:36:55.629639   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:36:55.668677   79927 cri.go:89] found id: ""
	I0806 08:36:55.668707   79927 logs.go:276] 0 containers: []
	W0806 08:36:55.668718   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:36:55.668726   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:36:55.668784   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:36:55.704141   79927 cri.go:89] found id: ""
	I0806 08:36:55.704173   79927 logs.go:276] 0 containers: []
	W0806 08:36:55.704183   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:36:55.704188   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:36:55.704257   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:36:55.740125   79927 cri.go:89] found id: ""
	I0806 08:36:55.740152   79927 logs.go:276] 0 containers: []
	W0806 08:36:55.740160   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:36:55.740166   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:36:55.740210   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:36:55.773709   79927 cri.go:89] found id: ""
	I0806 08:36:55.773739   79927 logs.go:276] 0 containers: []
	W0806 08:36:55.773750   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:36:55.773759   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:36:55.773827   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:36:55.813913   79927 cri.go:89] found id: ""
	I0806 08:36:55.813940   79927 logs.go:276] 0 containers: []
	W0806 08:36:55.813953   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:36:55.813959   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:36:55.814004   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:36:55.850807   79927 cri.go:89] found id: ""
	I0806 08:36:55.850841   79927 logs.go:276] 0 containers: []
	W0806 08:36:55.850853   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:36:55.850864   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:36:55.850878   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:36:55.908202   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:36:55.908247   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:36:55.922278   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:36:55.922311   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:36:55.996448   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:36:55.996476   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:36:55.996489   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:36:56.076612   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:36:56.076647   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:36:58.625610   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:36:58.640939   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:36:58.641008   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:36:58.675638   79927 cri.go:89] found id: ""
	I0806 08:36:58.675666   79927 logs.go:276] 0 containers: []
	W0806 08:36:58.675674   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:36:58.675680   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:36:58.675728   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:36:58.712020   79927 cri.go:89] found id: ""
	I0806 08:36:58.712055   79927 logs.go:276] 0 containers: []
	W0806 08:36:58.712065   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:36:58.712072   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:36:58.712126   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:36:58.749683   79927 cri.go:89] found id: ""
	I0806 08:36:58.749712   79927 logs.go:276] 0 containers: []
	W0806 08:36:58.749724   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:36:58.749731   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:36:58.749792   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:36:55.788883   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:58.288825   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:56.613051   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:59.113176   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:57.968553   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:59.968580   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:58.790005   79927 cri.go:89] found id: ""
	I0806 08:36:58.790026   79927 logs.go:276] 0 containers: []
	W0806 08:36:58.790033   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:36:58.790039   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:36:58.790089   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:36:58.828781   79927 cri.go:89] found id: ""
	I0806 08:36:58.828806   79927 logs.go:276] 0 containers: []
	W0806 08:36:58.828815   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:36:58.828821   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:36:58.828873   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:36:58.867039   79927 cri.go:89] found id: ""
	I0806 08:36:58.867067   79927 logs.go:276] 0 containers: []
	W0806 08:36:58.867075   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:36:58.867081   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:36:58.867125   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:36:58.901362   79927 cri.go:89] found id: ""
	I0806 08:36:58.901396   79927 logs.go:276] 0 containers: []
	W0806 08:36:58.901415   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:36:58.901423   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:36:58.901472   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:36:58.940351   79927 cri.go:89] found id: ""
	I0806 08:36:58.940388   79927 logs.go:276] 0 containers: []
	W0806 08:36:58.940399   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:36:58.940410   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:36:58.940425   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:36:58.995737   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:36:58.995776   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:36:59.009191   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:36:59.009222   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:36:59.085109   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:36:59.085133   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:36:59.085150   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:36:59.164778   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:36:59.164819   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:37:01.712282   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:37:01.725701   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:37:01.725759   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:37:01.762005   79927 cri.go:89] found id: ""
	I0806 08:37:01.762029   79927 logs.go:276] 0 containers: []
	W0806 08:37:01.762038   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:37:01.762044   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:37:01.762092   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:37:01.799277   79927 cri.go:89] found id: ""
	I0806 08:37:01.799304   79927 logs.go:276] 0 containers: []
	W0806 08:37:01.799316   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:37:01.799322   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:37:01.799391   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:37:01.845100   79927 cri.go:89] found id: ""
	I0806 08:37:01.845133   79927 logs.go:276] 0 containers: []
	W0806 08:37:01.845147   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:37:01.845154   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:37:01.845214   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:37:01.882019   79927 cri.go:89] found id: ""
	I0806 08:37:01.882050   79927 logs.go:276] 0 containers: []
	W0806 08:37:01.882061   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:37:01.882069   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:37:01.882160   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:37:01.919224   79927 cri.go:89] found id: ""
	I0806 08:37:01.919251   79927 logs.go:276] 0 containers: []
	W0806 08:37:01.919262   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:37:01.919269   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:37:01.919327   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:37:01.957300   79927 cri.go:89] found id: ""
	I0806 08:37:01.957327   79927 logs.go:276] 0 containers: []
	W0806 08:37:01.957335   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:37:01.957342   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:37:01.957389   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:37:02.000717   79927 cri.go:89] found id: ""
	I0806 08:37:02.000740   79927 logs.go:276] 0 containers: []
	W0806 08:37:02.000749   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:37:02.000754   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:37:02.000801   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:37:02.034710   79927 cri.go:89] found id: ""
	I0806 08:37:02.034739   79927 logs.go:276] 0 containers: []
	W0806 08:37:02.034753   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:37:02.034763   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:37:02.034776   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:37:02.090553   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:37:02.090593   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:37:02.104247   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:37:02.104276   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:37:02.181990   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:37:02.182015   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:37:02.182030   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:37:02.268127   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:37:02.268167   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:37:00.789157   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:03.288510   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:01.611956   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:04.111772   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:02.467718   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:04.968898   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:04.814144   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:37:04.829120   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:37:04.829194   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:37:04.873503   79927 cri.go:89] found id: ""
	I0806 08:37:04.873537   79927 logs.go:276] 0 containers: []
	W0806 08:37:04.873548   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:37:04.873557   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:37:04.873614   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:37:04.913878   79927 cri.go:89] found id: ""
	I0806 08:37:04.913910   79927 logs.go:276] 0 containers: []
	W0806 08:37:04.913918   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:37:04.913924   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:37:04.913984   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:37:04.950273   79927 cri.go:89] found id: ""
	I0806 08:37:04.950303   79927 logs.go:276] 0 containers: []
	W0806 08:37:04.950316   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:37:04.950323   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:37:04.950388   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:37:04.985108   79927 cri.go:89] found id: ""
	I0806 08:37:04.985135   79927 logs.go:276] 0 containers: []
	W0806 08:37:04.985145   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:37:04.985151   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:37:04.985215   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:37:05.021033   79927 cri.go:89] found id: ""
	I0806 08:37:05.021056   79927 logs.go:276] 0 containers: []
	W0806 08:37:05.021064   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:37:05.021069   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:37:05.021120   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:37:05.058416   79927 cri.go:89] found id: ""
	I0806 08:37:05.058440   79927 logs.go:276] 0 containers: []
	W0806 08:37:05.058448   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:37:05.058454   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:37:05.058510   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:37:05.093709   79927 cri.go:89] found id: ""
	I0806 08:37:05.093737   79927 logs.go:276] 0 containers: []
	W0806 08:37:05.093745   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:37:05.093751   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:37:05.093815   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:37:05.129391   79927 cri.go:89] found id: ""
	I0806 08:37:05.129416   79927 logs.go:276] 0 containers: []
	W0806 08:37:05.129424   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:37:05.129432   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:37:05.129444   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:37:05.172149   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:37:05.172176   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:37:05.224745   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:37:05.224781   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:37:05.238967   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:37:05.238997   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:37:05.314834   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:37:05.314868   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:37:05.314883   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:37:07.903732   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:37:07.918551   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:37:07.918619   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:37:07.966068   79927 cri.go:89] found id: ""
	I0806 08:37:07.966093   79927 logs.go:276] 0 containers: []
	W0806 08:37:07.966104   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:37:07.966112   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:37:07.966180   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:37:08.001557   79927 cri.go:89] found id: ""
	I0806 08:37:08.001586   79927 logs.go:276] 0 containers: []
	W0806 08:37:08.001595   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:37:08.001602   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:37:08.001663   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:37:08.034214   79927 cri.go:89] found id: ""
	I0806 08:37:08.034239   79927 logs.go:276] 0 containers: []
	W0806 08:37:08.034249   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:37:08.034256   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:37:08.034316   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:37:08.070645   79927 cri.go:89] found id: ""
	I0806 08:37:08.070673   79927 logs.go:276] 0 containers: []
	W0806 08:37:08.070683   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:37:08.070689   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:37:08.070761   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:37:08.110537   79927 cri.go:89] found id: ""
	I0806 08:37:08.110565   79927 logs.go:276] 0 containers: []
	W0806 08:37:08.110574   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:37:08.110581   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:37:08.110635   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:37:08.146179   79927 cri.go:89] found id: ""
	I0806 08:37:08.146203   79927 logs.go:276] 0 containers: []
	W0806 08:37:08.146211   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:37:08.146224   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:37:08.146272   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:37:08.182924   79927 cri.go:89] found id: ""
	I0806 08:37:08.182953   79927 logs.go:276] 0 containers: []
	W0806 08:37:08.182962   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:37:08.182968   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:37:08.183024   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:37:08.219547   79927 cri.go:89] found id: ""
	I0806 08:37:08.219570   79927 logs.go:276] 0 containers: []
	W0806 08:37:08.219576   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:37:08.219585   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:37:08.219599   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:37:08.298919   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:37:08.298956   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:37:08.338475   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:37:08.338512   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:37:08.390859   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:37:08.390898   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:37:08.404854   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:37:08.404891   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:37:08.473553   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:37:05.788534   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:08.288635   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:06.112204   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:08.612960   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:06.969444   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:09.468423   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:11.469280   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:10.973824   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:37:10.987720   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:37:10.987777   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:37:11.022637   79927 cri.go:89] found id: ""
	I0806 08:37:11.022667   79927 logs.go:276] 0 containers: []
	W0806 08:37:11.022677   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:37:11.022683   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:37:11.022739   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:37:11.062625   79927 cri.go:89] found id: ""
	I0806 08:37:11.062655   79927 logs.go:276] 0 containers: []
	W0806 08:37:11.062673   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:37:11.062681   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:37:11.062736   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:37:11.099906   79927 cri.go:89] found id: ""
	I0806 08:37:11.099937   79927 logs.go:276] 0 containers: []
	W0806 08:37:11.099945   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:37:11.099951   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:37:11.100019   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:37:11.139461   79927 cri.go:89] found id: ""
	I0806 08:37:11.139485   79927 logs.go:276] 0 containers: []
	W0806 08:37:11.139493   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:37:11.139498   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:37:11.139541   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:37:11.177916   79927 cri.go:89] found id: ""
	I0806 08:37:11.177940   79927 logs.go:276] 0 containers: []
	W0806 08:37:11.177948   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:37:11.177954   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:37:11.178007   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:37:11.216335   79927 cri.go:89] found id: ""
	I0806 08:37:11.216377   79927 logs.go:276] 0 containers: []
	W0806 08:37:11.216389   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:37:11.216398   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:37:11.216453   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:37:11.278457   79927 cri.go:89] found id: ""
	I0806 08:37:11.278479   79927 logs.go:276] 0 containers: []
	W0806 08:37:11.278487   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:37:11.278493   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:37:11.278541   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:37:11.316325   79927 cri.go:89] found id: ""
	I0806 08:37:11.316348   79927 logs.go:276] 0 containers: []
	W0806 08:37:11.316356   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:37:11.316381   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:37:11.316397   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:37:11.357435   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:37:11.357469   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:37:11.413151   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:37:11.413184   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:37:11.427149   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:37:11.427182   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:37:11.500691   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:37:11.500723   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:37:11.500738   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:37:10.787717   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:12.788263   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:11.113471   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:13.612216   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:13.968959   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:16.468099   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:14.077817   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:37:14.090888   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:37:14.090960   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:37:14.126454   79927 cri.go:89] found id: ""
	I0806 08:37:14.126476   79927 logs.go:276] 0 containers: []
	W0806 08:37:14.126484   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:37:14.126489   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:37:14.126535   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:37:14.163910   79927 cri.go:89] found id: ""
	I0806 08:37:14.163939   79927 logs.go:276] 0 containers: []
	W0806 08:37:14.163950   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:37:14.163957   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:37:14.164025   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:37:14.202918   79927 cri.go:89] found id: ""
	I0806 08:37:14.202942   79927 logs.go:276] 0 containers: []
	W0806 08:37:14.202950   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:37:14.202956   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:37:14.202999   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:37:14.240997   79927 cri.go:89] found id: ""
	I0806 08:37:14.241021   79927 logs.go:276] 0 containers: []
	W0806 08:37:14.241030   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:37:14.241036   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:37:14.241083   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:37:14.277424   79927 cri.go:89] found id: ""
	I0806 08:37:14.277448   79927 logs.go:276] 0 containers: []
	W0806 08:37:14.277465   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:37:14.277473   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:37:14.277545   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:37:14.317402   79927 cri.go:89] found id: ""
	I0806 08:37:14.317426   79927 logs.go:276] 0 containers: []
	W0806 08:37:14.317435   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:37:14.317440   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:37:14.317487   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:37:14.364310   79927 cri.go:89] found id: ""
	I0806 08:37:14.364341   79927 logs.go:276] 0 containers: []
	W0806 08:37:14.364351   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:37:14.364358   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:37:14.364445   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:37:14.398836   79927 cri.go:89] found id: ""
	I0806 08:37:14.398866   79927 logs.go:276] 0 containers: []
	W0806 08:37:14.398874   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:37:14.398891   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:37:14.398904   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:37:14.450550   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:37:14.450586   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:37:14.467190   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:37:14.467220   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:37:14.538786   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:37:14.538809   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:37:14.538824   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:37:14.615049   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:37:14.615080   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:37:17.155906   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:37:17.170401   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:37:17.170479   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:37:17.208276   79927 cri.go:89] found id: ""
	I0806 08:37:17.208311   79927 logs.go:276] 0 containers: []
	W0806 08:37:17.208322   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:37:17.208330   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:37:17.208403   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:37:17.258671   79927 cri.go:89] found id: ""
	I0806 08:37:17.258696   79927 logs.go:276] 0 containers: []
	W0806 08:37:17.258705   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:37:17.258711   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:37:17.258771   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:37:17.296999   79927 cri.go:89] found id: ""
	I0806 08:37:17.297027   79927 logs.go:276] 0 containers: []
	W0806 08:37:17.297035   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:37:17.297041   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:37:17.297098   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:37:17.335818   79927 cri.go:89] found id: ""
	I0806 08:37:17.335841   79927 logs.go:276] 0 containers: []
	W0806 08:37:17.335849   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:37:17.335855   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:37:17.335903   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:37:17.371650   79927 cri.go:89] found id: ""
	I0806 08:37:17.371674   79927 logs.go:276] 0 containers: []
	W0806 08:37:17.371682   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:37:17.371688   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:37:17.371737   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:37:17.408413   79927 cri.go:89] found id: ""
	I0806 08:37:17.408440   79927 logs.go:276] 0 containers: []
	W0806 08:37:17.408448   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:37:17.408454   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:37:17.408502   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:37:17.443877   79927 cri.go:89] found id: ""
	I0806 08:37:17.443909   79927 logs.go:276] 0 containers: []
	W0806 08:37:17.443920   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:37:17.443928   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:37:17.443988   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:37:17.484292   79927 cri.go:89] found id: ""
	I0806 08:37:17.484321   79927 logs.go:276] 0 containers: []
	W0806 08:37:17.484331   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:37:17.484341   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:37:17.484356   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:37:17.541365   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:37:17.541418   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:37:17.555961   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:37:17.555993   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:37:17.625904   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:37:17.625923   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:37:17.625941   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:37:17.708997   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:37:17.709045   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:37:14.788822   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:17.288567   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:16.112034   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:18.612595   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:18.468580   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:20.967605   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:20.250351   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:37:20.264560   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:37:20.264625   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:37:20.301216   79927 cri.go:89] found id: ""
	I0806 08:37:20.301244   79927 logs.go:276] 0 containers: []
	W0806 08:37:20.301252   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:37:20.301257   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:37:20.301317   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:37:20.345820   79927 cri.go:89] found id: ""
	I0806 08:37:20.345849   79927 logs.go:276] 0 containers: []
	W0806 08:37:20.345860   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:37:20.345867   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:37:20.345923   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:37:20.382333   79927 cri.go:89] found id: ""
	I0806 08:37:20.382357   79927 logs.go:276] 0 containers: []
	W0806 08:37:20.382365   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:37:20.382371   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:37:20.382426   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:37:20.420193   79927 cri.go:89] found id: ""
	I0806 08:37:20.420228   79927 logs.go:276] 0 containers: []
	W0806 08:37:20.420238   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:37:20.420249   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:37:20.420316   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:37:20.456200   79927 cri.go:89] found id: ""
	I0806 08:37:20.456227   79927 logs.go:276] 0 containers: []
	W0806 08:37:20.456235   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:37:20.456241   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:37:20.456298   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:37:20.493321   79927 cri.go:89] found id: ""
	I0806 08:37:20.493349   79927 logs.go:276] 0 containers: []
	W0806 08:37:20.493356   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:37:20.493362   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:37:20.493414   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:37:20.531007   79927 cri.go:89] found id: ""
	I0806 08:37:20.531034   79927 logs.go:276] 0 containers: []
	W0806 08:37:20.531044   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:37:20.531051   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:37:20.531118   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:37:20.572014   79927 cri.go:89] found id: ""
	I0806 08:37:20.572044   79927 logs.go:276] 0 containers: []
	W0806 08:37:20.572062   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:37:20.572071   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:37:20.572083   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:37:20.623368   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:37:20.623399   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:37:20.638941   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:37:20.638984   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:37:20.717287   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:37:20.717313   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:37:20.717324   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:37:20.800157   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:37:20.800192   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:37:23.342061   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:37:23.356954   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:37:23.357026   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:37:23.391415   79927 cri.go:89] found id: ""
	I0806 08:37:23.391445   79927 logs.go:276] 0 containers: []
	W0806 08:37:23.391456   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:37:23.391463   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:37:23.391526   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:37:23.426460   79927 cri.go:89] found id: ""
	I0806 08:37:23.426482   79927 logs.go:276] 0 containers: []
	W0806 08:37:23.426491   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:37:23.426496   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:37:23.426541   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:37:23.461078   79927 cri.go:89] found id: ""
	I0806 08:37:23.461111   79927 logs.go:276] 0 containers: []
	W0806 08:37:23.461122   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:37:23.461129   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:37:23.461183   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:37:23.496566   79927 cri.go:89] found id: ""
	I0806 08:37:23.496594   79927 logs.go:276] 0 containers: []
	W0806 08:37:23.496602   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:37:23.496607   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:37:23.496665   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:37:23.529097   79927 cri.go:89] found id: ""
	I0806 08:37:23.529123   79927 logs.go:276] 0 containers: []
	W0806 08:37:23.529134   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:37:23.529153   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:37:23.529243   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:37:23.563690   79927 cri.go:89] found id: ""
	I0806 08:37:23.563720   79927 logs.go:276] 0 containers: []
	W0806 08:37:23.563730   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:37:23.563737   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:37:23.563796   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:37:23.598787   79927 cri.go:89] found id: ""
	I0806 08:37:23.598861   79927 logs.go:276] 0 containers: []
	W0806 08:37:23.598875   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:37:23.598884   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:37:23.598942   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:37:23.655675   79927 cri.go:89] found id: ""
	I0806 08:37:23.655705   79927 logs.go:276] 0 containers: []
	W0806 08:37:23.655715   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:37:23.655723   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:37:23.655734   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:37:23.715853   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:37:23.715887   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:37:23.735805   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:37:23.735844   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 08:37:19.288768   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:21.788292   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:23.788808   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:20.612865   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:22.613432   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:22.968064   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:24.968171   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	W0806 08:37:23.815238   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:37:23.815261   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:37:23.815274   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:37:23.899436   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:37:23.899479   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:37:26.441837   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:37:26.456410   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:37:26.456472   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:37:26.497707   79927 cri.go:89] found id: ""
	I0806 08:37:26.497733   79927 logs.go:276] 0 containers: []
	W0806 08:37:26.497741   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:37:26.497746   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:37:26.497795   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:37:26.536101   79927 cri.go:89] found id: ""
	I0806 08:37:26.536126   79927 logs.go:276] 0 containers: []
	W0806 08:37:26.536133   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:37:26.536145   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:37:26.536190   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:37:26.573067   79927 cri.go:89] found id: ""
	I0806 08:37:26.573091   79927 logs.go:276] 0 containers: []
	W0806 08:37:26.573102   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:37:26.573110   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:37:26.573162   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:37:26.609820   79927 cri.go:89] found id: ""
	I0806 08:37:26.609846   79927 logs.go:276] 0 containers: []
	W0806 08:37:26.609854   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:37:26.609860   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:37:26.609914   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:37:26.645721   79927 cri.go:89] found id: ""
	I0806 08:37:26.645748   79927 logs.go:276] 0 containers: []
	W0806 08:37:26.645758   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:37:26.645764   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:37:26.645825   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:37:26.679941   79927 cri.go:89] found id: ""
	I0806 08:37:26.679988   79927 logs.go:276] 0 containers: []
	W0806 08:37:26.679999   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:37:26.680006   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:37:26.680068   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:37:26.717247   79927 cri.go:89] found id: ""
	I0806 08:37:26.717277   79927 logs.go:276] 0 containers: []
	W0806 08:37:26.717288   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:37:26.717295   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:37:26.717382   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:37:26.760157   79927 cri.go:89] found id: ""
	I0806 08:37:26.760184   79927 logs.go:276] 0 containers: []
	W0806 08:37:26.760194   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:37:26.760205   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:37:26.760221   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:37:26.816034   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:37:26.816071   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:37:26.830141   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:37:26.830170   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:37:26.900078   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:37:26.900103   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:37:26.900121   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:37:26.984460   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:37:26.984501   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:37:26.287692   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:28.288990   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:25.112666   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:27.611170   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:27.466925   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:29.469989   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:29.527184   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:37:29.541974   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:37:29.542052   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:37:29.576062   79927 cri.go:89] found id: ""
	I0806 08:37:29.576095   79927 logs.go:276] 0 containers: []
	W0806 08:37:29.576107   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:37:29.576114   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:37:29.576176   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:37:29.610841   79927 cri.go:89] found id: ""
	I0806 08:37:29.610865   79927 logs.go:276] 0 containers: []
	W0806 08:37:29.610881   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:37:29.610892   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:37:29.610948   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:37:29.650326   79927 cri.go:89] found id: ""
	I0806 08:37:29.650350   79927 logs.go:276] 0 containers: []
	W0806 08:37:29.650360   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:37:29.650367   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:37:29.650434   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:37:29.689382   79927 cri.go:89] found id: ""
	I0806 08:37:29.689418   79927 logs.go:276] 0 containers: []
	W0806 08:37:29.689429   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:37:29.689436   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:37:29.689495   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:37:29.725893   79927 cri.go:89] found id: ""
	I0806 08:37:29.725918   79927 logs.go:276] 0 containers: []
	W0806 08:37:29.725926   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:37:29.725932   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:37:29.725980   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:37:29.767354   79927 cri.go:89] found id: ""
	I0806 08:37:29.767379   79927 logs.go:276] 0 containers: []
	W0806 08:37:29.767386   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:37:29.767392   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:37:29.767441   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:37:29.811139   79927 cri.go:89] found id: ""
	I0806 08:37:29.811161   79927 logs.go:276] 0 containers: []
	W0806 08:37:29.811169   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:37:29.811174   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:37:29.811219   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:37:29.848252   79927 cri.go:89] found id: ""
	I0806 08:37:29.848285   79927 logs.go:276] 0 containers: []
	W0806 08:37:29.848294   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:37:29.848303   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:37:29.848315   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:37:29.899850   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:37:29.899884   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:37:29.913744   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:37:29.913774   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:37:29.988016   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:37:29.988039   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:37:29.988054   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:37:30.069701   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:37:30.069735   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:37:32.606958   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:37:32.622621   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:37:32.622684   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:37:32.658816   79927 cri.go:89] found id: ""
	I0806 08:37:32.658844   79927 logs.go:276] 0 containers: []
	W0806 08:37:32.658855   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:37:32.658862   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:37:32.658928   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:37:32.697734   79927 cri.go:89] found id: ""
	I0806 08:37:32.697769   79927 logs.go:276] 0 containers: []
	W0806 08:37:32.697780   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:37:32.697786   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:37:32.697868   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:37:32.735540   79927 cri.go:89] found id: ""
	I0806 08:37:32.735572   79927 logs.go:276] 0 containers: []
	W0806 08:37:32.735584   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:37:32.735591   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:37:32.735658   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:37:32.771319   79927 cri.go:89] found id: ""
	I0806 08:37:32.771343   79927 logs.go:276] 0 containers: []
	W0806 08:37:32.771357   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:37:32.771365   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:37:32.771421   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:37:32.808137   79927 cri.go:89] found id: ""
	I0806 08:37:32.808162   79927 logs.go:276] 0 containers: []
	W0806 08:37:32.808172   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:37:32.808179   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:37:32.808234   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:37:32.843307   79927 cri.go:89] found id: ""
	I0806 08:37:32.843333   79927 logs.go:276] 0 containers: []
	W0806 08:37:32.843343   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:37:32.843350   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:37:32.843404   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:37:32.883855   79927 cri.go:89] found id: ""
	I0806 08:37:32.883922   79927 logs.go:276] 0 containers: []
	W0806 08:37:32.883935   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:37:32.883944   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:37:32.884014   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:37:32.920965   79927 cri.go:89] found id: ""
	I0806 08:37:32.921008   79927 logs.go:276] 0 containers: []
	W0806 08:37:32.921019   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:37:32.921027   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:37:32.921039   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:37:32.974933   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:37:32.974966   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:37:32.989326   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:37:32.989352   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:37:33.058274   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:37:33.058302   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:37:33.058314   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:37:33.136632   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:37:33.136670   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:37:30.789771   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:33.289001   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:29.613189   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:32.111086   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:34.112398   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:31.973697   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:34.469358   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:35.680315   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:37:35.694438   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:37:35.694497   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:37:35.729541   79927 cri.go:89] found id: ""
	I0806 08:37:35.729569   79927 logs.go:276] 0 containers: []
	W0806 08:37:35.729580   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:37:35.729587   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:37:35.729650   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:37:35.765783   79927 cri.go:89] found id: ""
	I0806 08:37:35.765813   79927 logs.go:276] 0 containers: []
	W0806 08:37:35.765822   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:37:35.765827   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:37:35.765885   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:37:35.802975   79927 cri.go:89] found id: ""
	I0806 08:37:35.803002   79927 logs.go:276] 0 containers: []
	W0806 08:37:35.803010   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:37:35.803016   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:37:35.803062   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:37:35.838588   79927 cri.go:89] found id: ""
	I0806 08:37:35.838614   79927 logs.go:276] 0 containers: []
	W0806 08:37:35.838623   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:37:35.838629   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:37:35.838675   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:37:35.874413   79927 cri.go:89] found id: ""
	I0806 08:37:35.874442   79927 logs.go:276] 0 containers: []
	W0806 08:37:35.874452   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:37:35.874460   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:37:35.874528   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:37:35.913816   79927 cri.go:89] found id: ""
	I0806 08:37:35.913852   79927 logs.go:276] 0 containers: []
	W0806 08:37:35.913871   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:37:35.913885   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:37:35.913963   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:37:35.951719   79927 cri.go:89] found id: ""
	I0806 08:37:35.951750   79927 logs.go:276] 0 containers: []
	W0806 08:37:35.951761   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:37:35.951770   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:37:35.951828   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:37:35.987720   79927 cri.go:89] found id: ""
	I0806 08:37:35.987749   79927 logs.go:276] 0 containers: []
	W0806 08:37:35.987760   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:37:35.987769   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:37:35.987780   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:37:36.043300   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:37:36.043339   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:37:36.057138   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:37:36.057169   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:37:36.126382   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:37:36.126399   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:37:36.126415   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:37:36.205243   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:37:36.205284   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:37:38.747978   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:37:35.289346   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:37.788785   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:36.114392   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:38.612174   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:36.967654   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:38.969141   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:40.969752   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:38.762123   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:37:38.762190   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:37:38.805074   79927 cri.go:89] found id: ""
	I0806 08:37:38.805101   79927 logs.go:276] 0 containers: []
	W0806 08:37:38.805111   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:37:38.805117   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:37:38.805185   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:37:38.839305   79927 cri.go:89] found id: ""
	I0806 08:37:38.839330   79927 logs.go:276] 0 containers: []
	W0806 08:37:38.839338   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:37:38.839344   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:37:38.839406   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:37:38.874174   79927 cri.go:89] found id: ""
	I0806 08:37:38.874203   79927 logs.go:276] 0 containers: []
	W0806 08:37:38.874214   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:37:38.874221   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:37:38.874277   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:37:38.909607   79927 cri.go:89] found id: ""
	I0806 08:37:38.909630   79927 logs.go:276] 0 containers: []
	W0806 08:37:38.909639   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:37:38.909645   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:37:38.909702   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:37:38.945584   79927 cri.go:89] found id: ""
	I0806 08:37:38.945615   79927 logs.go:276] 0 containers: []
	W0806 08:37:38.945626   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:37:38.945634   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:37:38.945696   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:37:38.988993   79927 cri.go:89] found id: ""
	I0806 08:37:38.989022   79927 logs.go:276] 0 containers: []
	W0806 08:37:38.989033   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:37:38.989041   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:37:38.989100   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:37:39.028902   79927 cri.go:89] found id: ""
	I0806 08:37:39.028934   79927 logs.go:276] 0 containers: []
	W0806 08:37:39.028942   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:37:39.028948   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:37:39.029005   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:37:39.070163   79927 cri.go:89] found id: ""
	I0806 08:37:39.070192   79927 logs.go:276] 0 containers: []
	W0806 08:37:39.070202   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:37:39.070213   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:37:39.070229   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:37:39.124891   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:37:39.124927   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:37:39.139898   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:37:39.139926   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:37:39.216157   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:37:39.216181   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:37:39.216196   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:37:39.301089   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:37:39.301136   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:37:41.840165   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:37:41.855659   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:37:41.855792   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:37:41.892620   79927 cri.go:89] found id: ""
	I0806 08:37:41.892648   79927 logs.go:276] 0 containers: []
	W0806 08:37:41.892660   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:37:41.892668   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:37:41.892728   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:37:41.929064   79927 cri.go:89] found id: ""
	I0806 08:37:41.929097   79927 logs.go:276] 0 containers: []
	W0806 08:37:41.929107   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:37:41.929114   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:37:41.929165   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:37:41.968334   79927 cri.go:89] found id: ""
	I0806 08:37:41.968371   79927 logs.go:276] 0 containers: []
	W0806 08:37:41.968382   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:37:41.968389   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:37:41.968450   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:37:42.008566   79927 cri.go:89] found id: ""
	I0806 08:37:42.008587   79927 logs.go:276] 0 containers: []
	W0806 08:37:42.008595   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:37:42.008601   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:37:42.008649   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:37:42.044281   79927 cri.go:89] found id: ""
	I0806 08:37:42.044308   79927 logs.go:276] 0 containers: []
	W0806 08:37:42.044315   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:37:42.044321   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:37:42.044392   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:37:42.081987   79927 cri.go:89] found id: ""
	I0806 08:37:42.082012   79927 logs.go:276] 0 containers: []
	W0806 08:37:42.082020   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:37:42.082026   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:37:42.082072   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:37:42.120818   79927 cri.go:89] found id: ""
	I0806 08:37:42.120847   79927 logs.go:276] 0 containers: []
	W0806 08:37:42.120857   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:37:42.120865   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:37:42.120926   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:37:42.160685   79927 cri.go:89] found id: ""
	I0806 08:37:42.160719   79927 logs.go:276] 0 containers: []
	W0806 08:37:42.160731   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:37:42.160742   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:37:42.160759   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:37:42.236322   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:37:42.236344   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:37:42.236357   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:37:42.328488   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:37:42.328526   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:37:42.369520   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:37:42.369545   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:37:42.425449   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:37:42.425490   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:37:39.788990   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:41.789236   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:43.789318   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:40.614326   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:43.112061   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:43.469284   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:45.469725   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:44.941557   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:37:44.954984   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:37:44.955043   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:37:44.992186   79927 cri.go:89] found id: ""
	I0806 08:37:44.992216   79927 logs.go:276] 0 containers: []
	W0806 08:37:44.992228   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:37:44.992234   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:37:44.992285   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:37:45.027114   79927 cri.go:89] found id: ""
	I0806 08:37:45.027143   79927 logs.go:276] 0 containers: []
	W0806 08:37:45.027154   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:37:45.027161   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:37:45.027222   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:37:45.062629   79927 cri.go:89] found id: ""
	I0806 08:37:45.062660   79927 logs.go:276] 0 containers: []
	W0806 08:37:45.062672   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:37:45.062679   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:37:45.062741   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:37:45.097499   79927 cri.go:89] found id: ""
	I0806 08:37:45.097530   79927 logs.go:276] 0 containers: []
	W0806 08:37:45.097541   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:37:45.097548   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:37:45.097605   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:37:45.137130   79927 cri.go:89] found id: ""
	I0806 08:37:45.137162   79927 logs.go:276] 0 containers: []
	W0806 08:37:45.137184   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:37:45.137191   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:37:45.137264   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:37:45.170617   79927 cri.go:89] found id: ""
	I0806 08:37:45.170642   79927 logs.go:276] 0 containers: []
	W0806 08:37:45.170659   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:37:45.170667   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:37:45.170726   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:37:45.205729   79927 cri.go:89] found id: ""
	I0806 08:37:45.205758   79927 logs.go:276] 0 containers: []
	W0806 08:37:45.205768   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:37:45.205774   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:37:45.205827   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:37:45.242345   79927 cri.go:89] found id: ""
	I0806 08:37:45.242373   79927 logs.go:276] 0 containers: []
	W0806 08:37:45.242384   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:37:45.242397   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:37:45.242414   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:37:45.283778   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:37:45.283810   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:37:45.341421   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:37:45.341457   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:37:45.356581   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:37:45.356606   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:37:45.428512   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:37:45.428541   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:37:45.428559   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:37:48.011744   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:37:48.025452   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:37:48.025533   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:37:48.064315   79927 cri.go:89] found id: ""
	I0806 08:37:48.064339   79927 logs.go:276] 0 containers: []
	W0806 08:37:48.064348   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:37:48.064353   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:37:48.064430   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:37:48.103855   79927 cri.go:89] found id: ""
	I0806 08:37:48.103891   79927 logs.go:276] 0 containers: []
	W0806 08:37:48.103902   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:37:48.103909   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:37:48.103968   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:37:48.141402   79927 cri.go:89] found id: ""
	I0806 08:37:48.141427   79927 logs.go:276] 0 containers: []
	W0806 08:37:48.141435   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:37:48.141440   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:37:48.141491   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:37:48.177449   79927 cri.go:89] found id: ""
	I0806 08:37:48.177479   79927 logs.go:276] 0 containers: []
	W0806 08:37:48.177490   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:37:48.177497   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:37:48.177566   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:37:48.214613   79927 cri.go:89] found id: ""
	I0806 08:37:48.214637   79927 logs.go:276] 0 containers: []
	W0806 08:37:48.214644   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:37:48.214650   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:37:48.214695   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:37:48.254049   79927 cri.go:89] found id: ""
	I0806 08:37:48.254075   79927 logs.go:276] 0 containers: []
	W0806 08:37:48.254083   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:37:48.254089   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:37:48.254147   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:37:48.296206   79927 cri.go:89] found id: ""
	I0806 08:37:48.296230   79927 logs.go:276] 0 containers: []
	W0806 08:37:48.296240   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:37:48.296246   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:37:48.296306   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:37:48.337614   79927 cri.go:89] found id: ""
	I0806 08:37:48.337641   79927 logs.go:276] 0 containers: []
	W0806 08:37:48.337651   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:37:48.337662   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:37:48.337679   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:37:48.391932   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:37:48.391971   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:37:48.405422   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:37:48.405450   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:37:48.477374   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:37:48.477402   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:37:48.477419   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:37:48.559252   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:37:48.559288   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:37:46.287167   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:48.288652   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:45.113308   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:47.611780   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:47.968109   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:50.469370   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:51.104881   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:37:51.120113   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:37:51.120181   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:37:51.155703   79927 cri.go:89] found id: ""
	I0806 08:37:51.155733   79927 logs.go:276] 0 containers: []
	W0806 08:37:51.155741   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:37:51.155747   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:37:51.155792   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:37:51.191211   79927 cri.go:89] found id: ""
	I0806 08:37:51.191240   79927 logs.go:276] 0 containers: []
	W0806 08:37:51.191250   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:37:51.191257   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:37:51.191324   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:37:51.226740   79927 cri.go:89] found id: ""
	I0806 08:37:51.226771   79927 logs.go:276] 0 containers: []
	W0806 08:37:51.226783   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:37:51.226790   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:37:51.226852   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:37:51.269101   79927 cri.go:89] found id: ""
	I0806 08:37:51.269132   79927 logs.go:276] 0 containers: []
	W0806 08:37:51.269141   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:37:51.269146   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:37:51.269196   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:37:51.307176   79927 cri.go:89] found id: ""
	I0806 08:37:51.307206   79927 logs.go:276] 0 containers: []
	W0806 08:37:51.307216   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:37:51.307224   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:37:51.307290   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:37:51.342824   79927 cri.go:89] found id: ""
	I0806 08:37:51.342854   79927 logs.go:276] 0 containers: []
	W0806 08:37:51.342864   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:37:51.342871   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:37:51.342935   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:37:51.383309   79927 cri.go:89] found id: ""
	I0806 08:37:51.383337   79927 logs.go:276] 0 containers: []
	W0806 08:37:51.383348   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:37:51.383355   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:37:51.383420   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:37:51.424851   79927 cri.go:89] found id: ""
	I0806 08:37:51.424896   79927 logs.go:276] 0 containers: []
	W0806 08:37:51.424908   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:37:51.424918   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:37:51.424950   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:37:51.507286   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:37:51.507307   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:37:51.507318   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:37:51.590512   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:37:51.590556   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:37:51.640453   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:37:51.640482   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:37:51.694699   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:37:51.694740   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:37:50.788163   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:52.788396   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:49.616347   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:52.112323   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:54.112911   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:52.470121   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:54.473264   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:54.211106   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:37:54.225858   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:37:54.225936   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:37:54.267219   79927 cri.go:89] found id: ""
	I0806 08:37:54.267247   79927 logs.go:276] 0 containers: []
	W0806 08:37:54.267258   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:37:54.267266   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:37:54.267323   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:37:54.306734   79927 cri.go:89] found id: ""
	I0806 08:37:54.306763   79927 logs.go:276] 0 containers: []
	W0806 08:37:54.306774   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:37:54.306781   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:37:54.306839   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:37:54.342257   79927 cri.go:89] found id: ""
	I0806 08:37:54.342294   79927 logs.go:276] 0 containers: []
	W0806 08:37:54.342306   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:37:54.342314   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:37:54.342383   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:37:54.380661   79927 cri.go:89] found id: ""
	I0806 08:37:54.380685   79927 logs.go:276] 0 containers: []
	W0806 08:37:54.380693   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:37:54.380699   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:37:54.380743   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:37:54.416503   79927 cri.go:89] found id: ""
	I0806 08:37:54.416525   79927 logs.go:276] 0 containers: []
	W0806 08:37:54.416533   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:37:54.416539   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:37:54.416589   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:37:54.453470   79927 cri.go:89] found id: ""
	I0806 08:37:54.453508   79927 logs.go:276] 0 containers: []
	W0806 08:37:54.453520   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:37:54.453528   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:37:54.453591   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:37:54.497630   79927 cri.go:89] found id: ""
	I0806 08:37:54.497671   79927 logs.go:276] 0 containers: []
	W0806 08:37:54.497682   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:37:54.497691   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:37:54.497750   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:37:54.536516   79927 cri.go:89] found id: ""
	I0806 08:37:54.536546   79927 logs.go:276] 0 containers: []
	W0806 08:37:54.536557   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:37:54.536568   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:37:54.536587   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:37:54.592997   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:37:54.593032   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:37:54.607568   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:37:54.607598   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:37:54.679369   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:37:54.679402   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:37:54.679423   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:37:54.760001   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:37:54.760041   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:37:57.305405   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:37:57.319427   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:37:57.319493   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:37:57.357465   79927 cri.go:89] found id: ""
	I0806 08:37:57.357493   79927 logs.go:276] 0 containers: []
	W0806 08:37:57.357504   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:37:57.357513   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:37:57.357571   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:37:57.390131   79927 cri.go:89] found id: ""
	I0806 08:37:57.390155   79927 logs.go:276] 0 containers: []
	W0806 08:37:57.390165   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:37:57.390172   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:37:57.390238   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:37:57.428571   79927 cri.go:89] found id: ""
	I0806 08:37:57.428600   79927 logs.go:276] 0 containers: []
	W0806 08:37:57.428608   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:37:57.428613   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:37:57.428675   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:37:57.462688   79927 cri.go:89] found id: ""
	I0806 08:37:57.462717   79927 logs.go:276] 0 containers: []
	W0806 08:37:57.462727   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:37:57.462733   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:37:57.462791   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:37:57.525008   79927 cri.go:89] found id: ""
	I0806 08:37:57.525033   79927 logs.go:276] 0 containers: []
	W0806 08:37:57.525041   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:37:57.525046   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:37:57.525108   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:37:57.565992   79927 cri.go:89] found id: ""
	I0806 08:37:57.566024   79927 logs.go:276] 0 containers: []
	W0806 08:37:57.566035   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:37:57.566045   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:37:57.566123   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:37:57.610170   79927 cri.go:89] found id: ""
	I0806 08:37:57.610198   79927 logs.go:276] 0 containers: []
	W0806 08:37:57.610208   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:37:57.610215   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:37:57.610262   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:37:57.647054   79927 cri.go:89] found id: ""
	I0806 08:37:57.647085   79927 logs.go:276] 0 containers: []
	W0806 08:37:57.647096   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:37:57.647106   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:37:57.647121   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:37:57.698177   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:37:57.698214   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:37:57.712244   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:37:57.712271   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:37:57.781683   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:37:57.781707   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:37:57.781723   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:37:57.861097   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:37:57.861133   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:37:54.789102   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:57.287564   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:56.613662   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:59.112248   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:56.967534   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:59.469122   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:00.401186   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:38:00.414546   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:38:00.414602   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:38:00.454163   79927 cri.go:89] found id: ""
	I0806 08:38:00.454190   79927 logs.go:276] 0 containers: []
	W0806 08:38:00.454201   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:38:00.454208   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:38:00.454269   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:38:00.497427   79927 cri.go:89] found id: ""
	I0806 08:38:00.497455   79927 logs.go:276] 0 containers: []
	W0806 08:38:00.497466   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:38:00.497473   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:38:00.497538   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:38:00.537590   79927 cri.go:89] found id: ""
	I0806 08:38:00.537619   79927 logs.go:276] 0 containers: []
	W0806 08:38:00.537627   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:38:00.537633   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:38:00.537695   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:38:00.572337   79927 cri.go:89] found id: ""
	I0806 08:38:00.572383   79927 logs.go:276] 0 containers: []
	W0806 08:38:00.572396   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:38:00.572403   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:38:00.572460   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:38:00.607729   79927 cri.go:89] found id: ""
	I0806 08:38:00.607772   79927 logs.go:276] 0 containers: []
	W0806 08:38:00.607783   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:38:00.607791   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:38:00.607852   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:38:00.644777   79927 cri.go:89] found id: ""
	I0806 08:38:00.644843   79927 logs.go:276] 0 containers: []
	W0806 08:38:00.644856   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:38:00.644864   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:38:00.644935   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:38:00.684698   79927 cri.go:89] found id: ""
	I0806 08:38:00.684727   79927 logs.go:276] 0 containers: []
	W0806 08:38:00.684739   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:38:00.684747   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:38:00.684863   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:38:00.718472   79927 cri.go:89] found id: ""
	I0806 08:38:00.718506   79927 logs.go:276] 0 containers: []
	W0806 08:38:00.718518   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:38:00.718529   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:38:00.718546   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:38:00.770265   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:38:00.770309   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:38:00.787438   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:38:00.787469   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:38:00.854538   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:38:00.854558   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:38:00.854570   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:38:00.936810   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:38:00.936847   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
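The cycle above repeats the same container probe for each control-plane component: "sudo crictl ps -a --quiet --name=<component>", where an empty ID list produces the "No container was found matching" warnings. A minimal, hypothetical Go sketch of that probe (not minikube's actual implementation; it assumes sudo and crictl are available on the node):

package main

import (
    "fmt"
    "os/exec"
    "strings"
)

// listContainerIDs returns the container IDs crictl prints (one per line with
// --quiet) for the given name filter, in any state (-a), mirroring the command
// logged above.
func listContainerIDs(name string) ([]string, error) {
    out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
    if err != nil {
        return nil, err
    }
    return strings.Fields(string(out)), nil
}

func main() {
    for _, name := range []string{"kube-apiserver", "etcd", "kube-scheduler", "kube-controller-manager"} {
        ids, err := listContainerIDs(name)
        if err != nil {
            fmt.Printf("%s: probe failed: %v\n", name, err)
            continue
        }
        // Zero IDs corresponds to the repeated
        // `No container was found matching "<name>"` warnings in the log.
        fmt.Printf("%s: %d container(s) %v\n", name, len(ids), ids)
    }
}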
	I0806 08:38:03.477109   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:38:03.494379   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:38:03.494439   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:38:03.536166   79927 cri.go:89] found id: ""
	I0806 08:38:03.536193   79927 logs.go:276] 0 containers: []
	W0806 08:38:03.536202   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:38:03.536208   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:38:03.536255   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:38:03.577497   79927 cri.go:89] found id: ""
	I0806 08:38:03.577522   79927 logs.go:276] 0 containers: []
	W0806 08:38:03.577531   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:38:03.577536   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:38:03.577585   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:38:03.614534   79927 cri.go:89] found id: ""
	I0806 08:38:03.614560   79927 logs.go:276] 0 containers: []
	W0806 08:38:03.614569   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:38:03.614576   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:38:03.614634   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:38:03.655824   79927 cri.go:89] found id: ""
	I0806 08:38:03.655860   79927 logs.go:276] 0 containers: []
	W0806 08:38:03.655871   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:38:03.655878   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:38:03.655952   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:38:03.699843   79927 cri.go:89] found id: ""
	I0806 08:38:03.699875   79927 logs.go:276] 0 containers: []
	W0806 08:38:03.699888   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:38:03.699896   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:38:03.699957   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:38:03.740131   79927 cri.go:89] found id: ""
	I0806 08:38:03.740159   79927 logs.go:276] 0 containers: []
	W0806 08:38:03.740168   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:38:03.740174   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:38:03.740223   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:37:59.787634   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:01.788942   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:03.790134   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:01.112582   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:03.113004   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:01.972073   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:04.468917   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:03.774103   79927 cri.go:89] found id: ""
	I0806 08:38:03.774149   79927 logs.go:276] 0 containers: []
	W0806 08:38:03.774163   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:38:03.774170   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:38:03.774230   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:38:03.814051   79927 cri.go:89] found id: ""
	I0806 08:38:03.814075   79927 logs.go:276] 0 containers: []
	W0806 08:38:03.814083   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:38:03.814091   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:38:03.814102   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:38:03.854869   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:38:03.854898   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:38:03.908284   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:38:03.908318   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:38:03.923265   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:38:03.923303   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:38:04.001785   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:38:04.001806   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:38:04.001819   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:38:06.583809   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:38:06.596764   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:38:06.596827   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:38:06.636611   79927 cri.go:89] found id: ""
	I0806 08:38:06.636639   79927 logs.go:276] 0 containers: []
	W0806 08:38:06.636650   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:38:06.636657   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:38:06.636707   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:38:06.675392   79927 cri.go:89] found id: ""
	I0806 08:38:06.675419   79927 logs.go:276] 0 containers: []
	W0806 08:38:06.675430   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:38:06.675438   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:38:06.675492   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:38:06.711270   79927 cri.go:89] found id: ""
	I0806 08:38:06.711302   79927 logs.go:276] 0 containers: []
	W0806 08:38:06.711313   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:38:06.711320   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:38:06.711388   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:38:06.747252   79927 cri.go:89] found id: ""
	I0806 08:38:06.747280   79927 logs.go:276] 0 containers: []
	W0806 08:38:06.747291   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:38:06.747297   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:38:06.747360   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:38:06.781999   79927 cri.go:89] found id: ""
	I0806 08:38:06.782028   79927 logs.go:276] 0 containers: []
	W0806 08:38:06.782038   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:38:06.782043   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:38:06.782095   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:38:06.818810   79927 cri.go:89] found id: ""
	I0806 08:38:06.818847   79927 logs.go:276] 0 containers: []
	W0806 08:38:06.818860   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:38:06.818869   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:38:06.818935   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:38:06.857937   79927 cri.go:89] found id: ""
	I0806 08:38:06.857972   79927 logs.go:276] 0 containers: []
	W0806 08:38:06.857983   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:38:06.857989   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:38:06.858049   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:38:06.897920   79927 cri.go:89] found id: ""
	I0806 08:38:06.897949   79927 logs.go:276] 0 containers: []
	W0806 08:38:06.897957   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:38:06.897965   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:38:06.897979   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:38:06.912199   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:38:06.912228   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:38:06.983129   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:38:06.983157   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:38:06.983173   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:38:07.067783   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:38:07.067853   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:38:07.107756   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:38:07.107787   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:38:06.286856   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:08.287070   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:05.113664   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:07.611129   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:06.973447   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:09.468163   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:11.469075   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:09.657958   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:38:09.670726   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:38:09.670807   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:38:09.709184   79927 cri.go:89] found id: ""
	I0806 08:38:09.709220   79927 logs.go:276] 0 containers: []
	W0806 08:38:09.709232   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:38:09.709239   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:38:09.709299   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:38:09.747032   79927 cri.go:89] found id: ""
	I0806 08:38:09.747061   79927 logs.go:276] 0 containers: []
	W0806 08:38:09.747073   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:38:09.747080   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:38:09.747141   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:38:09.785134   79927 cri.go:89] found id: ""
	I0806 08:38:09.785170   79927 logs.go:276] 0 containers: []
	W0806 08:38:09.785180   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:38:09.785188   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:38:09.785248   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:38:09.825136   79927 cri.go:89] found id: ""
	I0806 08:38:09.825168   79927 logs.go:276] 0 containers: []
	W0806 08:38:09.825179   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:38:09.825186   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:38:09.825249   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:38:09.865292   79927 cri.go:89] found id: ""
	I0806 08:38:09.865327   79927 logs.go:276] 0 containers: []
	W0806 08:38:09.865336   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:38:09.865345   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:38:09.865415   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:38:09.911145   79927 cri.go:89] found id: ""
	I0806 08:38:09.911172   79927 logs.go:276] 0 containers: []
	W0806 08:38:09.911183   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:38:09.911190   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:38:09.911247   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:38:09.961259   79927 cri.go:89] found id: ""
	I0806 08:38:09.961289   79927 logs.go:276] 0 containers: []
	W0806 08:38:09.961299   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:38:09.961307   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:38:09.961364   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:38:10.009572   79927 cri.go:89] found id: ""
	I0806 08:38:10.009594   79927 logs.go:276] 0 containers: []
	W0806 08:38:10.009602   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:38:10.009610   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:38:10.009623   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:38:10.063733   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:38:10.063770   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:38:10.080004   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:38:10.080044   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:38:10.158792   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:38:10.158823   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:38:10.158840   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:38:10.242126   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:38:10.242162   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:38:12.781497   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:38:12.796715   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:38:12.796804   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:38:12.831223   79927 cri.go:89] found id: ""
	I0806 08:38:12.831254   79927 logs.go:276] 0 containers: []
	W0806 08:38:12.831265   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:38:12.831273   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:38:12.831332   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:38:12.872202   79927 cri.go:89] found id: ""
	I0806 08:38:12.872224   79927 logs.go:276] 0 containers: []
	W0806 08:38:12.872231   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:38:12.872236   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:38:12.872284   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:38:12.913457   79927 cri.go:89] found id: ""
	I0806 08:38:12.913486   79927 logs.go:276] 0 containers: []
	W0806 08:38:12.913497   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:38:12.913505   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:38:12.913563   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:38:12.950382   79927 cri.go:89] found id: ""
	I0806 08:38:12.950413   79927 logs.go:276] 0 containers: []
	W0806 08:38:12.950424   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:38:12.950432   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:38:12.950497   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:38:12.988956   79927 cri.go:89] found id: ""
	I0806 08:38:12.988984   79927 logs.go:276] 0 containers: []
	W0806 08:38:12.988992   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:38:12.988998   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:38:12.989054   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:38:13.025308   79927 cri.go:89] found id: ""
	I0806 08:38:13.025334   79927 logs.go:276] 0 containers: []
	W0806 08:38:13.025342   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:38:13.025348   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:38:13.025401   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:38:13.060156   79927 cri.go:89] found id: ""
	I0806 08:38:13.060184   79927 logs.go:276] 0 containers: []
	W0806 08:38:13.060196   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:38:13.060204   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:38:13.060268   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:38:13.096302   79927 cri.go:89] found id: ""
	I0806 08:38:13.096331   79927 logs.go:276] 0 containers: []
	W0806 08:38:13.096344   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:38:13.096360   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:38:13.096387   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:38:13.149567   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:38:13.149603   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:38:13.164680   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:38:13.164715   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:38:13.236695   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:38:13.236718   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:38:13.236733   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:38:13.323043   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:38:13.323081   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:38:10.288582   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:12.788556   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:09.611677   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:12.111364   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:14.112265   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:13.469940   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:15.969381   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:15.867128   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:38:15.885504   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:38:15.885579   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:38:15.929060   79927 cri.go:89] found id: ""
	I0806 08:38:15.929087   79927 logs.go:276] 0 containers: []
	W0806 08:38:15.929098   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:38:15.929106   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:38:15.929157   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:38:15.996919   79927 cri.go:89] found id: ""
	I0806 08:38:15.996946   79927 logs.go:276] 0 containers: []
	W0806 08:38:15.996954   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:38:15.996959   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:38:15.997013   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:38:16.033299   79927 cri.go:89] found id: ""
	I0806 08:38:16.033323   79927 logs.go:276] 0 containers: []
	W0806 08:38:16.033331   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:38:16.033338   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:38:16.033399   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:38:16.070754   79927 cri.go:89] found id: ""
	I0806 08:38:16.070781   79927 logs.go:276] 0 containers: []
	W0806 08:38:16.070789   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:38:16.070795   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:38:16.070853   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:38:16.107906   79927 cri.go:89] found id: ""
	I0806 08:38:16.107931   79927 logs.go:276] 0 containers: []
	W0806 08:38:16.107943   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:38:16.107951   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:38:16.108011   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:38:16.145814   79927 cri.go:89] found id: ""
	I0806 08:38:16.145844   79927 logs.go:276] 0 containers: []
	W0806 08:38:16.145853   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:38:16.145861   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:38:16.145921   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:38:16.183752   79927 cri.go:89] found id: ""
	I0806 08:38:16.183782   79927 logs.go:276] 0 containers: []
	W0806 08:38:16.183793   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:38:16.183828   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:38:16.183890   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:38:16.220973   79927 cri.go:89] found id: ""
	I0806 08:38:16.221000   79927 logs.go:276] 0 containers: []
	W0806 08:38:16.221010   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:38:16.221020   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:38:16.221034   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:38:16.274514   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:38:16.274555   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:38:16.291387   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:38:16.291417   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:38:16.365737   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:38:16.365769   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:38:16.365785   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:38:16.446578   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:38:16.446617   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:38:14.789440   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:17.288354   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:16.112334   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:18.612150   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:18.468700   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:20.469520   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:18.989406   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:38:19.003113   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:38:19.003174   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:38:19.038794   79927 cri.go:89] found id: ""
	I0806 08:38:19.038821   79927 logs.go:276] 0 containers: []
	W0806 08:38:19.038832   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:38:19.038839   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:38:19.038899   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:38:19.072521   79927 cri.go:89] found id: ""
	I0806 08:38:19.072546   79927 logs.go:276] 0 containers: []
	W0806 08:38:19.072554   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:38:19.072561   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:38:19.072606   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:38:19.105949   79927 cri.go:89] found id: ""
	I0806 08:38:19.105982   79927 logs.go:276] 0 containers: []
	W0806 08:38:19.105994   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:38:19.106002   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:38:19.106053   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:38:19.141277   79927 cri.go:89] found id: ""
	I0806 08:38:19.141303   79927 logs.go:276] 0 containers: []
	W0806 08:38:19.141311   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:38:19.141317   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:38:19.141373   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:38:19.179144   79927 cri.go:89] found id: ""
	I0806 08:38:19.179173   79927 logs.go:276] 0 containers: []
	W0806 08:38:19.179184   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:38:19.179191   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:38:19.179252   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:38:19.218970   79927 cri.go:89] found id: ""
	I0806 08:38:19.218998   79927 logs.go:276] 0 containers: []
	W0806 08:38:19.219010   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:38:19.219022   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:38:19.219084   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:38:19.257623   79927 cri.go:89] found id: ""
	I0806 08:38:19.257652   79927 logs.go:276] 0 containers: []
	W0806 08:38:19.257663   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:38:19.257671   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:38:19.257740   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:38:19.294222   79927 cri.go:89] found id: ""
	I0806 08:38:19.294250   79927 logs.go:276] 0 containers: []
	W0806 08:38:19.294258   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:38:19.294267   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:38:19.294279   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:38:19.374327   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:38:19.374352   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:38:19.374368   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:38:19.458570   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:38:19.458604   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:38:19.504329   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:38:19.504382   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:38:19.554199   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:38:19.554237   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:38:22.069574   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:38:22.082919   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:38:22.082981   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:38:22.115952   79927 cri.go:89] found id: ""
	I0806 08:38:22.115974   79927 logs.go:276] 0 containers: []
	W0806 08:38:22.115981   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:38:22.115987   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:38:22.116031   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:38:22.158322   79927 cri.go:89] found id: ""
	I0806 08:38:22.158349   79927 logs.go:276] 0 containers: []
	W0806 08:38:22.158359   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:38:22.158366   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:38:22.158425   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:38:22.203375   79927 cri.go:89] found id: ""
	I0806 08:38:22.203403   79927 logs.go:276] 0 containers: []
	W0806 08:38:22.203412   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:38:22.203418   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:38:22.203473   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:38:22.240808   79927 cri.go:89] found id: ""
	I0806 08:38:22.240837   79927 logs.go:276] 0 containers: []
	W0806 08:38:22.240849   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:38:22.240857   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:38:22.240913   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:38:22.277885   79927 cri.go:89] found id: ""
	I0806 08:38:22.277914   79927 logs.go:276] 0 containers: []
	W0806 08:38:22.277925   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:38:22.277932   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:38:22.277993   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:38:22.314990   79927 cri.go:89] found id: ""
	I0806 08:38:22.315020   79927 logs.go:276] 0 containers: []
	W0806 08:38:22.315031   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:38:22.315039   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:38:22.315098   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:38:22.352159   79927 cri.go:89] found id: ""
	I0806 08:38:22.352189   79927 logs.go:276] 0 containers: []
	W0806 08:38:22.352198   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:38:22.352204   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:38:22.352267   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:38:22.387349   79927 cri.go:89] found id: ""
	I0806 08:38:22.387382   79927 logs.go:276] 0 containers: []
	W0806 08:38:22.387393   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:38:22.387404   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:38:22.387416   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:38:22.425834   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:38:22.425871   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:38:22.479680   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:38:22.479712   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:38:22.495104   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:38:22.495140   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:38:22.567013   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:38:22.567030   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:38:22.567043   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:38:19.288896   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:21.291455   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:23.789282   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:21.111745   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:23.111513   79219 pod_ready.go:81] duration metric: took 4m0.005888372s for pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace to be "Ready" ...
	E0806 08:38:23.111545   79219 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0806 08:38:23.111555   79219 pod_ready.go:38] duration metric: took 4m6.006750463s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
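The wait that just expired had been polling the pod's Ready condition for 4 minutes before the context deadline was exceeded. A minimal client-go sketch of that readiness check follows; it is not the test's own helper, and the kubeconfig path and pod name are placeholders, not values from this run:

package main

import (
    "context"
    "fmt"
    "time"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's PodReady condition is True -- the same
// condition the pod_ready.go loop above polls until its deadline expires.
func isPodReady(ctx context.Context, c kubernetes.Interface, ns, name string) (bool, error) {
    pod, err := c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
    if err != nil {
        return false, err
    }
    for _, cond := range pod.Status.Conditions {
        if cond.Type == corev1.PodReady {
            return cond.Status == corev1.ConditionTrue, nil
        }
    }
    return false, nil
}

func main() {
    // Placeholder kubeconfig path and pod name, for illustration only.
    cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
    if err != nil {
        panic(err)
    }
    client, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }
    ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
    defer cancel()
    ready, err := isPodReady(ctx, client, "kube-system", "metrics-server-<hash>")
    fmt.Println(ready, err)
}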
	I0806 08:38:23.111579   79219 api_server.go:52] waiting for apiserver process to appear ...
	I0806 08:38:23.111610   79219 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:38:23.111669   79219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:38:23.182880   79219 cri.go:89] found id: "f7edb6bece3347d88a263663a1d6549165dafdcb67ad723494b937766cba6da6"
	I0806 08:38:23.182907   79219 cri.go:89] found id: ""
	I0806 08:38:23.182918   79219 logs.go:276] 1 containers: [f7edb6bece3347d88a263663a1d6549165dafdcb67ad723494b937766cba6da6]
	I0806 08:38:23.182974   79219 ssh_runner.go:195] Run: which crictl
	I0806 08:38:23.188039   79219 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:38:23.188103   79219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:38:23.235090   79219 cri.go:89] found id: "df2efdb62a4af10f89a1b37599d4258ac3b72660353795f8c2141f3ef70c1f65"
	I0806 08:38:23.235113   79219 cri.go:89] found id: ""
	I0806 08:38:23.235135   79219 logs.go:276] 1 containers: [df2efdb62a4af10f89a1b37599d4258ac3b72660353795f8c2141f3ef70c1f65]
	I0806 08:38:23.235195   79219 ssh_runner.go:195] Run: which crictl
	I0806 08:38:23.240260   79219 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:38:23.240335   79219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:38:23.280619   79219 cri.go:89] found id: "9e1c91bc3a920b6f7db4477b8219f13fa092c68ff53abe2390325a68dad253e5"
	I0806 08:38:23.280645   79219 cri.go:89] found id: ""
	I0806 08:38:23.280655   79219 logs.go:276] 1 containers: [9e1c91bc3a920b6f7db4477b8219f13fa092c68ff53abe2390325a68dad253e5]
	I0806 08:38:23.280715   79219 ssh_runner.go:195] Run: which crictl
	I0806 08:38:23.286155   79219 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:38:23.286234   79219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:38:23.329583   79219 cri.go:89] found id: "a25c5660a3a2b7226bf308b44307a3912efa7a9a73e79ec63f83133b80a44683"
	I0806 08:38:23.329609   79219 cri.go:89] found id: ""
	I0806 08:38:23.329618   79219 logs.go:276] 1 containers: [a25c5660a3a2b7226bf308b44307a3912efa7a9a73e79ec63f83133b80a44683]
	I0806 08:38:23.329678   79219 ssh_runner.go:195] Run: which crictl
	I0806 08:38:23.334342   79219 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:38:23.334407   79219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:38:23.372955   79219 cri.go:89] found id: "612fc6a8f2c1208097a6d6287c5fecb20026608a278af5d0d4efc9662e065eb9"
	I0806 08:38:23.372974   79219 cri.go:89] found id: ""
	I0806 08:38:23.372981   79219 logs.go:276] 1 containers: [612fc6a8f2c1208097a6d6287c5fecb20026608a278af5d0d4efc9662e065eb9]
	I0806 08:38:23.373037   79219 ssh_runner.go:195] Run: which crictl
	I0806 08:38:23.377637   79219 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:38:23.377692   79219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:38:23.416165   79219 cri.go:89] found id: "d8651a7ed6615e826b950eab003841c2ec872b48160f207e77dd32a684125a23"
	I0806 08:38:23.416188   79219 cri.go:89] found id: ""
	I0806 08:38:23.416197   79219 logs.go:276] 1 containers: [d8651a7ed6615e826b950eab003841c2ec872b48160f207e77dd32a684125a23]
	I0806 08:38:23.416248   79219 ssh_runner.go:195] Run: which crictl
	I0806 08:38:23.420970   79219 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:38:23.421020   79219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:38:23.459247   79219 cri.go:89] found id: ""
	I0806 08:38:23.459277   79219 logs.go:276] 0 containers: []
	W0806 08:38:23.459287   79219 logs.go:278] No container was found matching "kindnet"
	I0806 08:38:23.459295   79219 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0806 08:38:23.459354   79219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0806 08:38:23.503329   79219 cri.go:89] found id: "367eb6487187a37acafb1fa74364a2d9ec5d9241888ed2e871fb14f2af161975"
	I0806 08:38:23.503354   79219 cri.go:89] found id: "78abe2efa92d2d6c9171ed65bd4d941693be3207554f9318a7cefb91544eeb20"
	I0806 08:38:23.503360   79219 cri.go:89] found id: ""
	I0806 08:38:23.503369   79219 logs.go:276] 2 containers: [367eb6487187a37acafb1fa74364a2d9ec5d9241888ed2e871fb14f2af161975 78abe2efa92d2d6c9171ed65bd4d941693be3207554f9318a7cefb91544eeb20]
	I0806 08:38:23.503431   79219 ssh_runner.go:195] Run: which crictl
	I0806 08:38:23.508375   79219 ssh_runner.go:195] Run: which crictl
	I0806 08:38:23.512355   79219 logs.go:123] Gathering logs for kube-scheduler [a25c5660a3a2b7226bf308b44307a3912efa7a9a73e79ec63f83133b80a44683] ...
	I0806 08:38:23.512401   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a25c5660a3a2b7226bf308b44307a3912efa7a9a73e79ec63f83133b80a44683"
	I0806 08:38:23.548545   79219 logs.go:123] Gathering logs for kube-controller-manager [d8651a7ed6615e826b950eab003841c2ec872b48160f207e77dd32a684125a23] ...
	I0806 08:38:23.548574   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8651a7ed6615e826b950eab003841c2ec872b48160f207e77dd32a684125a23"
	I0806 08:38:23.607970   79219 logs.go:123] Gathering logs for storage-provisioner [78abe2efa92d2d6c9171ed65bd4d941693be3207554f9318a7cefb91544eeb20] ...
	I0806 08:38:23.608004   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 78abe2efa92d2d6c9171ed65bd4d941693be3207554f9318a7cefb91544eeb20"
	I0806 08:38:23.646553   79219 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:38:23.646582   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:38:24.180521   79219 logs.go:123] Gathering logs for container status ...
	I0806 08:38:24.180558   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:38:24.228240   79219 logs.go:123] Gathering logs for kubelet ...
	I0806 08:38:24.228273   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:38:24.285975   79219 logs.go:123] Gathering logs for dmesg ...
	I0806 08:38:24.286007   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:38:24.301117   79219 logs.go:123] Gathering logs for etcd [df2efdb62a4af10f89a1b37599d4258ac3b72660353795f8c2141f3ef70c1f65] ...
	I0806 08:38:24.301152   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 df2efdb62a4af10f89a1b37599d4258ac3b72660353795f8c2141f3ef70c1f65"
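Unlike the empty listings earlier, this process finds running containers and tails each one with "crictl logs --tail 400 <id>". A hypothetical Go sketch of that collection step (the container ID below is a placeholder, not one of the IDs from this run):

package main

import (
    "fmt"
    "os/exec"
)

// tailContainerLogs pulls the last n log lines for a container, matching the
// "sudo /usr/bin/crictl logs --tail 400 <id>" commands logged above.
func tailContainerLogs(id string, n int) (string, error) {
    out, err := exec.Command("sudo", "/usr/bin/crictl", "logs", "--tail", fmt.Sprint(n), id).CombinedOutput()
    return string(out), err
}

func main() {
    logs, err := tailContainerLogs("<container-id>", 400)
    if err != nil {
        fmt.Println("collect failed:", err)
        return
    }
    fmt.Println(logs)
}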
	I0806 08:38:22.968463   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:25.469138   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:25.146383   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:38:25.160013   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:38:25.160078   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:38:25.200981   79927 cri.go:89] found id: ""
	I0806 08:38:25.201006   79927 logs.go:276] 0 containers: []
	W0806 08:38:25.201018   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:38:25.201026   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:38:25.201091   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:38:25.239852   79927 cri.go:89] found id: ""
	I0806 08:38:25.239895   79927 logs.go:276] 0 containers: []
	W0806 08:38:25.239907   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:38:25.239914   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:38:25.239975   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:38:25.275521   79927 cri.go:89] found id: ""
	I0806 08:38:25.275549   79927 logs.go:276] 0 containers: []
	W0806 08:38:25.275561   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:38:25.275568   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:38:25.275630   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:38:25.311921   79927 cri.go:89] found id: ""
	I0806 08:38:25.311952   79927 logs.go:276] 0 containers: []
	W0806 08:38:25.311962   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:38:25.311969   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:38:25.312036   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:38:25.353047   79927 cri.go:89] found id: ""
	I0806 08:38:25.353081   79927 logs.go:276] 0 containers: []
	W0806 08:38:25.353093   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:38:25.353102   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:38:25.353177   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:38:25.389442   79927 cri.go:89] found id: ""
	I0806 08:38:25.389470   79927 logs.go:276] 0 containers: []
	W0806 08:38:25.389479   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:38:25.389485   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:38:25.389529   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:38:25.426435   79927 cri.go:89] found id: ""
	I0806 08:38:25.426464   79927 logs.go:276] 0 containers: []
	W0806 08:38:25.426474   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:38:25.426481   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:38:25.426541   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:38:25.460095   79927 cri.go:89] found id: ""
	I0806 08:38:25.460132   79927 logs.go:276] 0 containers: []
	W0806 08:38:25.460157   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:38:25.460169   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:38:25.460192   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:38:25.511363   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:38:25.511398   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:38:25.525953   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:38:25.525980   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:38:25.598690   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:38:25.598711   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:38:25.598723   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:38:25.674996   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:38:25.675034   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:38:28.216533   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:38:28.230080   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:38:28.230171   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:38:28.269239   79927 cri.go:89] found id: ""
	I0806 08:38:28.269274   79927 logs.go:276] 0 containers: []
	W0806 08:38:28.269288   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:38:28.269296   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:38:28.269361   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:38:28.311773   79927 cri.go:89] found id: ""
	I0806 08:38:28.311809   79927 logs.go:276] 0 containers: []
	W0806 08:38:28.311820   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:38:28.311828   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:38:28.311896   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:38:28.351492   79927 cri.go:89] found id: ""
	I0806 08:38:28.351522   79927 logs.go:276] 0 containers: []
	W0806 08:38:28.351530   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:38:28.351535   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:38:28.351581   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:38:28.389039   79927 cri.go:89] found id: ""
	I0806 08:38:28.389067   79927 logs.go:276] 0 containers: []
	W0806 08:38:28.389079   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:38:28.389086   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:38:28.389175   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:38:28.432249   79927 cri.go:89] found id: ""
	I0806 08:38:28.432277   79927 logs.go:276] 0 containers: []
	W0806 08:38:28.432287   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:38:28.432296   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:38:28.432354   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:38:28.478277   79927 cri.go:89] found id: ""
	I0806 08:38:28.478303   79927 logs.go:276] 0 containers: []
	W0806 08:38:28.478313   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:38:28.478321   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:38:28.478380   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:38:28.516167   79927 cri.go:89] found id: ""
	I0806 08:38:28.516199   79927 logs.go:276] 0 containers: []
	W0806 08:38:28.516211   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:38:28.516219   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:38:28.516286   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:38:28.555493   79927 cri.go:89] found id: ""
	I0806 08:38:28.555523   79927 logs.go:276] 0 containers: []
	W0806 08:38:28.555534   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:38:28.555545   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:38:28.555560   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:38:28.627101   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:38:28.627124   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:38:28.627147   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:38:28.716655   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:38:28.716709   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:38:28.755469   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:38:28.755510   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:38:26.286925   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:28.288654   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:24.355686   79219 logs.go:123] Gathering logs for coredns [9e1c91bc3a920b6f7db4477b8219f13fa092c68ff53abe2390325a68dad253e5] ...
	I0806 08:38:24.355729   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e1c91bc3a920b6f7db4477b8219f13fa092c68ff53abe2390325a68dad253e5"
	I0806 08:38:24.396350   79219 logs.go:123] Gathering logs for kube-proxy [612fc6a8f2c1208097a6d6287c5fecb20026608a278af5d0d4efc9662e065eb9] ...
	I0806 08:38:24.396402   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 612fc6a8f2c1208097a6d6287c5fecb20026608a278af5d0d4efc9662e065eb9"
	I0806 08:38:24.435066   79219 logs.go:123] Gathering logs for storage-provisioner [367eb6487187a37acafb1fa74364a2d9ec5d9241888ed2e871fb14f2af161975] ...
	I0806 08:38:24.435098   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 367eb6487187a37acafb1fa74364a2d9ec5d9241888ed2e871fb14f2af161975"
	I0806 08:38:24.474159   79219 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:38:24.474188   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 08:38:24.601081   79219 logs.go:123] Gathering logs for kube-apiserver [f7edb6bece3347d88a263663a1d6549165dafdcb67ad723494b937766cba6da6] ...
	I0806 08:38:24.601112   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f7edb6bece3347d88a263663a1d6549165dafdcb67ad723494b937766cba6da6"
	I0806 08:38:27.170209   79219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:38:27.187573   79219 api_server.go:72] duration metric: took 4m15.285938182s to wait for apiserver process to appear ...
	I0806 08:38:27.187597   79219 api_server.go:88] waiting for apiserver healthz status ...
	I0806 08:38:27.187626   79219 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:38:27.187673   79219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:38:27.224204   79219 cri.go:89] found id: "f7edb6bece3347d88a263663a1d6549165dafdcb67ad723494b937766cba6da6"
	I0806 08:38:27.224227   79219 cri.go:89] found id: ""
	I0806 08:38:27.224237   79219 logs.go:276] 1 containers: [f7edb6bece3347d88a263663a1d6549165dafdcb67ad723494b937766cba6da6]
	I0806 08:38:27.224284   79219 ssh_runner.go:195] Run: which crictl
	I0806 08:38:27.228883   79219 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:38:27.228957   79219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:38:27.275914   79219 cri.go:89] found id: "df2efdb62a4af10f89a1b37599d4258ac3b72660353795f8c2141f3ef70c1f65"
	I0806 08:38:27.275939   79219 cri.go:89] found id: ""
	I0806 08:38:27.275948   79219 logs.go:276] 1 containers: [df2efdb62a4af10f89a1b37599d4258ac3b72660353795f8c2141f3ef70c1f65]
	I0806 08:38:27.276009   79219 ssh_runner.go:195] Run: which crictl
	I0806 08:38:27.280979   79219 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:38:27.281037   79219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:38:27.326292   79219 cri.go:89] found id: "9e1c91bc3a920b6f7db4477b8219f13fa092c68ff53abe2390325a68dad253e5"
	I0806 08:38:27.326312   79219 cri.go:89] found id: ""
	I0806 08:38:27.326319   79219 logs.go:276] 1 containers: [9e1c91bc3a920b6f7db4477b8219f13fa092c68ff53abe2390325a68dad253e5]
	I0806 08:38:27.326365   79219 ssh_runner.go:195] Run: which crictl
	I0806 08:38:27.331249   79219 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:38:27.331326   79219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:38:27.374229   79219 cri.go:89] found id: "a25c5660a3a2b7226bf308b44307a3912efa7a9a73e79ec63f83133b80a44683"
	I0806 08:38:27.374250   79219 cri.go:89] found id: ""
	I0806 08:38:27.374257   79219 logs.go:276] 1 containers: [a25c5660a3a2b7226bf308b44307a3912efa7a9a73e79ec63f83133b80a44683]
	I0806 08:38:27.374300   79219 ssh_runner.go:195] Run: which crictl
	I0806 08:38:27.378875   79219 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:38:27.378957   79219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:38:27.416159   79219 cri.go:89] found id: "612fc6a8f2c1208097a6d6287c5fecb20026608a278af5d0d4efc9662e065eb9"
	I0806 08:38:27.416183   79219 cri.go:89] found id: ""
	I0806 08:38:27.416192   79219 logs.go:276] 1 containers: [612fc6a8f2c1208097a6d6287c5fecb20026608a278af5d0d4efc9662e065eb9]
	I0806 08:38:27.416243   79219 ssh_runner.go:195] Run: which crictl
	I0806 08:38:27.420463   79219 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:38:27.420531   79219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:38:27.461673   79219 cri.go:89] found id: "d8651a7ed6615e826b950eab003841c2ec872b48160f207e77dd32a684125a23"
	I0806 08:38:27.461700   79219 cri.go:89] found id: ""
	I0806 08:38:27.461710   79219 logs.go:276] 1 containers: [d8651a7ed6615e826b950eab003841c2ec872b48160f207e77dd32a684125a23]
	I0806 08:38:27.461762   79219 ssh_runner.go:195] Run: which crictl
	I0806 08:38:27.466404   79219 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:38:27.466474   79219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:38:27.507684   79219 cri.go:89] found id: ""
	I0806 08:38:27.507708   79219 logs.go:276] 0 containers: []
	W0806 08:38:27.507716   79219 logs.go:278] No container was found matching "kindnet"
	I0806 08:38:27.507722   79219 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0806 08:38:27.507787   79219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0806 08:38:27.547812   79219 cri.go:89] found id: "367eb6487187a37acafb1fa74364a2d9ec5d9241888ed2e871fb14f2af161975"
	I0806 08:38:27.547844   79219 cri.go:89] found id: "78abe2efa92d2d6c9171ed65bd4d941693be3207554f9318a7cefb91544eeb20"
	I0806 08:38:27.547850   79219 cri.go:89] found id: ""
	I0806 08:38:27.547859   79219 logs.go:276] 2 containers: [367eb6487187a37acafb1fa74364a2d9ec5d9241888ed2e871fb14f2af161975 78abe2efa92d2d6c9171ed65bd4d941693be3207554f9318a7cefb91544eeb20]
	I0806 08:38:27.547929   79219 ssh_runner.go:195] Run: which crictl
	I0806 08:38:27.552639   79219 ssh_runner.go:195] Run: which crictl
	I0806 08:38:27.556993   79219 logs.go:123] Gathering logs for kube-proxy [612fc6a8f2c1208097a6d6287c5fecb20026608a278af5d0d4efc9662e065eb9] ...
	I0806 08:38:27.557022   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 612fc6a8f2c1208097a6d6287c5fecb20026608a278af5d0d4efc9662e065eb9"
	I0806 08:38:27.599317   79219 logs.go:123] Gathering logs for storage-provisioner [78abe2efa92d2d6c9171ed65bd4d941693be3207554f9318a7cefb91544eeb20] ...
	I0806 08:38:27.599345   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 78abe2efa92d2d6c9171ed65bd4d941693be3207554f9318a7cefb91544eeb20"
	I0806 08:38:27.643812   79219 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:38:27.643845   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:38:28.119873   79219 logs.go:123] Gathering logs for container status ...
	I0806 08:38:28.119909   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:38:28.160698   79219 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:38:28.160744   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 08:38:28.275895   79219 logs.go:123] Gathering logs for coredns [9e1c91bc3a920b6f7db4477b8219f13fa092c68ff53abe2390325a68dad253e5] ...
	I0806 08:38:28.275927   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e1c91bc3a920b6f7db4477b8219f13fa092c68ff53abe2390325a68dad253e5"
	I0806 08:38:28.324525   79219 logs.go:123] Gathering logs for kube-scheduler [a25c5660a3a2b7226bf308b44307a3912efa7a9a73e79ec63f83133b80a44683] ...
	I0806 08:38:28.324559   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a25c5660a3a2b7226bf308b44307a3912efa7a9a73e79ec63f83133b80a44683"
	I0806 08:38:28.372860   79219 logs.go:123] Gathering logs for etcd [df2efdb62a4af10f89a1b37599d4258ac3b72660353795f8c2141f3ef70c1f65] ...
	I0806 08:38:28.372891   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 df2efdb62a4af10f89a1b37599d4258ac3b72660353795f8c2141f3ef70c1f65"
	I0806 08:38:28.438909   79219 logs.go:123] Gathering logs for kube-controller-manager [d8651a7ed6615e826b950eab003841c2ec872b48160f207e77dd32a684125a23] ...
	I0806 08:38:28.438943   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8651a7ed6615e826b950eab003841c2ec872b48160f207e77dd32a684125a23"
	I0806 08:38:28.514238   79219 logs.go:123] Gathering logs for storage-provisioner [367eb6487187a37acafb1fa74364a2d9ec5d9241888ed2e871fb14f2af161975] ...
	I0806 08:38:28.514279   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 367eb6487187a37acafb1fa74364a2d9ec5d9241888ed2e871fb14f2af161975"
	I0806 08:38:28.552420   79219 logs.go:123] Gathering logs for kubelet ...
	I0806 08:38:28.552451   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:38:28.605414   79219 logs.go:123] Gathering logs for dmesg ...
	I0806 08:38:28.605452   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:38:28.621364   79219 logs.go:123] Gathering logs for kube-apiserver [f7edb6bece3347d88a263663a1d6549165dafdcb67ad723494b937766cba6da6] ...
	I0806 08:38:28.621404   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f7edb6bece3347d88a263663a1d6549165dafdcb67ad723494b937766cba6da6"
	I0806 08:38:27.469521   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:29.967734   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:28.812203   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:38:28.812237   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:38:31.327103   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:38:31.340745   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:38:31.340826   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:38:31.382470   79927 cri.go:89] found id: ""
	I0806 08:38:31.382503   79927 logs.go:276] 0 containers: []
	W0806 08:38:31.382515   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:38:31.382523   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:38:31.382593   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:38:31.423319   79927 cri.go:89] found id: ""
	I0806 08:38:31.423343   79927 logs.go:276] 0 containers: []
	W0806 08:38:31.423353   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:38:31.423359   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:38:31.423423   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:38:31.463073   79927 cri.go:89] found id: ""
	I0806 08:38:31.463099   79927 logs.go:276] 0 containers: []
	W0806 08:38:31.463108   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:38:31.463115   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:38:31.463165   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:38:31.499169   79927 cri.go:89] found id: ""
	I0806 08:38:31.499209   79927 logs.go:276] 0 containers: []
	W0806 08:38:31.499220   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:38:31.499228   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:38:31.499284   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:38:31.535030   79927 cri.go:89] found id: ""
	I0806 08:38:31.535060   79927 logs.go:276] 0 containers: []
	W0806 08:38:31.535071   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:38:31.535079   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:38:31.535145   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:38:31.575872   79927 cri.go:89] found id: ""
	I0806 08:38:31.575894   79927 logs.go:276] 0 containers: []
	W0806 08:38:31.575902   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:38:31.575907   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:38:31.575952   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:38:31.614766   79927 cri.go:89] found id: ""
	I0806 08:38:31.614787   79927 logs.go:276] 0 containers: []
	W0806 08:38:31.614797   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:38:31.614804   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:38:31.614860   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:38:31.655929   79927 cri.go:89] found id: ""
	I0806 08:38:31.655954   79927 logs.go:276] 0 containers: []
	W0806 08:38:31.655964   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:38:31.655992   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:38:31.656009   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:38:31.728563   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:38:31.728611   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:38:31.745445   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:38:31.745479   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:38:31.827965   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:38:31.827998   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:38:31.828014   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:38:31.935051   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:38:31.935093   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:38:30.788021   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:32.788768   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:31.179830   79219 api_server.go:253] Checking apiserver healthz at https://192.168.39.190:8443/healthz ...
	I0806 08:38:31.184262   79219 api_server.go:279] https://192.168.39.190:8443/healthz returned 200:
	ok
	I0806 08:38:31.185245   79219 api_server.go:141] control plane version: v1.30.3
	I0806 08:38:31.185267   79219 api_server.go:131] duration metric: took 3.997664057s to wait for apiserver health ...
	I0806 08:38:31.185274   79219 system_pods.go:43] waiting for kube-system pods to appear ...
	I0806 08:38:31.185297   79219 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:38:31.185339   79219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:38:31.222960   79219 cri.go:89] found id: "f7edb6bece3347d88a263663a1d6549165dafdcb67ad723494b937766cba6da6"
	I0806 08:38:31.222986   79219 cri.go:89] found id: ""
	I0806 08:38:31.222993   79219 logs.go:276] 1 containers: [f7edb6bece3347d88a263663a1d6549165dafdcb67ad723494b937766cba6da6]
	I0806 08:38:31.223042   79219 ssh_runner.go:195] Run: which crictl
	I0806 08:38:31.227448   79219 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:38:31.227514   79219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:38:31.265326   79219 cri.go:89] found id: "df2efdb62a4af10f89a1b37599d4258ac3b72660353795f8c2141f3ef70c1f65"
	I0806 08:38:31.265349   79219 cri.go:89] found id: ""
	I0806 08:38:31.265356   79219 logs.go:276] 1 containers: [df2efdb62a4af10f89a1b37599d4258ac3b72660353795f8c2141f3ef70c1f65]
	I0806 08:38:31.265402   79219 ssh_runner.go:195] Run: which crictl
	I0806 08:38:31.269672   79219 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:38:31.269732   79219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:38:31.310202   79219 cri.go:89] found id: "9e1c91bc3a920b6f7db4477b8219f13fa092c68ff53abe2390325a68dad253e5"
	I0806 08:38:31.310228   79219 cri.go:89] found id: ""
	I0806 08:38:31.310237   79219 logs.go:276] 1 containers: [9e1c91bc3a920b6f7db4477b8219f13fa092c68ff53abe2390325a68dad253e5]
	I0806 08:38:31.310283   79219 ssh_runner.go:195] Run: which crictl
	I0806 08:38:31.314486   79219 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:38:31.314540   79219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:38:31.352612   79219 cri.go:89] found id: "a25c5660a3a2b7226bf308b44307a3912efa7a9a73e79ec63f83133b80a44683"
	I0806 08:38:31.352637   79219 cri.go:89] found id: ""
	I0806 08:38:31.352647   79219 logs.go:276] 1 containers: [a25c5660a3a2b7226bf308b44307a3912efa7a9a73e79ec63f83133b80a44683]
	I0806 08:38:31.352707   79219 ssh_runner.go:195] Run: which crictl
	I0806 08:38:31.358230   79219 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:38:31.358300   79219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:38:31.402435   79219 cri.go:89] found id: "612fc6a8f2c1208097a6d6287c5fecb20026608a278af5d0d4efc9662e065eb9"
	I0806 08:38:31.402462   79219 cri.go:89] found id: ""
	I0806 08:38:31.402472   79219 logs.go:276] 1 containers: [612fc6a8f2c1208097a6d6287c5fecb20026608a278af5d0d4efc9662e065eb9]
	I0806 08:38:31.402530   79219 ssh_runner.go:195] Run: which crictl
	I0806 08:38:31.407723   79219 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:38:31.407793   79219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:38:31.455484   79219 cri.go:89] found id: "d8651a7ed6615e826b950eab003841c2ec872b48160f207e77dd32a684125a23"
	I0806 08:38:31.455511   79219 cri.go:89] found id: ""
	I0806 08:38:31.455520   79219 logs.go:276] 1 containers: [d8651a7ed6615e826b950eab003841c2ec872b48160f207e77dd32a684125a23]
	I0806 08:38:31.455582   79219 ssh_runner.go:195] Run: which crictl
	I0806 08:38:31.461155   79219 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:38:31.461231   79219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:38:31.501317   79219 cri.go:89] found id: ""
	I0806 08:38:31.501342   79219 logs.go:276] 0 containers: []
	W0806 08:38:31.501352   79219 logs.go:278] No container was found matching "kindnet"
	I0806 08:38:31.501359   79219 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0806 08:38:31.501418   79219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0806 08:38:31.552712   79219 cri.go:89] found id: "367eb6487187a37acafb1fa74364a2d9ec5d9241888ed2e871fb14f2af161975"
	I0806 08:38:31.552742   79219 cri.go:89] found id: "78abe2efa92d2d6c9171ed65bd4d941693be3207554f9318a7cefb91544eeb20"
	I0806 08:38:31.552748   79219 cri.go:89] found id: ""
	I0806 08:38:31.552758   79219 logs.go:276] 2 containers: [367eb6487187a37acafb1fa74364a2d9ec5d9241888ed2e871fb14f2af161975 78abe2efa92d2d6c9171ed65bd4d941693be3207554f9318a7cefb91544eeb20]
	I0806 08:38:31.552820   79219 ssh_runner.go:195] Run: which crictl
	I0806 08:38:31.561505   79219 ssh_runner.go:195] Run: which crictl
	I0806 08:38:31.566613   79219 logs.go:123] Gathering logs for storage-provisioner [367eb6487187a37acafb1fa74364a2d9ec5d9241888ed2e871fb14f2af161975] ...
	I0806 08:38:31.566640   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 367eb6487187a37acafb1fa74364a2d9ec5d9241888ed2e871fb14f2af161975"
	I0806 08:38:31.613033   79219 logs.go:123] Gathering logs for storage-provisioner [78abe2efa92d2d6c9171ed65bd4d941693be3207554f9318a7cefb91544eeb20] ...
	I0806 08:38:31.613073   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 78abe2efa92d2d6c9171ed65bd4d941693be3207554f9318a7cefb91544eeb20"
	I0806 08:38:31.655353   79219 logs.go:123] Gathering logs for kube-scheduler [a25c5660a3a2b7226bf308b44307a3912efa7a9a73e79ec63f83133b80a44683] ...
	I0806 08:38:31.655389   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a25c5660a3a2b7226bf308b44307a3912efa7a9a73e79ec63f83133b80a44683"
	I0806 08:38:31.703934   79219 logs.go:123] Gathering logs for kube-controller-manager [d8651a7ed6615e826b950eab003841c2ec872b48160f207e77dd32a684125a23] ...
	I0806 08:38:31.703960   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8651a7ed6615e826b950eab003841c2ec872b48160f207e77dd32a684125a23"
	I0806 08:38:31.768880   79219 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:38:31.768916   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 08:38:31.896045   79219 logs.go:123] Gathering logs for kube-apiserver [f7edb6bece3347d88a263663a1d6549165dafdcb67ad723494b937766cba6da6] ...
	I0806 08:38:31.896079   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f7edb6bece3347d88a263663a1d6549165dafdcb67ad723494b937766cba6da6"
	I0806 08:38:31.966646   79219 logs.go:123] Gathering logs for etcd [df2efdb62a4af10f89a1b37599d4258ac3b72660353795f8c2141f3ef70c1f65] ...
	I0806 08:38:31.966682   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 df2efdb62a4af10f89a1b37599d4258ac3b72660353795f8c2141f3ef70c1f65"
	I0806 08:38:32.029471   79219 logs.go:123] Gathering logs for coredns [9e1c91bc3a920b6f7db4477b8219f13fa092c68ff53abe2390325a68dad253e5] ...
	I0806 08:38:32.029509   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e1c91bc3a920b6f7db4477b8219f13fa092c68ff53abe2390325a68dad253e5"
	I0806 08:38:32.071406   79219 logs.go:123] Gathering logs for kube-proxy [612fc6a8f2c1208097a6d6287c5fecb20026608a278af5d0d4efc9662e065eb9] ...
	I0806 08:38:32.071435   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 612fc6a8f2c1208097a6d6287c5fecb20026608a278af5d0d4efc9662e065eb9"
	I0806 08:38:32.122333   79219 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:38:32.122368   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:38:32.478494   79219 logs.go:123] Gathering logs for kubelet ...
	I0806 08:38:32.478540   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:38:32.530649   79219 logs.go:123] Gathering logs for dmesg ...
	I0806 08:38:32.530696   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:38:32.545866   79219 logs.go:123] Gathering logs for container status ...
	I0806 08:38:32.545895   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:38:35.101370   79219 system_pods.go:59] 8 kube-system pods found
	I0806 08:38:35.101397   79219 system_pods.go:61] "coredns-7db6d8ff4d-4xxwm" [eeb47dea-32e9-458f-9ad9-2af6d4e68cc0] Running
	I0806 08:38:35.101402   79219 system_pods.go:61] "etcd-embed-certs-324808" [1e0a82cd-6704-4da3-8992-15c68a7aaf70] Running
	I0806 08:38:35.101405   79219 system_pods.go:61] "kube-apiserver-embed-certs-324808" [a33c54d9-1fc5-4243-8fe9-d535c40908c3] Running
	I0806 08:38:35.101409   79219 system_pods.go:61] "kube-controller-manager-embed-certs-324808" [b1c519b8-9c96-4c87-90f3-cc277cfe7763] Running
	I0806 08:38:35.101412   79219 system_pods.go:61] "kube-proxy-8lbzv" [0c45e6c0-9b7f-4c48-bff5-38d4a5949bec] Running
	I0806 08:38:35.101415   79219 system_pods.go:61] "kube-scheduler-embed-certs-324808" [78c2827f-341a-4441-bf7e-eff947fc3ea8] Running
	I0806 08:38:35.101421   79219 system_pods.go:61] "metrics-server-569cc877fc-8xb9k" [7a5fb326-11a7-48fa-bcde-a22026a1dccb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0806 08:38:35.101424   79219 system_pods.go:61] "storage-provisioner" [15c7d89a-e994-4cca-931c-f646157a15e8] Running
	I0806 08:38:35.101431   79219 system_pods.go:74] duration metric: took 3.916151772s to wait for pod list to return data ...
	I0806 08:38:35.101438   79219 default_sa.go:34] waiting for default service account to be created ...
	I0806 08:38:35.103518   79219 default_sa.go:45] found service account: "default"
	I0806 08:38:35.103536   79219 default_sa.go:55] duration metric: took 2.092582ms for default service account to be created ...
	I0806 08:38:35.103546   79219 system_pods.go:116] waiting for k8s-apps to be running ...
	I0806 08:38:35.108383   79219 system_pods.go:86] 8 kube-system pods found
	I0806 08:38:35.108409   79219 system_pods.go:89] "coredns-7db6d8ff4d-4xxwm" [eeb47dea-32e9-458f-9ad9-2af6d4e68cc0] Running
	I0806 08:38:35.108417   79219 system_pods.go:89] "etcd-embed-certs-324808" [1e0a82cd-6704-4da3-8992-15c68a7aaf70] Running
	I0806 08:38:35.108423   79219 system_pods.go:89] "kube-apiserver-embed-certs-324808" [a33c54d9-1fc5-4243-8fe9-d535c40908c3] Running
	I0806 08:38:35.108427   79219 system_pods.go:89] "kube-controller-manager-embed-certs-324808" [b1c519b8-9c96-4c87-90f3-cc277cfe7763] Running
	I0806 08:38:35.108431   79219 system_pods.go:89] "kube-proxy-8lbzv" [0c45e6c0-9b7f-4c48-bff5-38d4a5949bec] Running
	I0806 08:38:35.108436   79219 system_pods.go:89] "kube-scheduler-embed-certs-324808" [78c2827f-341a-4441-bf7e-eff947fc3ea8] Running
	I0806 08:38:35.108442   79219 system_pods.go:89] "metrics-server-569cc877fc-8xb9k" [7a5fb326-11a7-48fa-bcde-a22026a1dccb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0806 08:38:35.108447   79219 system_pods.go:89] "storage-provisioner" [15c7d89a-e994-4cca-931c-f646157a15e8] Running
	I0806 08:38:35.108453   79219 system_pods.go:126] duration metric: took 4.901954ms to wait for k8s-apps to be running ...
	I0806 08:38:35.108463   79219 system_svc.go:44] waiting for kubelet service to be running ....
	I0806 08:38:35.108505   79219 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0806 08:38:35.126018   79219 system_svc.go:56] duration metric: took 17.549117ms WaitForService to wait for kubelet
	I0806 08:38:35.126045   79219 kubeadm.go:582] duration metric: took 4m23.224415251s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0806 08:38:35.126065   79219 node_conditions.go:102] verifying NodePressure condition ...
	I0806 08:38:35.129214   79219 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0806 08:38:35.129231   79219 node_conditions.go:123] node cpu capacity is 2
	I0806 08:38:35.129241   79219 node_conditions.go:105] duration metric: took 3.172893ms to run NodePressure ...
	I0806 08:38:35.129252   79219 start.go:241] waiting for startup goroutines ...
	I0806 08:38:35.129259   79219 start.go:246] waiting for cluster config update ...
	I0806 08:38:35.129273   79219 start.go:255] writing updated cluster config ...
	I0806 08:38:35.129540   79219 ssh_runner.go:195] Run: rm -f paused
	I0806 08:38:35.178651   79219 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0806 08:38:35.180921   79219 out.go:177] * Done! kubectl is now configured to use "embed-certs-324808" cluster and "default" namespace by default
	I0806 08:38:31.968170   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:34.468911   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:34.478890   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:38:34.493573   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:38:34.493650   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:38:34.529054   79927 cri.go:89] found id: ""
	I0806 08:38:34.529083   79927 logs.go:276] 0 containers: []
	W0806 08:38:34.529094   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:38:34.529101   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:38:34.529171   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:38:34.570214   79927 cri.go:89] found id: ""
	I0806 08:38:34.570245   79927 logs.go:276] 0 containers: []
	W0806 08:38:34.570256   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:38:34.570264   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:38:34.570325   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:38:34.607198   79927 cri.go:89] found id: ""
	I0806 08:38:34.607232   79927 logs.go:276] 0 containers: []
	W0806 08:38:34.607243   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:38:34.607250   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:38:34.607296   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:38:34.646238   79927 cri.go:89] found id: ""
	I0806 08:38:34.646266   79927 logs.go:276] 0 containers: []
	W0806 08:38:34.646278   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:38:34.646287   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:38:34.646354   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:38:34.682137   79927 cri.go:89] found id: ""
	I0806 08:38:34.682161   79927 logs.go:276] 0 containers: []
	W0806 08:38:34.682169   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:38:34.682175   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:38:34.682227   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:38:34.721211   79927 cri.go:89] found id: ""
	I0806 08:38:34.721238   79927 logs.go:276] 0 containers: []
	W0806 08:38:34.721246   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:38:34.721252   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:38:34.721306   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:38:34.759693   79927 cri.go:89] found id: ""
	I0806 08:38:34.759728   79927 logs.go:276] 0 containers: []
	W0806 08:38:34.759737   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:38:34.759743   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:38:34.759793   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:38:34.799224   79927 cri.go:89] found id: ""
	I0806 08:38:34.799246   79927 logs.go:276] 0 containers: []
	W0806 08:38:34.799254   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:38:34.799262   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:38:34.799273   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:38:34.852682   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:38:34.852727   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:38:34.866456   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:38:34.866494   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:38:34.933029   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:38:34.933055   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:38:34.933076   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:38:35.017215   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:38:35.017253   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:38:37.567662   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:38:37.582220   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:38:37.582284   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:38:37.622112   79927 cri.go:89] found id: ""
	I0806 08:38:37.622145   79927 logs.go:276] 0 containers: []
	W0806 08:38:37.622153   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:38:37.622164   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:38:37.622228   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:38:37.657767   79927 cri.go:89] found id: ""
	I0806 08:38:37.657791   79927 logs.go:276] 0 containers: []
	W0806 08:38:37.657798   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:38:37.657804   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:38:37.657849   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:38:37.691442   79927 cri.go:89] found id: ""
	I0806 08:38:37.691471   79927 logs.go:276] 0 containers: []
	W0806 08:38:37.691480   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:38:37.691485   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:38:37.691539   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:38:37.725810   79927 cri.go:89] found id: ""
	I0806 08:38:37.725837   79927 logs.go:276] 0 containers: []
	W0806 08:38:37.725859   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:38:37.725866   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:38:37.725922   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:38:37.764708   79927 cri.go:89] found id: ""
	I0806 08:38:37.764737   79927 logs.go:276] 0 containers: []
	W0806 08:38:37.764748   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:38:37.764755   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:38:37.764818   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:38:37.802915   79927 cri.go:89] found id: ""
	I0806 08:38:37.802939   79927 logs.go:276] 0 containers: []
	W0806 08:38:37.802947   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:38:37.802953   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:38:37.803008   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:38:37.838492   79927 cri.go:89] found id: ""
	I0806 08:38:37.838525   79927 logs.go:276] 0 containers: []
	W0806 08:38:37.838536   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:38:37.838544   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:38:37.838607   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:38:37.873937   79927 cri.go:89] found id: ""
	I0806 08:38:37.873969   79927 logs.go:276] 0 containers: []
	W0806 08:38:37.873979   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:38:37.873990   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:38:37.874005   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:38:37.912169   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:38:37.912193   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:38:37.968269   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:38:37.968305   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:38:37.983414   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:38:37.983440   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:38:38.055031   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:38:38.055049   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:38:38.055062   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:38:35.289081   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:37.789220   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:36.968687   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:39.468051   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:41.468488   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:40.635449   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:38:40.649962   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:38:40.650045   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:38:40.688975   79927 cri.go:89] found id: ""
	I0806 08:38:40.689006   79927 logs.go:276] 0 containers: []
	W0806 08:38:40.689017   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:38:40.689029   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:38:40.689093   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:38:40.726740   79927 cri.go:89] found id: ""
	I0806 08:38:40.726770   79927 logs.go:276] 0 containers: []
	W0806 08:38:40.726780   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:38:40.726788   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:38:40.726886   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:38:40.767402   79927 cri.go:89] found id: ""
	I0806 08:38:40.767432   79927 logs.go:276] 0 containers: []
	W0806 08:38:40.767445   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:38:40.767453   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:38:40.767510   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:38:40.809994   79927 cri.go:89] found id: ""
	I0806 08:38:40.810023   79927 logs.go:276] 0 containers: []
	W0806 08:38:40.810033   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:38:40.810042   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:38:40.810102   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:38:40.845624   79927 cri.go:89] found id: ""
	I0806 08:38:40.845653   79927 logs.go:276] 0 containers: []
	W0806 08:38:40.845662   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:38:40.845668   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:38:40.845715   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:38:40.883830   79927 cri.go:89] found id: ""
	I0806 08:38:40.883856   79927 logs.go:276] 0 containers: []
	W0806 08:38:40.883871   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:38:40.883879   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:38:40.883937   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:38:40.919930   79927 cri.go:89] found id: ""
	I0806 08:38:40.919958   79927 logs.go:276] 0 containers: []
	W0806 08:38:40.919966   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:38:40.919971   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:38:40.920018   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:38:40.954563   79927 cri.go:89] found id: ""
	I0806 08:38:40.954594   79927 logs.go:276] 0 containers: []
	W0806 08:38:40.954605   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:38:40.954616   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:38:40.954629   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:38:41.008780   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:38:41.008813   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:38:41.022292   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:38:41.022321   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:38:41.090241   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:38:41.090273   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:38:41.090294   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:38:41.171739   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:38:41.171779   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:38:43.712410   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:38:43.726676   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:38:43.726747   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:38:40.288192   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:42.289039   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:42.789083   79614 pod_ready.go:81] duration metric: took 4m0.00771712s for pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace to be "Ready" ...
	E0806 08:38:42.789107   79614 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0806 08:38:42.789117   79614 pod_ready.go:38] duration metric: took 4m5.553596655s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
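	(Annotation: the 4m0s wait that just expired is minikube polling the pod's Ready condition. A rough manual equivalent is sketched below; the pod name is copied from the log and will differ per run, and kubectl access to the cluster is assumed.)
	# Read the Ready condition of the metrics-server pod the log was waiting on.
	kubectl -n kube-system get pod metrics-server-569cc877fc-hnxd4 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}{"\n"}'
	# Or block until it becomes Ready, with the same 4m budget:
	kubectl -n kube-system wait --for=condition=Ready \
	  pod/metrics-server-569cc877fc-hnxd4 --timeout=4m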
	I0806 08:38:42.789135   79614 api_server.go:52] waiting for apiserver process to appear ...
	I0806 08:38:42.789163   79614 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:38:42.789215   79614 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:38:42.837092   79614 cri.go:89] found id: "49843be03d16addb301ffd26e517770ff29b00f049efb524a2139b8b2153325c"
	I0806 08:38:42.837119   79614 cri.go:89] found id: ""
	I0806 08:38:42.837128   79614 logs.go:276] 1 containers: [49843be03d16addb301ffd26e517770ff29b00f049efb524a2139b8b2153325c]
	I0806 08:38:42.837188   79614 ssh_runner.go:195] Run: which crictl
	I0806 08:38:42.842270   79614 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:38:42.842334   79614 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:38:42.878018   79614 cri.go:89] found id: "b5fa6fe8f1db6b59f47ea637515efb2864e0408ce1df0e6851b3bda0e240f5d3"
	I0806 08:38:42.878044   79614 cri.go:89] found id: ""
	I0806 08:38:42.878053   79614 logs.go:276] 1 containers: [b5fa6fe8f1db6b59f47ea637515efb2864e0408ce1df0e6851b3bda0e240f5d3]
	I0806 08:38:42.878115   79614 ssh_runner.go:195] Run: which crictl
	I0806 08:38:42.882437   79614 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:38:42.882509   79614 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:38:42.925635   79614 cri.go:89] found id: "59f63eeae644f7c8944cdb7b1d05d9e49fe84eb28e9ccbad2d2dd79846d7eb52"
	I0806 08:38:42.925655   79614 cri.go:89] found id: ""
	I0806 08:38:42.925662   79614 logs.go:276] 1 containers: [59f63eeae644f7c8944cdb7b1d05d9e49fe84eb28e9ccbad2d2dd79846d7eb52]
	I0806 08:38:42.925704   79614 ssh_runner.go:195] Run: which crictl
	I0806 08:38:42.930525   79614 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:38:42.930583   79614 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:38:42.972716   79614 cri.go:89] found id: "e596abcad3dc13d27548701f515beebba051e04a9e20f8ce1d018731d880ff1c"
	I0806 08:38:42.972736   79614 cri.go:89] found id: ""
	I0806 08:38:42.972744   79614 logs.go:276] 1 containers: [e596abcad3dc13d27548701f515beebba051e04a9e20f8ce1d018731d880ff1c]
	I0806 08:38:42.972791   79614 ssh_runner.go:195] Run: which crictl
	I0806 08:38:42.977840   79614 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:38:42.977894   79614 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:38:43.017316   79614 cri.go:89] found id: "75990046864b4a4db9f8c02f44de4eefbb0cf45584880b053bce4ca0b677ce4e"
	I0806 08:38:43.017339   79614 cri.go:89] found id: ""
	I0806 08:38:43.017346   79614 logs.go:276] 1 containers: [75990046864b4a4db9f8c02f44de4eefbb0cf45584880b053bce4ca0b677ce4e]
	I0806 08:38:43.017396   79614 ssh_runner.go:195] Run: which crictl
	I0806 08:38:43.022341   79614 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:38:43.022416   79614 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:38:43.067113   79614 cri.go:89] found id: "05bbffa4c989e42bd020ebe6b943459ec1d7b31a6c5e5ba9a37056da15542658"
	I0806 08:38:43.067145   79614 cri.go:89] found id: ""
	I0806 08:38:43.067155   79614 logs.go:276] 1 containers: [05bbffa4c989e42bd020ebe6b943459ec1d7b31a6c5e5ba9a37056da15542658]
	I0806 08:38:43.067211   79614 ssh_runner.go:195] Run: which crictl
	I0806 08:38:43.071674   79614 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:38:43.071748   79614 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:38:43.114123   79614 cri.go:89] found id: ""
	I0806 08:38:43.114146   79614 logs.go:276] 0 containers: []
	W0806 08:38:43.114154   79614 logs.go:278] No container was found matching "kindnet"
	I0806 08:38:43.114159   79614 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0806 08:38:43.114211   79614 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0806 08:38:43.155927   79614 cri.go:89] found id: "e668e4a5fa59e279bc227b6110983cc4660b639d5b1e1291ae60d361f682d82e"
	I0806 08:38:43.155955   79614 cri.go:89] found id: "abd677ac30faba3fef9e9d562601ad0012d7f1b48852e08b8f2684efda6b0f2e"
	I0806 08:38:43.155961   79614 cri.go:89] found id: ""
	I0806 08:38:43.155969   79614 logs.go:276] 2 containers: [e668e4a5fa59e279bc227b6110983cc4660b639d5b1e1291ae60d361f682d82e abd677ac30faba3fef9e9d562601ad0012d7f1b48852e08b8f2684efda6b0f2e]
	I0806 08:38:43.156017   79614 ssh_runner.go:195] Run: which crictl
	I0806 08:38:43.165735   79614 ssh_runner.go:195] Run: which crictl
	I0806 08:38:43.171834   79614 logs.go:123] Gathering logs for kubelet ...
	I0806 08:38:43.171861   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:38:43.232588   79614 logs.go:123] Gathering logs for dmesg ...
	I0806 08:38:43.232628   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:38:43.248033   79614 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:38:43.248080   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 08:38:43.383729   79614 logs.go:123] Gathering logs for etcd [b5fa6fe8f1db6b59f47ea637515efb2864e0408ce1df0e6851b3bda0e240f5d3] ...
	I0806 08:38:43.383760   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b5fa6fe8f1db6b59f47ea637515efb2864e0408ce1df0e6851b3bda0e240f5d3"
	I0806 08:38:43.428082   79614 logs.go:123] Gathering logs for coredns [59f63eeae644f7c8944cdb7b1d05d9e49fe84eb28e9ccbad2d2dd79846d7eb52] ...
	I0806 08:38:43.428116   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 59f63eeae644f7c8944cdb7b1d05d9e49fe84eb28e9ccbad2d2dd79846d7eb52"
	I0806 08:38:43.467054   79614 logs.go:123] Gathering logs for storage-provisioner [e668e4a5fa59e279bc227b6110983cc4660b639d5b1e1291ae60d361f682d82e] ...
	I0806 08:38:43.467081   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e668e4a5fa59e279bc227b6110983cc4660b639d5b1e1291ae60d361f682d82e"
	I0806 08:38:43.502900   79614 logs.go:123] Gathering logs for storage-provisioner [abd677ac30faba3fef9e9d562601ad0012d7f1b48852e08b8f2684efda6b0f2e] ...
	I0806 08:38:43.502928   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 abd677ac30faba3fef9e9d562601ad0012d7f1b48852e08b8f2684efda6b0f2e"
	I0806 08:38:43.539338   79614 logs.go:123] Gathering logs for kube-apiserver [49843be03d16addb301ffd26e517770ff29b00f049efb524a2139b8b2153325c] ...
	I0806 08:38:43.539369   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 49843be03d16addb301ffd26e517770ff29b00f049efb524a2139b8b2153325c"
	I0806 08:38:43.585813   79614 logs.go:123] Gathering logs for kube-scheduler [e596abcad3dc13d27548701f515beebba051e04a9e20f8ce1d018731d880ff1c] ...
	I0806 08:38:43.585855   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e596abcad3dc13d27548701f515beebba051e04a9e20f8ce1d018731d880ff1c"
	I0806 08:38:43.627141   79614 logs.go:123] Gathering logs for kube-proxy [75990046864b4a4db9f8c02f44de4eefbb0cf45584880b053bce4ca0b677ce4e] ...
	I0806 08:38:43.627173   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 75990046864b4a4db9f8c02f44de4eefbb0cf45584880b053bce4ca0b677ce4e"
	I0806 08:38:43.663754   79614 logs.go:123] Gathering logs for kube-controller-manager [05bbffa4c989e42bd020ebe6b943459ec1d7b31a6c5e5ba9a37056da15542658] ...
	I0806 08:38:43.663783   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 05bbffa4c989e42bd020ebe6b943459ec1d7b31a6c5e5ba9a37056da15542658"
	I0806 08:38:43.717594   79614 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:38:43.717631   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:38:43.469134   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:45.967796   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:43.765343   79927 cri.go:89] found id: ""
	I0806 08:38:43.765379   79927 logs.go:276] 0 containers: []
	W0806 08:38:43.765390   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:38:43.765398   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:38:43.765457   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:38:43.803813   79927 cri.go:89] found id: ""
	I0806 08:38:43.803846   79927 logs.go:276] 0 containers: []
	W0806 08:38:43.803859   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:38:43.803868   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:38:43.803933   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:38:43.841307   79927 cri.go:89] found id: ""
	I0806 08:38:43.841339   79927 logs.go:276] 0 containers: []
	W0806 08:38:43.841349   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:38:43.841356   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:38:43.841418   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:38:43.876488   79927 cri.go:89] found id: ""
	I0806 08:38:43.876521   79927 logs.go:276] 0 containers: []
	W0806 08:38:43.876534   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:38:43.876544   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:38:43.876606   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:38:43.910972   79927 cri.go:89] found id: ""
	I0806 08:38:43.911000   79927 logs.go:276] 0 containers: []
	W0806 08:38:43.911010   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:38:43.911017   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:38:43.911077   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:38:43.945541   79927 cri.go:89] found id: ""
	I0806 08:38:43.945573   79927 logs.go:276] 0 containers: []
	W0806 08:38:43.945584   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:38:43.945592   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:38:43.945655   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:38:43.981366   79927 cri.go:89] found id: ""
	I0806 08:38:43.981399   79927 logs.go:276] 0 containers: []
	W0806 08:38:43.981408   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:38:43.981413   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:38:43.981463   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:38:44.015349   79927 cri.go:89] found id: ""
	I0806 08:38:44.015375   79927 logs.go:276] 0 containers: []
	W0806 08:38:44.015384   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:38:44.015394   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:38:44.015408   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:38:44.068089   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:38:44.068129   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:38:44.082496   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:38:44.082524   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:38:44.157765   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:38:44.157788   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:38:44.157805   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:38:44.242842   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:38:44.242885   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:38:46.789280   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:38:46.803490   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:38:46.803562   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:38:46.844164   79927 cri.go:89] found id: ""
	I0806 08:38:46.844192   79927 logs.go:276] 0 containers: []
	W0806 08:38:46.844200   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:38:46.844206   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:38:46.844247   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:38:46.880968   79927 cri.go:89] found id: ""
	I0806 08:38:46.881003   79927 logs.go:276] 0 containers: []
	W0806 08:38:46.881015   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:38:46.881022   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:38:46.881082   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:38:46.919291   79927 cri.go:89] found id: ""
	I0806 08:38:46.919311   79927 logs.go:276] 0 containers: []
	W0806 08:38:46.919319   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:38:46.919324   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:38:46.919369   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:38:46.960591   79927 cri.go:89] found id: ""
	I0806 08:38:46.960616   79927 logs.go:276] 0 containers: []
	W0806 08:38:46.960626   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:38:46.960632   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:38:46.960681   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:38:46.995913   79927 cri.go:89] found id: ""
	I0806 08:38:46.995943   79927 logs.go:276] 0 containers: []
	W0806 08:38:46.995954   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:38:46.995961   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:38:46.996045   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:38:47.035185   79927 cri.go:89] found id: ""
	I0806 08:38:47.035213   79927 logs.go:276] 0 containers: []
	W0806 08:38:47.035225   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:38:47.035236   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:38:47.035292   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:38:47.079605   79927 cri.go:89] found id: ""
	I0806 08:38:47.079634   79927 logs.go:276] 0 containers: []
	W0806 08:38:47.079646   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:38:47.079653   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:38:47.079701   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:38:47.118819   79927 cri.go:89] found id: ""
	I0806 08:38:47.118849   79927 logs.go:276] 0 containers: []
	W0806 08:38:47.118861   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:38:47.118873   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:38:47.118888   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:38:47.181357   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:38:47.181394   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:38:47.197676   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:38:47.197707   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:38:47.270304   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:38:47.270331   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:38:47.270348   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:38:47.354145   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:38:47.354176   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:38:44.291978   79614 logs.go:123] Gathering logs for container status ...
	I0806 08:38:44.292026   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:38:46.843357   79614 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:38:46.860793   79614 api_server.go:72] duration metric: took 4m14.882891742s to wait for apiserver process to appear ...
	I0806 08:38:46.860820   79614 api_server.go:88] waiting for apiserver healthz status ...
	I0806 08:38:46.860858   79614 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:38:46.860923   79614 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:38:46.909044   79614 cri.go:89] found id: "49843be03d16addb301ffd26e517770ff29b00f049efb524a2139b8b2153325c"
	I0806 08:38:46.909068   79614 cri.go:89] found id: ""
	I0806 08:38:46.909076   79614 logs.go:276] 1 containers: [49843be03d16addb301ffd26e517770ff29b00f049efb524a2139b8b2153325c]
	I0806 08:38:46.909134   79614 ssh_runner.go:195] Run: which crictl
	I0806 08:38:46.913443   79614 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:38:46.913522   79614 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:38:46.955246   79614 cri.go:89] found id: "b5fa6fe8f1db6b59f47ea637515efb2864e0408ce1df0e6851b3bda0e240f5d3"
	I0806 08:38:46.955273   79614 cri.go:89] found id: ""
	I0806 08:38:46.955281   79614 logs.go:276] 1 containers: [b5fa6fe8f1db6b59f47ea637515efb2864e0408ce1df0e6851b3bda0e240f5d3]
	I0806 08:38:46.955347   79614 ssh_runner.go:195] Run: which crictl
	I0806 08:38:46.959914   79614 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:38:46.959994   79614 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:38:47.001639   79614 cri.go:89] found id: "59f63eeae644f7c8944cdb7b1d05d9e49fe84eb28e9ccbad2d2dd79846d7eb52"
	I0806 08:38:47.001665   79614 cri.go:89] found id: ""
	I0806 08:38:47.001673   79614 logs.go:276] 1 containers: [59f63eeae644f7c8944cdb7b1d05d9e49fe84eb28e9ccbad2d2dd79846d7eb52]
	I0806 08:38:47.001717   79614 ssh_runner.go:195] Run: which crictl
	I0806 08:38:47.006260   79614 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:38:47.006344   79614 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:38:47.049226   79614 cri.go:89] found id: "e596abcad3dc13d27548701f515beebba051e04a9e20f8ce1d018731d880ff1c"
	I0806 08:38:47.049252   79614 cri.go:89] found id: ""
	I0806 08:38:47.049261   79614 logs.go:276] 1 containers: [e596abcad3dc13d27548701f515beebba051e04a9e20f8ce1d018731d880ff1c]
	I0806 08:38:47.049317   79614 ssh_runner.go:195] Run: which crictl
	I0806 08:38:47.053544   79614 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:38:47.053626   79614 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:38:47.095805   79614 cri.go:89] found id: "75990046864b4a4db9f8c02f44de4eefbb0cf45584880b053bce4ca0b677ce4e"
	I0806 08:38:47.095832   79614 cri.go:89] found id: ""
	I0806 08:38:47.095841   79614 logs.go:276] 1 containers: [75990046864b4a4db9f8c02f44de4eefbb0cf45584880b053bce4ca0b677ce4e]
	I0806 08:38:47.095895   79614 ssh_runner.go:195] Run: which crictl
	I0806 08:38:47.100272   79614 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:38:47.100329   79614 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:38:47.146108   79614 cri.go:89] found id: "05bbffa4c989e42bd020ebe6b943459ec1d7b31a6c5e5ba9a37056da15542658"
	I0806 08:38:47.146132   79614 cri.go:89] found id: ""
	I0806 08:38:47.146142   79614 logs.go:276] 1 containers: [05bbffa4c989e42bd020ebe6b943459ec1d7b31a6c5e5ba9a37056da15542658]
	I0806 08:38:47.146200   79614 ssh_runner.go:195] Run: which crictl
	I0806 08:38:47.150808   79614 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:38:47.150863   79614 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:38:47.194122   79614 cri.go:89] found id: ""
	I0806 08:38:47.194157   79614 logs.go:276] 0 containers: []
	W0806 08:38:47.194169   79614 logs.go:278] No container was found matching "kindnet"
	I0806 08:38:47.194177   79614 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0806 08:38:47.194241   79614 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0806 08:38:47.237872   79614 cri.go:89] found id: "e668e4a5fa59e279bc227b6110983cc4660b639d5b1e1291ae60d361f682d82e"
	I0806 08:38:47.237906   79614 cri.go:89] found id: "abd677ac30faba3fef9e9d562601ad0012d7f1b48852e08b8f2684efda6b0f2e"
	I0806 08:38:47.237915   79614 cri.go:89] found id: ""
	I0806 08:38:47.237925   79614 logs.go:276] 2 containers: [e668e4a5fa59e279bc227b6110983cc4660b639d5b1e1291ae60d361f682d82e abd677ac30faba3fef9e9d562601ad0012d7f1b48852e08b8f2684efda6b0f2e]
	I0806 08:38:47.237988   79614 ssh_runner.go:195] Run: which crictl
	I0806 08:38:47.242528   79614 ssh_runner.go:195] Run: which crictl
	I0806 08:38:47.247317   79614 logs.go:123] Gathering logs for kube-proxy [75990046864b4a4db9f8c02f44de4eefbb0cf45584880b053bce4ca0b677ce4e] ...
	I0806 08:38:47.247341   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 75990046864b4a4db9f8c02f44de4eefbb0cf45584880b053bce4ca0b677ce4e"
	I0806 08:38:47.289612   79614 logs.go:123] Gathering logs for kube-controller-manager [05bbffa4c989e42bd020ebe6b943459ec1d7b31a6c5e5ba9a37056da15542658] ...
	I0806 08:38:47.289644   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 05bbffa4c989e42bd020ebe6b943459ec1d7b31a6c5e5ba9a37056da15542658"
	I0806 08:38:47.345619   79614 logs.go:123] Gathering logs for storage-provisioner [e668e4a5fa59e279bc227b6110983cc4660b639d5b1e1291ae60d361f682d82e] ...
	I0806 08:38:47.345653   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e668e4a5fa59e279bc227b6110983cc4660b639d5b1e1291ae60d361f682d82e"
	I0806 08:38:47.401281   79614 logs.go:123] Gathering logs for storage-provisioner [abd677ac30faba3fef9e9d562601ad0012d7f1b48852e08b8f2684efda6b0f2e] ...
	I0806 08:38:47.401306   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 abd677ac30faba3fef9e9d562601ad0012d7f1b48852e08b8f2684efda6b0f2e"
	I0806 08:38:47.438335   79614 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:38:47.438364   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 08:38:47.537334   79614 logs.go:123] Gathering logs for etcd [b5fa6fe8f1db6b59f47ea637515efb2864e0408ce1df0e6851b3bda0e240f5d3] ...
	I0806 08:38:47.537360   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b5fa6fe8f1db6b59f47ea637515efb2864e0408ce1df0e6851b3bda0e240f5d3"
	I0806 08:38:47.592101   79614 logs.go:123] Gathering logs for coredns [59f63eeae644f7c8944cdb7b1d05d9e49fe84eb28e9ccbad2d2dd79846d7eb52] ...
	I0806 08:38:47.592132   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 59f63eeae644f7c8944cdb7b1d05d9e49fe84eb28e9ccbad2d2dd79846d7eb52"
	I0806 08:38:47.626714   79614 logs.go:123] Gathering logs for kube-scheduler [e596abcad3dc13d27548701f515beebba051e04a9e20f8ce1d018731d880ff1c] ...
	I0806 08:38:47.626738   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e596abcad3dc13d27548701f515beebba051e04a9e20f8ce1d018731d880ff1c"
	I0806 08:38:47.663232   79614 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:38:47.663262   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:38:48.133308   79614 logs.go:123] Gathering logs for kubelet ...
	I0806 08:38:48.133344   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:38:48.189669   79614 logs.go:123] Gathering logs for dmesg ...
	I0806 08:38:48.189709   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:38:48.208530   79614 logs.go:123] Gathering logs for kube-apiserver [49843be03d16addb301ffd26e517770ff29b00f049efb524a2139b8b2153325c] ...
	I0806 08:38:48.208557   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 49843be03d16addb301ffd26e517770ff29b00f049efb524a2139b8b2153325c"
	I0806 08:38:48.264499   79614 logs.go:123] Gathering logs for container status ...
	I0806 08:38:48.264535   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:38:49.901481   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:38:49.914862   79927 kubeadm.go:597] duration metric: took 4m3.80396481s to restartPrimaryControlPlane
	W0806 08:38:49.914926   79927 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0806 08:38:49.914952   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0806 08:38:50.394829   79927 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0806 08:38:50.409425   79927 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0806 08:38:50.419349   79927 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0806 08:38:50.429688   79927 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0806 08:38:50.429711   79927 kubeadm.go:157] found existing configuration files:
	
	I0806 08:38:50.429756   79927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0806 08:38:50.440055   79927 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0806 08:38:50.440126   79927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0806 08:38:50.450734   79927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0806 08:38:50.461860   79927 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0806 08:38:50.461943   79927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0806 08:38:50.473324   79927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0806 08:38:50.483381   79927 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0806 08:38:50.483440   79927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0806 08:38:50.493247   79927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0806 08:38:50.502800   79927 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0806 08:38:50.502861   79927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
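	(Annotation: the config check above failed because none of the kubeconfig files exist, so each grep/rm pair is effectively a no-op. The cleanup the log performs amounts to the sketch below; the file paths and the control-plane endpoint are verbatim from the log.)
	# Remove any kubeconfig that does not reference the expected control-plane endpoint.
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  if ! sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/${f}"; then
	    sudo rm -f "/etc/kubernetes/${f}"
	  fi
	done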
	I0806 08:38:50.512897   79927 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0806 08:38:50.585658   79927 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0806 08:38:50.585795   79927 kubeadm.go:310] [preflight] Running pre-flight checks
	I0806 08:38:50.740936   79927 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0806 08:38:50.741079   79927 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0806 08:38:50.741200   79927 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0806 08:38:50.938086   79927 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0806 08:38:48.468228   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:50.968702   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:50.940205   79927 out.go:204]   - Generating certificates and keys ...
	I0806 08:38:50.940315   79927 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0806 08:38:50.940428   79927 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0806 08:38:50.940543   79927 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0806 08:38:50.940621   79927 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0806 08:38:50.940723   79927 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0806 08:38:50.940800   79927 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0806 08:38:50.940881   79927 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0806 08:38:50.941150   79927 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0806 08:38:50.941954   79927 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0806 08:38:50.942765   79927 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0806 08:38:50.942910   79927 kubeadm.go:310] [certs] Using the existing "sa" key
	I0806 08:38:50.942993   79927 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0806 08:38:51.039558   79927 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0806 08:38:51.265243   79927 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0806 08:38:51.348501   79927 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0806 08:38:51.509558   79927 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0806 08:38:51.530725   79927 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0806 08:38:51.534108   79927 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0806 08:38:51.534297   79927 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0806 08:38:51.680149   79927 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0806 08:38:51.682195   79927 out.go:204]   - Booting up control plane ...
	I0806 08:38:51.682331   79927 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0806 08:38:51.693736   79927 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0806 08:38:51.695315   79927 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0806 08:38:51.697107   79927 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0806 08:38:51.700906   79927 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0806 08:38:50.822580   79614 api_server.go:253] Checking apiserver healthz at https://192.168.50.235:8444/healthz ...
	I0806 08:38:50.827121   79614 api_server.go:279] https://192.168.50.235:8444/healthz returned 200:
	ok
	I0806 08:38:50.828327   79614 api_server.go:141] control plane version: v1.30.3
	I0806 08:38:50.828355   79614 api_server.go:131] duration metric: took 3.967524972s to wait for apiserver health ...
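	(Annotation: the healthz probe above can be reproduced with curl; a sketch assuming the same endpoint as the log, with -k because the apiserver presents a self-signed certificate.)
	# Probe the apiserver health endpoint the log checks; a healthy server returns "ok" with HTTP 200.
	curl -sk https://192.168.50.235:8444/healthz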
	I0806 08:38:50.828381   79614 system_pods.go:43] waiting for kube-system pods to appear ...
	I0806 08:38:50.828406   79614 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:38:50.828460   79614 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:38:50.875518   79614 cri.go:89] found id: "49843be03d16addb301ffd26e517770ff29b00f049efb524a2139b8b2153325c"
	I0806 08:38:50.875545   79614 cri.go:89] found id: ""
	I0806 08:38:50.875555   79614 logs.go:276] 1 containers: [49843be03d16addb301ffd26e517770ff29b00f049efb524a2139b8b2153325c]
	I0806 08:38:50.875611   79614 ssh_runner.go:195] Run: which crictl
	I0806 08:38:50.881260   79614 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:38:50.881321   79614 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:38:50.923252   79614 cri.go:89] found id: "b5fa6fe8f1db6b59f47ea637515efb2864e0408ce1df0e6851b3bda0e240f5d3"
	I0806 08:38:50.923281   79614 cri.go:89] found id: ""
	I0806 08:38:50.923292   79614 logs.go:276] 1 containers: [b5fa6fe8f1db6b59f47ea637515efb2864e0408ce1df0e6851b3bda0e240f5d3]
	I0806 08:38:50.923350   79614 ssh_runner.go:195] Run: which crictl
	I0806 08:38:50.928115   79614 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:38:50.928176   79614 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:38:50.971427   79614 cri.go:89] found id: "59f63eeae644f7c8944cdb7b1d05d9e49fe84eb28e9ccbad2d2dd79846d7eb52"
	I0806 08:38:50.971453   79614 cri.go:89] found id: ""
	I0806 08:38:50.971463   79614 logs.go:276] 1 containers: [59f63eeae644f7c8944cdb7b1d05d9e49fe84eb28e9ccbad2d2dd79846d7eb52]
	I0806 08:38:50.971520   79614 ssh_runner.go:195] Run: which crictl
	I0806 08:38:50.975665   79614 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:38:50.975789   79614 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:38:51.014601   79614 cri.go:89] found id: "e596abcad3dc13d27548701f515beebba051e04a9e20f8ce1d018731d880ff1c"
	I0806 08:38:51.014633   79614 cri.go:89] found id: ""
	I0806 08:38:51.014644   79614 logs.go:276] 1 containers: [e596abcad3dc13d27548701f515beebba051e04a9e20f8ce1d018731d880ff1c]
	I0806 08:38:51.014722   79614 ssh_runner.go:195] Run: which crictl
	I0806 08:38:51.019197   79614 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:38:51.019266   79614 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:38:51.057063   79614 cri.go:89] found id: "75990046864b4a4db9f8c02f44de4eefbb0cf45584880b053bce4ca0b677ce4e"
	I0806 08:38:51.057089   79614 cri.go:89] found id: ""
	I0806 08:38:51.057098   79614 logs.go:276] 1 containers: [75990046864b4a4db9f8c02f44de4eefbb0cf45584880b053bce4ca0b677ce4e]
	I0806 08:38:51.057159   79614 ssh_runner.go:195] Run: which crictl
	I0806 08:38:51.061504   79614 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:38:51.061571   79614 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:38:51.106496   79614 cri.go:89] found id: "05bbffa4c989e42bd020ebe6b943459ec1d7b31a6c5e5ba9a37056da15542658"
	I0806 08:38:51.106523   79614 cri.go:89] found id: ""
	I0806 08:38:51.106532   79614 logs.go:276] 1 containers: [05bbffa4c989e42bd020ebe6b943459ec1d7b31a6c5e5ba9a37056da15542658]
	I0806 08:38:51.106590   79614 ssh_runner.go:195] Run: which crictl
	I0806 08:38:51.111040   79614 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:38:51.111109   79614 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:38:51.156042   79614 cri.go:89] found id: ""
	I0806 08:38:51.156072   79614 logs.go:276] 0 containers: []
	W0806 08:38:51.156083   79614 logs.go:278] No container was found matching "kindnet"
	I0806 08:38:51.156090   79614 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0806 08:38:51.156152   79614 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0806 08:38:51.208712   79614 cri.go:89] found id: "e668e4a5fa59e279bc227b6110983cc4660b639d5b1e1291ae60d361f682d82e"
	I0806 08:38:51.208733   79614 cri.go:89] found id: "abd677ac30faba3fef9e9d562601ad0012d7f1b48852e08b8f2684efda6b0f2e"
	I0806 08:38:51.208737   79614 cri.go:89] found id: ""
	I0806 08:38:51.208744   79614 logs.go:276] 2 containers: [e668e4a5fa59e279bc227b6110983cc4660b639d5b1e1291ae60d361f682d82e abd677ac30faba3fef9e9d562601ad0012d7f1b48852e08b8f2684efda6b0f2e]
	I0806 08:38:51.208796   79614 ssh_runner.go:195] Run: which crictl
	I0806 08:38:51.213508   79614 ssh_runner.go:195] Run: which crictl
	I0806 08:38:51.219082   79614 logs.go:123] Gathering logs for etcd [b5fa6fe8f1db6b59f47ea637515efb2864e0408ce1df0e6851b3bda0e240f5d3] ...
	I0806 08:38:51.219104   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b5fa6fe8f1db6b59f47ea637515efb2864e0408ce1df0e6851b3bda0e240f5d3"
	I0806 08:38:51.266352   79614 logs.go:123] Gathering logs for storage-provisioner [e668e4a5fa59e279bc227b6110983cc4660b639d5b1e1291ae60d361f682d82e] ...
	I0806 08:38:51.266380   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e668e4a5fa59e279bc227b6110983cc4660b639d5b1e1291ae60d361f682d82e"
	I0806 08:38:51.315128   79614 logs.go:123] Gathering logs for storage-provisioner [abd677ac30faba3fef9e9d562601ad0012d7f1b48852e08b8f2684efda6b0f2e] ...
	I0806 08:38:51.315168   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 abd677ac30faba3fef9e9d562601ad0012d7f1b48852e08b8f2684efda6b0f2e"
	I0806 08:38:51.357872   79614 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:38:51.357914   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:38:51.768288   79614 logs.go:123] Gathering logs for container status ...
	I0806 08:38:51.768328   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:38:51.825304   79614 logs.go:123] Gathering logs for kubelet ...
	I0806 08:38:51.825341   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:38:51.888532   79614 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:38:51.888567   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 08:38:51.998271   79614 logs.go:123] Gathering logs for coredns [59f63eeae644f7c8944cdb7b1d05d9e49fe84eb28e9ccbad2d2dd79846d7eb52] ...
	I0806 08:38:51.998307   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 59f63eeae644f7c8944cdb7b1d05d9e49fe84eb28e9ccbad2d2dd79846d7eb52"
	I0806 08:38:52.034549   79614 logs.go:123] Gathering logs for kube-scheduler [e596abcad3dc13d27548701f515beebba051e04a9e20f8ce1d018731d880ff1c] ...
	I0806 08:38:52.034574   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e596abcad3dc13d27548701f515beebba051e04a9e20f8ce1d018731d880ff1c"
	I0806 08:38:52.079289   79614 logs.go:123] Gathering logs for kube-proxy [75990046864b4a4db9f8c02f44de4eefbb0cf45584880b053bce4ca0b677ce4e] ...
	I0806 08:38:52.079317   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 75990046864b4a4db9f8c02f44de4eefbb0cf45584880b053bce4ca0b677ce4e"
	I0806 08:38:52.124134   79614 logs.go:123] Gathering logs for kube-controller-manager [05bbffa4c989e42bd020ebe6b943459ec1d7b31a6c5e5ba9a37056da15542658] ...
	I0806 08:38:52.124163   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 05bbffa4c989e42bd020ebe6b943459ec1d7b31a6c5e5ba9a37056da15542658"
	I0806 08:38:52.191877   79614 logs.go:123] Gathering logs for dmesg ...
	I0806 08:38:52.191914   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:38:52.210712   79614 logs.go:123] Gathering logs for kube-apiserver [49843be03d16addb301ffd26e517770ff29b00f049efb524a2139b8b2153325c] ...
	I0806 08:38:52.210740   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 49843be03d16addb301ffd26e517770ff29b00f049efb524a2139b8b2153325c"
	I0806 08:38:54.770330   79614 system_pods.go:59] 8 kube-system pods found
	I0806 08:38:54.770356   79614 system_pods.go:61] "coredns-7db6d8ff4d-gx8tt" [48f83ba8-a238-4baf-94af-88370fefcb02] Running
	I0806 08:38:54.770361   79614 system_pods.go:61] "etcd-default-k8s-diff-port-990751" [bf342ba4-1e11-40bf-80fd-02b402043ecf] Running
	I0806 08:38:54.770365   79614 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-990751" [11ee42c4-c9e9-41cf-ace2-deac0f30153a] Running
	I0806 08:38:54.770369   79614 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-990751" [f17fca30-0afc-4ed8-a8cc-cb64e69723f3] Running
	I0806 08:38:54.770373   79614 system_pods.go:61] "kube-proxy-lpf88" [866c10f0-2c44-4c03-b460-ff2194bd1ec6] Running
	I0806 08:38:54.770377   79614 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-990751" [2e591ea7-07bd-464f-bf0e-bc24bfcca016] Running
	I0806 08:38:54.770383   79614 system_pods.go:61] "metrics-server-569cc877fc-hnxd4" [f369ac70-49b7-448a-9852-89232fb66da9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0806 08:38:54.770388   79614 system_pods.go:61] "storage-provisioner" [0faff3fc-d34b-418e-972b-95a2e36b577a] Running
	I0806 08:38:54.770395   79614 system_pods.go:74] duration metric: took 3.942006529s to wait for pod list to return data ...
	I0806 08:38:54.770401   79614 default_sa.go:34] waiting for default service account to be created ...
	I0806 08:38:54.773935   79614 default_sa.go:45] found service account: "default"
	I0806 08:38:54.773956   79614 default_sa.go:55] duration metric: took 3.549307ms for default service account to be created ...
	I0806 08:38:54.773963   79614 system_pods.go:116] waiting for k8s-apps to be running ...
	I0806 08:38:54.779748   79614 system_pods.go:86] 8 kube-system pods found
	I0806 08:38:54.779773   79614 system_pods.go:89] "coredns-7db6d8ff4d-gx8tt" [48f83ba8-a238-4baf-94af-88370fefcb02] Running
	I0806 08:38:54.779780   79614 system_pods.go:89] "etcd-default-k8s-diff-port-990751" [bf342ba4-1e11-40bf-80fd-02b402043ecf] Running
	I0806 08:38:54.779787   79614 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-990751" [11ee42c4-c9e9-41cf-ace2-deac0f30153a] Running
	I0806 08:38:54.779794   79614 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-990751" [f17fca30-0afc-4ed8-a8cc-cb64e69723f3] Running
	I0806 08:38:54.779801   79614 system_pods.go:89] "kube-proxy-lpf88" [866c10f0-2c44-4c03-b460-ff2194bd1ec6] Running
	I0806 08:38:54.779807   79614 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-990751" [2e591ea7-07bd-464f-bf0e-bc24bfcca016] Running
	I0806 08:38:54.779818   79614 system_pods.go:89] "metrics-server-569cc877fc-hnxd4" [f369ac70-49b7-448a-9852-89232fb66da9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0806 08:38:54.779825   79614 system_pods.go:89] "storage-provisioner" [0faff3fc-d34b-418e-972b-95a2e36b577a] Running
	I0806 08:38:54.779834   79614 system_pods.go:126] duration metric: took 5.864748ms to wait for k8s-apps to be running ...
	I0806 08:38:54.779842   79614 system_svc.go:44] waiting for kubelet service to be running ....
	I0806 08:38:54.779898   79614 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0806 08:38:54.796325   79614 system_svc.go:56] duration metric: took 16.472895ms WaitForService to wait for kubelet
	I0806 08:38:54.796353   79614 kubeadm.go:582] duration metric: took 4m22.818456744s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0806 08:38:54.796397   79614 node_conditions.go:102] verifying NodePressure condition ...
	I0806 08:38:54.800536   79614 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0806 08:38:54.800564   79614 node_conditions.go:123] node cpu capacity is 2
	I0806 08:38:54.800577   79614 node_conditions.go:105] duration metric: took 4.173499ms to run NodePressure ...
	I0806 08:38:54.800591   79614 start.go:241] waiting for startup goroutines ...
	I0806 08:38:54.800600   79614 start.go:246] waiting for cluster config update ...
	I0806 08:38:54.800613   79614 start.go:255] writing updated cluster config ...
	I0806 08:38:54.800936   79614 ssh_runner.go:195] Run: rm -f paused
	I0806 08:38:54.850703   79614 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0806 08:38:54.852771   79614 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-990751" cluster and "default" namespace by default
	I0806 08:38:53.469241   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:55.968335   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:58.469143   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:39:00.470785   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:39:02.967983   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:39:05.468164   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:39:07.469643   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:39:09.971774   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:39:12.469590   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:39:14.968935   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:39:17.468202   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:39:19.968322   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:39:21.968455   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:39:24.467291   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:39:26.475005   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:39:28.968164   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:39:31.467876   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:39:31.701723   79927 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0806 08:39:31.702058   79927 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0806 08:39:31.702242   79927 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0806 08:39:33.468293   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:39:34.961921   78922 pod_ready.go:81] duration metric: took 4m0.000413299s for pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace to be "Ready" ...
	E0806 08:39:34.961944   78922 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace to be "Ready" (will not retry!)
	I0806 08:39:34.961967   78922 pod_ready.go:38] duration metric: took 4m12.510868918s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0806 08:39:34.962010   78922 kubeadm.go:597] duration metric: took 4m19.878821661s to restartPrimaryControlPlane
	W0806 08:39:34.962072   78922 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0806 08:39:34.962105   78922 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0806 08:39:36.702773   79927 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0806 08:39:36.703067   79927 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0806 08:39:46.703330   79927 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0806 08:39:46.703594   79927 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
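	(Annotation: the repeated kubelet-check failures above are kubeadm polling the kubelet healthz endpoint; the exact call is quoted in the log and can be run directly on the node to confirm whether the kubelet is serving.)
	# kubeadm's kubelet health probe, as quoted in the log lines above.
	curl -sSL http://localhost:10248/healthz
	# If it refuses the connection, inspect the kubelet unit the same way the log does:
	sudo journalctl -u kubelet -n 400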
	I0806 08:40:01.383913   78922 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.421778806s)
	I0806 08:40:01.383994   78922 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0806 08:40:01.409726   78922 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0806 08:40:01.432284   78922 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0806 08:40:01.445162   78922 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0806 08:40:01.445187   78922 kubeadm.go:157] found existing configuration files:
	
	I0806 08:40:01.445242   78922 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0806 08:40:01.466287   78922 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0806 08:40:01.466344   78922 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0806 08:40:01.481355   78922 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0806 08:40:01.499933   78922 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0806 08:40:01.500003   78922 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0806 08:40:01.518094   78922 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0806 08:40:01.527345   78922 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0806 08:40:01.527408   78922 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0806 08:40:01.539328   78922 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0806 08:40:01.549752   78922 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0806 08:40:01.549863   78922 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0806 08:40:01.561309   78922 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0806 08:40:01.609660   78922 kubeadm.go:310] W0806 08:40:01.587147    2983 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0806 08:40:01.610476   78922 kubeadm.go:310] W0806 08:40:01.588016    2983 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0806 08:40:01.749335   78922 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0806 08:40:06.704889   79927 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0806 08:40:06.705136   79927 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0806 08:40:10.616781   78922 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0-rc.0
	I0806 08:40:10.616868   78922 kubeadm.go:310] [preflight] Running pre-flight checks
	I0806 08:40:10.616966   78922 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0806 08:40:10.617110   78922 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0806 08:40:10.617234   78922 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0806 08:40:10.617332   78922 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0806 08:40:10.618729   78922 out.go:204]   - Generating certificates and keys ...
	I0806 08:40:10.618805   78922 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0806 08:40:10.618886   78922 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0806 08:40:10.618961   78922 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0806 08:40:10.619025   78922 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0806 08:40:10.619127   78922 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0806 08:40:10.619200   78922 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0806 08:40:10.619289   78922 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0806 08:40:10.619373   78922 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0806 08:40:10.619485   78922 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0806 08:40:10.619596   78922 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0806 08:40:10.619644   78922 kubeadm.go:310] [certs] Using the existing "sa" key
	I0806 08:40:10.619721   78922 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0806 08:40:10.619787   78922 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0806 08:40:10.619871   78922 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0806 08:40:10.619916   78922 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0806 08:40:10.619968   78922 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0806 08:40:10.620028   78922 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0806 08:40:10.620097   78922 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0806 08:40:10.620153   78922 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0806 08:40:10.621538   78922 out.go:204]   - Booting up control plane ...
	I0806 08:40:10.621632   78922 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0806 08:40:10.621739   78922 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0806 08:40:10.621802   78922 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0806 08:40:10.621898   78922 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0806 08:40:10.621979   78922 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0806 08:40:10.622011   78922 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0806 08:40:10.622117   78922 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0806 08:40:10.622225   78922 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0806 08:40:10.622291   78922 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.639662ms
	I0806 08:40:10.622351   78922 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0806 08:40:10.622399   78922 kubeadm.go:310] [api-check] The API server is healthy after 5.503393689s
	I0806 08:40:10.622505   78922 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0806 08:40:10.622639   78922 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0806 08:40:10.622692   78922 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0806 08:40:10.622843   78922 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-703340 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0806 08:40:10.622895   78922 kubeadm.go:310] [bootstrap-token] Using token: 5r8726.myu0qo9tvx4sxjd3
	I0806 08:40:10.624254   78922 out.go:204]   - Configuring RBAC rules ...
	I0806 08:40:10.624381   78922 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0806 08:40:10.624463   78922 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0806 08:40:10.624613   78922 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0806 08:40:10.624813   78922 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0806 08:40:10.624990   78922 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0806 08:40:10.625114   78922 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0806 08:40:10.625253   78922 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0806 08:40:10.625312   78922 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0806 08:40:10.625378   78922 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0806 08:40:10.625387   78922 kubeadm.go:310] 
	I0806 08:40:10.625464   78922 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0806 08:40:10.625472   78922 kubeadm.go:310] 
	I0806 08:40:10.625580   78922 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0806 08:40:10.625595   78922 kubeadm.go:310] 
	I0806 08:40:10.625615   78922 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0806 08:40:10.625668   78922 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0806 08:40:10.625708   78922 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0806 08:40:10.625712   78922 kubeadm.go:310] 
	I0806 08:40:10.625822   78922 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0806 08:40:10.625846   78922 kubeadm.go:310] 
	I0806 08:40:10.625891   78922 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0806 08:40:10.625898   78922 kubeadm.go:310] 
	I0806 08:40:10.625940   78922 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0806 08:40:10.626007   78922 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0806 08:40:10.626068   78922 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0806 08:40:10.626074   78922 kubeadm.go:310] 
	I0806 08:40:10.626173   78922 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0806 08:40:10.626272   78922 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0806 08:40:10.626282   78922 kubeadm.go:310] 
	I0806 08:40:10.626381   78922 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 5r8726.myu0qo9tvx4sxjd3 \
	I0806 08:40:10.626499   78922 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d36e5c1dfe989c7d561c809d7c06e0fc13926ac8f6d664cc5d6865ea5f79931b \
	I0806 08:40:10.626529   78922 kubeadm.go:310] 	--control-plane 
	I0806 08:40:10.626538   78922 kubeadm.go:310] 
	I0806 08:40:10.626639   78922 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0806 08:40:10.626647   78922 kubeadm.go:310] 
	I0806 08:40:10.626745   78922 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 5r8726.myu0qo9tvx4sxjd3 \
	I0806 08:40:10.626893   78922 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d36e5c1dfe989c7d561c809d7c06e0fc13926ac8f6d664cc5d6865ea5f79931b 
	I0806 08:40:10.626920   78922 cni.go:84] Creating CNI manager for ""
	I0806 08:40:10.626929   78922 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0806 08:40:10.628682   78922 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0806 08:40:10.630096   78922 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0806 08:40:10.642492   78922 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
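The two log lines above show minikube creating /etc/cni/net.d and copying a 496-byte bridge conflist onto the node. The actual payload is not reproduced in the log; a minimal bridge conflist of roughly the shape written here could be produced by hand as sketched below. This is illustrative only, not the real file, and the 10.244.0.0/16 subnet is an assumption.

# illustrative sketch only; not the actual 496-byte 1-k8s.conflist from the log
sudo mkdir -p /etc/cni/net.d
sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
EOF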
	I0806 08:40:10.667673   78922 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0806 08:40:10.667763   78922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 08:40:10.667833   78922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-703340 minikube.k8s.io/updated_at=2024_08_06T08_40_10_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=e92cb06692f5ea1ba801d10d148e5e92e807f9c8 minikube.k8s.io/name=no-preload-703340 minikube.k8s.io/primary=true
	I0806 08:40:10.705177   78922 ops.go:34] apiserver oom_adj: -16
	I0806 08:40:10.901976   78922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 08:40:11.401990   78922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 08:40:11.902906   78922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 08:40:12.402461   78922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 08:40:12.902909   78922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 08:40:13.402018   78922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 08:40:13.902098   78922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 08:40:14.402869   78922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 08:40:14.902034   78922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 08:40:14.999571   78922 kubeadm.go:1113] duration metric: took 4.331874752s to wait for elevateKubeSystemPrivileges
	I0806 08:40:14.999615   78922 kubeadm.go:394] duration metric: took 4m59.975766448s to StartCluster
	I0806 08:40:14.999636   78922 settings.go:142] acquiring lock: {Name:mkb0bfbcae3b4205ef43487a9de84cfd8afd2e5e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 08:40:14.999722   78922 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19370-13057/kubeconfig
	I0806 08:40:15.001429   78922 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-13057/kubeconfig: {Name:mk5c066914acf8e36556b29792f75b98076ca34a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 08:40:15.001696   78922 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.126 Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0806 08:40:15.001753   78922 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0806 08:40:15.001864   78922 addons.go:69] Setting storage-provisioner=true in profile "no-preload-703340"
	I0806 08:40:15.001909   78922 addons.go:234] Setting addon storage-provisioner=true in "no-preload-703340"
	W0806 08:40:15.001921   78922 addons.go:243] addon storage-provisioner should already be in state true
	I0806 08:40:15.001919   78922 addons.go:69] Setting default-storageclass=true in profile "no-preload-703340"
	I0806 08:40:15.001945   78922 addons.go:69] Setting metrics-server=true in profile "no-preload-703340"
	I0806 08:40:15.001968   78922 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-703340"
	I0806 08:40:15.001984   78922 addons.go:234] Setting addon metrics-server=true in "no-preload-703340"
	W0806 08:40:15.001997   78922 addons.go:243] addon metrics-server should already be in state true
	I0806 08:40:15.001958   78922 host.go:66] Checking if "no-preload-703340" exists ...
	I0806 08:40:15.002030   78922 host.go:66] Checking if "no-preload-703340" exists ...
	I0806 08:40:15.001918   78922 config.go:182] Loaded profile config "no-preload-703340": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-rc.0
	I0806 08:40:15.002284   78922 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:40:15.002295   78922 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:40:15.002310   78922 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:40:15.002319   78922 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:40:15.002354   78922 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:40:15.002312   78922 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:40:15.003644   78922 out.go:177] * Verifying Kubernetes components...
	I0806 08:40:15.005157   78922 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 08:40:15.018061   78922 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35339
	I0806 08:40:15.018548   78922 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:40:15.018866   78922 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44375
	I0806 08:40:15.019201   78922 main.go:141] libmachine: Using API Version  1
	I0806 08:40:15.019226   78922 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:40:15.019270   78922 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:40:15.019582   78922 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:40:15.019743   78922 main.go:141] libmachine: Using API Version  1
	I0806 08:40:15.019748   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetState
	I0806 08:40:15.019765   78922 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:40:15.020089   78922 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:40:15.020617   78922 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:40:15.020664   78922 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:40:15.020815   78922 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32773
	I0806 08:40:15.021303   78922 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:40:15.021775   78922 main.go:141] libmachine: Using API Version  1
	I0806 08:40:15.021796   78922 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:40:15.022125   78922 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:40:15.022549   78922 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:40:15.022577   78922 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:40:15.023653   78922 addons.go:234] Setting addon default-storageclass=true in "no-preload-703340"
	W0806 08:40:15.023678   78922 addons.go:243] addon default-storageclass should already be in state true
	I0806 08:40:15.023708   78922 host.go:66] Checking if "no-preload-703340" exists ...
	I0806 08:40:15.024066   78922 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:40:15.024120   78922 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:40:15.036138   78922 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44447
	I0806 08:40:15.036586   78922 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:40:15.037132   78922 main.go:141] libmachine: Using API Version  1
	I0806 08:40:15.037153   78922 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:40:15.037609   78922 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:40:15.037921   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetState
	I0806 08:40:15.038640   78922 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42341
	I0806 08:40:15.038777   78922 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34793
	I0806 08:40:15.039003   78922 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:40:15.039098   78922 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:40:15.039473   78922 main.go:141] libmachine: Using API Version  1
	I0806 08:40:15.039489   78922 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:40:15.039607   78922 main.go:141] libmachine: Using API Version  1
	I0806 08:40:15.039623   78922 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:40:15.039820   78922 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:40:15.040041   78922 main.go:141] libmachine: (no-preload-703340) Calling .DriverName
	I0806 08:40:15.040181   78922 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:40:15.040414   78922 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:40:15.040456   78922 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:40:15.040477   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetState
	I0806 08:40:15.042137   78922 main.go:141] libmachine: (no-preload-703340) Calling .DriverName
	I0806 08:40:15.042223   78922 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0806 08:40:15.043522   78922 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0806 08:40:15.043585   78922 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0806 08:40:15.043601   78922 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0806 08:40:15.043619   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHHostname
	I0806 08:40:15.044812   78922 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0806 08:40:15.044829   78922 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0806 08:40:15.044854   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHHostname
	I0806 08:40:15.047376   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:40:15.047978   78922 main.go:141] libmachine: (no-preload-703340) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fa:d3", ip: ""} in network mk-no-preload-703340: {Iface:virbr4 ExpiryTime:2024-08-06 09:34:49 +0000 UTC Type:0 Mac:52:54:00:a1:fa:d3 Iaid: IPaddr:192.168.72.126 Prefix:24 Hostname:no-preload-703340 Clientid:01:52:54:00:a1:fa:d3}
	I0806 08:40:15.048001   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined IP address 192.168.72.126 and MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:40:15.048228   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:40:15.048269   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHPort
	I0806 08:40:15.048485   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHKeyPath
	I0806 08:40:15.048617   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHUsername
	I0806 08:40:15.048667   78922 main.go:141] libmachine: (no-preload-703340) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fa:d3", ip: ""} in network mk-no-preload-703340: {Iface:virbr4 ExpiryTime:2024-08-06 09:34:49 +0000 UTC Type:0 Mac:52:54:00:a1:fa:d3 Iaid: IPaddr:192.168.72.126 Prefix:24 Hostname:no-preload-703340 Clientid:01:52:54:00:a1:fa:d3}
	I0806 08:40:15.048688   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined IP address 192.168.72.126 and MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:40:15.048800   78922 sshutil.go:53] new ssh client: &{IP:192.168.72.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/no-preload-703340/id_rsa Username:docker}
	I0806 08:40:15.049107   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHPort
	I0806 08:40:15.049267   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHKeyPath
	I0806 08:40:15.049403   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHUsername
	I0806 08:40:15.049518   78922 sshutil.go:53] new ssh client: &{IP:192.168.72.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/no-preload-703340/id_rsa Username:docker}
	I0806 08:40:15.059440   78922 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33377
	I0806 08:40:15.059910   78922 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:40:15.060423   78922 main.go:141] libmachine: Using API Version  1
	I0806 08:40:15.060441   78922 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:40:15.060741   78922 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:40:15.060922   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetState
	I0806 08:40:15.062303   78922 main.go:141] libmachine: (no-preload-703340) Calling .DriverName
	I0806 08:40:15.062491   78922 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0806 08:40:15.062503   78922 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0806 08:40:15.062516   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHHostname
	I0806 08:40:15.065588   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:40:15.066008   78922 main.go:141] libmachine: (no-preload-703340) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fa:d3", ip: ""} in network mk-no-preload-703340: {Iface:virbr4 ExpiryTime:2024-08-06 09:34:49 +0000 UTC Type:0 Mac:52:54:00:a1:fa:d3 Iaid: IPaddr:192.168.72.126 Prefix:24 Hostname:no-preload-703340 Clientid:01:52:54:00:a1:fa:d3}
	I0806 08:40:15.066024   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined IP address 192.168.72.126 and MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:40:15.066147   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHPort
	I0806 08:40:15.066302   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHKeyPath
	I0806 08:40:15.066407   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHUsername
	I0806 08:40:15.066500   78922 sshutil.go:53] new ssh client: &{IP:192.168.72.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/no-preload-703340/id_rsa Username:docker}
	I0806 08:40:15.209763   78922 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0806 08:40:15.231626   78922 node_ready.go:35] waiting up to 6m0s for node "no-preload-703340" to be "Ready" ...
	I0806 08:40:15.239924   78922 node_ready.go:49] node "no-preload-703340" has status "Ready":"True"
	I0806 08:40:15.239952   78922 node_ready.go:38] duration metric: took 8.292674ms for node "no-preload-703340" to be "Ready" ...
	I0806 08:40:15.239964   78922 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0806 08:40:15.247043   78922 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6f6b679f8f-9dt4q" in "kube-system" namespace to be "Ready" ...
	I0806 08:40:15.320295   78922 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0806 08:40:15.343811   78922 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0806 08:40:15.343838   78922 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0806 08:40:15.367466   78922 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0806 08:40:15.392170   78922 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0806 08:40:15.392200   78922 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0806 08:40:15.440711   78922 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0806 08:40:15.440741   78922 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0806 08:40:15.483982   78922 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0806 08:40:16.105714   78922 main.go:141] libmachine: Making call to close driver server
	I0806 08:40:16.105739   78922 main.go:141] libmachine: (no-preload-703340) Calling .Close
	I0806 08:40:16.105768   78922 main.go:141] libmachine: Making call to close driver server
	I0806 08:40:16.105789   78922 main.go:141] libmachine: (no-preload-703340) Calling .Close
	I0806 08:40:16.106051   78922 main.go:141] libmachine: (no-preload-703340) DBG | Closing plugin on server side
	I0806 08:40:16.106086   78922 main.go:141] libmachine: (no-preload-703340) DBG | Closing plugin on server side
	I0806 08:40:16.106097   78922 main.go:141] libmachine: Successfully made call to close driver server
	I0806 08:40:16.106113   78922 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 08:40:16.106133   78922 main.go:141] libmachine: Making call to close driver server
	I0806 08:40:16.106144   78922 main.go:141] libmachine: (no-preload-703340) Calling .Close
	I0806 08:40:16.106176   78922 main.go:141] libmachine: Successfully made call to close driver server
	I0806 08:40:16.106254   78922 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 08:40:16.106280   78922 main.go:141] libmachine: Making call to close driver server
	I0806 08:40:16.106287   78922 main.go:141] libmachine: (no-preload-703340) Calling .Close
	I0806 08:40:16.107761   78922 main.go:141] libmachine: Successfully made call to close driver server
	I0806 08:40:16.107779   78922 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 08:40:16.107767   78922 main.go:141] libmachine: (no-preload-703340) DBG | Closing plugin on server side
	I0806 08:40:16.107805   78922 main.go:141] libmachine: Successfully made call to close driver server
	I0806 08:40:16.107820   78922 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 08:40:16.107874   78922 main.go:141] libmachine: (no-preload-703340) DBG | Closing plugin on server side
	I0806 08:40:16.133295   78922 main.go:141] libmachine: Making call to close driver server
	I0806 08:40:16.133318   78922 main.go:141] libmachine: (no-preload-703340) Calling .Close
	I0806 08:40:16.133642   78922 main.go:141] libmachine: Successfully made call to close driver server
	I0806 08:40:16.133660   78922 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 08:40:16.133680   78922 main.go:141] libmachine: (no-preload-703340) DBG | Closing plugin on server side
	I0806 08:40:16.628763   78922 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.144732374s)
	I0806 08:40:16.628864   78922 main.go:141] libmachine: Making call to close driver server
	I0806 08:40:16.628891   78922 main.go:141] libmachine: (no-preload-703340) Calling .Close
	I0806 08:40:16.629348   78922 main.go:141] libmachine: Successfully made call to close driver server
	I0806 08:40:16.629375   78922 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 08:40:16.629385   78922 main.go:141] libmachine: Making call to close driver server
	I0806 08:40:16.629396   78922 main.go:141] libmachine: (no-preload-703340) Calling .Close
	I0806 08:40:16.629406   78922 main.go:141] libmachine: (no-preload-703340) DBG | Closing plugin on server side
	I0806 08:40:16.629673   78922 main.go:141] libmachine: Successfully made call to close driver server
	I0806 08:40:16.629690   78922 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 08:40:16.629702   78922 addons.go:475] Verifying addon metrics-server=true in "no-preload-703340"
	I0806 08:40:16.631683   78922 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0806 08:40:16.632723   78922 addons.go:510] duration metric: took 1.630970284s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0806 08:40:17.264829   78922 pod_ready.go:102] pod "coredns-6f6b679f8f-9dt4q" in "kube-system" namespace has status "Ready":"False"
	I0806 08:40:19.754652   78922 pod_ready.go:102] pod "coredns-6f6b679f8f-9dt4q" in "kube-system" namespace has status "Ready":"False"
	I0806 08:40:21.753116   78922 pod_ready.go:92] pod "coredns-6f6b679f8f-9dt4q" in "kube-system" namespace has status "Ready":"True"
	I0806 08:40:21.753139   78922 pod_ready.go:81] duration metric: took 6.506065964s for pod "coredns-6f6b679f8f-9dt4q" in "kube-system" namespace to be "Ready" ...
	I0806 08:40:21.753149   78922 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6f6b679f8f-mpfwj" in "kube-system" namespace to be "Ready" ...
	I0806 08:40:21.757521   78922 pod_ready.go:92] pod "coredns-6f6b679f8f-mpfwj" in "kube-system" namespace has status "Ready":"True"
	I0806 08:40:21.757542   78922 pod_ready.go:81] duration metric: took 4.3867ms for pod "coredns-6f6b679f8f-mpfwj" in "kube-system" namespace to be "Ready" ...
	I0806 08:40:21.757553   78922 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-703340" in "kube-system" namespace to be "Ready" ...
	I0806 08:40:21.762161   78922 pod_ready.go:92] pod "etcd-no-preload-703340" in "kube-system" namespace has status "Ready":"True"
	I0806 08:40:21.762179   78922 pod_ready.go:81] duration metric: took 4.619377ms for pod "etcd-no-preload-703340" in "kube-system" namespace to be "Ready" ...
	I0806 08:40:21.762188   78922 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-703340" in "kube-system" namespace to be "Ready" ...
	I0806 08:40:21.766394   78922 pod_ready.go:92] pod "kube-apiserver-no-preload-703340" in "kube-system" namespace has status "Ready":"True"
	I0806 08:40:21.766411   78922 pod_ready.go:81] duration metric: took 4.216869ms for pod "kube-apiserver-no-preload-703340" in "kube-system" namespace to be "Ready" ...
	I0806 08:40:21.766421   78922 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-703340" in "kube-system" namespace to be "Ready" ...
	I0806 08:40:21.770569   78922 pod_ready.go:92] pod "kube-controller-manager-no-preload-703340" in "kube-system" namespace has status "Ready":"True"
	I0806 08:40:21.770592   78922 pod_ready.go:81] duration metric: took 4.160997ms for pod "kube-controller-manager-no-preload-703340" in "kube-system" namespace to be "Ready" ...
	I0806 08:40:21.770600   78922 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zsv9x" in "kube-system" namespace to be "Ready" ...
	I0806 08:40:22.151795   78922 pod_ready.go:92] pod "kube-proxy-zsv9x" in "kube-system" namespace has status "Ready":"True"
	I0806 08:40:22.151819   78922 pod_ready.go:81] duration metric: took 381.212951ms for pod "kube-proxy-zsv9x" in "kube-system" namespace to be "Ready" ...
	I0806 08:40:22.151829   78922 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-703340" in "kube-system" namespace to be "Ready" ...
	I0806 08:40:22.550798   78922 pod_ready.go:92] pod "kube-scheduler-no-preload-703340" in "kube-system" namespace has status "Ready":"True"
	I0806 08:40:22.550821   78922 pod_ready.go:81] duration metric: took 398.985533ms for pod "kube-scheduler-no-preload-703340" in "kube-system" namespace to be "Ready" ...
	I0806 08:40:22.550830   78922 pod_ready.go:38] duration metric: took 7.310852309s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0806 08:40:22.550842   78922 api_server.go:52] waiting for apiserver process to appear ...
	I0806 08:40:22.550888   78922 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:40:22.567410   78922 api_server.go:72] duration metric: took 7.565683505s to wait for apiserver process to appear ...
	I0806 08:40:22.567441   78922 api_server.go:88] waiting for apiserver healthz status ...
	I0806 08:40:22.567464   78922 api_server.go:253] Checking apiserver healthz at https://192.168.72.126:8443/healthz ...
	I0806 08:40:22.572514   78922 api_server.go:279] https://192.168.72.126:8443/healthz returned 200:
	ok
	I0806 08:40:22.573572   78922 api_server.go:141] control plane version: v1.31.0-rc.0
	I0806 08:40:22.573596   78922 api_server.go:131] duration metric: took 6.147703ms to wait for apiserver health ...
	I0806 08:40:22.573606   78922 system_pods.go:43] waiting for kube-system pods to appear ...
	I0806 08:40:22.754910   78922 system_pods.go:59] 9 kube-system pods found
	I0806 08:40:22.754937   78922 system_pods.go:61] "coredns-6f6b679f8f-9dt4q" [a62f9c28-2e06-4072-b459-3a879cba65d6] Running
	I0806 08:40:22.754942   78922 system_pods.go:61] "coredns-6f6b679f8f-mpfwj" [9da40ee8-73c2-4479-a2aa-32a2dce7e21d] Running
	I0806 08:40:22.754945   78922 system_pods.go:61] "etcd-no-preload-703340" [2b89a18a-69f6-429b-8616-b84bf9e666b4] Running
	I0806 08:40:22.754950   78922 system_pods.go:61] "kube-apiserver-no-preload-703340" [8b863a60-03ad-4d18-855a-12d29dc63612] Running
	I0806 08:40:22.754953   78922 system_pods.go:61] "kube-controller-manager-no-preload-703340" [8490ec19-da74-4c39-9a69-17918506ea10] Running
	I0806 08:40:22.754956   78922 system_pods.go:61] "kube-proxy-zsv9x" [2859a7f2-f880-4be4-806e-163eb9caf535] Running
	I0806 08:40:22.754959   78922 system_pods.go:61] "kube-scheduler-no-preload-703340" [e003ccd3-48e7-4e4e-9f05-166003935575] Running
	I0806 08:40:22.754964   78922 system_pods.go:61] "metrics-server-6867b74b74-tqnjm" [4ab2bdd5-bd70-445a-84ac-6a45bdaf06a7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0806 08:40:22.754972   78922 system_pods.go:61] "storage-provisioner" [5497ddbc-8ed5-4b42-80e8-81aa7c1b8fa7] Running
	I0806 08:40:22.754980   78922 system_pods.go:74] duration metric: took 181.367063ms to wait for pod list to return data ...
	I0806 08:40:22.754990   78922 default_sa.go:34] waiting for default service account to be created ...
	I0806 08:40:22.951215   78922 default_sa.go:45] found service account: "default"
	I0806 08:40:22.951242   78922 default_sa.go:55] duration metric: took 196.244443ms for default service account to be created ...
	I0806 08:40:22.951250   78922 system_pods.go:116] waiting for k8s-apps to be running ...
	I0806 08:40:23.154822   78922 system_pods.go:86] 9 kube-system pods found
	I0806 08:40:23.154852   78922 system_pods.go:89] "coredns-6f6b679f8f-9dt4q" [a62f9c28-2e06-4072-b459-3a879cba65d6] Running
	I0806 08:40:23.154858   78922 system_pods.go:89] "coredns-6f6b679f8f-mpfwj" [9da40ee8-73c2-4479-a2aa-32a2dce7e21d] Running
	I0806 08:40:23.154862   78922 system_pods.go:89] "etcd-no-preload-703340" [2b89a18a-69f6-429b-8616-b84bf9e666b4] Running
	I0806 08:40:23.154865   78922 system_pods.go:89] "kube-apiserver-no-preload-703340" [8b863a60-03ad-4d18-855a-12d29dc63612] Running
	I0806 08:40:23.154870   78922 system_pods.go:89] "kube-controller-manager-no-preload-703340" [8490ec19-da74-4c39-9a69-17918506ea10] Running
	I0806 08:40:23.154875   78922 system_pods.go:89] "kube-proxy-zsv9x" [2859a7f2-f880-4be4-806e-163eb9caf535] Running
	I0806 08:40:23.154879   78922 system_pods.go:89] "kube-scheduler-no-preload-703340" [e003ccd3-48e7-4e4e-9f05-166003935575] Running
	I0806 08:40:23.154886   78922 system_pods.go:89] "metrics-server-6867b74b74-tqnjm" [4ab2bdd5-bd70-445a-84ac-6a45bdaf06a7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0806 08:40:23.154890   78922 system_pods.go:89] "storage-provisioner" [5497ddbc-8ed5-4b42-80e8-81aa7c1b8fa7] Running
	I0806 08:40:23.154898   78922 system_pods.go:126] duration metric: took 203.642884ms to wait for k8s-apps to be running ...
	I0806 08:40:23.154905   78922 system_svc.go:44] waiting for kubelet service to be running ....
	I0806 08:40:23.154948   78922 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0806 08:40:23.170577   78922 system_svc.go:56] duration metric: took 15.661926ms WaitForService to wait for kubelet
	I0806 08:40:23.170609   78922 kubeadm.go:582] duration metric: took 8.168885934s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0806 08:40:23.170631   78922 node_conditions.go:102] verifying NodePressure condition ...
	I0806 08:40:23.352727   78922 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0806 08:40:23.352753   78922 node_conditions.go:123] node cpu capacity is 2
	I0806 08:40:23.352763   78922 node_conditions.go:105] duration metric: took 182.12736ms to run NodePressure ...
	I0806 08:40:23.352774   78922 start.go:241] waiting for startup goroutines ...
	I0806 08:40:23.352780   78922 start.go:246] waiting for cluster config update ...
	I0806 08:40:23.352790   78922 start.go:255] writing updated cluster config ...
	I0806 08:40:23.353072   78922 ssh_runner.go:195] Run: rm -f paused
	I0806 08:40:23.401609   78922 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-rc.0 (minor skew: 1)
	I0806 08:40:23.403639   78922 out.go:177] * Done! kubectl is now configured to use "no-preload-703340" cluster and "default" namespace by default
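That closes out process 78922 in this log: kubeadm init for v1.31.0-rc.0 succeeded, the bridge CNI config was written, and the storage-provisioner, default-storageclass and metrics-server addons were applied, with metrics-server-6867b74b74-tqnjm still Pending. A quick way to re-inspect that state from the host is a few kubectl queries; this is a hedged sketch that assumes the "no-preload-703340" context written to the kubeconfig above and the addon's usual k8s-app=metrics-server label.

# re-check the cluster state recorded in the log above (sketch, assumptions noted in the text)
kubectl --context no-preload-703340 get nodes -o wide
kubectl --context no-preload-703340 -n kube-system get pods -l k8s-app=metrics-server
kubectl --context no-preload-703340 -n kube-system get events --sort-by=.lastTimestamp | tail -n 20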
	I0806 08:40:46.706828   79927 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0806 08:40:46.707130   79927 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0806 08:40:46.707156   79927 kubeadm.go:310] 
	I0806 08:40:46.707218   79927 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0806 08:40:46.707278   79927 kubeadm.go:310] 		timed out waiting for the condition
	I0806 08:40:46.707304   79927 kubeadm.go:310] 
	I0806 08:40:46.707381   79927 kubeadm.go:310] 	This error is likely caused by:
	I0806 08:40:46.707439   79927 kubeadm.go:310] 		- The kubelet is not running
	I0806 08:40:46.707569   79927 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0806 08:40:46.707580   79927 kubeadm.go:310] 
	I0806 08:40:46.707705   79927 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0806 08:40:46.707756   79927 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0806 08:40:46.707805   79927 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0806 08:40:46.707814   79927 kubeadm.go:310] 
	I0806 08:40:46.707974   79927 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0806 08:40:46.708118   79927 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0806 08:40:46.708130   79927 kubeadm.go:310] 
	I0806 08:40:46.708297   79927 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0806 08:40:46.708448   79927 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0806 08:40:46.708577   79927 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0806 08:40:46.708673   79927 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0806 08:40:46.708684   79927 kubeadm.go:310] 
	I0806 08:40:46.709441   79927 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0806 08:40:46.709593   79927 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0806 08:40:46.709791   79927 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0806 08:40:46.709862   79927 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
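The failed v1.20.0 init above (process 79927) keeps hitting a kubelet that never answers on 127.0.0.1:10248. The checks kubeadm recommends can be gathered in one pass on the node; the sketch below simply strings together the commands quoted in the output above, plus the healthz probe the kubelet-check itself performs.

# collect the kubelet diagnostics suggested by the kubeadm output (sketch)
sudo systemctl status kubelet --no-pager
sudo journalctl -xeu kubelet --no-pager | tail -n 100
curl -sS http://localhost:10248/healthz || true
sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause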
	
	I0806 08:40:46.709928   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0806 08:40:47.169443   79927 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0806 08:40:47.187784   79927 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0806 08:40:47.199239   79927 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0806 08:40:47.199267   79927 kubeadm.go:157] found existing configuration files:
	
	I0806 08:40:47.199325   79927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0806 08:40:47.209294   79927 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0806 08:40:47.209361   79927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0806 08:40:47.219483   79927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0806 08:40:47.229674   79927 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0806 08:40:47.229732   79927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0806 08:40:47.240729   79927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0806 08:40:47.250964   79927 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0806 08:40:47.251018   79927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0806 08:40:47.261322   79927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0806 08:40:47.271411   79927 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0806 08:40:47.271502   79927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0806 08:40:47.281986   79927 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0806 08:40:47.514409   79927 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0806 08:42:43.760446   79927 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0806 08:42:43.760586   79927 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0806 08:42:43.762233   79927 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0806 08:42:43.762301   79927 kubeadm.go:310] [preflight] Running pre-flight checks
	I0806 08:42:43.762391   79927 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0806 08:42:43.762504   79927 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0806 08:42:43.762602   79927 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0806 08:42:43.762653   79927 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0806 08:42:43.764435   79927 out.go:204]   - Generating certificates and keys ...
	I0806 08:42:43.764523   79927 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0806 08:42:43.764587   79927 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0806 08:42:43.764684   79927 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0806 08:42:43.764761   79927 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0806 08:42:43.764833   79927 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0806 08:42:43.764905   79927 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0806 08:42:43.765002   79927 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0806 08:42:43.765100   79927 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0806 08:42:43.765202   79927 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0806 08:42:43.765338   79927 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0806 08:42:43.765400   79927 kubeadm.go:310] [certs] Using the existing "sa" key
	I0806 08:42:43.765516   79927 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0806 08:42:43.765601   79927 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0806 08:42:43.765680   79927 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0806 08:42:43.765774   79927 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0806 08:42:43.765854   79927 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0806 08:42:43.766002   79927 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0806 08:42:43.766133   79927 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0806 08:42:43.766194   79927 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0806 08:42:43.766273   79927 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0806 08:42:43.767861   79927 out.go:204]   - Booting up control plane ...
	I0806 08:42:43.767957   79927 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0806 08:42:43.768051   79927 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0806 08:42:43.768136   79927 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0806 08:42:43.768232   79927 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0806 08:42:43.768487   79927 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0806 08:42:43.768559   79927 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0806 08:42:43.768635   79927 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0806 08:42:43.768862   79927 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0806 08:42:43.768945   79927 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0806 08:42:43.769124   79927 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0806 08:42:43.769195   79927 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0806 08:42:43.769379   79927 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0806 08:42:43.769463   79927 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0806 08:42:43.769721   79927 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0806 08:42:43.769823   79927 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0806 08:42:43.770089   79927 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0806 08:42:43.770102   79927 kubeadm.go:310] 
	I0806 08:42:43.770161   79927 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0806 08:42:43.770223   79927 kubeadm.go:310] 		timed out waiting for the condition
	I0806 08:42:43.770234   79927 kubeadm.go:310] 
	I0806 08:42:43.770274   79927 kubeadm.go:310] 	This error is likely caused by:
	I0806 08:42:43.770303   79927 kubeadm.go:310] 		- The kubelet is not running
	I0806 08:42:43.770389   79927 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0806 08:42:43.770399   79927 kubeadm.go:310] 
	I0806 08:42:43.770487   79927 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0806 08:42:43.770519   79927 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0806 08:42:43.770547   79927 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0806 08:42:43.770554   79927 kubeadm.go:310] 
	I0806 08:42:43.770657   79927 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0806 08:42:43.770756   79927 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0806 08:42:43.770769   79927 kubeadm.go:310] 
	I0806 08:42:43.770932   79927 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0806 08:42:43.771079   79927 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0806 08:42:43.771178   79927 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0806 08:42:43.771266   79927 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0806 08:42:43.771315   79927 kubeadm.go:310] 
	I0806 08:42:43.771336   79927 kubeadm.go:394] duration metric: took 7m57.719144545s to StartCluster
	I0806 08:42:43.771381   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:42:43.771451   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:42:43.817981   79927 cri.go:89] found id: ""
	I0806 08:42:43.818010   79927 logs.go:276] 0 containers: []
	W0806 08:42:43.818021   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:42:43.818028   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:42:43.818087   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:42:43.855481   79927 cri.go:89] found id: ""
	I0806 08:42:43.855506   79927 logs.go:276] 0 containers: []
	W0806 08:42:43.855517   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:42:43.855524   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:42:43.855584   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:42:43.893110   79927 cri.go:89] found id: ""
	I0806 08:42:43.893141   79927 logs.go:276] 0 containers: []
	W0806 08:42:43.893152   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:42:43.893172   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:42:43.893255   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:42:43.929785   79927 cri.go:89] found id: ""
	I0806 08:42:43.929844   79927 logs.go:276] 0 containers: []
	W0806 08:42:43.929853   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:42:43.929858   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:42:43.929921   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:42:43.970501   79927 cri.go:89] found id: ""
	I0806 08:42:43.970533   79927 logs.go:276] 0 containers: []
	W0806 08:42:43.970541   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:42:43.970546   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:42:43.970590   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:42:44.007344   79927 cri.go:89] found id: ""
	I0806 08:42:44.007373   79927 logs.go:276] 0 containers: []
	W0806 08:42:44.007382   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:42:44.007388   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:42:44.007434   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:42:44.043724   79927 cri.go:89] found id: ""
	I0806 08:42:44.043747   79927 logs.go:276] 0 containers: []
	W0806 08:42:44.043755   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:42:44.043761   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:42:44.043810   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:42:44.084642   79927 cri.go:89] found id: ""
	I0806 08:42:44.084682   79927 logs.go:276] 0 containers: []
	W0806 08:42:44.084694   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:42:44.084709   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:42:44.084724   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:42:44.157081   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:42:44.157125   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:42:44.175703   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:42:44.175746   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:42:44.275665   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:42:44.275699   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:42:44.275717   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:42:44.385031   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:42:44.385076   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0806 08:42:44.425573   79927 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0806 08:42:44.425616   79927 out.go:239] * 
	W0806 08:42:44.425683   79927 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0806 08:42:44.425713   79927 out.go:239] * 
	W0806 08:42:44.426633   79927 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0806 08:42:44.429961   79927 out.go:177] 
	W0806 08:42:44.431352   79927 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0806 08:42:44.431398   79927 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0806 08:42:44.431414   79927 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0806 08:42:44.432963   79927 out.go:177] 
	
	
	==> CRI-O <==
	Aug 06 08:51:49 old-k8s-version-409903 crio[649]: time="2024-08-06 08:51:49.990631961Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722934309990609068,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=23cab8a2-9ac4-4616-90c6-377e9aae665f name=/runtime.v1.ImageService/ImageFsInfo
	Aug 06 08:51:49 old-k8s-version-409903 crio[649]: time="2024-08-06 08:51:49.992216992Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=72eee122-81de-49d4-b451-d83b76e0d7be name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 08:51:49 old-k8s-version-409903 crio[649]: time="2024-08-06 08:51:49.992292625Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=72eee122-81de-49d4-b451-d83b76e0d7be name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 08:51:49 old-k8s-version-409903 crio[649]: time="2024-08-06 08:51:49.992337090Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=72eee122-81de-49d4-b451-d83b76e0d7be name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 08:51:50 old-k8s-version-409903 crio[649]: time="2024-08-06 08:51:50.025441604Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6393f6b2-3bb5-4dea-be62-41798991d7fd name=/runtime.v1.RuntimeService/Version
	Aug 06 08:51:50 old-k8s-version-409903 crio[649]: time="2024-08-06 08:51:50.025538672Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6393f6b2-3bb5-4dea-be62-41798991d7fd name=/runtime.v1.RuntimeService/Version
	Aug 06 08:51:50 old-k8s-version-409903 crio[649]: time="2024-08-06 08:51:50.026768588Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=471572e3-b7b3-4759-9c6c-81537160fe3e name=/runtime.v1.ImageService/ImageFsInfo
	Aug 06 08:51:50 old-k8s-version-409903 crio[649]: time="2024-08-06 08:51:50.027245172Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722934310027215481,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=471572e3-b7b3-4759-9c6c-81537160fe3e name=/runtime.v1.ImageService/ImageFsInfo
	Aug 06 08:51:50 old-k8s-version-409903 crio[649]: time="2024-08-06 08:51:50.028082516Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3022b5fc-3914-4634-89fe-c7a6430646b7 name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 08:51:50 old-k8s-version-409903 crio[649]: time="2024-08-06 08:51:50.028170847Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3022b5fc-3914-4634-89fe-c7a6430646b7 name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 08:51:50 old-k8s-version-409903 crio[649]: time="2024-08-06 08:51:50.028213000Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=3022b5fc-3914-4634-89fe-c7a6430646b7 name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 08:51:50 old-k8s-version-409903 crio[649]: time="2024-08-06 08:51:50.062805841Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=20c9090d-a2dd-41d4-8ada-7ea0f583e921 name=/runtime.v1.RuntimeService/Version
	Aug 06 08:51:50 old-k8s-version-409903 crio[649]: time="2024-08-06 08:51:50.062959576Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=20c9090d-a2dd-41d4-8ada-7ea0f583e921 name=/runtime.v1.RuntimeService/Version
	Aug 06 08:51:50 old-k8s-version-409903 crio[649]: time="2024-08-06 08:51:50.064318373Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=19715067-6e41-461b-ad2d-2e97ea046122 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 06 08:51:50 old-k8s-version-409903 crio[649]: time="2024-08-06 08:51:50.064802062Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722934310064776813,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=19715067-6e41-461b-ad2d-2e97ea046122 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 06 08:51:50 old-k8s-version-409903 crio[649]: time="2024-08-06 08:51:50.065653367Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8ec415f9-c1ae-4f92-ada5-4e9627120b7e name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 08:51:50 old-k8s-version-409903 crio[649]: time="2024-08-06 08:51:50.065748521Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8ec415f9-c1ae-4f92-ada5-4e9627120b7e name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 08:51:50 old-k8s-version-409903 crio[649]: time="2024-08-06 08:51:50.065788146Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=8ec415f9-c1ae-4f92-ada5-4e9627120b7e name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 08:51:50 old-k8s-version-409903 crio[649]: time="2024-08-06 08:51:50.102052525Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7aad4f9d-e9d9-4935-9442-40527ab85762 name=/runtime.v1.RuntimeService/Version
	Aug 06 08:51:50 old-k8s-version-409903 crio[649]: time="2024-08-06 08:51:50.102136579Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7aad4f9d-e9d9-4935-9442-40527ab85762 name=/runtime.v1.RuntimeService/Version
	Aug 06 08:51:50 old-k8s-version-409903 crio[649]: time="2024-08-06 08:51:50.103320654Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8fa8b423-5d26-4a2c-bd42-cd0ce1976963 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 06 08:51:50 old-k8s-version-409903 crio[649]: time="2024-08-06 08:51:50.103793747Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722934310103770457,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8fa8b423-5d26-4a2c-bd42-cd0ce1976963 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 06 08:51:50 old-k8s-version-409903 crio[649]: time="2024-08-06 08:51:50.104487490Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9ad61770-98d1-4337-971e-7f234ac27174 name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 08:51:50 old-k8s-version-409903 crio[649]: time="2024-08-06 08:51:50.104536048Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9ad61770-98d1-4337-971e-7f234ac27174 name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 08:51:50 old-k8s-version-409903 crio[649]: time="2024-08-06 08:51:50.104573651Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=9ad61770-98d1-4337-971e-7f234ac27174 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Aug 6 08:34] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.055941] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041778] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.077438] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.582101] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.479002] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000016] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.372783] systemd-fstab-generator[571]: Ignoring "noauto" option for root device
	[  +0.066842] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.064276] systemd-fstab-generator[583]: Ignoring "noauto" option for root device
	[  +0.187929] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.204236] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.277743] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +6.753482] systemd-fstab-generator[840]: Ignoring "noauto" option for root device
	[  +0.064437] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.801648] systemd-fstab-generator[963]: Ignoring "noauto" option for root device
	[ +10.357148] kauditd_printk_skb: 46 callbacks suppressed
	[Aug 6 08:38] systemd-fstab-generator[5042]: Ignoring "noauto" option for root device
	[Aug 6 08:40] systemd-fstab-generator[5323]: Ignoring "noauto" option for root device
	[  +0.070161] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 08:51:50 up 17 min,  0 users,  load average: 0.14, 0.05, 0.02
	Linux old-k8s-version-409903 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Aug 06 08:51:49 old-k8s-version-409903 kubelet[6505]:         /usr/local/go/src/sync/mutex.go:138 +0x105
	Aug 06 08:51:49 old-k8s-version-409903 kubelet[6505]: sync.(*Mutex).Lock(...)
	Aug 06 08:51:49 old-k8s-version-409903 kubelet[6505]:         /usr/local/go/src/sync/mutex.go:81
	Aug 06 08:51:49 old-k8s-version-409903 kubelet[6505]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.(*rudimentaryErrorBackoff).OnError(0xc0000acba0, 0x4f04d00, 0xc000d16290)
	Aug 06 08:51:49 old-k8s-version-409903 kubelet[6505]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:128 +0x10a
	Aug 06 08:51:49 old-k8s-version-409903 kubelet[6505]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleError(0x4f04d00, 0xc000d16290)
	Aug 06 08:51:49 old-k8s-version-409903 kubelet[6505]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:108 +0x66
	Aug 06 08:51:49 old-k8s-version-409903 kubelet[6505]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.DefaultWatchErrorHandler(0xc000bca2a0, 0x4f04d00, 0xc000d16240)
	Aug 06 08:51:49 old-k8s-version-409903 kubelet[6505]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:138 +0x185
	Aug 06 08:51:49 old-k8s-version-409903 kubelet[6505]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run.func1()
	Aug 06 08:51:49 old-k8s-version-409903 kubelet[6505]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:222 +0x70
	Aug 06 08:51:49 old-k8s-version-409903 kubelet[6505]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc000bd96f0)
	Aug 06 08:51:49 old-k8s-version-409903 kubelet[6505]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155 +0x5f
	Aug 06 08:51:49 old-k8s-version-409903 kubelet[6505]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000d07ef0, 0x4f0ac20, 0xc0005cf590, 0x1, 0xc00009e0c0)
	Aug 06 08:51:49 old-k8s-version-409903 kubelet[6505]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156 +0xad
	Aug 06 08:51:49 old-k8s-version-409903 kubelet[6505]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc000bca2a0, 0xc00009e0c0)
	Aug 06 08:51:49 old-k8s-version-409903 kubelet[6505]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Aug 06 08:51:49 old-k8s-version-409903 kubelet[6505]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Aug 06 08:51:49 old-k8s-version-409903 kubelet[6505]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Aug 06 08:51:49 old-k8s-version-409903 kubelet[6505]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000a7b120, 0xc000b7f220)
	Aug 06 08:51:49 old-k8s-version-409903 kubelet[6505]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Aug 06 08:51:49 old-k8s-version-409903 kubelet[6505]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Aug 06 08:51:49 old-k8s-version-409903 kubelet[6505]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Aug 06 08:51:49 old-k8s-version-409903 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Aug 06 08:51:49 old-k8s-version-409903 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-409903 -n old-k8s-version-409903
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-409903 -n old-k8s-version-409903: exit status 2 (235.302571ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-409903" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.66s)
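The failure above lines up with the K8S_KUBELET_NOT_RUNNING exit recorded earlier in this log. The troubleshooting the log itself suggests can be repeated against this profile roughly as follows (a sketch only, reusing the profile name, crictl endpoint, and kubelet flag quoted above; not validated against this run):

	# inspect kubelet and CRI-O state on the node (profile name taken from the log above)
	minikube ssh -p old-k8s-version-409903 "sudo journalctl -xeu kubelet | tail -n 100"
	minikube ssh -p old-k8s-version-409903 "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a"
	# suggestion emitted by minikube itself in the log above
	minikube start -p old-k8s-version-409903 --extra-config=kubelet.cgroup-driver=systemd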

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (474.52s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-324808 -n embed-certs-324808
start_stop_delete_test.go:287: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-08-06 08:55:32.594238955 +0000 UTC m=+6655.819049395
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-324808 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-324808 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.252µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-324808 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
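Had the apiserver been reachable, the image check above could also be made by hand; a rough equivalent (reusing the context, namespace, and label selector named by this test, not run here) is:

	# list dashboard deployments with their images, and the pods the test waited for
	kubectl --context embed-certs-324808 -n kubernetes-dashboard get deploy -o wide
	kubectl --context embed-certs-324808 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard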
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-324808 -n embed-certs-324808
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-324808 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-324808 logs -n 25: (1.145925903s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p                                                     | disable-driver-mounts-607762 | jenkins | v1.33.1 | 06 Aug 24 08:25 UTC | 06 Aug 24 08:25 UTC |
	|         | disable-driver-mounts-607762                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-990751 | jenkins | v1.33.1 | 06 Aug 24 08:25 UTC | 06 Aug 24 08:27 UTC |
	|         | default-k8s-diff-port-990751                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-703340             | no-preload-703340            | jenkins | v1.33.1 | 06 Aug 24 08:26 UTC | 06 Aug 24 08:26 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-703340                                   | no-preload-703340            | jenkins | v1.33.1 | 06 Aug 24 08:26 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-324808            | embed-certs-324808           | jenkins | v1.33.1 | 06 Aug 24 08:26 UTC | 06 Aug 24 08:26 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-324808                                  | embed-certs-324808           | jenkins | v1.33.1 | 06 Aug 24 08:26 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-990751  | default-k8s-diff-port-990751 | jenkins | v1.33.1 | 06 Aug 24 08:27 UTC | 06 Aug 24 08:27 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-990751 | jenkins | v1.33.1 | 06 Aug 24 08:27 UTC |                     |
	|         | default-k8s-diff-port-990751                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-703340                  | no-preload-703340            | jenkins | v1.33.1 | 06 Aug 24 08:28 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-703340                                   | no-preload-703340            | jenkins | v1.33.1 | 06 Aug 24 08:28 UTC | 06 Aug 24 08:40 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0                      |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-409903        | old-k8s-version-409903       | jenkins | v1.33.1 | 06 Aug 24 08:28 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-324808                 | embed-certs-324808           | jenkins | v1.33.1 | 06 Aug 24 08:29 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-324808                                  | embed-certs-324808           | jenkins | v1.33.1 | 06 Aug 24 08:29 UTC | 06 Aug 24 08:38 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-990751       | default-k8s-diff-port-990751 | jenkins | v1.33.1 | 06 Aug 24 08:30 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-990751 | jenkins | v1.33.1 | 06 Aug 24 08:30 UTC | 06 Aug 24 08:38 UTC |
	|         | default-k8s-diff-port-990751                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-409903                              | old-k8s-version-409903       | jenkins | v1.33.1 | 06 Aug 24 08:30 UTC | 06 Aug 24 08:30 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-409903             | old-k8s-version-409903       | jenkins | v1.33.1 | 06 Aug 24 08:30 UTC | 06 Aug 24 08:30 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-409903                              | old-k8s-version-409903       | jenkins | v1.33.1 | 06 Aug 24 08:30 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-409903                              | old-k8s-version-409903       | jenkins | v1.33.1 | 06 Aug 24 08:54 UTC | 06 Aug 24 08:54 UTC |
	| start   | -p newest-cni-384471 --memory=2200 --alsologtostderr   | newest-cni-384471            | jenkins | v1.33.1 | 06 Aug 24 08:54 UTC | 06 Aug 24 08:55 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0                      |                              |         |         |                     |                     |
	| delete  | -p no-preload-703340                                   | no-preload-703340            | jenkins | v1.33.1 | 06 Aug 24 08:54 UTC | 06 Aug 24 08:54 UTC |
	| addons  | enable metrics-server -p newest-cni-384471             | newest-cni-384471            | jenkins | v1.33.1 | 06 Aug 24 08:55 UTC | 06 Aug 24 08:55 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-384471                                   | newest-cni-384471            | jenkins | v1.33.1 | 06 Aug 24 08:55 UTC | 06 Aug 24 08:55 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-384471                  | newest-cni-384471            | jenkins | v1.33.1 | 06 Aug 24 08:55 UTC | 06 Aug 24 08:55 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-384471 --memory=2200 --alsologtostderr   | newest-cni-384471            | jenkins | v1.33.1 | 06 Aug 24 08:55 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0                      |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/06 08:55:18
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0806 08:55:18.857608   86804 out.go:291] Setting OutFile to fd 1 ...
	I0806 08:55:18.857947   86804 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 08:55:18.857958   86804 out.go:304] Setting ErrFile to fd 2...
	I0806 08:55:18.857963   86804 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 08:55:18.858143   86804 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19370-13057/.minikube/bin
	I0806 08:55:18.858722   86804 out.go:298] Setting JSON to false
	I0806 08:55:18.859783   86804 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":9465,"bootTime":1722925054,"procs":196,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0806 08:55:18.859843   86804 start.go:139] virtualization: kvm guest
	I0806 08:55:18.862120   86804 out.go:177] * [newest-cni-384471] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0806 08:55:18.863801   86804 out.go:177]   - MINIKUBE_LOCATION=19370
	I0806 08:55:18.863805   86804 notify.go:220] Checking for updates...
	I0806 08:55:18.865314   86804 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0806 08:55:18.866690   86804 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19370-13057/kubeconfig
	I0806 08:55:18.867921   86804 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19370-13057/.minikube
	I0806 08:55:18.869180   86804 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0806 08:55:18.870468   86804 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0806 08:55:18.872057   86804 config.go:182] Loaded profile config "newest-cni-384471": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-rc.0
	I0806 08:55:18.872615   86804 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:55:18.872668   86804 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:55:18.887733   86804 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44389
	I0806 08:55:18.888185   86804 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:55:18.888957   86804 main.go:141] libmachine: Using API Version  1
	I0806 08:55:18.888988   86804 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:55:18.889381   86804 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:55:18.889572   86804 main.go:141] libmachine: (newest-cni-384471) Calling .DriverName
	I0806 08:55:18.889865   86804 driver.go:392] Setting default libvirt URI to qemu:///system
	I0806 08:55:18.890225   86804 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:55:18.890264   86804 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:55:18.905074   86804 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35765
	I0806 08:55:18.905489   86804 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:55:18.905931   86804 main.go:141] libmachine: Using API Version  1
	I0806 08:55:18.905972   86804 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:55:18.906323   86804 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:55:18.906578   86804 main.go:141] libmachine: (newest-cni-384471) Calling .DriverName
	I0806 08:55:18.944904   86804 out.go:177] * Using the kvm2 driver based on existing profile
	I0806 08:55:18.946275   86804 start.go:297] selected driver: kvm2
	I0806 08:55:18.946297   86804 start.go:901] validating driver "kvm2" against &{Name:newest-cni-384471 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.0-rc.0 ClusterName:newest-cni-384471 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.76 Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods
:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 08:55:18.946438   86804 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0806 08:55:18.947169   86804 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 08:55:18.947247   86804 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19370-13057/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0806 08:55:18.962491   86804 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0806 08:55:18.962857   86804 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0806 08:55:18.962926   86804 cni.go:84] Creating CNI manager for ""
	I0806 08:55:18.962939   86804 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0806 08:55:18.962975   86804 start.go:340] cluster config:
	{Name:newest-cni-384471 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:newest-cni-384471 Namespace:default APIS
erverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.76 Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress
: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 08:55:18.963062   86804 iso.go:125] acquiring lock: {Name:mke58d71de28babe305c5eb0c31c274e9713bb3f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 08:55:18.965697   86804 out.go:177] * Starting "newest-cni-384471" primary control-plane node in "newest-cni-384471" cluster
	I0806 08:55:18.967131   86804 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime crio
	I0806 08:55:18.967174   86804 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19370-13057/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-rc.0-cri-o-overlay-amd64.tar.lz4
	I0806 08:55:18.967185   86804 cache.go:56] Caching tarball of preloaded images
	I0806 08:55:18.967297   86804 preload.go:172] Found /home/jenkins/minikube-integration/19370-13057/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-rc.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0806 08:55:18.967311   86804 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-rc.0 on crio
	I0806 08:55:18.967430   86804 profile.go:143] Saving config to /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/newest-cni-384471/config.json ...
	I0806 08:55:18.967646   86804 start.go:360] acquireMachinesLock for newest-cni-384471: {Name:mkc602ac9c8cc3360dcc0fcb8104303837aaab61 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0806 08:55:18.967691   86804 start.go:364] duration metric: took 25.108µs to acquireMachinesLock for "newest-cni-384471"
	I0806 08:55:18.967709   86804 start.go:96] Skipping create...Using existing machine configuration
	I0806 08:55:18.967715   86804 fix.go:54] fixHost starting: 
	I0806 08:55:18.968036   86804 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:55:18.968071   86804 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:55:18.983279   86804 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46207
	I0806 08:55:18.983763   86804 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:55:18.984251   86804 main.go:141] libmachine: Using API Version  1
	I0806 08:55:18.984273   86804 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:55:18.984609   86804 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:55:18.984836   86804 main.go:141] libmachine: (newest-cni-384471) Calling .DriverName
	I0806 08:55:18.985027   86804 main.go:141] libmachine: (newest-cni-384471) Calling .GetState
	I0806 08:55:18.987000   86804 fix.go:112] recreateIfNeeded on newest-cni-384471: state=Stopped err=<nil>
	I0806 08:55:18.987023   86804 main.go:141] libmachine: (newest-cni-384471) Calling .DriverName
	W0806 08:55:18.987200   86804 fix.go:138] unexpected machine state, will restart: <nil>
	I0806 08:55:18.989219   86804 out.go:177] * Restarting existing kvm2 VM for "newest-cni-384471" ...
	I0806 08:55:18.990412   86804 main.go:141] libmachine: (newest-cni-384471) Calling .Start
	I0806 08:55:18.990573   86804 main.go:141] libmachine: (newest-cni-384471) Ensuring networks are active...
	I0806 08:55:18.991396   86804 main.go:141] libmachine: (newest-cni-384471) Ensuring network default is active
	I0806 08:55:18.991788   86804 main.go:141] libmachine: (newest-cni-384471) Ensuring network mk-newest-cni-384471 is active
	I0806 08:55:18.992177   86804 main.go:141] libmachine: (newest-cni-384471) Getting domain xml...
	I0806 08:55:18.992993   86804 main.go:141] libmachine: (newest-cni-384471) Creating domain...
	I0806 08:55:20.242495   86804 main.go:141] libmachine: (newest-cni-384471) Waiting to get IP...
	I0806 08:55:20.243404   86804 main.go:141] libmachine: (newest-cni-384471) DBG | domain newest-cni-384471 has defined MAC address 52:54:00:95:c7:9e in network mk-newest-cni-384471
	I0806 08:55:20.243805   86804 main.go:141] libmachine: (newest-cni-384471) DBG | unable to find current IP address of domain newest-cni-384471 in network mk-newest-cni-384471
	I0806 08:55:20.243866   86804 main.go:141] libmachine: (newest-cni-384471) DBG | I0806 08:55:20.243784   86839 retry.go:31] will retry after 270.139285ms: waiting for machine to come up
	I0806 08:55:20.515344   86804 main.go:141] libmachine: (newest-cni-384471) DBG | domain newest-cni-384471 has defined MAC address 52:54:00:95:c7:9e in network mk-newest-cni-384471
	I0806 08:55:20.515792   86804 main.go:141] libmachine: (newest-cni-384471) DBG | unable to find current IP address of domain newest-cni-384471 in network mk-newest-cni-384471
	I0806 08:55:20.515879   86804 main.go:141] libmachine: (newest-cni-384471) DBG | I0806 08:55:20.515765   86839 retry.go:31] will retry after 280.916412ms: waiting for machine to come up
	I0806 08:55:20.798383   86804 main.go:141] libmachine: (newest-cni-384471) DBG | domain newest-cni-384471 has defined MAC address 52:54:00:95:c7:9e in network mk-newest-cni-384471
	I0806 08:55:20.798812   86804 main.go:141] libmachine: (newest-cni-384471) DBG | unable to find current IP address of domain newest-cni-384471 in network mk-newest-cni-384471
	I0806 08:55:20.798841   86804 main.go:141] libmachine: (newest-cni-384471) DBG | I0806 08:55:20.798779   86839 retry.go:31] will retry after 349.754615ms: waiting for machine to come up
	I0806 08:55:21.150487   86804 main.go:141] libmachine: (newest-cni-384471) DBG | domain newest-cni-384471 has defined MAC address 52:54:00:95:c7:9e in network mk-newest-cni-384471
	I0806 08:55:21.150958   86804 main.go:141] libmachine: (newest-cni-384471) DBG | unable to find current IP address of domain newest-cni-384471 in network mk-newest-cni-384471
	I0806 08:55:21.150988   86804 main.go:141] libmachine: (newest-cni-384471) DBG | I0806 08:55:21.150886   86839 retry.go:31] will retry after 389.454923ms: waiting for machine to come up
	I0806 08:55:21.541578   86804 main.go:141] libmachine: (newest-cni-384471) DBG | domain newest-cni-384471 has defined MAC address 52:54:00:95:c7:9e in network mk-newest-cni-384471
	I0806 08:55:21.542017   86804 main.go:141] libmachine: (newest-cni-384471) DBG | unable to find current IP address of domain newest-cni-384471 in network mk-newest-cni-384471
	I0806 08:55:21.542047   86804 main.go:141] libmachine: (newest-cni-384471) DBG | I0806 08:55:21.541972   86839 retry.go:31] will retry after 702.387349ms: waiting for machine to come up
	I0806 08:55:22.245922   86804 main.go:141] libmachine: (newest-cni-384471) DBG | domain newest-cni-384471 has defined MAC address 52:54:00:95:c7:9e in network mk-newest-cni-384471
	I0806 08:55:22.246385   86804 main.go:141] libmachine: (newest-cni-384471) DBG | unable to find current IP address of domain newest-cni-384471 in network mk-newest-cni-384471
	I0806 08:55:22.246409   86804 main.go:141] libmachine: (newest-cni-384471) DBG | I0806 08:55:22.246347   86839 retry.go:31] will retry after 600.269224ms: waiting for machine to come up
	I0806 08:55:22.848034   86804 main.go:141] libmachine: (newest-cni-384471) DBG | domain newest-cni-384471 has defined MAC address 52:54:00:95:c7:9e in network mk-newest-cni-384471
	I0806 08:55:22.848532   86804 main.go:141] libmachine: (newest-cni-384471) DBG | unable to find current IP address of domain newest-cni-384471 in network mk-newest-cni-384471
	I0806 08:55:22.848560   86804 main.go:141] libmachine: (newest-cni-384471) DBG | I0806 08:55:22.848480   86839 retry.go:31] will retry after 1.177416653s: waiting for machine to come up
	I0806 08:55:24.027719   86804 main.go:141] libmachine: (newest-cni-384471) DBG | domain newest-cni-384471 has defined MAC address 52:54:00:95:c7:9e in network mk-newest-cni-384471
	I0806 08:55:24.028217   86804 main.go:141] libmachine: (newest-cni-384471) DBG | unable to find current IP address of domain newest-cni-384471 in network mk-newest-cni-384471
	I0806 08:55:24.028246   86804 main.go:141] libmachine: (newest-cni-384471) DBG | I0806 08:55:24.028143   86839 retry.go:31] will retry after 1.247223216s: waiting for machine to come up
	I0806 08:55:25.277640   86804 main.go:141] libmachine: (newest-cni-384471) DBG | domain newest-cni-384471 has defined MAC address 52:54:00:95:c7:9e in network mk-newest-cni-384471
	I0806 08:55:25.278240   86804 main.go:141] libmachine: (newest-cni-384471) DBG | unable to find current IP address of domain newest-cni-384471 in network mk-newest-cni-384471
	I0806 08:55:25.278264   86804 main.go:141] libmachine: (newest-cni-384471) DBG | I0806 08:55:25.278204   86839 retry.go:31] will retry after 1.540980502s: waiting for machine to come up
	I0806 08:55:26.820632   86804 main.go:141] libmachine: (newest-cni-384471) DBG | domain newest-cni-384471 has defined MAC address 52:54:00:95:c7:9e in network mk-newest-cni-384471
	I0806 08:55:26.821079   86804 main.go:141] libmachine: (newest-cni-384471) DBG | unable to find current IP address of domain newest-cni-384471 in network mk-newest-cni-384471
	I0806 08:55:26.821117   86804 main.go:141] libmachine: (newest-cni-384471) DBG | I0806 08:55:26.821055   86839 retry.go:31] will retry after 2.311522949s: waiting for machine to come up
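	
	The "will retry after …" messages above come from the driver polling libvirt for the guest's IP address with a growing, jittered delay before giving up. The short Go sketch below is only a hypothetical illustration of that poll-with-backoff pattern, not minikube's actual implementation; waitForIP, lookupIP, and the delay constants are invented for the example.
	
	// Hypothetical sketch of the poll-with-backoff pattern visible in the
	// retry.go lines above: keep asking the hypervisor for the guest's IP,
	// backing off between attempts until a deadline is reached. lookupIP is a
	// placeholder for whatever query the driver performs (e.g. reading DHCP
	// leases for the domain's MAC address).
	package main
	
	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)
	
	var errNoIP = errors.New("no IP assigned yet")
	
	// lookupIP stands in for the real driver call; here it always fails so the
	// backoff behaviour is observable when the sketch is run.
	func lookupIP(domain string) (string, error) {
		return "", errNoIP
	}
	
	// waitForIP polls lookupIP with a growing, jittered delay, similar to the
	// "will retry after ..." messages in the log, until timeout expires.
	func waitForIP(domain string, timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		delay := 250 * time.Millisecond
		for attempt := 1; ; attempt++ {
			ip, err := lookupIP(domain)
			if err == nil {
				return ip, nil
			}
			if time.Now().After(deadline) {
				return "", fmt.Errorf("timed out waiting for %s: %w", domain, err)
			}
			// Add jitter and grow the delay, capping it so a slow boot does not
			// stretch the interval indefinitely.
			sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
			fmt.Printf("attempt %d: %v, retrying in %v\n", attempt, err, sleep)
			time.Sleep(sleep)
			if delay < 3*time.Second {
				delay *= 2
			}
		}
	}
	
	func main() {
		if _, err := waitForIP("newest-cni-384471", 5*time.Second); err != nil {
			fmt.Println(err)
		}
	}
	
	Run on its own, the sketch prints a few retry lines and then times out, which mirrors the shape of the log above when a VM is slow to obtain a lease.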
	
	
	==> CRI-O <==
	Aug 06 08:55:33 embed-certs-324808 crio[728]: time="2024-08-06 08:55:33.204390524Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722934533204370621,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=34253cd6-db2e-4fc9-98b4-28a2e4ce1a61 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 06 08:55:33 embed-certs-324808 crio[728]: time="2024-08-06 08:55:33.204953565Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4c831e39-12f5-4d1d-85d9-6b5590f46420 name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 08:55:33 embed-certs-324808 crio[728]: time="2024-08-06 08:55:33.205016899Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4c831e39-12f5-4d1d-85d9-6b5590f46420 name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 08:55:33 embed-certs-324808 crio[728]: time="2024-08-06 08:55:33.205223608Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:367eb6487187a37acafb1fa74364a2d9ec5d9241888ed2e871fb14f2af161975,PodSandboxId:27950acba30f699b26c3086801707c13702e0c69ec58aa1582ce326e756490e2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722933278910752050,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15c7d89a-e994-4cca-931c-f646157a15e8,},Annotations:map[string]string{io.kubernetes.container.hash: 4540f957,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b29f8b66a54410c4d859c7dd605288a33ddbdf5d1c48722c718f2dbbc9ed9f3,PodSandboxId:3683ed34d915eac01c70be437fd87c3e3d9efe834c554a0ff79b4830d0aa0f62,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722933258423173197,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 41718f5d-fd79-4af8-9464-04a00be13504,},Annotations:map[string]string{io.kubernetes.container.hash: 58ebd90e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e1c91bc3a920b6f7db4477b8219f13fa092c68ff53abe2390325a68dad253e5,PodSandboxId:ed059b4ceda167186ee96ca5e73360b595dac7a59f59f8bd89423f739dadbb0c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722933255844194079,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-4xxwm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eeb47dea-32e9-458f-9ad9-2af6d4e68cc0,},Annotations:map[string]string{io.kubernetes.container.hash: f3899951,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:612fc6a8f2c1208097a6d6287c5fecb20026608a278af5d0d4efc9662e065eb9,PodSandboxId:ab1bd703c0510dbbe4c386c1d76510a87f45de355b7a86c1c84201ffe2f6730d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722933248127519730,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8lbzv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c45e6c0-9b7f-4c48-b
ff5-38d4a5949bec,},Annotations:map[string]string{io.kubernetes.container.hash: e30e5c8f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78abe2efa92d2d6c9171ed65bd4d941693be3207554f9318a7cefb91544eeb20,PodSandboxId:27950acba30f699b26c3086801707c13702e0c69ec58aa1582ce326e756490e2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722933248067976839,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15c7d89a-e994-4cca-931c-f646157a1
5e8,},Annotations:map[string]string{io.kubernetes.container.hash: 4540f957,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7edb6bece3347d88a263663a1d6549165dafdcb67ad723494b937766cba6da6,PodSandboxId:11616bc5ed6d0b08a352623e9a760f17c020d30ca828c7811afe1737dfcc2ca3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722933243338331603,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-324808,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18189df0a2512fa0a9393883a718fa75,},Annota
tions:map[string]string{io.kubernetes.container.hash: fafde40b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a25c5660a3a2b7226bf308b44307a3912efa7a9a73e79ec63f83133b80a44683,PodSandboxId:e6dd8d4a1944eb1c32f8b7427d808ec1abbab695ee962c045256ef57d96231c6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722933243355896420,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-324808,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91054f20d0444b47ed2a594ff97bc641,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df2efdb62a4af10f89a1b37599d4258ac3b72660353795f8c2141f3ef70c1f65,PodSandboxId:963ae0e923498a287d076541863b72249288ed25de0b24e686c3d10d12681101,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722933243328499057,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-324808,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8157c47ede204e65846d42268b1da9e0,},Annotations:map[string]string{io.kubernetes.container.hash:
9461fdd0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8651a7ed6615e826b950eab003841c2ec872b48160f207e77dd32a684125a23,PodSandboxId:6dbf61a7e83176b57abac02851c6bd7772ea711eb7641b03791397e466cb2127,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722933243294289542,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-324808,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af4bee3c7061a5d622ebea0a07e8602e,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4c831e39-12f5-4d1d-85d9-6b5590f46420 name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 08:55:33 embed-certs-324808 crio[728]: time="2024-08-06 08:55:33.247711788Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b74f369c-1837-4f5b-82f9-0e3040368ae5 name=/runtime.v1.RuntimeService/Version
	Aug 06 08:55:33 embed-certs-324808 crio[728]: time="2024-08-06 08:55:33.247808417Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b74f369c-1837-4f5b-82f9-0e3040368ae5 name=/runtime.v1.RuntimeService/Version
	Aug 06 08:55:33 embed-certs-324808 crio[728]: time="2024-08-06 08:55:33.249326373Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9ca51531-a373-4939-a1d0-73964af64389 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 06 08:55:33 embed-certs-324808 crio[728]: time="2024-08-06 08:55:33.249915933Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722934533249892506,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9ca51531-a373-4939-a1d0-73964af64389 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 06 08:55:33 embed-certs-324808 crio[728]: time="2024-08-06 08:55:33.250456115Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f5ed150c-147f-4414-ac8a-d4fed4eccb0c name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 08:55:33 embed-certs-324808 crio[728]: time="2024-08-06 08:55:33.250524290Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f5ed150c-147f-4414-ac8a-d4fed4eccb0c name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 08:55:33 embed-certs-324808 crio[728]: time="2024-08-06 08:55:33.250723490Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:367eb6487187a37acafb1fa74364a2d9ec5d9241888ed2e871fb14f2af161975,PodSandboxId:27950acba30f699b26c3086801707c13702e0c69ec58aa1582ce326e756490e2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722933278910752050,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15c7d89a-e994-4cca-931c-f646157a15e8,},Annotations:map[string]string{io.kubernetes.container.hash: 4540f957,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b29f8b66a54410c4d859c7dd605288a33ddbdf5d1c48722c718f2dbbc9ed9f3,PodSandboxId:3683ed34d915eac01c70be437fd87c3e3d9efe834c554a0ff79b4830d0aa0f62,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722933258423173197,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 41718f5d-fd79-4af8-9464-04a00be13504,},Annotations:map[string]string{io.kubernetes.container.hash: 58ebd90e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e1c91bc3a920b6f7db4477b8219f13fa092c68ff53abe2390325a68dad253e5,PodSandboxId:ed059b4ceda167186ee96ca5e73360b595dac7a59f59f8bd89423f739dadbb0c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722933255844194079,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-4xxwm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eeb47dea-32e9-458f-9ad9-2af6d4e68cc0,},Annotations:map[string]string{io.kubernetes.container.hash: f3899951,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:612fc6a8f2c1208097a6d6287c5fecb20026608a278af5d0d4efc9662e065eb9,PodSandboxId:ab1bd703c0510dbbe4c386c1d76510a87f45de355b7a86c1c84201ffe2f6730d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722933248127519730,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8lbzv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c45e6c0-9b7f-4c48-b
ff5-38d4a5949bec,},Annotations:map[string]string{io.kubernetes.container.hash: e30e5c8f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78abe2efa92d2d6c9171ed65bd4d941693be3207554f9318a7cefb91544eeb20,PodSandboxId:27950acba30f699b26c3086801707c13702e0c69ec58aa1582ce326e756490e2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722933248067976839,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15c7d89a-e994-4cca-931c-f646157a1
5e8,},Annotations:map[string]string{io.kubernetes.container.hash: 4540f957,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7edb6bece3347d88a263663a1d6549165dafdcb67ad723494b937766cba6da6,PodSandboxId:11616bc5ed6d0b08a352623e9a760f17c020d30ca828c7811afe1737dfcc2ca3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722933243338331603,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-324808,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18189df0a2512fa0a9393883a718fa75,},Annota
tions:map[string]string{io.kubernetes.container.hash: fafde40b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a25c5660a3a2b7226bf308b44307a3912efa7a9a73e79ec63f83133b80a44683,PodSandboxId:e6dd8d4a1944eb1c32f8b7427d808ec1abbab695ee962c045256ef57d96231c6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722933243355896420,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-324808,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91054f20d0444b47ed2a594ff97bc641,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df2efdb62a4af10f89a1b37599d4258ac3b72660353795f8c2141f3ef70c1f65,PodSandboxId:963ae0e923498a287d076541863b72249288ed25de0b24e686c3d10d12681101,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722933243328499057,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-324808,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8157c47ede204e65846d42268b1da9e0,},Annotations:map[string]string{io.kubernetes.container.hash:
9461fdd0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8651a7ed6615e826b950eab003841c2ec872b48160f207e77dd32a684125a23,PodSandboxId:6dbf61a7e83176b57abac02851c6bd7772ea711eb7641b03791397e466cb2127,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722933243294289542,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-324808,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af4bee3c7061a5d622ebea0a07e8602e,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f5ed150c-147f-4414-ac8a-d4fed4eccb0c name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 08:55:33 embed-certs-324808 crio[728]: time="2024-08-06 08:55:33.294351843Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=acdd3533-a5e8-492f-9b31-53442b491d13 name=/runtime.v1.RuntimeService/Version
	Aug 06 08:55:33 embed-certs-324808 crio[728]: time="2024-08-06 08:55:33.294504750Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=acdd3533-a5e8-492f-9b31-53442b491d13 name=/runtime.v1.RuntimeService/Version
	Aug 06 08:55:33 embed-certs-324808 crio[728]: time="2024-08-06 08:55:33.296046329Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=796caf5c-7a16-47f7-a93f-25a179f668f1 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 06 08:55:33 embed-certs-324808 crio[728]: time="2024-08-06 08:55:33.296657709Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722934533296625921,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=796caf5c-7a16-47f7-a93f-25a179f668f1 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 06 08:55:33 embed-certs-324808 crio[728]: time="2024-08-06 08:55:33.297163633Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=757617d2-1fad-4f60-8b8d-5b691b5f1f83 name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 08:55:33 embed-certs-324808 crio[728]: time="2024-08-06 08:55:33.297215230Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=757617d2-1fad-4f60-8b8d-5b691b5f1f83 name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 08:55:33 embed-certs-324808 crio[728]: time="2024-08-06 08:55:33.297471979Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:367eb6487187a37acafb1fa74364a2d9ec5d9241888ed2e871fb14f2af161975,PodSandboxId:27950acba30f699b26c3086801707c13702e0c69ec58aa1582ce326e756490e2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722933278910752050,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15c7d89a-e994-4cca-931c-f646157a15e8,},Annotations:map[string]string{io.kubernetes.container.hash: 4540f957,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b29f8b66a54410c4d859c7dd605288a33ddbdf5d1c48722c718f2dbbc9ed9f3,PodSandboxId:3683ed34d915eac01c70be437fd87c3e3d9efe834c554a0ff79b4830d0aa0f62,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722933258423173197,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 41718f5d-fd79-4af8-9464-04a00be13504,},Annotations:map[string]string{io.kubernetes.container.hash: 58ebd90e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e1c91bc3a920b6f7db4477b8219f13fa092c68ff53abe2390325a68dad253e5,PodSandboxId:ed059b4ceda167186ee96ca5e73360b595dac7a59f59f8bd89423f739dadbb0c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722933255844194079,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-4xxwm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eeb47dea-32e9-458f-9ad9-2af6d4e68cc0,},Annotations:map[string]string{io.kubernetes.container.hash: f3899951,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:612fc6a8f2c1208097a6d6287c5fecb20026608a278af5d0d4efc9662e065eb9,PodSandboxId:ab1bd703c0510dbbe4c386c1d76510a87f45de355b7a86c1c84201ffe2f6730d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722933248127519730,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8lbzv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c45e6c0-9b7f-4c48-b
ff5-38d4a5949bec,},Annotations:map[string]string{io.kubernetes.container.hash: e30e5c8f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78abe2efa92d2d6c9171ed65bd4d941693be3207554f9318a7cefb91544eeb20,PodSandboxId:27950acba30f699b26c3086801707c13702e0c69ec58aa1582ce326e756490e2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722933248067976839,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15c7d89a-e994-4cca-931c-f646157a1
5e8,},Annotations:map[string]string{io.kubernetes.container.hash: 4540f957,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7edb6bece3347d88a263663a1d6549165dafdcb67ad723494b937766cba6da6,PodSandboxId:11616bc5ed6d0b08a352623e9a760f17c020d30ca828c7811afe1737dfcc2ca3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722933243338331603,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-324808,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18189df0a2512fa0a9393883a718fa75,},Annota
tions:map[string]string{io.kubernetes.container.hash: fafde40b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a25c5660a3a2b7226bf308b44307a3912efa7a9a73e79ec63f83133b80a44683,PodSandboxId:e6dd8d4a1944eb1c32f8b7427d808ec1abbab695ee962c045256ef57d96231c6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722933243355896420,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-324808,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91054f20d0444b47ed2a594ff97bc641,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df2efdb62a4af10f89a1b37599d4258ac3b72660353795f8c2141f3ef70c1f65,PodSandboxId:963ae0e923498a287d076541863b72249288ed25de0b24e686c3d10d12681101,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722933243328499057,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-324808,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8157c47ede204e65846d42268b1da9e0,},Annotations:map[string]string{io.kubernetes.container.hash:
9461fdd0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8651a7ed6615e826b950eab003841c2ec872b48160f207e77dd32a684125a23,PodSandboxId:6dbf61a7e83176b57abac02851c6bd7772ea711eb7641b03791397e466cb2127,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722933243294289542,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-324808,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af4bee3c7061a5d622ebea0a07e8602e,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=757617d2-1fad-4f60-8b8d-5b691b5f1f83 name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 08:55:33 embed-certs-324808 crio[728]: time="2024-08-06 08:55:33.331683527Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=883763c6-eee6-448c-bfa3-db2552a0e75f name=/runtime.v1.RuntimeService/Version
	Aug 06 08:55:33 embed-certs-324808 crio[728]: time="2024-08-06 08:55:33.331757321Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=883763c6-eee6-448c-bfa3-db2552a0e75f name=/runtime.v1.RuntimeService/Version
	Aug 06 08:55:33 embed-certs-324808 crio[728]: time="2024-08-06 08:55:33.333885050Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ca89de89-0fc6-484c-a54b-dc255e8bb490 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 06 08:55:33 embed-certs-324808 crio[728]: time="2024-08-06 08:55:33.334760511Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722934533334734438,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ca89de89-0fc6-484c-a54b-dc255e8bb490 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 06 08:55:33 embed-certs-324808 crio[728]: time="2024-08-06 08:55:33.335345567Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=20e2f778-0488-4c65-812f-2e5c05085599 name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 08:55:33 embed-certs-324808 crio[728]: time="2024-08-06 08:55:33.335399317Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=20e2f778-0488-4c65-812f-2e5c05085599 name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 08:55:33 embed-certs-324808 crio[728]: time="2024-08-06 08:55:33.335650894Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:367eb6487187a37acafb1fa74364a2d9ec5d9241888ed2e871fb14f2af161975,PodSandboxId:27950acba30f699b26c3086801707c13702e0c69ec58aa1582ce326e756490e2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722933278910752050,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15c7d89a-e994-4cca-931c-f646157a15e8,},Annotations:map[string]string{io.kubernetes.container.hash: 4540f957,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b29f8b66a54410c4d859c7dd605288a33ddbdf5d1c48722c718f2dbbc9ed9f3,PodSandboxId:3683ed34d915eac01c70be437fd87c3e3d9efe834c554a0ff79b4830d0aa0f62,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722933258423173197,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 41718f5d-fd79-4af8-9464-04a00be13504,},Annotations:map[string]string{io.kubernetes.container.hash: 58ebd90e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e1c91bc3a920b6f7db4477b8219f13fa092c68ff53abe2390325a68dad253e5,PodSandboxId:ed059b4ceda167186ee96ca5e73360b595dac7a59f59f8bd89423f739dadbb0c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722933255844194079,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-4xxwm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eeb47dea-32e9-458f-9ad9-2af6d4e68cc0,},Annotations:map[string]string{io.kubernetes.container.hash: f3899951,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:612fc6a8f2c1208097a6d6287c5fecb20026608a278af5d0d4efc9662e065eb9,PodSandboxId:ab1bd703c0510dbbe4c386c1d76510a87f45de355b7a86c1c84201ffe2f6730d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722933248127519730,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8lbzv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c45e6c0-9b7f-4c48-b
ff5-38d4a5949bec,},Annotations:map[string]string{io.kubernetes.container.hash: e30e5c8f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78abe2efa92d2d6c9171ed65bd4d941693be3207554f9318a7cefb91544eeb20,PodSandboxId:27950acba30f699b26c3086801707c13702e0c69ec58aa1582ce326e756490e2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722933248067976839,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15c7d89a-e994-4cca-931c-f646157a1
5e8,},Annotations:map[string]string{io.kubernetes.container.hash: 4540f957,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7edb6bece3347d88a263663a1d6549165dafdcb67ad723494b937766cba6da6,PodSandboxId:11616bc5ed6d0b08a352623e9a760f17c020d30ca828c7811afe1737dfcc2ca3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722933243338331603,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-324808,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18189df0a2512fa0a9393883a718fa75,},Annota
tions:map[string]string{io.kubernetes.container.hash: fafde40b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a25c5660a3a2b7226bf308b44307a3912efa7a9a73e79ec63f83133b80a44683,PodSandboxId:e6dd8d4a1944eb1c32f8b7427d808ec1abbab695ee962c045256ef57d96231c6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722933243355896420,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-324808,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91054f20d0444b47ed2a594ff97bc641,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df2efdb62a4af10f89a1b37599d4258ac3b72660353795f8c2141f3ef70c1f65,PodSandboxId:963ae0e923498a287d076541863b72249288ed25de0b24e686c3d10d12681101,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722933243328499057,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-324808,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8157c47ede204e65846d42268b1da9e0,},Annotations:map[string]string{io.kubernetes.container.hash:
9461fdd0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8651a7ed6615e826b950eab003841c2ec872b48160f207e77dd32a684125a23,PodSandboxId:6dbf61a7e83176b57abac02851c6bd7772ea711eb7641b03791397e466cb2127,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722933243294289542,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-324808,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af4bee3c7061a5d622ebea0a07e8602e,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=20e2f778-0488-4c65-812f-2e5c05085599 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	367eb6487187a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      20 minutes ago      Running             storage-provisioner       2                   27950acba30f6       storage-provisioner
	1b29f8b66a544       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   21 minutes ago      Running             busybox                   1                   3683ed34d915e       busybox
	9e1c91bc3a920       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      21 minutes ago      Running             coredns                   1                   ed059b4ceda16       coredns-7db6d8ff4d-4xxwm
	612fc6a8f2c12       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      21 minutes ago      Running             kube-proxy                1                   ab1bd703c0510       kube-proxy-8lbzv
	78abe2efa92d2       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      21 minutes ago      Exited              storage-provisioner       1                   27950acba30f6       storage-provisioner
	a25c5660a3a2b       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      21 minutes ago      Running             kube-scheduler            1                   e6dd8d4a1944e       kube-scheduler-embed-certs-324808
	f7edb6bece334       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      21 minutes ago      Running             kube-apiserver            1                   11616bc5ed6d0       kube-apiserver-embed-certs-324808
	df2efdb62a4af       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      21 minutes ago      Running             etcd                      1                   963ae0e923498       etcd-embed-certs-324808
	d8651a7ed6615       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      21 minutes ago      Running             kube-controller-manager   1                   6dbf61a7e8317       kube-controller-manager-embed-certs-324808
	
	
	==> coredns [9e1c91bc3a920b6f7db4477b8219f13fa092c68ff53abe2390325a68dad253e5] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:46236 - 38099 "HINFO IN 2530163823123408894.8164630948987810677. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.016562482s
	
	
	==> describe nodes <==
	Name:               embed-certs-324808
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-324808
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e92cb06692f5ea1ba801d10d148e5e92e807f9c8
	                    minikube.k8s.io/name=embed-certs-324808
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_06T08_26_16_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 06 Aug 2024 08:26:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-324808
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 06 Aug 2024 08:55:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 06 Aug 2024 08:55:02 +0000   Tue, 06 Aug 2024 08:26:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 06 Aug 2024 08:55:02 +0000   Tue, 06 Aug 2024 08:26:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 06 Aug 2024 08:55:02 +0000   Tue, 06 Aug 2024 08:26:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 06 Aug 2024 08:55:02 +0000   Tue, 06 Aug 2024 08:34:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.190
	  Hostname:    embed-certs-324808
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c408910bdcf648538fd8ba76b3291141
	  System UUID:                c408910b-dcf6-4853-8fd8-ba76b3291141
	  Boot ID:                    0201bd3e-74f3-48df-8c90-ae37873684ef
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 coredns-7db6d8ff4d-4xxwm                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     29m
	  kube-system                 etcd-embed-certs-324808                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         29m
	  kube-system                 kube-apiserver-embed-certs-324808             250m (12%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-controller-manager-embed-certs-324808    200m (10%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-proxy-8lbzv                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-scheduler-embed-certs-324808             100m (5%)     0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 metrics-server-569cc877fc-8xb9k               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         28m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 29m                kube-proxy       
	  Normal  Starting                 21m                kube-proxy       
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  29m (x2 over 29m)  kubelet          Node embed-certs-324808 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29m (x2 over 29m)  kubelet          Node embed-certs-324808 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     29m (x2 over 29m)  kubelet          Node embed-certs-324808 status is now: NodeHasSufficientPID
	  Normal  Starting                 29m                kubelet          Starting kubelet.
	  Normal  NodeReady                29m                kubelet          Node embed-certs-324808 status is now: NodeReady
	  Normal  RegisteredNode           29m                node-controller  Node embed-certs-324808 event: Registered Node embed-certs-324808 in Controller
	  Normal  Starting                 21m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  21m (x8 over 21m)  kubelet          Node embed-certs-324808 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m (x8 over 21m)  kubelet          Node embed-certs-324808 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21m (x7 over 21m)  kubelet          Node embed-certs-324808 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           21m                node-controller  Node embed-certs-324808 event: Registered Node embed-certs-324808 in Controller
	
	
	==> dmesg <==
	[Aug 6 08:33] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050723] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040216] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.779708] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.594729] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.589448] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.558208] systemd-fstab-generator[643]: Ignoring "noauto" option for root device
	[  +0.061291] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.052531] systemd-fstab-generator[656]: Ignoring "noauto" option for root device
	[  +0.208472] systemd-fstab-generator[670]: Ignoring "noauto" option for root device
	[  +0.117471] systemd-fstab-generator[682]: Ignoring "noauto" option for root device
	[  +0.305461] systemd-fstab-generator[712]: Ignoring "noauto" option for root device
	[  +4.460210] systemd-fstab-generator[808]: Ignoring "noauto" option for root device
	[  +0.060310] kauditd_printk_skb: 130 callbacks suppressed
	[Aug 6 08:34] systemd-fstab-generator[931]: Ignoring "noauto" option for root device
	[  +5.632385] kauditd_printk_skb: 97 callbacks suppressed
	[  +3.960408] systemd-fstab-generator[1542]: Ignoring "noauto" option for root device
	[  +1.735312] kauditd_printk_skb: 62 callbacks suppressed
	[  +9.389115] kauditd_printk_skb: 43 callbacks suppressed
	
	
	==> etcd [df2efdb62a4af10f89a1b37599d4258ac3b72660353795f8c2141f3ef70c1f65] <==
	{"level":"info","ts":"2024-08-06T08:44:05.522777Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":794,"took":"10.282003ms","hash":3987429953,"current-db-size-bytes":2174976,"current-db-size":"2.2 MB","current-db-size-in-use-bytes":2174976,"current-db-size-in-use":"2.2 MB"}
	{"level":"info","ts":"2024-08-06T08:44:05.522891Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3987429953,"revision":794,"compact-revision":-1}
	{"level":"info","ts":"2024-08-06T08:49:05.518881Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1036}
	{"level":"info","ts":"2024-08-06T08:49:05.522783Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1036,"took":"3.5733ms","hash":3937595414,"current-db-size-bytes":2174976,"current-db-size":"2.2 MB","current-db-size-in-use-bytes":1163264,"current-db-size-in-use":"1.2 MB"}
	{"level":"info","ts":"2024-08-06T08:49:05.522883Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3937595414,"revision":1036,"compact-revision":794}
	{"level":"info","ts":"2024-08-06T08:54:05.526567Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1279}
	{"level":"info","ts":"2024-08-06T08:54:05.530368Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1279,"took":"3.317581ms","hash":648397568,"current-db-size-bytes":2174976,"current-db-size":"2.2 MB","current-db-size-in-use-bytes":1126400,"current-db-size-in-use":"1.1 MB"}
	{"level":"info","ts":"2024-08-06T08:54:05.530524Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":648397568,"revision":1279,"compact-revision":1036}
	{"level":"info","ts":"2024-08-06T08:54:51.718589Z","caller":"traceutil/trace.go:171","msg":"trace[1524970690] transaction","detail":"{read_only:false; response_revision:1560; number_of_response:1; }","duration":"427.322922ms","start":"2024-08-06T08:54:51.291155Z","end":"2024-08-06T08:54:51.718478Z","steps":["trace[1524970690] 'process raft request'  (duration: 427.145051ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-06T08:54:51.718671Z","caller":"traceutil/trace.go:171","msg":"trace[1376464369] linearizableReadLoop","detail":"{readStateIndex:1849; appliedIndex:1849; }","duration":"318.744048ms","start":"2024-08-06T08:54:51.399842Z","end":"2024-08-06T08:54:51.718586Z","steps":["trace[1376464369] 'read index received'  (duration: 318.737194ms)","trace[1376464369] 'applied index is now lower than readState.Index'  (duration: 5.882µs)"],"step_count":2}
	{"level":"warn","ts":"2024-08-06T08:54:51.718919Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-06T08:54:51.29114Z","time spent":"427.586906ms","remote":"127.0.0.1:42284","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":595,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1558 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:522 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-08-06T08:54:51.71902Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"319.158949ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/storageclasses/\" range_end:\"/registry/storageclasses0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-08-06T08:54:51.719086Z","caller":"traceutil/trace.go:171","msg":"trace[2137570331] range","detail":"{range_begin:/registry/storageclasses/; range_end:/registry/storageclasses0; response_count:0; response_revision:1560; }","duration":"319.24763ms","start":"2024-08-06T08:54:51.39981Z","end":"2024-08-06T08:54:51.719057Z","steps":["trace[2137570331] 'agreement among raft nodes before linearized reading'  (duration: 319.090025ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-06T08:54:51.719735Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-06T08:54:51.399794Z","time spent":"319.925458ms","remote":"127.0.0.1:42530","response type":"/etcdserverpb.KV/Range","request count":0,"request size":56,"response count":1,"response size":30,"request content":"key:\"/registry/storageclasses/\" range_end:\"/registry/storageclasses0\" count_only:true "}
	{"level":"warn","ts":"2024-08-06T08:54:51.941724Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"129.689994ms","expected-duration":"100ms","prefix":"","request":"header:<ID:7465438928253722362 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-node-lease/embed-certs-324808\" mod_revision:1553 > success:<request_put:<key:\"/registry/leases/kube-node-lease/embed-certs-324808\" value_size:502 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/embed-certs-324808\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-08-06T08:54:51.941898Z","caller":"traceutil/trace.go:171","msg":"trace[68028951] linearizableReadLoop","detail":"{readStateIndex:1850; appliedIndex:1849; }","duration":"222.995605ms","start":"2024-08-06T08:54:51.718882Z","end":"2024-08-06T08:54:51.941878Z","steps":["trace[68028951] 'read index received'  (duration: 92.98986ms)","trace[68028951] 'applied index is now lower than readState.Index'  (duration: 130.004347ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-06T08:54:51.941994Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"267.97603ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-06T08:54:51.942049Z","caller":"traceutil/trace.go:171","msg":"trace[1973369813] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1561; }","duration":"268.062564ms","start":"2024-08-06T08:54:51.673978Z","end":"2024-08-06T08:54:51.942041Z","steps":["trace[1973369813] 'agreement among raft nodes before linearized reading'  (duration: 267.985371ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-06T08:54:51.942108Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"100.72781ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-06T08:54:51.942162Z","caller":"traceutil/trace.go:171","msg":"trace[1800413593] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:1561; }","duration":"100.798587ms","start":"2024-08-06T08:54:51.841354Z","end":"2024-08-06T08:54:51.942153Z","steps":["trace[1800413593] 'agreement among raft nodes before linearized reading'  (duration: 100.672851ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-06T08:54:51.942183Z","caller":"traceutil/trace.go:171","msg":"trace[1198411352] transaction","detail":"{read_only:false; response_revision:1561; number_of_response:1; }","duration":"489.494396ms","start":"2024-08-06T08:54:51.452681Z","end":"2024-08-06T08:54:51.942175Z","steps":["trace[1198411352] 'process raft request'  (duration: 359.232502ms)","trace[1198411352] 'compare'  (duration: 129.574548ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-06T08:54:51.942321Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-06T08:54:51.452663Z","time spent":"489.626742ms","remote":"127.0.0.1:42406","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":561,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/embed-certs-324808\" mod_revision:1553 > success:<request_put:<key:\"/registry/leases/kube-node-lease/embed-certs-324808\" value_size:502 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/embed-certs-324808\" > >"}
	{"level":"warn","ts":"2024-08-06T08:54:51.942009Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"346.746344ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/horizontalpodautoscalers/\" range_end:\"/registry/horizontalpodautoscalers0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-06T08:54:51.942516Z","caller":"traceutil/trace.go:171","msg":"trace[240526895] range","detail":"{range_begin:/registry/horizontalpodautoscalers/; range_end:/registry/horizontalpodautoscalers0; response_count:0; response_revision:1561; }","duration":"347.267052ms","start":"2024-08-06T08:54:51.59523Z","end":"2024-08-06T08:54:51.942497Z","steps":["trace[240526895] 'agreement among raft nodes before linearized reading'  (duration: 346.751464ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-06T08:54:51.942542Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-06T08:54:51.595216Z","time spent":"347.317804ms","remote":"127.0.0.1:42360","response type":"/etcdserverpb.KV/Range","request count":0,"request size":76,"response count":0,"response size":28,"request content":"key:\"/registry/horizontalpodautoscalers/\" range_end:\"/registry/horizontalpodautoscalers0\" count_only:true "}
	
	
	==> kernel <==
	 08:55:33 up 21 min,  0 users,  load average: 0.17, 0.25, 0.19
	Linux embed-certs-324808 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [f7edb6bece3347d88a263663a1d6549165dafdcb67ad723494b937766cba6da6] <==
	I0806 08:50:07.844900       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0806 08:52:07.844505       1 handler_proxy.go:93] no RequestInfo found in the context
	E0806 08:52:07.844776       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0806 08:52:07.844803       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0806 08:52:07.845736       1 handler_proxy.go:93] no RequestInfo found in the context
	E0806 08:52:07.845817       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0806 08:52:07.845849       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0806 08:54:06.847072       1 handler_proxy.go:93] no RequestInfo found in the context
	E0806 08:54:06.847446       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0806 08:54:07.847919       1 handler_proxy.go:93] no RequestInfo found in the context
	E0806 08:54:07.848038       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0806 08:54:07.848072       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0806 08:54:07.848291       1 handler_proxy.go:93] no RequestInfo found in the context
	E0806 08:54:07.848731       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0806 08:54:07.850030       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0806 08:55:07.848679       1 handler_proxy.go:93] no RequestInfo found in the context
	E0806 08:55:07.848883       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0806 08:55:07.848910       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0806 08:55:07.850263       1 handler_proxy.go:93] no RequestInfo found in the context
	E0806 08:55:07.850470       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0806 08:55:07.850500       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [d8651a7ed6615e826b950eab003841c2ec872b48160f207e77dd32a684125a23] <==
	I0806 08:49:52.337472       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0806 08:50:21.812612       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0806 08:50:22.345582       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0806 08:50:35.704013       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="359.901µs"
	I0806 08:50:47.696670       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="61.788µs"
	E0806 08:50:51.817834       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0806 08:50:52.355632       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0806 08:51:21.823089       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0806 08:51:22.362987       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0806 08:51:51.830743       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0806 08:51:52.372141       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0806 08:52:21.835552       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0806 08:52:22.380865       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0806 08:52:51.839912       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0806 08:52:52.388930       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0806 08:53:21.846601       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0806 08:53:22.397037       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0806 08:53:51.852651       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0806 08:53:52.404762       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0806 08:54:21.858183       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0806 08:54:22.412052       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0806 08:54:51.865669       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0806 08:54:52.420524       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0806 08:55:21.871775       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0806 08:55:22.431989       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [612fc6a8f2c1208097a6d6287c5fecb20026608a278af5d0d4efc9662e065eb9] <==
	I0806 08:34:08.276218       1 server_linux.go:69] "Using iptables proxy"
	I0806 08:34:08.285065       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.190"]
	I0806 08:34:08.318635       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0806 08:34:08.318703       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0806 08:34:08.318728       1 server_linux.go:165] "Using iptables Proxier"
	I0806 08:34:08.321292       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0806 08:34:08.321649       1 server.go:872] "Version info" version="v1.30.3"
	I0806 08:34:08.321676       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0806 08:34:08.323039       1 config.go:192] "Starting service config controller"
	I0806 08:34:08.323069       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0806 08:34:08.323102       1 config.go:101] "Starting endpoint slice config controller"
	I0806 08:34:08.323106       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0806 08:34:08.325796       1 config.go:319] "Starting node config controller"
	I0806 08:34:08.325830       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0806 08:34:08.423192       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0806 08:34:08.423281       1 shared_informer.go:320] Caches are synced for service config
	I0806 08:34:08.426078       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [a25c5660a3a2b7226bf308b44307a3912efa7a9a73e79ec63f83133b80a44683] <==
	I0806 08:34:04.328149       1 serving.go:380] Generated self-signed cert in-memory
	W0806 08:34:06.787589       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0806 08:34:06.787645       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0806 08:34:06.787657       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0806 08:34:06.787664       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0806 08:34:06.818807       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0806 08:34:06.819498       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0806 08:34:06.821043       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0806 08:34:06.823683       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0806 08:34:06.823774       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0806 08:34:06.823855       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0806 08:34:06.924536       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 06 08:53:02 embed-certs-324808 kubelet[938]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 06 08:53:02 embed-certs-324808 kubelet[938]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 06 08:53:02 embed-certs-324808 kubelet[938]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 06 08:53:02 embed-certs-324808 kubelet[938]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 06 08:53:06 embed-certs-324808 kubelet[938]: E0806 08:53:06.684044     938 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-8xb9k" podUID="7a5fb326-11a7-48fa-bcde-a22026a1dccb"
	Aug 06 08:53:20 embed-certs-324808 kubelet[938]: E0806 08:53:20.682991     938 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-8xb9k" podUID="7a5fb326-11a7-48fa-bcde-a22026a1dccb"
	Aug 06 08:53:33 embed-certs-324808 kubelet[938]: E0806 08:53:33.683066     938 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-8xb9k" podUID="7a5fb326-11a7-48fa-bcde-a22026a1dccb"
	Aug 06 08:53:45 embed-certs-324808 kubelet[938]: E0806 08:53:45.682964     938 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-8xb9k" podUID="7a5fb326-11a7-48fa-bcde-a22026a1dccb"
	Aug 06 08:53:59 embed-certs-324808 kubelet[938]: E0806 08:53:59.683211     938 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-8xb9k" podUID="7a5fb326-11a7-48fa-bcde-a22026a1dccb"
	Aug 06 08:54:02 embed-certs-324808 kubelet[938]: E0806 08:54:02.716055     938 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 06 08:54:02 embed-certs-324808 kubelet[938]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 06 08:54:02 embed-certs-324808 kubelet[938]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 06 08:54:02 embed-certs-324808 kubelet[938]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 06 08:54:02 embed-certs-324808 kubelet[938]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 06 08:54:14 embed-certs-324808 kubelet[938]: E0806 08:54:14.683347     938 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-8xb9k" podUID="7a5fb326-11a7-48fa-bcde-a22026a1dccb"
	Aug 06 08:54:26 embed-certs-324808 kubelet[938]: E0806 08:54:26.682844     938 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-8xb9k" podUID="7a5fb326-11a7-48fa-bcde-a22026a1dccb"
	Aug 06 08:54:41 embed-certs-324808 kubelet[938]: E0806 08:54:41.682933     938 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-8xb9k" podUID="7a5fb326-11a7-48fa-bcde-a22026a1dccb"
	Aug 06 08:54:54 embed-certs-324808 kubelet[938]: E0806 08:54:54.683397     938 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-8xb9k" podUID="7a5fb326-11a7-48fa-bcde-a22026a1dccb"
	Aug 06 08:55:02 embed-certs-324808 kubelet[938]: E0806 08:55:02.717873     938 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 06 08:55:02 embed-certs-324808 kubelet[938]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 06 08:55:02 embed-certs-324808 kubelet[938]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 06 08:55:02 embed-certs-324808 kubelet[938]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 06 08:55:02 embed-certs-324808 kubelet[938]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 06 08:55:09 embed-certs-324808 kubelet[938]: E0806 08:55:09.683016     938 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-8xb9k" podUID="7a5fb326-11a7-48fa-bcde-a22026a1dccb"
	Aug 06 08:55:20 embed-certs-324808 kubelet[938]: E0806 08:55:20.683677     938 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-8xb9k" podUID="7a5fb326-11a7-48fa-bcde-a22026a1dccb"
	
	
	==> storage-provisioner [367eb6487187a37acafb1fa74364a2d9ec5d9241888ed2e871fb14f2af161975] <==
	I0806 08:34:39.042609       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0806 08:34:39.054272       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0806 08:34:39.054538       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0806 08:34:39.065607       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0806 08:34:39.066085       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-324808_113a14c2-2344-4cc7-8a7d-b76920a40753!
	I0806 08:34:39.067612       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"059f55b2-f290-4d6f-b0a5-25d3b1db6b88", APIVersion:"v1", ResourceVersion:"557", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-324808_113a14c2-2344-4cc7-8a7d-b76920a40753 became leader
	I0806 08:34:39.167331       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-324808_113a14c2-2344-4cc7-8a7d-b76920a40753!
	
	
	==> storage-provisioner [78abe2efa92d2d6c9171ed65bd4d941693be3207554f9318a7cefb91544eeb20] <==
	I0806 08:34:08.185137       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0806 08:34:38.188167       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
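Note on the storage-provisioner output above: the pre-restart container (78abe2ef…) exits with an i/o timeout against https://10.96.0.1:443 because the apiserver is unreachable while the profile is stopped, and the post-restart container (367eb648…) then re-acquires the kube-system/k8s.io-minikube-hostpath leader lease. A rough manual way to look at the same state, outside the automated run (it assumes the standard kube-system/storage-provisioner pod name that minikube uses and that the profile has not been deleted yet):

	kubectl --context embed-certs-324808 -n kube-system logs storage-provisioner
	kubectl --context embed-certs-324808 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml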
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-324808 -n embed-certs-324808
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-324808 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-8xb9k
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-324808 describe pod metrics-server-569cc877fc-8xb9k
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-324808 describe pod metrics-server-569cc877fc-8xb9k: exit status 1 (61.498001ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-8xb9k" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-324808 describe pod metrics-server-569cc877fc-8xb9k: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (474.52s)
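The kubelet entries above show the one non-running pod flagged in the post-mortem (metrics-server-569cc877fc-8xb9k) stuck in ImagePullBackOff: this run configures the MetricsServer image to come from the unreachable registry fake.domain (see the `addons enable metrics-server -p embed-certs-324808 … --registries=MetricsServer=fake.domain` row in the Audit table below), so the image can never be pulled. A rough manual equivalent of the check, outside the automated run (it assumes the k8s-app=metrics-server label from the stock addon manifest and that the profile still exists):

	kubectl --context embed-certs-324808 -n kube-system get pods -l k8s-app=metrics-server
	kubectl --context embed-certs-324808 -n kube-system describe pods -l k8s-app=metrics-server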

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (460.37s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-990751 -n default-k8s-diff-port-990751
start_stop_delete_test.go:287: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-08-06 08:55:38.061600322 +0000 UTC m=+6661.286410757
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-990751 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-990751 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.256µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-990751 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
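In other words, the test first waits for a Ready pod labelled k8s-app=kubernetes-dashboard and then checks that the dashboard-metrics-scraper deployment carries the overridden registry.k8s.io/echoserver:1.4 image; here the describe call returned nothing because the surrounding context had already expired. A rough manual equivalent of those two checks (a sketch only; it assumes the profile and the kubernetes-dashboard namespace still exist when it is run):

	kubectl --context default-k8s-diff-port-990751 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
	kubectl --context default-k8s-diff-port-990751 -n kubernetes-dashboard get deploy dashboard-metrics-scraper -o jsonpath='{.spec.template.spec.containers[*].image}'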
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-990751 -n default-k8s-diff-port-990751
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-990751 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-990751 logs -n 25: (1.204406206s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p                                                     | default-k8s-diff-port-990751 | jenkins | v1.33.1 | 06 Aug 24 08:25 UTC | 06 Aug 24 08:27 UTC |
	|         | default-k8s-diff-port-990751                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-703340             | no-preload-703340            | jenkins | v1.33.1 | 06 Aug 24 08:26 UTC | 06 Aug 24 08:26 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-703340                                   | no-preload-703340            | jenkins | v1.33.1 | 06 Aug 24 08:26 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-324808            | embed-certs-324808           | jenkins | v1.33.1 | 06 Aug 24 08:26 UTC | 06 Aug 24 08:26 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-324808                                  | embed-certs-324808           | jenkins | v1.33.1 | 06 Aug 24 08:26 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-990751  | default-k8s-diff-port-990751 | jenkins | v1.33.1 | 06 Aug 24 08:27 UTC | 06 Aug 24 08:27 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-990751 | jenkins | v1.33.1 | 06 Aug 24 08:27 UTC |                     |
	|         | default-k8s-diff-port-990751                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-703340                  | no-preload-703340            | jenkins | v1.33.1 | 06 Aug 24 08:28 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-703340                                   | no-preload-703340            | jenkins | v1.33.1 | 06 Aug 24 08:28 UTC | 06 Aug 24 08:40 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0                      |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-409903        | old-k8s-version-409903       | jenkins | v1.33.1 | 06 Aug 24 08:28 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-324808                 | embed-certs-324808           | jenkins | v1.33.1 | 06 Aug 24 08:29 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-324808                                  | embed-certs-324808           | jenkins | v1.33.1 | 06 Aug 24 08:29 UTC | 06 Aug 24 08:38 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-990751       | default-k8s-diff-port-990751 | jenkins | v1.33.1 | 06 Aug 24 08:30 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-990751 | jenkins | v1.33.1 | 06 Aug 24 08:30 UTC | 06 Aug 24 08:38 UTC |
	|         | default-k8s-diff-port-990751                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-409903                              | old-k8s-version-409903       | jenkins | v1.33.1 | 06 Aug 24 08:30 UTC | 06 Aug 24 08:30 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-409903             | old-k8s-version-409903       | jenkins | v1.33.1 | 06 Aug 24 08:30 UTC | 06 Aug 24 08:30 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-409903                              | old-k8s-version-409903       | jenkins | v1.33.1 | 06 Aug 24 08:30 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-409903                              | old-k8s-version-409903       | jenkins | v1.33.1 | 06 Aug 24 08:54 UTC | 06 Aug 24 08:54 UTC |
	| start   | -p newest-cni-384471 --memory=2200 --alsologtostderr   | newest-cni-384471            | jenkins | v1.33.1 | 06 Aug 24 08:54 UTC | 06 Aug 24 08:55 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0                      |                              |         |         |                     |                     |
	| delete  | -p no-preload-703340                                   | no-preload-703340            | jenkins | v1.33.1 | 06 Aug 24 08:54 UTC | 06 Aug 24 08:54 UTC |
	| addons  | enable metrics-server -p newest-cni-384471             | newest-cni-384471            | jenkins | v1.33.1 | 06 Aug 24 08:55 UTC | 06 Aug 24 08:55 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-384471                                   | newest-cni-384471            | jenkins | v1.33.1 | 06 Aug 24 08:55 UTC | 06 Aug 24 08:55 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-384471                  | newest-cni-384471            | jenkins | v1.33.1 | 06 Aug 24 08:55 UTC | 06 Aug 24 08:55 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-384471 --memory=2200 --alsologtostderr   | newest-cni-384471            | jenkins | v1.33.1 | 06 Aug 24 08:55 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0                      |                              |         |         |                     |                     |
	| delete  | -p embed-certs-324808                                  | embed-certs-324808           | jenkins | v1.33.1 | 06 Aug 24 08:55 UTC | 06 Aug 24 08:55 UTC |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/06 08:55:18
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0806 08:55:18.857608   86804 out.go:291] Setting OutFile to fd 1 ...
	I0806 08:55:18.857947   86804 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 08:55:18.857958   86804 out.go:304] Setting ErrFile to fd 2...
	I0806 08:55:18.857963   86804 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 08:55:18.858143   86804 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19370-13057/.minikube/bin
	I0806 08:55:18.858722   86804 out.go:298] Setting JSON to false
	I0806 08:55:18.859783   86804 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":9465,"bootTime":1722925054,"procs":196,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0806 08:55:18.859843   86804 start.go:139] virtualization: kvm guest
	I0806 08:55:18.862120   86804 out.go:177] * [newest-cni-384471] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0806 08:55:18.863801   86804 out.go:177]   - MINIKUBE_LOCATION=19370
	I0806 08:55:18.863805   86804 notify.go:220] Checking for updates...
	I0806 08:55:18.865314   86804 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0806 08:55:18.866690   86804 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19370-13057/kubeconfig
	I0806 08:55:18.867921   86804 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19370-13057/.minikube
	I0806 08:55:18.869180   86804 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0806 08:55:18.870468   86804 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0806 08:55:18.872057   86804 config.go:182] Loaded profile config "newest-cni-384471": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-rc.0
	I0806 08:55:18.872615   86804 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:55:18.872668   86804 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:55:18.887733   86804 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44389
	I0806 08:55:18.888185   86804 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:55:18.888957   86804 main.go:141] libmachine: Using API Version  1
	I0806 08:55:18.888988   86804 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:55:18.889381   86804 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:55:18.889572   86804 main.go:141] libmachine: (newest-cni-384471) Calling .DriverName
	I0806 08:55:18.889865   86804 driver.go:392] Setting default libvirt URI to qemu:///system
	I0806 08:55:18.890225   86804 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:55:18.890264   86804 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:55:18.905074   86804 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35765
	I0806 08:55:18.905489   86804 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:55:18.905931   86804 main.go:141] libmachine: Using API Version  1
	I0806 08:55:18.905972   86804 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:55:18.906323   86804 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:55:18.906578   86804 main.go:141] libmachine: (newest-cni-384471) Calling .DriverName
	I0806 08:55:18.944904   86804 out.go:177] * Using the kvm2 driver based on existing profile
	I0806 08:55:18.946275   86804 start.go:297] selected driver: kvm2
	I0806 08:55:18.946297   86804 start.go:901] validating driver "kvm2" against &{Name:newest-cni-384471 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.0-rc.0 ClusterName:newest-cni-384471 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.76 Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods
:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 08:55:18.946438   86804 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0806 08:55:18.947169   86804 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 08:55:18.947247   86804 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19370-13057/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0806 08:55:18.962491   86804 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0806 08:55:18.962857   86804 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0806 08:55:18.962926   86804 cni.go:84] Creating CNI manager for ""
	I0806 08:55:18.962939   86804 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0806 08:55:18.962975   86804 start.go:340] cluster config:
	{Name:newest-cni-384471 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:newest-cni-384471 Namespace:default APIS
erverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.76 Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress
: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 08:55:18.963062   86804 iso.go:125] acquiring lock: {Name:mke58d71de28babe305c5eb0c31c274e9713bb3f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 08:55:18.965697   86804 out.go:177] * Starting "newest-cni-384471" primary control-plane node in "newest-cni-384471" cluster
	I0806 08:55:18.967131   86804 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime crio
	I0806 08:55:18.967174   86804 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19370-13057/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-rc.0-cri-o-overlay-amd64.tar.lz4
	I0806 08:55:18.967185   86804 cache.go:56] Caching tarball of preloaded images
	I0806 08:55:18.967297   86804 preload.go:172] Found /home/jenkins/minikube-integration/19370-13057/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-rc.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0806 08:55:18.967311   86804 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-rc.0 on crio
	I0806 08:55:18.967430   86804 profile.go:143] Saving config to /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/newest-cni-384471/config.json ...
	I0806 08:55:18.967646   86804 start.go:360] acquireMachinesLock for newest-cni-384471: {Name:mkc602ac9c8cc3360dcc0fcb8104303837aaab61 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0806 08:55:18.967691   86804 start.go:364] duration metric: took 25.108µs to acquireMachinesLock for "newest-cni-384471"
	I0806 08:55:18.967709   86804 start.go:96] Skipping create...Using existing machine configuration
	I0806 08:55:18.967715   86804 fix.go:54] fixHost starting: 
	I0806 08:55:18.968036   86804 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:55:18.968071   86804 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:55:18.983279   86804 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46207
	I0806 08:55:18.983763   86804 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:55:18.984251   86804 main.go:141] libmachine: Using API Version  1
	I0806 08:55:18.984273   86804 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:55:18.984609   86804 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:55:18.984836   86804 main.go:141] libmachine: (newest-cni-384471) Calling .DriverName
	I0806 08:55:18.985027   86804 main.go:141] libmachine: (newest-cni-384471) Calling .GetState
	I0806 08:55:18.987000   86804 fix.go:112] recreateIfNeeded on newest-cni-384471: state=Stopped err=<nil>
	I0806 08:55:18.987023   86804 main.go:141] libmachine: (newest-cni-384471) Calling .DriverName
	W0806 08:55:18.987200   86804 fix.go:138] unexpected machine state, will restart: <nil>
	I0806 08:55:18.989219   86804 out.go:177] * Restarting existing kvm2 VM for "newest-cni-384471" ...
	I0806 08:55:18.990412   86804 main.go:141] libmachine: (newest-cni-384471) Calling .Start
	I0806 08:55:18.990573   86804 main.go:141] libmachine: (newest-cni-384471) Ensuring networks are active...
	I0806 08:55:18.991396   86804 main.go:141] libmachine: (newest-cni-384471) Ensuring network default is active
	I0806 08:55:18.991788   86804 main.go:141] libmachine: (newest-cni-384471) Ensuring network mk-newest-cni-384471 is active
	I0806 08:55:18.992177   86804 main.go:141] libmachine: (newest-cni-384471) Getting domain xml...
	I0806 08:55:18.992993   86804 main.go:141] libmachine: (newest-cni-384471) Creating domain...
	I0806 08:55:20.242495   86804 main.go:141] libmachine: (newest-cni-384471) Waiting to get IP...
	I0806 08:55:20.243404   86804 main.go:141] libmachine: (newest-cni-384471) DBG | domain newest-cni-384471 has defined MAC address 52:54:00:95:c7:9e in network mk-newest-cni-384471
	I0806 08:55:20.243805   86804 main.go:141] libmachine: (newest-cni-384471) DBG | unable to find current IP address of domain newest-cni-384471 in network mk-newest-cni-384471
	I0806 08:55:20.243866   86804 main.go:141] libmachine: (newest-cni-384471) DBG | I0806 08:55:20.243784   86839 retry.go:31] will retry after 270.139285ms: waiting for machine to come up
	I0806 08:55:20.515344   86804 main.go:141] libmachine: (newest-cni-384471) DBG | domain newest-cni-384471 has defined MAC address 52:54:00:95:c7:9e in network mk-newest-cni-384471
	I0806 08:55:20.515792   86804 main.go:141] libmachine: (newest-cni-384471) DBG | unable to find current IP address of domain newest-cni-384471 in network mk-newest-cni-384471
	I0806 08:55:20.515879   86804 main.go:141] libmachine: (newest-cni-384471) DBG | I0806 08:55:20.515765   86839 retry.go:31] will retry after 280.916412ms: waiting for machine to come up
	I0806 08:55:20.798383   86804 main.go:141] libmachine: (newest-cni-384471) DBG | domain newest-cni-384471 has defined MAC address 52:54:00:95:c7:9e in network mk-newest-cni-384471
	I0806 08:55:20.798812   86804 main.go:141] libmachine: (newest-cni-384471) DBG | unable to find current IP address of domain newest-cni-384471 in network mk-newest-cni-384471
	I0806 08:55:20.798841   86804 main.go:141] libmachine: (newest-cni-384471) DBG | I0806 08:55:20.798779   86839 retry.go:31] will retry after 349.754615ms: waiting for machine to come up
	I0806 08:55:21.150487   86804 main.go:141] libmachine: (newest-cni-384471) DBG | domain newest-cni-384471 has defined MAC address 52:54:00:95:c7:9e in network mk-newest-cni-384471
	I0806 08:55:21.150958   86804 main.go:141] libmachine: (newest-cni-384471) DBG | unable to find current IP address of domain newest-cni-384471 in network mk-newest-cni-384471
	I0806 08:55:21.150988   86804 main.go:141] libmachine: (newest-cni-384471) DBG | I0806 08:55:21.150886   86839 retry.go:31] will retry after 389.454923ms: waiting for machine to come up
	I0806 08:55:21.541578   86804 main.go:141] libmachine: (newest-cni-384471) DBG | domain newest-cni-384471 has defined MAC address 52:54:00:95:c7:9e in network mk-newest-cni-384471
	I0806 08:55:21.542017   86804 main.go:141] libmachine: (newest-cni-384471) DBG | unable to find current IP address of domain newest-cni-384471 in network mk-newest-cni-384471
	I0806 08:55:21.542047   86804 main.go:141] libmachine: (newest-cni-384471) DBG | I0806 08:55:21.541972   86839 retry.go:31] will retry after 702.387349ms: waiting for machine to come up
	I0806 08:55:22.245922   86804 main.go:141] libmachine: (newest-cni-384471) DBG | domain newest-cni-384471 has defined MAC address 52:54:00:95:c7:9e in network mk-newest-cni-384471
	I0806 08:55:22.246385   86804 main.go:141] libmachine: (newest-cni-384471) DBG | unable to find current IP address of domain newest-cni-384471 in network mk-newest-cni-384471
	I0806 08:55:22.246409   86804 main.go:141] libmachine: (newest-cni-384471) DBG | I0806 08:55:22.246347   86839 retry.go:31] will retry after 600.269224ms: waiting for machine to come up
	I0806 08:55:22.848034   86804 main.go:141] libmachine: (newest-cni-384471) DBG | domain newest-cni-384471 has defined MAC address 52:54:00:95:c7:9e in network mk-newest-cni-384471
	I0806 08:55:22.848532   86804 main.go:141] libmachine: (newest-cni-384471) DBG | unable to find current IP address of domain newest-cni-384471 in network mk-newest-cni-384471
	I0806 08:55:22.848560   86804 main.go:141] libmachine: (newest-cni-384471) DBG | I0806 08:55:22.848480   86839 retry.go:31] will retry after 1.177416653s: waiting for machine to come up
	I0806 08:55:24.027719   86804 main.go:141] libmachine: (newest-cni-384471) DBG | domain newest-cni-384471 has defined MAC address 52:54:00:95:c7:9e in network mk-newest-cni-384471
	I0806 08:55:24.028217   86804 main.go:141] libmachine: (newest-cni-384471) DBG | unable to find current IP address of domain newest-cni-384471 in network mk-newest-cni-384471
	I0806 08:55:24.028246   86804 main.go:141] libmachine: (newest-cni-384471) DBG | I0806 08:55:24.028143   86839 retry.go:31] will retry after 1.247223216s: waiting for machine to come up
	I0806 08:55:25.277640   86804 main.go:141] libmachine: (newest-cni-384471) DBG | domain newest-cni-384471 has defined MAC address 52:54:00:95:c7:9e in network mk-newest-cni-384471
	I0806 08:55:25.278240   86804 main.go:141] libmachine: (newest-cni-384471) DBG | unable to find current IP address of domain newest-cni-384471 in network mk-newest-cni-384471
	I0806 08:55:25.278264   86804 main.go:141] libmachine: (newest-cni-384471) DBG | I0806 08:55:25.278204   86839 retry.go:31] will retry after 1.540980502s: waiting for machine to come up
	I0806 08:55:26.820632   86804 main.go:141] libmachine: (newest-cni-384471) DBG | domain newest-cni-384471 has defined MAC address 52:54:00:95:c7:9e in network mk-newest-cni-384471
	I0806 08:55:26.821079   86804 main.go:141] libmachine: (newest-cni-384471) DBG | unable to find current IP address of domain newest-cni-384471 in network mk-newest-cni-384471
	I0806 08:55:26.821117   86804 main.go:141] libmachine: (newest-cni-384471) DBG | I0806 08:55:26.821055   86839 retry.go:31] will retry after 2.311522949s: waiting for machine to come up
	I0806 08:55:29.134173   86804 main.go:141] libmachine: (newest-cni-384471) DBG | domain newest-cni-384471 has defined MAC address 52:54:00:95:c7:9e in network mk-newest-cni-384471
	I0806 08:55:29.134678   86804 main.go:141] libmachine: (newest-cni-384471) DBG | unable to find current IP address of domain newest-cni-384471 in network mk-newest-cni-384471
	I0806 08:55:29.134712   86804 main.go:141] libmachine: (newest-cni-384471) DBG | I0806 08:55:29.134615   86839 retry.go:31] will retry after 2.7874442s: waiting for machine to come up
	I0806 08:55:31.925482   86804 main.go:141] libmachine: (newest-cni-384471) DBG | domain newest-cni-384471 has defined MAC address 52:54:00:95:c7:9e in network mk-newest-cni-384471
	I0806 08:55:31.926061   86804 main.go:141] libmachine: (newest-cni-384471) DBG | unable to find current IP address of domain newest-cni-384471 in network mk-newest-cni-384471
	I0806 08:55:31.926092   86804 main.go:141] libmachine: (newest-cni-384471) DBG | I0806 08:55:31.925984   86839 retry.go:31] will retry after 2.400554867s: waiting for machine to come up
	
	
	==> CRI-O <==
	Aug 06 08:55:38 default-k8s-diff-port-990751 crio[719]: time="2024-08-06 08:55:38.641134104Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722934538641112781,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=377a890c-9270-4e57-9da8-994a1eed977b name=/runtime.v1.ImageService/ImageFsInfo
	Aug 06 08:55:38 default-k8s-diff-port-990751 crio[719]: time="2024-08-06 08:55:38.641879421Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1ca7ddeb-65c7-43b0-acd0-a4a95e0c3ed1 name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 08:55:38 default-k8s-diff-port-990751 crio[719]: time="2024-08-06 08:55:38.641950664Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1ca7ddeb-65c7-43b0-acd0-a4a95e0c3ed1 name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 08:55:38 default-k8s-diff-port-990751 crio[719]: time="2024-08-06 08:55:38.642274913Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e668e4a5fa59e279bc227b6110983cc4660b639d5b1e1291ae60d361f682d82e,PodSandboxId:212629317ec9d4a02502b705c85c5e3a1d233f294ffdb7c78c6f03f5de7c4e0a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722933298482056515,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0faff3fc-d34b-418e-972b-95a2e36b577a,},Annotations:map[string]string{io.kubernetes.container.hash: 99df2b1d,io.kubernetes.container.restartCount: 3,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e66102b7ad4d765159de00c22e2ff0fc7466749da0042d42272d5d8590c8ee08,PodSandboxId:8676b9ccd33520640da1604c11f8e1f022def345d5accba2a99d0555ba1ac7b4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722933278325984668,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0c9c181e-0ac8-4463-bcdd-90536fa7ace0,},Annotations:map[string]string{io.kubernetes.container.hash: fb110dd9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59f63eeae644f7c8944cdb7b1d05d9e49fe84eb28e9ccbad2d2dd79846d7eb52,PodSandboxId:671bc248779b3dde9d65bca463606ec9f1e93428faa567ffc2a9d5a7f70c4ecd,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722933275327872058,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gx8tt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48f83ba8-a238-4baf-94af-88370fefcb02,},Annotations:map[string]string{io.kubernetes.container.hash: a7046b57,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abd677ac30faba3fef9e9d562601ad0012d7f1b48852e08b8f2684efda6b0f2e,PodSandboxId:212629317ec9d4a02502b705c85c5e3a1d233f294ffdb7c78c6f03f5de7c4e0a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722933267621029914,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 0faff3fc-d34b-418e-972b-95a2e36b577a,},Annotations:map[string]string{io.kubernetes.container.hash: 99df2b1d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75990046864b4a4db9f8c02f44de4eefbb0cf45584880b053bce4ca0b677ce4e,PodSandboxId:62877dd46aaa63956643b3d0f0d405fa45b34e720ba8734b0ead0a9e3577eecf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722933267588436904,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lpf88,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 866c10f0-2c44-4c03-b460
-ff2194bd1ec6,},Annotations:map[string]string{io.kubernetes.container.hash: 6ffde570,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5fa6fe8f1db6b59f47ea637515efb2864e0408ce1df0e6851b3bda0e240f5d3,PodSandboxId:7d737c16e4d6b6dfb159c9481525da24c5a35d2d49fc7649c2c2b47bccf13eeb,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722933263042998803,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-990751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be1af3bf6b8b56d20fa135ae58600b46,},Annotations:map[
string]string{io.kubernetes.container.hash: 42e17a12,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e596abcad3dc13d27548701f515beebba051e04a9e20f8ce1d018731d880ff1c,PodSandboxId:42f4b383c94b31dc78e49648f4bac2952ad70f8984c43ef9702d42fd0a1d6b84,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722933263050209712,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-990751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79d986bf1c9915f03028df54a8aaf049,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05bbffa4c989e42bd020ebe6b943459ec1d7b31a6c5e5ba9a37056da15542658,PodSandboxId:4502d8ea19cd903a4f9bd6f7e9e51168b714fcf287bf94b261c9ab68f1f2f3ce,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722933263062442813,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-990751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7799dd4e95fe53745fccce3bf5a
3815,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49843be03d16addb301ffd26e517770ff29b00f049efb524a2139b8b2153325c,PodSandboxId:17201685578b67add3362633d77a2d263ea9700b8d936ca483f69667ff24a7e8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722933263075049639,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-990751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eeaec8565e0ec402ba61f03e2c1209
84,},Annotations:map[string]string{io.kubernetes.container.hash: e9a9675d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1ca7ddeb-65c7-43b0-acd0-a4a95e0c3ed1 name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 08:55:38 default-k8s-diff-port-990751 crio[719]: time="2024-08-06 08:55:38.689160140Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d80aa03f-4278-4243-8c26-61e84ca7254e name=/runtime.v1.RuntimeService/Version
	Aug 06 08:55:38 default-k8s-diff-port-990751 crio[719]: time="2024-08-06 08:55:38.689255362Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d80aa03f-4278-4243-8c26-61e84ca7254e name=/runtime.v1.RuntimeService/Version
	Aug 06 08:55:38 default-k8s-diff-port-990751 crio[719]: time="2024-08-06 08:55:38.690613537Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9186eb5e-5830-4247-aa4e-541142021697 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 06 08:55:38 default-k8s-diff-port-990751 crio[719]: time="2024-08-06 08:55:38.691284797Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722934538691257315,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9186eb5e-5830-4247-aa4e-541142021697 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 06 08:55:38 default-k8s-diff-port-990751 crio[719]: time="2024-08-06 08:55:38.692131117Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0a842142-310d-44e6-957a-e82b648ba4d7 name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 08:55:38 default-k8s-diff-port-990751 crio[719]: time="2024-08-06 08:55:38.692198364Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0a842142-310d-44e6-957a-e82b648ba4d7 name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 08:55:38 default-k8s-diff-port-990751 crio[719]: time="2024-08-06 08:55:38.692460064Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e668e4a5fa59e279bc227b6110983cc4660b639d5b1e1291ae60d361f682d82e,PodSandboxId:212629317ec9d4a02502b705c85c5e3a1d233f294ffdb7c78c6f03f5de7c4e0a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722933298482056515,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0faff3fc-d34b-418e-972b-95a2e36b577a,},Annotations:map[string]string{io.kubernetes.container.hash: 99df2b1d,io.kubernetes.container.restartCount: 3,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e66102b7ad4d765159de00c22e2ff0fc7466749da0042d42272d5d8590c8ee08,PodSandboxId:8676b9ccd33520640da1604c11f8e1f022def345d5accba2a99d0555ba1ac7b4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722933278325984668,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0c9c181e-0ac8-4463-bcdd-90536fa7ace0,},Annotations:map[string]string{io.kubernetes.container.hash: fb110dd9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59f63eeae644f7c8944cdb7b1d05d9e49fe84eb28e9ccbad2d2dd79846d7eb52,PodSandboxId:671bc248779b3dde9d65bca463606ec9f1e93428faa567ffc2a9d5a7f70c4ecd,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722933275327872058,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gx8tt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48f83ba8-a238-4baf-94af-88370fefcb02,},Annotations:map[string]string{io.kubernetes.container.hash: a7046b57,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abd677ac30faba3fef9e9d562601ad0012d7f1b48852e08b8f2684efda6b0f2e,PodSandboxId:212629317ec9d4a02502b705c85c5e3a1d233f294ffdb7c78c6f03f5de7c4e0a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722933267621029914,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 0faff3fc-d34b-418e-972b-95a2e36b577a,},Annotations:map[string]string{io.kubernetes.container.hash: 99df2b1d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75990046864b4a4db9f8c02f44de4eefbb0cf45584880b053bce4ca0b677ce4e,PodSandboxId:62877dd46aaa63956643b3d0f0d405fa45b34e720ba8734b0ead0a9e3577eecf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722933267588436904,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lpf88,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 866c10f0-2c44-4c03-b460
-ff2194bd1ec6,},Annotations:map[string]string{io.kubernetes.container.hash: 6ffde570,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5fa6fe8f1db6b59f47ea637515efb2864e0408ce1df0e6851b3bda0e240f5d3,PodSandboxId:7d737c16e4d6b6dfb159c9481525da24c5a35d2d49fc7649c2c2b47bccf13eeb,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722933263042998803,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-990751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be1af3bf6b8b56d20fa135ae58600b46,},Annotations:map[
string]string{io.kubernetes.container.hash: 42e17a12,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e596abcad3dc13d27548701f515beebba051e04a9e20f8ce1d018731d880ff1c,PodSandboxId:42f4b383c94b31dc78e49648f4bac2952ad70f8984c43ef9702d42fd0a1d6b84,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722933263050209712,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-990751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79d986bf1c9915f03028df54a8aaf049,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05bbffa4c989e42bd020ebe6b943459ec1d7b31a6c5e5ba9a37056da15542658,PodSandboxId:4502d8ea19cd903a4f9bd6f7e9e51168b714fcf287bf94b261c9ab68f1f2f3ce,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722933263062442813,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-990751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7799dd4e95fe53745fccce3bf5a
3815,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49843be03d16addb301ffd26e517770ff29b00f049efb524a2139b8b2153325c,PodSandboxId:17201685578b67add3362633d77a2d263ea9700b8d936ca483f69667ff24a7e8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722933263075049639,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-990751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eeaec8565e0ec402ba61f03e2c1209
84,},Annotations:map[string]string{io.kubernetes.container.hash: e9a9675d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0a842142-310d-44e6-957a-e82b648ba4d7 name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 08:55:38 default-k8s-diff-port-990751 crio[719]: time="2024-08-06 08:55:38.738630044Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8aa74d1c-5acf-4cee-9c69-be49f7420e6d name=/runtime.v1.RuntimeService/Version
	Aug 06 08:55:38 default-k8s-diff-port-990751 crio[719]: time="2024-08-06 08:55:38.738706302Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8aa74d1c-5acf-4cee-9c69-be49f7420e6d name=/runtime.v1.RuntimeService/Version
	Aug 06 08:55:38 default-k8s-diff-port-990751 crio[719]: time="2024-08-06 08:55:38.742364019Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1b1c85a4-64bc-4970-b3fa-1e8483efbbd1 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 06 08:55:38 default-k8s-diff-port-990751 crio[719]: time="2024-08-06 08:55:38.742763015Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722934538742738998,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1b1c85a4-64bc-4970-b3fa-1e8483efbbd1 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 06 08:55:38 default-k8s-diff-port-990751 crio[719]: time="2024-08-06 08:55:38.743671960Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b850e538-535b-41c4-89ec-c994468d3ca8 name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 08:55:38 default-k8s-diff-port-990751 crio[719]: time="2024-08-06 08:55:38.743757697Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b850e538-535b-41c4-89ec-c994468d3ca8 name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 08:55:38 default-k8s-diff-port-990751 crio[719]: time="2024-08-06 08:55:38.743949228Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e668e4a5fa59e279bc227b6110983cc4660b639d5b1e1291ae60d361f682d82e,PodSandboxId:212629317ec9d4a02502b705c85c5e3a1d233f294ffdb7c78c6f03f5de7c4e0a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722933298482056515,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0faff3fc-d34b-418e-972b-95a2e36b577a,},Annotations:map[string]string{io.kubernetes.container.hash: 99df2b1d,io.kubernetes.container.restartCount: 3,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e66102b7ad4d765159de00c22e2ff0fc7466749da0042d42272d5d8590c8ee08,PodSandboxId:8676b9ccd33520640da1604c11f8e1f022def345d5accba2a99d0555ba1ac7b4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722933278325984668,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0c9c181e-0ac8-4463-bcdd-90536fa7ace0,},Annotations:map[string]string{io.kubernetes.container.hash: fb110dd9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59f63eeae644f7c8944cdb7b1d05d9e49fe84eb28e9ccbad2d2dd79846d7eb52,PodSandboxId:671bc248779b3dde9d65bca463606ec9f1e93428faa567ffc2a9d5a7f70c4ecd,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722933275327872058,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gx8tt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48f83ba8-a238-4baf-94af-88370fefcb02,},Annotations:map[string]string{io.kubernetes.container.hash: a7046b57,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abd677ac30faba3fef9e9d562601ad0012d7f1b48852e08b8f2684efda6b0f2e,PodSandboxId:212629317ec9d4a02502b705c85c5e3a1d233f294ffdb7c78c6f03f5de7c4e0a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722933267621029914,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 0faff3fc-d34b-418e-972b-95a2e36b577a,},Annotations:map[string]string{io.kubernetes.container.hash: 99df2b1d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75990046864b4a4db9f8c02f44de4eefbb0cf45584880b053bce4ca0b677ce4e,PodSandboxId:62877dd46aaa63956643b3d0f0d405fa45b34e720ba8734b0ead0a9e3577eecf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722933267588436904,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lpf88,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 866c10f0-2c44-4c03-b460
-ff2194bd1ec6,},Annotations:map[string]string{io.kubernetes.container.hash: 6ffde570,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5fa6fe8f1db6b59f47ea637515efb2864e0408ce1df0e6851b3bda0e240f5d3,PodSandboxId:7d737c16e4d6b6dfb159c9481525da24c5a35d2d49fc7649c2c2b47bccf13eeb,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722933263042998803,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-990751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be1af3bf6b8b56d20fa135ae58600b46,},Annotations:map[
string]string{io.kubernetes.container.hash: 42e17a12,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e596abcad3dc13d27548701f515beebba051e04a9e20f8ce1d018731d880ff1c,PodSandboxId:42f4b383c94b31dc78e49648f4bac2952ad70f8984c43ef9702d42fd0a1d6b84,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722933263050209712,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-990751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79d986bf1c9915f03028df54a8aaf049,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05bbffa4c989e42bd020ebe6b943459ec1d7b31a6c5e5ba9a37056da15542658,PodSandboxId:4502d8ea19cd903a4f9bd6f7e9e51168b714fcf287bf94b261c9ab68f1f2f3ce,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722933263062442813,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-990751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7799dd4e95fe53745fccce3bf5a
3815,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49843be03d16addb301ffd26e517770ff29b00f049efb524a2139b8b2153325c,PodSandboxId:17201685578b67add3362633d77a2d263ea9700b8d936ca483f69667ff24a7e8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722933263075049639,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-990751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eeaec8565e0ec402ba61f03e2c1209
84,},Annotations:map[string]string{io.kubernetes.container.hash: e9a9675d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b850e538-535b-41c4-89ec-c994468d3ca8 name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 08:55:38 default-k8s-diff-port-990751 crio[719]: time="2024-08-06 08:55:38.784837704Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f648e4ee-455c-4175-be5c-3f2cc9da8dfb name=/runtime.v1.RuntimeService/Version
	Aug 06 08:55:38 default-k8s-diff-port-990751 crio[719]: time="2024-08-06 08:55:38.784968246Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f648e4ee-455c-4175-be5c-3f2cc9da8dfb name=/runtime.v1.RuntimeService/Version
	Aug 06 08:55:38 default-k8s-diff-port-990751 crio[719]: time="2024-08-06 08:55:38.786794869Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=98fd04cf-8a84-45dd-863c-4c5c39f15030 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 06 08:55:38 default-k8s-diff-port-990751 crio[719]: time="2024-08-06 08:55:38.787198477Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722934538787174498,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=98fd04cf-8a84-45dd-863c-4c5c39f15030 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 06 08:55:38 default-k8s-diff-port-990751 crio[719]: time="2024-08-06 08:55:38.787864458Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f5210048-1192-45fd-abbe-87a0a54a89b7 name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 08:55:38 default-k8s-diff-port-990751 crio[719]: time="2024-08-06 08:55:38.787935941Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f5210048-1192-45fd-abbe-87a0a54a89b7 name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 08:55:38 default-k8s-diff-port-990751 crio[719]: time="2024-08-06 08:55:38.788142765Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e668e4a5fa59e279bc227b6110983cc4660b639d5b1e1291ae60d361f682d82e,PodSandboxId:212629317ec9d4a02502b705c85c5e3a1d233f294ffdb7c78c6f03f5de7c4e0a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722933298482056515,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0faff3fc-d34b-418e-972b-95a2e36b577a,},Annotations:map[string]string{io.kubernetes.container.hash: 99df2b1d,io.kubernetes.container.restartCount: 3,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e66102b7ad4d765159de00c22e2ff0fc7466749da0042d42272d5d8590c8ee08,PodSandboxId:8676b9ccd33520640da1604c11f8e1f022def345d5accba2a99d0555ba1ac7b4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722933278325984668,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0c9c181e-0ac8-4463-bcdd-90536fa7ace0,},Annotations:map[string]string{io.kubernetes.container.hash: fb110dd9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59f63eeae644f7c8944cdb7b1d05d9e49fe84eb28e9ccbad2d2dd79846d7eb52,PodSandboxId:671bc248779b3dde9d65bca463606ec9f1e93428faa567ffc2a9d5a7f70c4ecd,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722933275327872058,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gx8tt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48f83ba8-a238-4baf-94af-88370fefcb02,},Annotations:map[string]string{io.kubernetes.container.hash: a7046b57,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abd677ac30faba3fef9e9d562601ad0012d7f1b48852e08b8f2684efda6b0f2e,PodSandboxId:212629317ec9d4a02502b705c85c5e3a1d233f294ffdb7c78c6f03f5de7c4e0a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722933267621029914,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 0faff3fc-d34b-418e-972b-95a2e36b577a,},Annotations:map[string]string{io.kubernetes.container.hash: 99df2b1d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75990046864b4a4db9f8c02f44de4eefbb0cf45584880b053bce4ca0b677ce4e,PodSandboxId:62877dd46aaa63956643b3d0f0d405fa45b34e720ba8734b0ead0a9e3577eecf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722933267588436904,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lpf88,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 866c10f0-2c44-4c03-b460
-ff2194bd1ec6,},Annotations:map[string]string{io.kubernetes.container.hash: 6ffde570,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5fa6fe8f1db6b59f47ea637515efb2864e0408ce1df0e6851b3bda0e240f5d3,PodSandboxId:7d737c16e4d6b6dfb159c9481525da24c5a35d2d49fc7649c2c2b47bccf13eeb,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722933263042998803,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-990751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be1af3bf6b8b56d20fa135ae58600b46,},Annotations:map[
string]string{io.kubernetes.container.hash: 42e17a12,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e596abcad3dc13d27548701f515beebba051e04a9e20f8ce1d018731d880ff1c,PodSandboxId:42f4b383c94b31dc78e49648f4bac2952ad70f8984c43ef9702d42fd0a1d6b84,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722933263050209712,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-990751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79d986bf1c9915f03028df54a8aaf049,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05bbffa4c989e42bd020ebe6b943459ec1d7b31a6c5e5ba9a37056da15542658,PodSandboxId:4502d8ea19cd903a4f9bd6f7e9e51168b714fcf287bf94b261c9ab68f1f2f3ce,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722933263062442813,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-990751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7799dd4e95fe53745fccce3bf5a
3815,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49843be03d16addb301ffd26e517770ff29b00f049efb524a2139b8b2153325c,PodSandboxId:17201685578b67add3362633d77a2d263ea9700b8d936ca483f69667ff24a7e8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722933263075049639,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-990751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eeaec8565e0ec402ba61f03e2c1209
84,},Annotations:map[string]string{io.kubernetes.container.hash: e9a9675d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f5210048-1192-45fd-abbe-87a0a54a89b7 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	e668e4a5fa59e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      20 minutes ago      Running             storage-provisioner       3                   212629317ec9d       storage-provisioner
	e66102b7ad4d7       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   21 minutes ago      Running             busybox                   1                   8676b9ccd3352       busybox
	59f63eeae644f       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      21 minutes ago      Running             coredns                   1                   671bc248779b3       coredns-7db6d8ff4d-gx8tt
	abd677ac30fab       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      21 minutes ago      Exited              storage-provisioner       2                   212629317ec9d       storage-provisioner
	75990046864b4       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      21 minutes ago      Running             kube-proxy                1                   62877dd46aaa6       kube-proxy-lpf88
	49843be03d16a       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      21 minutes ago      Running             kube-apiserver            1                   17201685578b6       kube-apiserver-default-k8s-diff-port-990751
	05bbffa4c989e       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      21 minutes ago      Running             kube-controller-manager   1                   4502d8ea19cd9       kube-controller-manager-default-k8s-diff-port-990751
	e596abcad3dc1       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      21 minutes ago      Running             kube-scheduler            1                   42f4b383c94b3       kube-scheduler-default-k8s-diff-port-990751
	b5fa6fe8f1db6       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      21 minutes ago      Running             etcd                      1                   7d737c16e4d6b       etcd-default-k8s-diff-port-990751
	
	
	==> coredns [59f63eeae644f7c8944cdb7b1d05d9e49fe84eb28e9ccbad2d2dd79846d7eb52] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:41340 - 63584 "HINFO IN 3658871603673032814.1514312603822694222. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.00986291s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-990751
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-990751
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e92cb06692f5ea1ba801d10d148e5e92e807f9c8
	                    minikube.k8s.io/name=default-k8s-diff-port-990751
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_06T08_26_39_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 06 Aug 2024 08:26:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-990751
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 06 Aug 2024 08:55:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 06 Aug 2024 08:55:22 +0000   Tue, 06 Aug 2024 08:26:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 06 Aug 2024 08:55:22 +0000   Tue, 06 Aug 2024 08:26:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 06 Aug 2024 08:55:22 +0000   Tue, 06 Aug 2024 08:26:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 06 Aug 2024 08:55:22 +0000   Tue, 06 Aug 2024 08:34:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.235
	  Hostname:    default-k8s-diff-port-990751
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 7d6f63ee0ed64917abb3517943badd0c
	  System UUID:                7d6f63ee-0ed6-4917-abb3-517943badd0c
	  Boot ID:                    8f50a54c-584e-478b-8f59-052ccfb01920
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 coredns-7db6d8ff4d-gx8tt                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     28m
	  kube-system                 etcd-default-k8s-diff-port-990751                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         29m
	  kube-system                 kube-apiserver-default-k8s-diff-port-990751             250m (12%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-990751    200m (10%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-proxy-lpf88                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-scheduler-default-k8s-diff-port-990751             100m (5%)     0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 metrics-server-569cc877fc-hnxd4                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         27m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 28m                kube-proxy       
	  Normal  Starting                 21m                kube-proxy       
	  Normal  Starting                 29m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     29m                kubelet          Node default-k8s-diff-port-990751 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  29m                kubelet          Node default-k8s-diff-port-990751 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29m                kubelet          Node default-k8s-diff-port-990751 status is now: NodeHasNoDiskPressure
	  Normal  NodeReady                29m                kubelet          Node default-k8s-diff-port-990751 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           28m                node-controller  Node default-k8s-diff-port-990751 event: Registered Node default-k8s-diff-port-990751 in Controller
	  Normal  Starting                 21m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  21m (x8 over 21m)  kubelet          Node default-k8s-diff-port-990751 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m (x8 over 21m)  kubelet          Node default-k8s-diff-port-990751 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21m (x7 over 21m)  kubelet          Node default-k8s-diff-port-990751 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           20m                node-controller  Node default-k8s-diff-port-990751 event: Registered Node default-k8s-diff-port-990751 in Controller
	
	
	==> dmesg <==
	[Aug 6 08:33] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051078] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040369] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Aug 6 08:34] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.525821] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.622945] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.370777] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +0.063429] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.061661] systemd-fstab-generator[646]: Ignoring "noauto" option for root device
	[  +0.179950] systemd-fstab-generator[660]: Ignoring "noauto" option for root device
	[  +0.139215] systemd-fstab-generator[672]: Ignoring "noauto" option for root device
	[  +0.308339] systemd-fstab-generator[701]: Ignoring "noauto" option for root device
	[  +4.693999] systemd-fstab-generator[802]: Ignoring "noauto" option for root device
	[  +0.061819] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.646385] systemd-fstab-generator[925]: Ignoring "noauto" option for root device
	[  +5.608618] kauditd_printk_skb: 97 callbacks suppressed
	[  +4.540987] systemd-fstab-generator[1538]: Ignoring "noauto" option for root device
	[  +1.171008] kauditd_printk_skb: 62 callbacks suppressed
	[  +5.142838] kauditd_printk_skb: 38 callbacks suppressed
	
	
	==> etcd [b5fa6fe8f1db6b59f47ea637515efb2864e0408ce1df0e6851b3bda0e240f5d3] <==
	{"level":"info","ts":"2024-08-06T08:34:25.229397Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7ca23f5629343562 is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-06T08:34:25.229512Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7ca23f5629343562 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-06T08:34:25.229574Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7ca23f5629343562 received MsgPreVoteResp from 7ca23f5629343562 at term 2"}
	{"level":"info","ts":"2024-08-06T08:34:25.229618Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7ca23f5629343562 became candidate at term 3"}
	{"level":"info","ts":"2024-08-06T08:34:25.229642Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7ca23f5629343562 received MsgVoteResp from 7ca23f5629343562 at term 3"}
	{"level":"info","ts":"2024-08-06T08:34:25.22967Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7ca23f5629343562 became leader at term 3"}
	{"level":"info","ts":"2024-08-06T08:34:25.229698Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7ca23f5629343562 elected leader 7ca23f5629343562 at term 3"}
	{"level":"info","ts":"2024-08-06T08:34:25.242363Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-06T08:34:25.242772Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"7ca23f5629343562","local-member-attributes":"{Name:default-k8s-diff-port-990751 ClientURLs:[https://192.168.50.235:2379]}","request-path":"/0/members/7ca23f5629343562/attributes","cluster-id":"580d8333e4e46975","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-06T08:34:25.246793Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-06T08:34:25.247452Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-06T08:34:25.247502Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-06T08:34:25.247875Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.235:2379"}
	{"level":"info","ts":"2024-08-06T08:34:25.25009Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2024-08-06T08:34:45.64153Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"252.404601ms","expected-duration":"100ms","prefix":"","request":"header:<ID:3846796627666629277 > lease_revoke:<id:35629126d33d7183>","response":"size:27"}
	{"level":"info","ts":"2024-08-06T08:44:25.296197Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":838}
	{"level":"info","ts":"2024-08-06T08:44:25.307152Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":838,"took":"10.563531ms","hash":1724817416,"current-db-size-bytes":2740224,"current-db-size":"2.7 MB","current-db-size-in-use-bytes":2740224,"current-db-size-in-use":"2.7 MB"}
	{"level":"info","ts":"2024-08-06T08:44:25.307221Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1724817416,"revision":838,"compact-revision":-1}
	{"level":"info","ts":"2024-08-06T08:49:25.303662Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1081}
	{"level":"info","ts":"2024-08-06T08:49:25.308694Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1081,"took":"4.122851ms","hash":2492973545,"current-db-size-bytes":2740224,"current-db-size":"2.7 MB","current-db-size-in-use-bytes":1617920,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-08-06T08:49:25.308749Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2492973545,"revision":1081,"compact-revision":838}
	{"level":"info","ts":"2024-08-06T08:54:25.310786Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1324}
	{"level":"info","ts":"2024-08-06T08:54:25.314788Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1324,"took":"3.271349ms","hash":192248301,"current-db-size-bytes":2740224,"current-db-size":"2.7 MB","current-db-size-in-use-bytes":1605632,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-08-06T08:54:25.314863Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":192248301,"revision":1324,"compact-revision":1081}
	{"level":"info","ts":"2024-08-06T08:54:52.231836Z","caller":"traceutil/trace.go:171","msg":"trace[1831715908] transaction","detail":"{read_only:false; response_revision:1590; number_of_response:1; }","duration":"152.562491ms","start":"2024-08-06T08:54:52.079214Z","end":"2024-08-06T08:54:52.231777Z","steps":["trace[1831715908] 'process raft request'  (duration: 151.887489ms)"],"step_count":1}
	
	
	==> kernel <==
	 08:55:39 up 21 min,  0 users,  load average: 0.27, 0.21, 0.17
	Linux default-k8s-diff-port-990751 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [49843be03d16addb301ffd26e517770ff29b00f049efb524a2139b8b2153325c] <==
	I0806 08:50:27.741744       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0806 08:52:27.739829       1 handler_proxy.go:93] no RequestInfo found in the context
	E0806 08:52:27.739970       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0806 08:52:27.739980       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0806 08:52:27.742411       1 handler_proxy.go:93] no RequestInfo found in the context
	E0806 08:52:27.742465       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0806 08:52:27.742479       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0806 08:54:26.743657       1 handler_proxy.go:93] no RequestInfo found in the context
	E0806 08:54:26.743968       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0806 08:54:27.744994       1 handler_proxy.go:93] no RequestInfo found in the context
	E0806 08:54:27.745079       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0806 08:54:27.745087       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0806 08:54:27.745200       1 handler_proxy.go:93] no RequestInfo found in the context
	E0806 08:54:27.745352       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0806 08:54:27.746256       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0806 08:55:27.746208       1 handler_proxy.go:93] no RequestInfo found in the context
	W0806 08:55:27.746348       1 handler_proxy.go:93] no RequestInfo found in the context
	E0806 08:55:27.746377       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0806 08:55:27.746384       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0806 08:55:27.746425       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0806 08:55:27.747459       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [05bbffa4c989e42bd020ebe6b943459ec1d7b31a6c5e5ba9a37056da15542658] <==
	I0806 08:49:42.799722       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0806 08:50:12.285475       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0806 08:50:12.808511       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0806 08:50:37.235806       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="431.712µs"
	E0806 08:50:42.291033       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0806 08:50:42.816657       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0806 08:50:52.236856       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="86.875µs"
	E0806 08:51:12.297168       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0806 08:51:12.824870       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0806 08:51:42.302385       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0806 08:51:42.832404       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0806 08:52:12.307529       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0806 08:52:12.840261       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0806 08:52:42.313425       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0806 08:52:42.848728       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0806 08:53:12.318505       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0806 08:53:12.856469       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0806 08:53:42.324108       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0806 08:53:42.864022       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0806 08:54:12.330102       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0806 08:54:12.874438       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0806 08:54:42.336037       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0806 08:54:42.882928       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0806 08:55:12.341402       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0806 08:55:12.892786       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
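The repeated "stale GroupVersion discovery: metrics.k8s.io/v1beta1" errors above all have the same cause: the aggregated metrics API is registered but has no healthy backend, because the metrics-server pod never starts (see the kubelet log below). A quick manual check against this profile, offered as a diagnostic sketch rather than anything the test runs, would be:

    kubectl --context default-k8s-diff-port-990751 get apiservice v1beta1.metrics.k8s.io
    kubectl --context default-k8s-diff-port-990751 -n kube-system get deploy metrics-server

An APIService stuck at Available=False is consistent with the 503 that the OpenAPI aggregation controller reports at the top of this log excerpt.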
	
	
	==> kube-proxy [75990046864b4a4db9f8c02f44de4eefbb0cf45584880b053bce4ca0b677ce4e] <==
	I0806 08:34:27.955050       1 server_linux.go:69] "Using iptables proxy"
	I0806 08:34:27.983894       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.50.235"]
	I0806 08:34:28.024042       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0806 08:34:28.024128       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0806 08:34:28.024159       1 server_linux.go:165] "Using iptables Proxier"
	I0806 08:34:28.026803       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0806 08:34:28.027090       1 server.go:872] "Version info" version="v1.30.3"
	I0806 08:34:28.027142       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0806 08:34:28.028551       1 config.go:192] "Starting service config controller"
	I0806 08:34:28.028601       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0806 08:34:28.028642       1 config.go:101] "Starting endpoint slice config controller"
	I0806 08:34:28.028659       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0806 08:34:28.030565       1 config.go:319] "Starting node config controller"
	I0806 08:34:28.031391       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0806 08:34:28.129366       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0806 08:34:28.129507       1 shared_informer.go:320] Caches are synced for service config
	I0806 08:34:28.131503       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [e596abcad3dc13d27548701f515beebba051e04a9e20f8ce1d018731d880ff1c] <==
	I0806 08:34:24.829918       1 serving.go:380] Generated self-signed cert in-memory
	W0806 08:34:26.719992       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0806 08:34:26.720129       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0806 08:34:26.720158       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0806 08:34:26.720235       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0806 08:34:26.764539       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0806 08:34:26.764635       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0806 08:34:26.767052       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0806 08:34:26.767487       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0806 08:34:26.767691       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0806 08:34:26.767733       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0806 08:34:26.868862       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
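The three authentication warnings during scheduler start-up come from the scheduler being unable to read the extension-apiserver-authentication ConfigMap at that moment; the log itself spells out the usual remedy. One concrete instantiation of that hint, assuming the subject should be the user identity named in the error (system:kube-scheduler) and using an arbitrary binding name, might be:

    kubectl -n kube-system create rolebinding scheduler-auth-reader \
      --role=extension-apiserver-authentication-reader \
      --user=system:kube-scheduler

In this run the warnings are transient: the scheduler continues and serves on 127.0.0.1:10259 a few lines later, so no action was actually required.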
	
	
	==> kubelet <==
	Aug 06 08:53:22 default-k8s-diff-port-990751 kubelet[932]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 06 08:53:30 default-k8s-diff-port-990751 kubelet[932]: E0806 08:53:30.219873     932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-hnxd4" podUID="f369ac70-49b7-448a-9852-89232fb66da9"
	Aug 06 08:53:41 default-k8s-diff-port-990751 kubelet[932]: E0806 08:53:41.218658     932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-hnxd4" podUID="f369ac70-49b7-448a-9852-89232fb66da9"
	Aug 06 08:53:55 default-k8s-diff-port-990751 kubelet[932]: E0806 08:53:55.218371     932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-hnxd4" podUID="f369ac70-49b7-448a-9852-89232fb66da9"
	Aug 06 08:54:09 default-k8s-diff-port-990751 kubelet[932]: E0806 08:54:09.218921     932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-hnxd4" podUID="f369ac70-49b7-448a-9852-89232fb66da9"
	Aug 06 08:54:22 default-k8s-diff-port-990751 kubelet[932]: E0806 08:54:22.219092     932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-hnxd4" podUID="f369ac70-49b7-448a-9852-89232fb66da9"
	Aug 06 08:54:22 default-k8s-diff-port-990751 kubelet[932]: E0806 08:54:22.247986     932 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 06 08:54:22 default-k8s-diff-port-990751 kubelet[932]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 06 08:54:22 default-k8s-diff-port-990751 kubelet[932]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 06 08:54:22 default-k8s-diff-port-990751 kubelet[932]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 06 08:54:22 default-k8s-diff-port-990751 kubelet[932]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 06 08:54:34 default-k8s-diff-port-990751 kubelet[932]: E0806 08:54:34.218963     932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-hnxd4" podUID="f369ac70-49b7-448a-9852-89232fb66da9"
	Aug 06 08:54:46 default-k8s-diff-port-990751 kubelet[932]: E0806 08:54:46.220696     932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-hnxd4" podUID="f369ac70-49b7-448a-9852-89232fb66da9"
	Aug 06 08:54:58 default-k8s-diff-port-990751 kubelet[932]: E0806 08:54:58.219449     932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-hnxd4" podUID="f369ac70-49b7-448a-9852-89232fb66da9"
	Aug 06 08:55:11 default-k8s-diff-port-990751 kubelet[932]: E0806 08:55:11.219196     932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-hnxd4" podUID="f369ac70-49b7-448a-9852-89232fb66da9"
	Aug 06 08:55:22 default-k8s-diff-port-990751 kubelet[932]: E0806 08:55:22.221371     932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-hnxd4" podUID="f369ac70-49b7-448a-9852-89232fb66da9"
	Aug 06 08:55:22 default-k8s-diff-port-990751 kubelet[932]: E0806 08:55:22.247583     932 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 06 08:55:22 default-k8s-diff-port-990751 kubelet[932]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 06 08:55:22 default-k8s-diff-port-990751 kubelet[932]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 06 08:55:22 default-k8s-diff-port-990751 kubelet[932]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 06 08:55:22 default-k8s-diff-port-990751 kubelet[932]:  > table="nat" chain="KUBE-KUBELET-CANARY"
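The recurring "Could not set up iptables canary" blocks are the kubelet probing for an IPv6 nat table that the guest kernel has not loaded; they are noisy but unrelated to the metrics-server failure this test is tracking. On a node where IPv6 NAT were actually needed, the manual check and fix (hypothetical, not something the harness does) would be roughly:

    sudo ip6tables -t nat -L     # fails the same way while the nat table is unavailable
    sudo modprobe ip6table_nat   # loads IPv6 nat table support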
	Aug 06 08:55:35 default-k8s-diff-port-990751 kubelet[932]: E0806 08:55:35.232379     932 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Aug 06 08:55:35 default-k8s-diff-port-990751 kubelet[932]: E0806 08:55:35.232689     932 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Aug 06 08:55:35 default-k8s-diff-port-990751 kubelet[932]: E0806 08:55:35.232947     932 kuberuntime_manager.go:1256] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ljgsp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathEx
pr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,Stdin
Once:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-569cc877fc-hnxd4_kube-system(f369ac70-49b7-448a-9852-89232fb66da9): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Aug 06 08:55:35 default-k8s-diff-port-990751 kubelet[932]: E0806 08:55:35.233397     932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-569cc877fc-hnxd4" podUID="f369ac70-49b7-448a-9852-89232fb66da9"
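Every pull failure in this kubelet log points at fake.domain/registry.k8s.io/echoserver:1.4, which is exactly the reference the test set up: the metrics-server addon was enabled with its registry overridden to the unresolvable fake.domain (the addons enable rows in the Audit table further down show the flags). A one-line way to confirm the deployment carries that image, offered as a diagnostic sketch rather than part of the harness:

    kubectl --context default-k8s-diff-port-990751 -n kube-system \
      get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'

which should print fake.domain/registry.k8s.io/echoserver:1.4, matching the ErrImagePull lines above.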
	
	
	==> storage-provisioner [abd677ac30faba3fef9e9d562601ad0012d7f1b48852e08b8f2684efda6b0f2e] <==
	I0806 08:34:27.931138       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0806 08:34:57.935077       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [e668e4a5fa59e279bc227b6110983cc4660b639d5b1e1291ae60d361f682d82e] <==
	I0806 08:34:58.590797       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0806 08:34:58.601682       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0806 08:34:58.601901       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0806 08:35:16.005463       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0806 08:35:16.005822       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-990751_10440c7b-16ec-4542-8107-c69e1ad11e0e!
	I0806 08:35:16.006236       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"5ad1e3bd-e618-4071-ba71-ee7d7821c5e6", APIVersion:"v1", ResourceVersion:"621", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-990751_10440c7b-16ec-4542-8107-c69e1ad11e0e became leader
	I0806 08:35:16.107221       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-990751_10440c7b-16ec-4542-8107-c69e1ad11e0e!
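The first storage-provisioner container (previous section) exited after an i/o timeout reaching 10.96.0.1:443; this replacement then won the k8s.io-minikube-hostpath leader election. The election here is Endpoints-based, as the Event above shows, so the current holder can be read back (a manual check, not part of the test) with:

    kubectl --context default-k8s-diff-port-990751 -n kube-system \
      get endpoints k8s.io-minikube-hostpath -o yaml

where the control-plane.alpha.kubernetes.io/leader annotation records the holder identity seen in the log.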
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-990751 -n default-k8s-diff-port-990751
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-990751 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-hnxd4
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-990751 describe pod metrics-server-569cc877fc-hnxd4
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-990751 describe pod metrics-server-569cc877fc-hnxd4: exit status 1 (69.278316ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-hnxd4" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-990751 describe pod metrics-server-569cc877fc-hnxd4: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (460.37s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (328.79s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-703340 -n no-preload-703340
start_stop_delete_test.go:287: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-08-06 08:54:54.85487606 +0000 UTC m=+6618.079686489
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-703340 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-703340 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.118µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-703340 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
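The assertion at start_stop_delete_test.go:297 effectively reads the image of the dashboard-metrics-scraper deployment and checks that it contains registry.k8s.io/echoserver:1.4, the override passed via --images=MetricsScraper=... when the dashboard addon was enabled (see the Audit table below). A rough manual equivalent of that check, offered as a sketch rather than the harness's actual code path:

    kubectl --context no-preload-703340 -n kubernetes-dashboard \
      get deploy dashboard-metrics-scraper \
      -o jsonpath='{.spec.template.spec.containers[*].image}' | grep registry.k8s.io/echoserver:1.4

Here the deployment info came back empty because the describe call above had already run into the test's context deadline.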
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-703340 -n no-preload-703340
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-703340 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-703340 logs -n 25: (1.35643743s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p flannel-405163 sudo                                 | flannel-405163               | jenkins | v1.33.1 | 06 Aug 24 08:25 UTC | 06 Aug 24 08:25 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p flannel-405163 sudo                                 | flannel-405163               | jenkins | v1.33.1 | 06 Aug 24 08:25 UTC | 06 Aug 24 08:25 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p flannel-405163 sudo find                            | flannel-405163               | jenkins | v1.33.1 | 06 Aug 24 08:25 UTC | 06 Aug 24 08:25 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p flannel-405163 sudo crio                            | flannel-405163               | jenkins | v1.33.1 | 06 Aug 24 08:25 UTC | 06 Aug 24 08:25 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p flannel-405163                                      | flannel-405163               | jenkins | v1.33.1 | 06 Aug 24 08:25 UTC | 06 Aug 24 08:25 UTC |
	| delete  | -p                                                     | disable-driver-mounts-607762 | jenkins | v1.33.1 | 06 Aug 24 08:25 UTC | 06 Aug 24 08:25 UTC |
	|         | disable-driver-mounts-607762                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-990751 | jenkins | v1.33.1 | 06 Aug 24 08:25 UTC | 06 Aug 24 08:27 UTC |
	|         | default-k8s-diff-port-990751                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-703340             | no-preload-703340            | jenkins | v1.33.1 | 06 Aug 24 08:26 UTC | 06 Aug 24 08:26 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-703340                                   | no-preload-703340            | jenkins | v1.33.1 | 06 Aug 24 08:26 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-324808            | embed-certs-324808           | jenkins | v1.33.1 | 06 Aug 24 08:26 UTC | 06 Aug 24 08:26 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-324808                                  | embed-certs-324808           | jenkins | v1.33.1 | 06 Aug 24 08:26 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-990751  | default-k8s-diff-port-990751 | jenkins | v1.33.1 | 06 Aug 24 08:27 UTC | 06 Aug 24 08:27 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-990751 | jenkins | v1.33.1 | 06 Aug 24 08:27 UTC |                     |
	|         | default-k8s-diff-port-990751                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-703340                  | no-preload-703340            | jenkins | v1.33.1 | 06 Aug 24 08:28 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-703340                                   | no-preload-703340            | jenkins | v1.33.1 | 06 Aug 24 08:28 UTC | 06 Aug 24 08:40 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0                      |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-409903        | old-k8s-version-409903       | jenkins | v1.33.1 | 06 Aug 24 08:28 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-324808                 | embed-certs-324808           | jenkins | v1.33.1 | 06 Aug 24 08:29 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-324808                                  | embed-certs-324808           | jenkins | v1.33.1 | 06 Aug 24 08:29 UTC | 06 Aug 24 08:38 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-990751       | default-k8s-diff-port-990751 | jenkins | v1.33.1 | 06 Aug 24 08:30 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-990751 | jenkins | v1.33.1 | 06 Aug 24 08:30 UTC | 06 Aug 24 08:38 UTC |
	|         | default-k8s-diff-port-990751                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-409903                              | old-k8s-version-409903       | jenkins | v1.33.1 | 06 Aug 24 08:30 UTC | 06 Aug 24 08:30 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-409903             | old-k8s-version-409903       | jenkins | v1.33.1 | 06 Aug 24 08:30 UTC | 06 Aug 24 08:30 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-409903                              | old-k8s-version-409903       | jenkins | v1.33.1 | 06 Aug 24 08:30 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-409903                              | old-k8s-version-409903       | jenkins | v1.33.1 | 06 Aug 24 08:54 UTC | 06 Aug 24 08:54 UTC |
	| start   | -p newest-cni-384471 --memory=2200 --alsologtostderr   | newest-cni-384471            | jenkins | v1.33.1 | 06 Aug 24 08:54 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0                      |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
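The addons enable metrics-server rows above are why the metrics-server ImagePullBackOff seen in the kubelet logs is expected rather than surprising: --images rewrites the image name and --registries prepends a registry, and the two values end up combined into the unresolvable reference that kubelet then tries to pull. A purely illustrative shell sketch of that composition:

    registry="fake.domain"
    image="registry.k8s.io/echoserver:1.4"
    echo "${registry}/${image}"   # -> fake.domain/registry.k8s.io/echoserver:1.4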
	
	
	==> Last Start <==
	Log file created at: 2024/08/06 08:54:18
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0806 08:54:18.124895   86053 out.go:291] Setting OutFile to fd 1 ...
	I0806 08:54:18.125173   86053 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 08:54:18.125183   86053 out.go:304] Setting ErrFile to fd 2...
	I0806 08:54:18.125187   86053 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 08:54:18.125347   86053 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19370-13057/.minikube/bin
	I0806 08:54:18.125914   86053 out.go:298] Setting JSON to false
	I0806 08:54:18.126836   86053 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":9404,"bootTime":1722925054,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0806 08:54:18.126893   86053 start.go:139] virtualization: kvm guest
	I0806 08:54:18.130050   86053 out.go:177] * [newest-cni-384471] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0806 08:54:18.131530   86053 out.go:177]   - MINIKUBE_LOCATION=19370
	I0806 08:54:18.131556   86053 notify.go:220] Checking for updates...
	I0806 08:54:18.134089   86053 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0806 08:54:18.135610   86053 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19370-13057/kubeconfig
	I0806 08:54:18.137021   86053 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19370-13057/.minikube
	I0806 08:54:18.138402   86053 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0806 08:54:18.139945   86053 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0806 08:54:18.141775   86053 config.go:182] Loaded profile config "default-k8s-diff-port-990751": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0806 08:54:18.141887   86053 config.go:182] Loaded profile config "embed-certs-324808": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0806 08:54:18.141972   86053 config.go:182] Loaded profile config "no-preload-703340": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-rc.0
	I0806 08:54:18.142084   86053 driver.go:392] Setting default libvirt URI to qemu:///system
	I0806 08:54:18.180195   86053 out.go:177] * Using the kvm2 driver based on user configuration
	I0806 08:54:18.181493   86053 start.go:297] selected driver: kvm2
	I0806 08:54:18.181510   86053 start.go:901] validating driver "kvm2" against <nil>
	I0806 08:54:18.181521   86053 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0806 08:54:18.182270   86053 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 08:54:18.182343   86053 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19370-13057/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0806 08:54:18.198355   86053 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0806 08:54:18.198430   86053 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0806 08:54:18.198458   86053 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0806 08:54:18.198792   86053 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0806 08:54:18.198885   86053 cni.go:84] Creating CNI manager for ""
	I0806 08:54:18.198905   86053 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0806 08:54:18.198916   86053 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0806 08:54:18.199004   86053 start.go:340] cluster config:
	{Name:newest-cni-384471 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:newest-cni-384471 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Conta
inerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 08:54:18.199160   86053 iso.go:125] acquiring lock: {Name:mke58d71de28babe305c5eb0c31c274e9713bb3f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 08:54:18.201340   86053 out.go:177] * Starting "newest-cni-384471" primary control-plane node in "newest-cni-384471" cluster
	I0806 08:54:18.202811   86053 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime crio
	I0806 08:54:18.202853   86053 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19370-13057/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-rc.0-cri-o-overlay-amd64.tar.lz4
	I0806 08:54:18.202862   86053 cache.go:56] Caching tarball of preloaded images
	I0806 08:54:18.202968   86053 preload.go:172] Found /home/jenkins/minikube-integration/19370-13057/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-rc.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0806 08:54:18.202980   86053 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-rc.0 on crio
	I0806 08:54:18.203071   86053 profile.go:143] Saving config to /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/newest-cni-384471/config.json ...
	I0806 08:54:18.203087   86053 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/newest-cni-384471/config.json: {Name:mkeb5244a4cbe1f0f1a0b8c1687543d15b7742df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 08:54:18.203221   86053 start.go:360] acquireMachinesLock for newest-cni-384471: {Name:mkc602ac9c8cc3360dcc0fcb8104303837aaab61 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0806 08:54:18.203249   86053 start.go:364] duration metric: took 15.295µs to acquireMachinesLock for "newest-cni-384471"
	I0806 08:54:18.203268   86053 start.go:93] Provisioning new machine with config: &{Name:newest-cni-384471 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.31.0-rc.0 ClusterName:newest-cni-384471 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-
host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0806 08:54:18.203321   86053 start.go:125] createHost starting for "" (driver="kvm2")
	I0806 08:54:18.205062   86053 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0806 08:54:18.205192   86053 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:54:18.205228   86053 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:54:18.219992   86053 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44601
	I0806 08:54:18.220525   86053 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:54:18.221129   86053 main.go:141] libmachine: Using API Version  1
	I0806 08:54:18.221156   86053 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:54:18.221543   86053 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:54:18.221731   86053 main.go:141] libmachine: (newest-cni-384471) Calling .GetMachineName
	I0806 08:54:18.221936   86053 main.go:141] libmachine: (newest-cni-384471) Calling .DriverName
	I0806 08:54:18.222117   86053 start.go:159] libmachine.API.Create for "newest-cni-384471" (driver="kvm2")
	I0806 08:54:18.222144   86053 client.go:168] LocalClient.Create starting
	I0806 08:54:18.222179   86053 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem
	I0806 08:54:18.222230   86053 main.go:141] libmachine: Decoding PEM data...
	I0806 08:54:18.222249   86053 main.go:141] libmachine: Parsing certificate...
	I0806 08:54:18.222313   86053 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19370-13057/.minikube/certs/cert.pem
	I0806 08:54:18.222337   86053 main.go:141] libmachine: Decoding PEM data...
	I0806 08:54:18.222354   86053 main.go:141] libmachine: Parsing certificate...
	I0806 08:54:18.222381   86053 main.go:141] libmachine: Running pre-create checks...
	I0806 08:54:18.222392   86053 main.go:141] libmachine: (newest-cni-384471) Calling .PreCreateCheck
	I0806 08:54:18.222711   86053 main.go:141] libmachine: (newest-cni-384471) Calling .GetConfigRaw
	I0806 08:54:18.223181   86053 main.go:141] libmachine: Creating machine...
	I0806 08:54:18.223199   86053 main.go:141] libmachine: (newest-cni-384471) Calling .Create
	I0806 08:54:18.223340   86053 main.go:141] libmachine: (newest-cni-384471) Creating KVM machine...
	I0806 08:54:18.224701   86053 main.go:141] libmachine: (newest-cni-384471) DBG | found existing default KVM network
	I0806 08:54:18.226283   86053 main.go:141] libmachine: (newest-cni-384471) DBG | I0806 08:54:18.226093   86075 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:13:30:6f} reservation:<nil>}
	I0806 08:54:18.227324   86053 main.go:141] libmachine: (newest-cni-384471) DBG | I0806 08:54:18.227242   86075 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:1d:b0:47} reservation:<nil>}
	I0806 08:54:18.228531   86053 main.go:141] libmachine: (newest-cni-384471) DBG | I0806 08:54:18.228441   86075 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00030a870}
	I0806 08:54:18.228548   86053 main.go:141] libmachine: (newest-cni-384471) DBG | created network xml: 
	I0806 08:54:18.228559   86053 main.go:141] libmachine: (newest-cni-384471) DBG | <network>
	I0806 08:54:18.228568   86053 main.go:141] libmachine: (newest-cni-384471) DBG |   <name>mk-newest-cni-384471</name>
	I0806 08:54:18.228576   86053 main.go:141] libmachine: (newest-cni-384471) DBG |   <dns enable='no'/>
	I0806 08:54:18.228586   86053 main.go:141] libmachine: (newest-cni-384471) DBG |   
	I0806 08:54:18.228595   86053 main.go:141] libmachine: (newest-cni-384471) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I0806 08:54:18.228603   86053 main.go:141] libmachine: (newest-cni-384471) DBG |     <dhcp>
	I0806 08:54:18.228618   86053 main.go:141] libmachine: (newest-cni-384471) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I0806 08:54:18.228634   86053 main.go:141] libmachine: (newest-cni-384471) DBG |     </dhcp>
	I0806 08:54:18.228647   86053 main.go:141] libmachine: (newest-cni-384471) DBG |   </ip>
	I0806 08:54:18.228658   86053 main.go:141] libmachine: (newest-cni-384471) DBG |   
	I0806 08:54:18.228667   86053 main.go:141] libmachine: (newest-cni-384471) DBG | </network>
	I0806 08:54:18.228675   86053 main.go:141] libmachine: (newest-cni-384471) DBG | 
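The XML echoed above is the libvirt network definition minikube is about to create for this profile (DNS disabled, a private /24 with a small DHCP range). Once the creation a few lines below succeeds, the result can be inspected on the host with standard libvirt tooling (a hypothetical follow-up, not run by the test):

    virsh net-list --all
    virsh net-dumpxml mk-newest-cni-384471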
	I0806 08:54:18.234014   86053 main.go:141] libmachine: (newest-cni-384471) DBG | trying to create private KVM network mk-newest-cni-384471 192.168.61.0/24...
	I0806 08:54:18.306728   86053 main.go:141] libmachine: (newest-cni-384471) Setting up store path in /home/jenkins/minikube-integration/19370-13057/.minikube/machines/newest-cni-384471 ...
	I0806 08:54:18.306758   86053 main.go:141] libmachine: (newest-cni-384471) Building disk image from file:///home/jenkins/minikube-integration/19370-13057/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
	I0806 08:54:18.306769   86053 main.go:141] libmachine: (newest-cni-384471) DBG | private KVM network mk-newest-cni-384471 192.168.61.0/24 created
	I0806 08:54:18.306786   86053 main.go:141] libmachine: (newest-cni-384471) DBG | I0806 08:54:18.306674   86075 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19370-13057/.minikube
	I0806 08:54:18.306866   86053 main.go:141] libmachine: (newest-cni-384471) Downloading /home/jenkins/minikube-integration/19370-13057/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19370-13057/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0806 08:54:18.547529   86053 main.go:141] libmachine: (newest-cni-384471) DBG | I0806 08:54:18.547396   86075 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19370-13057/.minikube/machines/newest-cni-384471/id_rsa...
	I0806 08:54:18.642822   86053 main.go:141] libmachine: (newest-cni-384471) DBG | I0806 08:54:18.642696   86075 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19370-13057/.minikube/machines/newest-cni-384471/newest-cni-384471.rawdisk...
	I0806 08:54:18.642847   86053 main.go:141] libmachine: (newest-cni-384471) DBG | Writing magic tar header
	I0806 08:54:18.642870   86053 main.go:141] libmachine: (newest-cni-384471) DBG | Writing SSH key tar header
	I0806 08:54:18.642885   86053 main.go:141] libmachine: (newest-cni-384471) DBG | I0806 08:54:18.642831   86075 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19370-13057/.minikube/machines/newest-cni-384471 ...
	I0806 08:54:18.642993   86053 main.go:141] libmachine: (newest-cni-384471) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19370-13057/.minikube/machines/newest-cni-384471
	I0806 08:54:18.643016   86053 main.go:141] libmachine: (newest-cni-384471) Setting executable bit set on /home/jenkins/minikube-integration/19370-13057/.minikube/machines/newest-cni-384471 (perms=drwx------)
	I0806 08:54:18.643027   86053 main.go:141] libmachine: (newest-cni-384471) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19370-13057/.minikube/machines
	I0806 08:54:18.643043   86053 main.go:141] libmachine: (newest-cni-384471) Setting executable bit set on /home/jenkins/minikube-integration/19370-13057/.minikube/machines (perms=drwxr-xr-x)
	I0806 08:54:18.643078   86053 main.go:141] libmachine: (newest-cni-384471) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19370-13057/.minikube
	I0806 08:54:18.643093   86053 main.go:141] libmachine: (newest-cni-384471) Setting executable bit set on /home/jenkins/minikube-integration/19370-13057/.minikube (perms=drwxr-xr-x)
	I0806 08:54:18.643111   86053 main.go:141] libmachine: (newest-cni-384471) Setting executable bit set on /home/jenkins/minikube-integration/19370-13057 (perms=drwxrwxr-x)
	I0806 08:54:18.643123   86053 main.go:141] libmachine: (newest-cni-384471) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19370-13057
	I0806 08:54:18.643135   86053 main.go:141] libmachine: (newest-cni-384471) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0806 08:54:18.643147   86053 main.go:141] libmachine: (newest-cni-384471) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0806 08:54:18.643155   86053 main.go:141] libmachine: (newest-cni-384471) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0806 08:54:18.643170   86053 main.go:141] libmachine: (newest-cni-384471) Creating domain...
	I0806 08:54:18.643184   86053 main.go:141] libmachine: (newest-cni-384471) DBG | Checking permissions on dir: /home/jenkins
	I0806 08:54:18.643195   86053 main.go:141] libmachine: (newest-cni-384471) DBG | Checking permissions on dir: /home
	I0806 08:54:18.643235   86053 main.go:141] libmachine: (newest-cni-384471) DBG | Skipping /home - not owner
	I0806 08:54:18.644274   86053 main.go:141] libmachine: (newest-cni-384471) define libvirt domain using xml: 
	I0806 08:54:18.644296   86053 main.go:141] libmachine: (newest-cni-384471) <domain type='kvm'>
	I0806 08:54:18.644307   86053 main.go:141] libmachine: (newest-cni-384471)   <name>newest-cni-384471</name>
	I0806 08:54:18.644333   86053 main.go:141] libmachine: (newest-cni-384471)   <memory unit='MiB'>2200</memory>
	I0806 08:54:18.644342   86053 main.go:141] libmachine: (newest-cni-384471)   <vcpu>2</vcpu>
	I0806 08:54:18.644347   86053 main.go:141] libmachine: (newest-cni-384471)   <features>
	I0806 08:54:18.644353   86053 main.go:141] libmachine: (newest-cni-384471)     <acpi/>
	I0806 08:54:18.644379   86053 main.go:141] libmachine: (newest-cni-384471)     <apic/>
	I0806 08:54:18.644391   86053 main.go:141] libmachine: (newest-cni-384471)     <pae/>
	I0806 08:54:18.644398   86053 main.go:141] libmachine: (newest-cni-384471)     
	I0806 08:54:18.644425   86053 main.go:141] libmachine: (newest-cni-384471)   </features>
	I0806 08:54:18.644447   86053 main.go:141] libmachine: (newest-cni-384471)   <cpu mode='host-passthrough'>
	I0806 08:54:18.644473   86053 main.go:141] libmachine: (newest-cni-384471)   
	I0806 08:54:18.644492   86053 main.go:141] libmachine: (newest-cni-384471)   </cpu>
	I0806 08:54:18.644505   86053 main.go:141] libmachine: (newest-cni-384471)   <os>
	I0806 08:54:18.644517   86053 main.go:141] libmachine: (newest-cni-384471)     <type>hvm</type>
	I0806 08:54:18.644540   86053 main.go:141] libmachine: (newest-cni-384471)     <boot dev='cdrom'/>
	I0806 08:54:18.644558   86053 main.go:141] libmachine: (newest-cni-384471)     <boot dev='hd'/>
	I0806 08:54:18.644571   86053 main.go:141] libmachine: (newest-cni-384471)     <bootmenu enable='no'/>
	I0806 08:54:18.644582   86053 main.go:141] libmachine: (newest-cni-384471)   </os>
	I0806 08:54:18.644596   86053 main.go:141] libmachine: (newest-cni-384471)   <devices>
	I0806 08:54:18.644607   86053 main.go:141] libmachine: (newest-cni-384471)     <disk type='file' device='cdrom'>
	I0806 08:54:18.644623   86053 main.go:141] libmachine: (newest-cni-384471)       <source file='/home/jenkins/minikube-integration/19370-13057/.minikube/machines/newest-cni-384471/boot2docker.iso'/>
	I0806 08:54:18.644638   86053 main.go:141] libmachine: (newest-cni-384471)       <target dev='hdc' bus='scsi'/>
	I0806 08:54:18.644648   86053 main.go:141] libmachine: (newest-cni-384471)       <readonly/>
	I0806 08:54:18.644655   86053 main.go:141] libmachine: (newest-cni-384471)     </disk>
	I0806 08:54:18.644672   86053 main.go:141] libmachine: (newest-cni-384471)     <disk type='file' device='disk'>
	I0806 08:54:18.644686   86053 main.go:141] libmachine: (newest-cni-384471)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0806 08:54:18.644703   86053 main.go:141] libmachine: (newest-cni-384471)       <source file='/home/jenkins/minikube-integration/19370-13057/.minikube/machines/newest-cni-384471/newest-cni-384471.rawdisk'/>
	I0806 08:54:18.644717   86053 main.go:141] libmachine: (newest-cni-384471)       <target dev='hda' bus='virtio'/>
	I0806 08:54:18.644728   86053 main.go:141] libmachine: (newest-cni-384471)     </disk>
	I0806 08:54:18.644738   86053 main.go:141] libmachine: (newest-cni-384471)     <interface type='network'>
	I0806 08:54:18.644750   86053 main.go:141] libmachine: (newest-cni-384471)       <source network='mk-newest-cni-384471'/>
	I0806 08:54:18.644760   86053 main.go:141] libmachine: (newest-cni-384471)       <model type='virtio'/>
	I0806 08:54:18.644768   86053 main.go:141] libmachine: (newest-cni-384471)     </interface>
	I0806 08:54:18.644778   86053 main.go:141] libmachine: (newest-cni-384471)     <interface type='network'>
	I0806 08:54:18.644789   86053 main.go:141] libmachine: (newest-cni-384471)       <source network='default'/>
	I0806 08:54:18.644799   86053 main.go:141] libmachine: (newest-cni-384471)       <model type='virtio'/>
	I0806 08:54:18.644809   86053 main.go:141] libmachine: (newest-cni-384471)     </interface>
	I0806 08:54:18.644817   86053 main.go:141] libmachine: (newest-cni-384471)     <serial type='pty'>
	I0806 08:54:18.644827   86053 main.go:141] libmachine: (newest-cni-384471)       <target port='0'/>
	I0806 08:54:18.644845   86053 main.go:141] libmachine: (newest-cni-384471)     </serial>
	I0806 08:54:18.644868   86053 main.go:141] libmachine: (newest-cni-384471)     <console type='pty'>
	I0806 08:54:18.644890   86053 main.go:141] libmachine: (newest-cni-384471)       <target type='serial' port='0'/>
	I0806 08:54:18.644897   86053 main.go:141] libmachine: (newest-cni-384471)     </console>
	I0806 08:54:18.644906   86053 main.go:141] libmachine: (newest-cni-384471)     <rng model='virtio'>
	I0806 08:54:18.644917   86053 main.go:141] libmachine: (newest-cni-384471)       <backend model='random'>/dev/random</backend>
	I0806 08:54:18.644935   86053 main.go:141] libmachine: (newest-cni-384471)     </rng>
	I0806 08:54:18.644952   86053 main.go:141] libmachine: (newest-cni-384471)     
	I0806 08:54:18.644962   86053 main.go:141] libmachine: (newest-cni-384471)     
	I0806 08:54:18.644966   86053 main.go:141] libmachine: (newest-cni-384471)   </devices>
	I0806 08:54:18.644974   86053 main.go:141] libmachine: (newest-cni-384471) </domain>
	I0806 08:54:18.644979   86053 main.go:141] libmachine: (newest-cni-384471) 
	I0806 08:54:18.649186   86053 main.go:141] libmachine: (newest-cni-384471) DBG | domain newest-cni-384471 has defined MAC address 52:54:00:7a:9e:64 in network default
	I0806 08:54:18.649825   86053 main.go:141] libmachine: (newest-cni-384471) Ensuring networks are active...
	I0806 08:54:18.649844   86053 main.go:141] libmachine: (newest-cni-384471) DBG | domain newest-cni-384471 has defined MAC address 52:54:00:95:c7:9e in network mk-newest-cni-384471
	I0806 08:54:18.650650   86053 main.go:141] libmachine: (newest-cni-384471) Ensuring network default is active
	I0806 08:54:18.650995   86053 main.go:141] libmachine: (newest-cni-384471) Ensuring network mk-newest-cni-384471 is active
	I0806 08:54:18.651621   86053 main.go:141] libmachine: (newest-cni-384471) Getting domain xml...
	I0806 08:54:18.652268   86053 main.go:141] libmachine: (newest-cni-384471) Creating domain...
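
The block above is the complete libvirt domain XML that libmachine renders, defines, and then boots for this profile. As a rough sketch only (the kvm2 driver drives libvirt through its Go bindings, not the CLI), the same define-then-start flow can be reproduced with virsh; the XML filename below is a placeholder for the dump shown in the log.

// domain_sketch.go - illustrative only, not minikube code.
// Define a libvirt domain from an XML file and start it via the virsh CLI.
package main

import (
	"fmt"
	"log"
	"os/exec"
)

func run(name string, args ...string) error {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%s %v failed: %v\n%s", name, args, err, out)
	}
	return nil
}

func main() {
	// "newest-cni-384471.xml" is a placeholder for the XML dumped in the log above.
	if err := run("virsh", "define", "newest-cni-384471.xml"); err != nil {
		log.Fatal(err)
	}
	if err := run("virsh", "start", "newest-cni-384471"); err != nil {
		log.Fatal(err)
	}
}
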
	I0806 08:54:19.917369   86053 main.go:141] libmachine: (newest-cni-384471) Waiting to get IP...
	I0806 08:54:19.918173   86053 main.go:141] libmachine: (newest-cni-384471) DBG | domain newest-cni-384471 has defined MAC address 52:54:00:95:c7:9e in network mk-newest-cni-384471
	I0806 08:54:19.918557   86053 main.go:141] libmachine: (newest-cni-384471) DBG | unable to find current IP address of domain newest-cni-384471 in network mk-newest-cni-384471
	I0806 08:54:19.918600   86053 main.go:141] libmachine: (newest-cni-384471) DBG | I0806 08:54:19.918530   86075 retry.go:31] will retry after 284.130528ms: waiting for machine to come up
	I0806 08:54:20.204004   86053 main.go:141] libmachine: (newest-cni-384471) DBG | domain newest-cni-384471 has defined MAC address 52:54:00:95:c7:9e in network mk-newest-cni-384471
	I0806 08:54:20.204653   86053 main.go:141] libmachine: (newest-cni-384471) DBG | unable to find current IP address of domain newest-cni-384471 in network mk-newest-cni-384471
	I0806 08:54:20.204713   86053 main.go:141] libmachine: (newest-cni-384471) DBG | I0806 08:54:20.204602   86075 retry.go:31] will retry after 324.187389ms: waiting for machine to come up
	I0806 08:54:20.530123   86053 main.go:141] libmachine: (newest-cni-384471) DBG | domain newest-cni-384471 has defined MAC address 52:54:00:95:c7:9e in network mk-newest-cni-384471
	I0806 08:54:20.530661   86053 main.go:141] libmachine: (newest-cni-384471) DBG | unable to find current IP address of domain newest-cni-384471 in network mk-newest-cni-384471
	I0806 08:54:20.530689   86053 main.go:141] libmachine: (newest-cni-384471) DBG | I0806 08:54:20.530609   86075 retry.go:31] will retry after 306.130026ms: waiting for machine to come up
	I0806 08:54:20.838039   86053 main.go:141] libmachine: (newest-cni-384471) DBG | domain newest-cni-384471 has defined MAC address 52:54:00:95:c7:9e in network mk-newest-cni-384471
	I0806 08:54:20.838627   86053 main.go:141] libmachine: (newest-cni-384471) DBG | unable to find current IP address of domain newest-cni-384471 in network mk-newest-cni-384471
	I0806 08:54:20.838660   86053 main.go:141] libmachine: (newest-cni-384471) DBG | I0806 08:54:20.838548   86075 retry.go:31] will retry after 537.636343ms: waiting for machine to come up
	I0806 08:54:21.378225   86053 main.go:141] libmachine: (newest-cni-384471) DBG | domain newest-cni-384471 has defined MAC address 52:54:00:95:c7:9e in network mk-newest-cni-384471
	I0806 08:54:21.378755   86053 main.go:141] libmachine: (newest-cni-384471) DBG | unable to find current IP address of domain newest-cni-384471 in network mk-newest-cni-384471
	I0806 08:54:21.378775   86053 main.go:141] libmachine: (newest-cni-384471) DBG | I0806 08:54:21.378717   86075 retry.go:31] will retry after 586.605ms: waiting for machine to come up
	I0806 08:54:21.966456   86053 main.go:141] libmachine: (newest-cni-384471) DBG | domain newest-cni-384471 has defined MAC address 52:54:00:95:c7:9e in network mk-newest-cni-384471
	I0806 08:54:21.966935   86053 main.go:141] libmachine: (newest-cni-384471) DBG | unable to find current IP address of domain newest-cni-384471 in network mk-newest-cni-384471
	I0806 08:54:21.966959   86053 main.go:141] libmachine: (newest-cni-384471) DBG | I0806 08:54:21.966901   86075 retry.go:31] will retry after 831.318495ms: waiting for machine to come up
	I0806 08:54:22.799878   86053 main.go:141] libmachine: (newest-cni-384471) DBG | domain newest-cni-384471 has defined MAC address 52:54:00:95:c7:9e in network mk-newest-cni-384471
	I0806 08:54:22.800377   86053 main.go:141] libmachine: (newest-cni-384471) DBG | unable to find current IP address of domain newest-cni-384471 in network mk-newest-cni-384471
	I0806 08:54:22.800405   86053 main.go:141] libmachine: (newest-cni-384471) DBG | I0806 08:54:22.800298   86075 retry.go:31] will retry after 1.190612585s: waiting for machine to come up
	I0806 08:54:23.993105   86053 main.go:141] libmachine: (newest-cni-384471) DBG | domain newest-cni-384471 has defined MAC address 52:54:00:95:c7:9e in network mk-newest-cni-384471
	I0806 08:54:23.993586   86053 main.go:141] libmachine: (newest-cni-384471) DBG | unable to find current IP address of domain newest-cni-384471 in network mk-newest-cni-384471
	I0806 08:54:23.993619   86053 main.go:141] libmachine: (newest-cni-384471) DBG | I0806 08:54:23.993529   86075 retry.go:31] will retry after 1.405835086s: waiting for machine to come up
	I0806 08:54:25.400872   86053 main.go:141] libmachine: (newest-cni-384471) DBG | domain newest-cni-384471 has defined MAC address 52:54:00:95:c7:9e in network mk-newest-cni-384471
	I0806 08:54:25.401422   86053 main.go:141] libmachine: (newest-cni-384471) DBG | unable to find current IP address of domain newest-cni-384471 in network mk-newest-cni-384471
	I0806 08:54:25.401452   86053 main.go:141] libmachine: (newest-cni-384471) DBG | I0806 08:54:25.401380   86075 retry.go:31] will retry after 1.429764847s: waiting for machine to come up
	I0806 08:54:26.833012   86053 main.go:141] libmachine: (newest-cni-384471) DBG | domain newest-cni-384471 has defined MAC address 52:54:00:95:c7:9e in network mk-newest-cni-384471
	I0806 08:54:26.833576   86053 main.go:141] libmachine: (newest-cni-384471) DBG | unable to find current IP address of domain newest-cni-384471 in network mk-newest-cni-384471
	I0806 08:54:26.833601   86053 main.go:141] libmachine: (newest-cni-384471) DBG | I0806 08:54:26.833524   86075 retry.go:31] will retry after 1.433239253s: waiting for machine to come up
	I0806 08:54:28.268790   86053 main.go:141] libmachine: (newest-cni-384471) DBG | domain newest-cni-384471 has defined MAC address 52:54:00:95:c7:9e in network mk-newest-cni-384471
	I0806 08:54:28.269320   86053 main.go:141] libmachine: (newest-cni-384471) DBG | unable to find current IP address of domain newest-cni-384471 in network mk-newest-cni-384471
	I0806 08:54:28.269349   86053 main.go:141] libmachine: (newest-cni-384471) DBG | I0806 08:54:28.269258   86075 retry.go:31] will retry after 2.772235479s: waiting for machine to come up
	I0806 08:54:31.043091   86053 main.go:141] libmachine: (newest-cni-384471) DBG | domain newest-cni-384471 has defined MAC address 52:54:00:95:c7:9e in network mk-newest-cni-384471
	I0806 08:54:31.043747   86053 main.go:141] libmachine: (newest-cni-384471) DBG | unable to find current IP address of domain newest-cni-384471 in network mk-newest-cni-384471
	I0806 08:54:31.043778   86053 main.go:141] libmachine: (newest-cni-384471) DBG | I0806 08:54:31.043695   86075 retry.go:31] will retry after 2.56089924s: waiting for machine to come up
	I0806 08:54:33.605867   86053 main.go:141] libmachine: (newest-cni-384471) DBG | domain newest-cni-384471 has defined MAC address 52:54:00:95:c7:9e in network mk-newest-cni-384471
	I0806 08:54:33.606359   86053 main.go:141] libmachine: (newest-cni-384471) DBG | unable to find current IP address of domain newest-cni-384471 in network mk-newest-cni-384471
	I0806 08:54:33.606371   86053 main.go:141] libmachine: (newest-cni-384471) DBG | I0806 08:54:33.606338   86075 retry.go:31] will retry after 3.307357427s: waiting for machine to come up
	I0806 08:54:36.917381   86053 main.go:141] libmachine: (newest-cni-384471) DBG | domain newest-cni-384471 has defined MAC address 52:54:00:95:c7:9e in network mk-newest-cni-384471
	I0806 08:54:36.917935   86053 main.go:141] libmachine: (newest-cni-384471) DBG | unable to find current IP address of domain newest-cni-384471 in network mk-newest-cni-384471
	I0806 08:54:36.917957   86053 main.go:141] libmachine: (newest-cni-384471) DBG | I0806 08:54:36.917906   86075 retry.go:31] will retry after 5.506601832s: waiting for machine to come up
	I0806 08:54:42.425978   86053 main.go:141] libmachine: (newest-cni-384471) DBG | domain newest-cni-384471 has defined MAC address 52:54:00:95:c7:9e in network mk-newest-cni-384471
	I0806 08:54:42.426439   86053 main.go:141] libmachine: (newest-cni-384471) Found IP for machine: 192.168.61.76
	I0806 08:54:42.426462   86053 main.go:141] libmachine: (newest-cni-384471) DBG | domain newest-cni-384471 has current primary IP address 192.168.61.76 and MAC address 52:54:00:95:c7:9e in network mk-newest-cni-384471
	I0806 08:54:42.426468   86053 main.go:141] libmachine: (newest-cni-384471) Reserving static IP address...
	I0806 08:54:42.426881   86053 main.go:141] libmachine: (newest-cni-384471) DBG | unable to find host DHCP lease matching {name: "newest-cni-384471", mac: "52:54:00:95:c7:9e", ip: "192.168.61.76"} in network mk-newest-cni-384471
	I0806 08:54:42.508670   86053 main.go:141] libmachine: (newest-cni-384471) Reserved static IP address: 192.168.61.76
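
The repeated "will retry after ..." lines above come from a backoff poller: the driver re-reads the network's DHCP leases for the domain's MAC address until an address appears, sleeping a little longer (with jitter) on each round. A minimal sketch of that pattern, with lookupIP() as a stand-in for the lease query:

// retry_sketch.go - illustrative backoff poller; lookupIP is a stand-in
// for querying the libvirt DHCP leases for the domain's MAC address.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func lookupIP() (string, error) {
	// Stand-in: returns an error until a lease would appear.
	return "", errors.New("unable to find current IP address")
}

func waitForIP(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	wait := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil {
			return ip, nil
		}
		// Grow the delay and add jitter, mirroring the increasing
		// "will retry after ..." intervals in the log.
		sleep := wait + time.Duration(rand.Int63n(int64(wait)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		wait *= 2
	}
	return "", fmt.Errorf("timed out after %v waiting for an IP", timeout)
}

func main() {
	if ip, err := waitForIP(3 * time.Second); err != nil {
		fmt.Println(err)
	} else {
		fmt.Println("found IP:", ip)
	}
}
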
	I0806 08:54:42.508702   86053 main.go:141] libmachine: (newest-cni-384471) DBG | Getting to WaitForSSH function...
	I0806 08:54:42.508713   86053 main.go:141] libmachine: (newest-cni-384471) Waiting for SSH to be available...
	I0806 08:54:42.511477   86053 main.go:141] libmachine: (newest-cni-384471) DBG | domain newest-cni-384471 has defined MAC address 52:54:00:95:c7:9e in network mk-newest-cni-384471
	I0806 08:54:42.512093   86053 main.go:141] libmachine: (newest-cni-384471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:c7:9e", ip: ""} in network mk-newest-cni-384471: {Iface:virbr1 ExpiryTime:2024-08-06 09:54:33 +0000 UTC Type:0 Mac:52:54:00:95:c7:9e Iaid: IPaddr:192.168.61.76 Prefix:24 Hostname:minikube Clientid:01:52:54:00:95:c7:9e}
	I0806 08:54:42.512208   86053 main.go:141] libmachine: (newest-cni-384471) DBG | domain newest-cni-384471 has defined IP address 192.168.61.76 and MAC address 52:54:00:95:c7:9e in network mk-newest-cni-384471
	I0806 08:54:42.512341   86053 main.go:141] libmachine: (newest-cni-384471) DBG | Using SSH client type: external
	I0806 08:54:42.512406   86053 main.go:141] libmachine: (newest-cni-384471) DBG | Using SSH private key: /home/jenkins/minikube-integration/19370-13057/.minikube/machines/newest-cni-384471/id_rsa (-rw-------)
	I0806 08:54:42.512450   86053 main.go:141] libmachine: (newest-cni-384471) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.76 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19370-13057/.minikube/machines/newest-cni-384471/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0806 08:54:42.512464   86053 main.go:141] libmachine: (newest-cni-384471) DBG | About to run SSH command:
	I0806 08:54:42.512476   86053 main.go:141] libmachine: (newest-cni-384471) DBG | exit 0
	I0806 08:54:42.636628   86053 main.go:141] libmachine: (newest-cni-384471) DBG | SSH cmd err, output: <nil>: 
	I0806 08:54:42.636932   86053 main.go:141] libmachine: (newest-cni-384471) KVM machine creation complete!
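
The WaitForSSH step above simply runs `exit 0` through the external ssh client with the options shown until the command succeeds. A hedged sketch of that readiness probe using the system ssh binary; the host address and key path are placeholders for the values in the log:

// sshprobe_sketch.go - illustrative only. Probe SSH readiness the way the
// log does: run `exit 0` through the system ssh binary until it succeeds.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func sshReady(host, keyPath string) bool {
	args := []string{
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"docker@" + host,
		"exit 0",
	}
	return exec.Command("ssh", args...).Run() == nil
}

func main() {
	// Placeholder host and key path.
	for !sshReady("192.168.61.76", "/path/to/id_rsa") {
		fmt.Println("waiting for SSH to be available...")
		time.Sleep(2 * time.Second)
	}
	fmt.Println("SSH is up")
}
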
	I0806 08:54:42.637228   86053 main.go:141] libmachine: (newest-cni-384471) Calling .GetConfigRaw
	I0806 08:54:42.637780   86053 main.go:141] libmachine: (newest-cni-384471) Calling .DriverName
	I0806 08:54:42.638017   86053 main.go:141] libmachine: (newest-cni-384471) Calling .DriverName
	I0806 08:54:42.638191   86053 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0806 08:54:42.638210   86053 main.go:141] libmachine: (newest-cni-384471) Calling .GetState
	I0806 08:54:42.639603   86053 main.go:141] libmachine: Detecting operating system of created instance...
	I0806 08:54:42.639620   86053 main.go:141] libmachine: Waiting for SSH to be available...
	I0806 08:54:42.639628   86053 main.go:141] libmachine: Getting to WaitForSSH function...
	I0806 08:54:42.639652   86053 main.go:141] libmachine: (newest-cni-384471) Calling .GetSSHHostname
	I0806 08:54:42.642289   86053 main.go:141] libmachine: (newest-cni-384471) DBG | domain newest-cni-384471 has defined MAC address 52:54:00:95:c7:9e in network mk-newest-cni-384471
	I0806 08:54:42.642699   86053 main.go:141] libmachine: (newest-cni-384471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:c7:9e", ip: ""} in network mk-newest-cni-384471: {Iface:virbr1 ExpiryTime:2024-08-06 09:54:33 +0000 UTC Type:0 Mac:52:54:00:95:c7:9e Iaid: IPaddr:192.168.61.76 Prefix:24 Hostname:newest-cni-384471 Clientid:01:52:54:00:95:c7:9e}
	I0806 08:54:42.642744   86053 main.go:141] libmachine: (newest-cni-384471) DBG | domain newest-cni-384471 has defined IP address 192.168.61.76 and MAC address 52:54:00:95:c7:9e in network mk-newest-cni-384471
	I0806 08:54:42.642907   86053 main.go:141] libmachine: (newest-cni-384471) Calling .GetSSHPort
	I0806 08:54:42.643094   86053 main.go:141] libmachine: (newest-cni-384471) Calling .GetSSHKeyPath
	I0806 08:54:42.643279   86053 main.go:141] libmachine: (newest-cni-384471) Calling .GetSSHKeyPath
	I0806 08:54:42.643430   86053 main.go:141] libmachine: (newest-cni-384471) Calling .GetSSHUsername
	I0806 08:54:42.643598   86053 main.go:141] libmachine: Using SSH client type: native
	I0806 08:54:42.643794   86053 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.76 22 <nil> <nil>}
	I0806 08:54:42.643806   86053 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0806 08:54:42.747914   86053 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0806 08:54:42.747941   86053 main.go:141] libmachine: Detecting the provisioner...
	I0806 08:54:42.747949   86053 main.go:141] libmachine: (newest-cni-384471) Calling .GetSSHHostname
	I0806 08:54:42.750868   86053 main.go:141] libmachine: (newest-cni-384471) DBG | domain newest-cni-384471 has defined MAC address 52:54:00:95:c7:9e in network mk-newest-cni-384471
	I0806 08:54:42.751259   86053 main.go:141] libmachine: (newest-cni-384471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:c7:9e", ip: ""} in network mk-newest-cni-384471: {Iface:virbr1 ExpiryTime:2024-08-06 09:54:33 +0000 UTC Type:0 Mac:52:54:00:95:c7:9e Iaid: IPaddr:192.168.61.76 Prefix:24 Hostname:newest-cni-384471 Clientid:01:52:54:00:95:c7:9e}
	I0806 08:54:42.751289   86053 main.go:141] libmachine: (newest-cni-384471) DBG | domain newest-cni-384471 has defined IP address 192.168.61.76 and MAC address 52:54:00:95:c7:9e in network mk-newest-cni-384471
	I0806 08:54:42.751489   86053 main.go:141] libmachine: (newest-cni-384471) Calling .GetSSHPort
	I0806 08:54:42.751738   86053 main.go:141] libmachine: (newest-cni-384471) Calling .GetSSHKeyPath
	I0806 08:54:42.751900   86053 main.go:141] libmachine: (newest-cni-384471) Calling .GetSSHKeyPath
	I0806 08:54:42.752097   86053 main.go:141] libmachine: (newest-cni-384471) Calling .GetSSHUsername
	I0806 08:54:42.752280   86053 main.go:141] libmachine: Using SSH client type: native
	I0806 08:54:42.752509   86053 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.76 22 <nil> <nil>}
	I0806 08:54:42.752524   86053 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0806 08:54:42.853291   86053 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0806 08:54:42.853354   86053 main.go:141] libmachine: found compatible host: buildroot
	I0806 08:54:42.853362   86053 main.go:141] libmachine: Provisioning with buildroot...
	I0806 08:54:42.853371   86053 main.go:141] libmachine: (newest-cni-384471) Calling .GetMachineName
	I0806 08:54:42.853635   86053 buildroot.go:166] provisioning hostname "newest-cni-384471"
	I0806 08:54:42.853663   86053 main.go:141] libmachine: (newest-cni-384471) Calling .GetMachineName
	I0806 08:54:42.853847   86053 main.go:141] libmachine: (newest-cni-384471) Calling .GetSSHHostname
	I0806 08:54:42.856728   86053 main.go:141] libmachine: (newest-cni-384471) DBG | domain newest-cni-384471 has defined MAC address 52:54:00:95:c7:9e in network mk-newest-cni-384471
	I0806 08:54:42.857134   86053 main.go:141] libmachine: (newest-cni-384471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:c7:9e", ip: ""} in network mk-newest-cni-384471: {Iface:virbr1 ExpiryTime:2024-08-06 09:54:33 +0000 UTC Type:0 Mac:52:54:00:95:c7:9e Iaid: IPaddr:192.168.61.76 Prefix:24 Hostname:newest-cni-384471 Clientid:01:52:54:00:95:c7:9e}
	I0806 08:54:42.857156   86053 main.go:141] libmachine: (newest-cni-384471) DBG | domain newest-cni-384471 has defined IP address 192.168.61.76 and MAC address 52:54:00:95:c7:9e in network mk-newest-cni-384471
	I0806 08:54:42.857412   86053 main.go:141] libmachine: (newest-cni-384471) Calling .GetSSHPort
	I0806 08:54:42.857649   86053 main.go:141] libmachine: (newest-cni-384471) Calling .GetSSHKeyPath
	I0806 08:54:42.857828   86053 main.go:141] libmachine: (newest-cni-384471) Calling .GetSSHKeyPath
	I0806 08:54:42.857963   86053 main.go:141] libmachine: (newest-cni-384471) Calling .GetSSHUsername
	I0806 08:54:42.858138   86053 main.go:141] libmachine: Using SSH client type: native
	I0806 08:54:42.858323   86053 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.76 22 <nil> <nil>}
	I0806 08:54:42.858338   86053 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-384471 && echo "newest-cni-384471" | sudo tee /etc/hostname
	I0806 08:54:42.972560   86053 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-384471
	
	I0806 08:54:42.972586   86053 main.go:141] libmachine: (newest-cni-384471) Calling .GetSSHHostname
	I0806 08:54:42.975481   86053 main.go:141] libmachine: (newest-cni-384471) DBG | domain newest-cni-384471 has defined MAC address 52:54:00:95:c7:9e in network mk-newest-cni-384471
	I0806 08:54:42.975851   86053 main.go:141] libmachine: (newest-cni-384471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:c7:9e", ip: ""} in network mk-newest-cni-384471: {Iface:virbr1 ExpiryTime:2024-08-06 09:54:33 +0000 UTC Type:0 Mac:52:54:00:95:c7:9e Iaid: IPaddr:192.168.61.76 Prefix:24 Hostname:newest-cni-384471 Clientid:01:52:54:00:95:c7:9e}
	I0806 08:54:42.975881   86053 main.go:141] libmachine: (newest-cni-384471) DBG | domain newest-cni-384471 has defined IP address 192.168.61.76 and MAC address 52:54:00:95:c7:9e in network mk-newest-cni-384471
	I0806 08:54:42.976057   86053 main.go:141] libmachine: (newest-cni-384471) Calling .GetSSHPort
	I0806 08:54:42.976216   86053 main.go:141] libmachine: (newest-cni-384471) Calling .GetSSHKeyPath
	I0806 08:54:42.976397   86053 main.go:141] libmachine: (newest-cni-384471) Calling .GetSSHKeyPath
	I0806 08:54:42.976554   86053 main.go:141] libmachine: (newest-cni-384471) Calling .GetSSHUsername
	I0806 08:54:42.976714   86053 main.go:141] libmachine: Using SSH client type: native
	I0806 08:54:42.976904   86053 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.76 22 <nil> <nil>}
	I0806 08:54:42.976927   86053 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-384471' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-384471/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-384471' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0806 08:54:43.085654   86053 main.go:141] libmachine: SSH cmd err, output: <nil>: 
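
Hostname provisioning is two SSH commands: set the hostname, then make sure /etc/hosts maps 127.0.1.1 to it, using the shell shown above. A small sketch that just assembles that /etc/hosts patch for a given machine name:

// hosts_patch_sketch.go - illustrative; builds the /etc/hosts patch
// command the provisioner runs over SSH, as shown in the log above.
package main

import "fmt"

func hostsPatch(name string) string {
	return fmt.Sprintf(`
		if ! grep -xq '.*\s%[1]s' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
			else
				echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
			fi
		fi`, name)
}

func main() {
	fmt.Println(hostsPatch("newest-cni-384471"))
}
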
	I0806 08:54:43.085700   86053 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19370-13057/.minikube CaCertPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19370-13057/.minikube}
	I0806 08:54:43.085741   86053 buildroot.go:174] setting up certificates
	I0806 08:54:43.085758   86053 provision.go:84] configureAuth start
	I0806 08:54:43.085776   86053 main.go:141] libmachine: (newest-cni-384471) Calling .GetMachineName
	I0806 08:54:43.086062   86053 main.go:141] libmachine: (newest-cni-384471) Calling .GetIP
	I0806 08:54:43.088863   86053 main.go:141] libmachine: (newest-cni-384471) DBG | domain newest-cni-384471 has defined MAC address 52:54:00:95:c7:9e in network mk-newest-cni-384471
	I0806 08:54:43.089223   86053 main.go:141] libmachine: (newest-cni-384471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:c7:9e", ip: ""} in network mk-newest-cni-384471: {Iface:virbr1 ExpiryTime:2024-08-06 09:54:33 +0000 UTC Type:0 Mac:52:54:00:95:c7:9e Iaid: IPaddr:192.168.61.76 Prefix:24 Hostname:newest-cni-384471 Clientid:01:52:54:00:95:c7:9e}
	I0806 08:54:43.089252   86053 main.go:141] libmachine: (newest-cni-384471) DBG | domain newest-cni-384471 has defined IP address 192.168.61.76 and MAC address 52:54:00:95:c7:9e in network mk-newest-cni-384471
	I0806 08:54:43.089338   86053 main.go:141] libmachine: (newest-cni-384471) Calling .GetSSHHostname
	I0806 08:54:43.091365   86053 main.go:141] libmachine: (newest-cni-384471) DBG | domain newest-cni-384471 has defined MAC address 52:54:00:95:c7:9e in network mk-newest-cni-384471
	I0806 08:54:43.091574   86053 main.go:141] libmachine: (newest-cni-384471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:c7:9e", ip: ""} in network mk-newest-cni-384471: {Iface:virbr1 ExpiryTime:2024-08-06 09:54:33 +0000 UTC Type:0 Mac:52:54:00:95:c7:9e Iaid: IPaddr:192.168.61.76 Prefix:24 Hostname:newest-cni-384471 Clientid:01:52:54:00:95:c7:9e}
	I0806 08:54:43.091593   86053 main.go:141] libmachine: (newest-cni-384471) DBG | domain newest-cni-384471 has defined IP address 192.168.61.76 and MAC address 52:54:00:95:c7:9e in network mk-newest-cni-384471
	I0806 08:54:43.091699   86053 provision.go:143] copyHostCerts
	I0806 08:54:43.091765   86053 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-13057/.minikube/ca.pem, removing ...
	I0806 08:54:43.091778   86053 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-13057/.minikube/ca.pem
	I0806 08:54:43.091862   86053 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19370-13057/.minikube/ca.pem (1078 bytes)
	I0806 08:54:43.091988   86053 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-13057/.minikube/cert.pem, removing ...
	I0806 08:54:43.092000   86053 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-13057/.minikube/cert.pem
	I0806 08:54:43.092039   86053 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19370-13057/.minikube/cert.pem (1123 bytes)
	I0806 08:54:43.092133   86053 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-13057/.minikube/key.pem, removing ...
	I0806 08:54:43.092141   86053 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-13057/.minikube/key.pem
	I0806 08:54:43.092174   86053 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19370-13057/.minikube/key.pem (1675 bytes)
	I0806 08:54:43.092254   86053 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19370-13057/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca-key.pem org=jenkins.newest-cni-384471 san=[127.0.0.1 192.168.61.76 localhost minikube newest-cni-384471]
	I0806 08:54:43.218026   86053 provision.go:177] copyRemoteCerts
	I0806 08:54:43.218093   86053 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0806 08:54:43.218128   86053 main.go:141] libmachine: (newest-cni-384471) Calling .GetSSHHostname
	I0806 08:54:43.220767   86053 main.go:141] libmachine: (newest-cni-384471) DBG | domain newest-cni-384471 has defined MAC address 52:54:00:95:c7:9e in network mk-newest-cni-384471
	I0806 08:54:43.221177   86053 main.go:141] libmachine: (newest-cni-384471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:c7:9e", ip: ""} in network mk-newest-cni-384471: {Iface:virbr1 ExpiryTime:2024-08-06 09:54:33 +0000 UTC Type:0 Mac:52:54:00:95:c7:9e Iaid: IPaddr:192.168.61.76 Prefix:24 Hostname:newest-cni-384471 Clientid:01:52:54:00:95:c7:9e}
	I0806 08:54:43.221206   86053 main.go:141] libmachine: (newest-cni-384471) DBG | domain newest-cni-384471 has defined IP address 192.168.61.76 and MAC address 52:54:00:95:c7:9e in network mk-newest-cni-384471
	I0806 08:54:43.221373   86053 main.go:141] libmachine: (newest-cni-384471) Calling .GetSSHPort
	I0806 08:54:43.221559   86053 main.go:141] libmachine: (newest-cni-384471) Calling .GetSSHKeyPath
	I0806 08:54:43.221721   86053 main.go:141] libmachine: (newest-cni-384471) Calling .GetSSHUsername
	I0806 08:54:43.221909   86053 sshutil.go:53] new ssh client: &{IP:192.168.61.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/newest-cni-384471/id_rsa Username:docker}
	I0806 08:54:43.308675   86053 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0806 08:54:43.335720   86053 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0806 08:54:43.361135   86053 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0806 08:54:43.386286   86053 provision.go:87] duration metric: took 300.513363ms to configureAuth
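
configureAuth generates a server certificate whose SAN list is the one printed by provision.go above (127.0.0.1, 192.168.61.76, localhost, minikube, newest-cni-384471) and copies the PEM files into /etc/docker over SSH. A simplified, self-signed sketch of how that SAN list is encoded; the real certificate is signed by the minikube CA rather than by itself:

// servercert_sketch.go - simplified, self-signed stand-in for the server
// cert the provisioner generates; it only shows how the SAN list from the
// log is encoded into an x509 template.
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-384471"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs taken from the provision.go line in the log.
		DNSNames:    []string{"localhost", "minikube", "newest-cni-384471"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.76")},
	}
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
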
	I0806 08:54:43.386315   86053 buildroot.go:189] setting minikube options for container-runtime
	I0806 08:54:43.386528   86053 config.go:182] Loaded profile config "newest-cni-384471": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-rc.0
	I0806 08:54:43.386627   86053 main.go:141] libmachine: (newest-cni-384471) Calling .GetSSHHostname
	I0806 08:54:43.389113   86053 main.go:141] libmachine: (newest-cni-384471) DBG | domain newest-cni-384471 has defined MAC address 52:54:00:95:c7:9e in network mk-newest-cni-384471
	I0806 08:54:43.389433   86053 main.go:141] libmachine: (newest-cni-384471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:c7:9e", ip: ""} in network mk-newest-cni-384471: {Iface:virbr1 ExpiryTime:2024-08-06 09:54:33 +0000 UTC Type:0 Mac:52:54:00:95:c7:9e Iaid: IPaddr:192.168.61.76 Prefix:24 Hostname:newest-cni-384471 Clientid:01:52:54:00:95:c7:9e}
	I0806 08:54:43.389466   86053 main.go:141] libmachine: (newest-cni-384471) DBG | domain newest-cni-384471 has defined IP address 192.168.61.76 and MAC address 52:54:00:95:c7:9e in network mk-newest-cni-384471
	I0806 08:54:43.389663   86053 main.go:141] libmachine: (newest-cni-384471) Calling .GetSSHPort
	I0806 08:54:43.389844   86053 main.go:141] libmachine: (newest-cni-384471) Calling .GetSSHKeyPath
	I0806 08:54:43.390022   86053 main.go:141] libmachine: (newest-cni-384471) Calling .GetSSHKeyPath
	I0806 08:54:43.390174   86053 main.go:141] libmachine: (newest-cni-384471) Calling .GetSSHUsername
	I0806 08:54:43.390323   86053 main.go:141] libmachine: Using SSH client type: native
	I0806 08:54:43.390520   86053 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.76 22 <nil> <nil>}
	I0806 08:54:43.390541   86053 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0806 08:54:43.660955   86053 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
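
The %!s(MISSING) in the command above appears to be a logging artifact rather than a broken command: the remote line contains a literal printf %s, and the assembled command is evidently echoed through a printf-style logger with no matching argument. The step itself writes /etc/sysconfig/crio.minikube with the insecure-registry flag and restarts CRI-O; a sketch that assembles the same command:

// crio_sysconfig_sketch.go - illustrative; builds the remote command that
// writes /etc/sysconfig/crio.minikube and restarts CRI-O, as in the log.
package main

import "fmt"

func crioSysconfigCmd(opts string) string {
	content := fmt.Sprintf("\nCRIO_MINIKUBE_OPTIONS='%s'\n", opts)
	// %%s leaves a literal %s in the remote printf; that stray %s is what
	// the logger later renders as %!s(MISSING).
	return fmt.Sprintf(`sudo mkdir -p /etc/sysconfig && printf %%s "%s" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio`, content)
}

func main() {
	fmt.Println(crioSysconfigCmd("--insecure-registry 10.96.0.0/12 "))
}
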
	
	I0806 08:54:43.660985   86053 main.go:141] libmachine: Checking connection to Docker...
	I0806 08:54:43.660997   86053 main.go:141] libmachine: (newest-cni-384471) Calling .GetURL
	I0806 08:54:43.662238   86053 main.go:141] libmachine: (newest-cni-384471) DBG | Using libvirt version 6000000
	I0806 08:54:43.664575   86053 main.go:141] libmachine: (newest-cni-384471) DBG | domain newest-cni-384471 has defined MAC address 52:54:00:95:c7:9e in network mk-newest-cni-384471
	I0806 08:54:43.664880   86053 main.go:141] libmachine: (newest-cni-384471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:c7:9e", ip: ""} in network mk-newest-cni-384471: {Iface:virbr1 ExpiryTime:2024-08-06 09:54:33 +0000 UTC Type:0 Mac:52:54:00:95:c7:9e Iaid: IPaddr:192.168.61.76 Prefix:24 Hostname:newest-cni-384471 Clientid:01:52:54:00:95:c7:9e}
	I0806 08:54:43.664916   86053 main.go:141] libmachine: (newest-cni-384471) DBG | domain newest-cni-384471 has defined IP address 192.168.61.76 and MAC address 52:54:00:95:c7:9e in network mk-newest-cni-384471
	I0806 08:54:43.665166   86053 main.go:141] libmachine: Docker is up and running!
	I0806 08:54:43.665186   86053 main.go:141] libmachine: Reticulating splines...
	I0806 08:54:43.665193   86053 client.go:171] duration metric: took 25.443042946s to LocalClient.Create
	I0806 08:54:43.665212   86053 start.go:167] duration metric: took 25.443097511s to libmachine.API.Create "newest-cni-384471"
	I0806 08:54:43.665221   86053 start.go:293] postStartSetup for "newest-cni-384471" (driver="kvm2")
	I0806 08:54:43.665232   86053 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0806 08:54:43.665248   86053 main.go:141] libmachine: (newest-cni-384471) Calling .DriverName
	I0806 08:54:43.665493   86053 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0806 08:54:43.665518   86053 main.go:141] libmachine: (newest-cni-384471) Calling .GetSSHHostname
	I0806 08:54:43.667551   86053 main.go:141] libmachine: (newest-cni-384471) DBG | domain newest-cni-384471 has defined MAC address 52:54:00:95:c7:9e in network mk-newest-cni-384471
	I0806 08:54:43.667837   86053 main.go:141] libmachine: (newest-cni-384471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:c7:9e", ip: ""} in network mk-newest-cni-384471: {Iface:virbr1 ExpiryTime:2024-08-06 09:54:33 +0000 UTC Type:0 Mac:52:54:00:95:c7:9e Iaid: IPaddr:192.168.61.76 Prefix:24 Hostname:newest-cni-384471 Clientid:01:52:54:00:95:c7:9e}
	I0806 08:54:43.667870   86053 main.go:141] libmachine: (newest-cni-384471) DBG | domain newest-cni-384471 has defined IP address 192.168.61.76 and MAC address 52:54:00:95:c7:9e in network mk-newest-cni-384471
	I0806 08:54:43.667951   86053 main.go:141] libmachine: (newest-cni-384471) Calling .GetSSHPort
	I0806 08:54:43.668117   86053 main.go:141] libmachine: (newest-cni-384471) Calling .GetSSHKeyPath
	I0806 08:54:43.668270   86053 main.go:141] libmachine: (newest-cni-384471) Calling .GetSSHUsername
	I0806 08:54:43.668428   86053 sshutil.go:53] new ssh client: &{IP:192.168.61.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/newest-cni-384471/id_rsa Username:docker}
	I0806 08:54:43.747467   86053 ssh_runner.go:195] Run: cat /etc/os-release
	I0806 08:54:43.751619   86053 info.go:137] Remote host: Buildroot 2023.02.9
	I0806 08:54:43.751651   86053 filesync.go:126] Scanning /home/jenkins/minikube-integration/19370-13057/.minikube/addons for local assets ...
	I0806 08:54:43.751735   86053 filesync.go:126] Scanning /home/jenkins/minikube-integration/19370-13057/.minikube/files for local assets ...
	I0806 08:54:43.751838   86053 filesync.go:149] local asset: /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem -> 202462.pem in /etc/ssl/certs
	I0806 08:54:43.751962   86053 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0806 08:54:43.762386   86053 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem --> /etc/ssl/certs/202462.pem (1708 bytes)
	I0806 08:54:43.787287   86053 start.go:296] duration metric: took 122.053777ms for postStartSetup
	I0806 08:54:43.787353   86053 main.go:141] libmachine: (newest-cni-384471) Calling .GetConfigRaw
	I0806 08:54:43.788025   86053 main.go:141] libmachine: (newest-cni-384471) Calling .GetIP
	I0806 08:54:43.790622   86053 main.go:141] libmachine: (newest-cni-384471) DBG | domain newest-cni-384471 has defined MAC address 52:54:00:95:c7:9e in network mk-newest-cni-384471
	I0806 08:54:43.790943   86053 main.go:141] libmachine: (newest-cni-384471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:c7:9e", ip: ""} in network mk-newest-cni-384471: {Iface:virbr1 ExpiryTime:2024-08-06 09:54:33 +0000 UTC Type:0 Mac:52:54:00:95:c7:9e Iaid: IPaddr:192.168.61.76 Prefix:24 Hostname:newest-cni-384471 Clientid:01:52:54:00:95:c7:9e}
	I0806 08:54:43.790973   86053 main.go:141] libmachine: (newest-cni-384471) DBG | domain newest-cni-384471 has defined IP address 192.168.61.76 and MAC address 52:54:00:95:c7:9e in network mk-newest-cni-384471
	I0806 08:54:43.791215   86053 profile.go:143] Saving config to /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/newest-cni-384471/config.json ...
	I0806 08:54:43.791385   86053 start.go:128] duration metric: took 25.588049621s to createHost
	I0806 08:54:43.791407   86053 main.go:141] libmachine: (newest-cni-384471) Calling .GetSSHHostname
	I0806 08:54:43.793778   86053 main.go:141] libmachine: (newest-cni-384471) DBG | domain newest-cni-384471 has defined MAC address 52:54:00:95:c7:9e in network mk-newest-cni-384471
	I0806 08:54:43.794147   86053 main.go:141] libmachine: (newest-cni-384471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:c7:9e", ip: ""} in network mk-newest-cni-384471: {Iface:virbr1 ExpiryTime:2024-08-06 09:54:33 +0000 UTC Type:0 Mac:52:54:00:95:c7:9e Iaid: IPaddr:192.168.61.76 Prefix:24 Hostname:newest-cni-384471 Clientid:01:52:54:00:95:c7:9e}
	I0806 08:54:43.794181   86053 main.go:141] libmachine: (newest-cni-384471) DBG | domain newest-cni-384471 has defined IP address 192.168.61.76 and MAC address 52:54:00:95:c7:9e in network mk-newest-cni-384471
	I0806 08:54:43.794313   86053 main.go:141] libmachine: (newest-cni-384471) Calling .GetSSHPort
	I0806 08:54:43.794531   86053 main.go:141] libmachine: (newest-cni-384471) Calling .GetSSHKeyPath
	I0806 08:54:43.794794   86053 main.go:141] libmachine: (newest-cni-384471) Calling .GetSSHKeyPath
	I0806 08:54:43.794960   86053 main.go:141] libmachine: (newest-cni-384471) Calling .GetSSHUsername
	I0806 08:54:43.795200   86053 main.go:141] libmachine: Using SSH client type: native
	I0806 08:54:43.795396   86053 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.76 22 <nil> <nil>}
	I0806 08:54:43.795409   86053 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0806 08:54:43.897304   86053 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722934483.876024217
	
	I0806 08:54:43.897327   86053 fix.go:216] guest clock: 1722934483.876024217
	I0806 08:54:43.897338   86053 fix.go:229] Guest: 2024-08-06 08:54:43.876024217 +0000 UTC Remote: 2024-08-06 08:54:43.79139605 +0000 UTC m=+25.702843992 (delta=84.628167ms)
	I0806 08:54:43.897363   86053 fix.go:200] guest clock delta is within tolerance: 84.628167ms
	I0806 08:54:43.897369   86053 start.go:83] releasing machines lock for "newest-cni-384471", held for 25.694108314s
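
The fix.go lines above compare the guest's clock (read by running date +%s.%N over SSH, again logged with the %!s(MISSING) placeholder) against the host's, and only resynchronize when the delta exceeds a tolerance. A toy version of that check, with guestTime() standing in for the SSH call and the tolerance value assumed rather than taken from the source:

// clockdelta_sketch.go - illustrative guest clock skew check; guestTime is
// a stand-in that returns the timestamp seen in this run's log, and the
// 2s tolerance is an assumption for the sketch.
package main

import (
	"fmt"
	"time"
)

func guestTime() time.Time {
	return time.Unix(1722934483, 876024217)
}

func main() {
	const tolerance = 2 * time.Second
	delta := time.Since(guestTime())
	if delta < 0 {
		delta = -delta
	}
	if delta <= tolerance {
		fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance; would sync time\n", delta)
	}
}
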
	I0806 08:54:43.897394   86053 main.go:141] libmachine: (newest-cni-384471) Calling .DriverName
	I0806 08:54:43.897656   86053 main.go:141] libmachine: (newest-cni-384471) Calling .GetIP
	I0806 08:54:43.900046   86053 main.go:141] libmachine: (newest-cni-384471) DBG | domain newest-cni-384471 has defined MAC address 52:54:00:95:c7:9e in network mk-newest-cni-384471
	I0806 08:54:43.900350   86053 main.go:141] libmachine: (newest-cni-384471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:c7:9e", ip: ""} in network mk-newest-cni-384471: {Iface:virbr1 ExpiryTime:2024-08-06 09:54:33 +0000 UTC Type:0 Mac:52:54:00:95:c7:9e Iaid: IPaddr:192.168.61.76 Prefix:24 Hostname:newest-cni-384471 Clientid:01:52:54:00:95:c7:9e}
	I0806 08:54:43.900391   86053 main.go:141] libmachine: (newest-cni-384471) DBG | domain newest-cni-384471 has defined IP address 192.168.61.76 and MAC address 52:54:00:95:c7:9e in network mk-newest-cni-384471
	I0806 08:54:43.900538   86053 main.go:141] libmachine: (newest-cni-384471) Calling .DriverName
	I0806 08:54:43.900998   86053 main.go:141] libmachine: (newest-cni-384471) Calling .DriverName
	I0806 08:54:43.901158   86053 main.go:141] libmachine: (newest-cni-384471) Calling .DriverName
	I0806 08:54:43.901242   86053 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0806 08:54:43.901283   86053 main.go:141] libmachine: (newest-cni-384471) Calling .GetSSHHostname
	I0806 08:54:43.901371   86053 ssh_runner.go:195] Run: cat /version.json
	I0806 08:54:43.901396   86053 main.go:141] libmachine: (newest-cni-384471) Calling .GetSSHHostname
	I0806 08:54:43.903998   86053 main.go:141] libmachine: (newest-cni-384471) DBG | domain newest-cni-384471 has defined MAC address 52:54:00:95:c7:9e in network mk-newest-cni-384471
	I0806 08:54:43.904324   86053 main.go:141] libmachine: (newest-cni-384471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:c7:9e", ip: ""} in network mk-newest-cni-384471: {Iface:virbr1 ExpiryTime:2024-08-06 09:54:33 +0000 UTC Type:0 Mac:52:54:00:95:c7:9e Iaid: IPaddr:192.168.61.76 Prefix:24 Hostname:newest-cni-384471 Clientid:01:52:54:00:95:c7:9e}
	I0806 08:54:43.904349   86053 main.go:141] libmachine: (newest-cni-384471) DBG | domain newest-cni-384471 has defined IP address 192.168.61.76 and MAC address 52:54:00:95:c7:9e in network mk-newest-cni-384471
	I0806 08:54:43.904391   86053 main.go:141] libmachine: (newest-cni-384471) DBG | domain newest-cni-384471 has defined MAC address 52:54:00:95:c7:9e in network mk-newest-cni-384471
	I0806 08:54:43.904515   86053 main.go:141] libmachine: (newest-cni-384471) Calling .GetSSHPort
	I0806 08:54:43.904702   86053 main.go:141] libmachine: (newest-cni-384471) Calling .GetSSHKeyPath
	I0806 08:54:43.904861   86053 main.go:141] libmachine: (newest-cni-384471) Calling .GetSSHUsername
	I0806 08:54:43.904922   86053 main.go:141] libmachine: (newest-cni-384471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:c7:9e", ip: ""} in network mk-newest-cni-384471: {Iface:virbr1 ExpiryTime:2024-08-06 09:54:33 +0000 UTC Type:0 Mac:52:54:00:95:c7:9e Iaid: IPaddr:192.168.61.76 Prefix:24 Hostname:newest-cni-384471 Clientid:01:52:54:00:95:c7:9e}
	I0806 08:54:43.904949   86053 main.go:141] libmachine: (newest-cni-384471) DBG | domain newest-cni-384471 has defined IP address 192.168.61.76 and MAC address 52:54:00:95:c7:9e in network mk-newest-cni-384471
	I0806 08:54:43.904978   86053 sshutil.go:53] new ssh client: &{IP:192.168.61.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/newest-cni-384471/id_rsa Username:docker}
	I0806 08:54:43.905093   86053 main.go:141] libmachine: (newest-cni-384471) Calling .GetSSHPort
	I0806 08:54:43.905258   86053 main.go:141] libmachine: (newest-cni-384471) Calling .GetSSHKeyPath
	I0806 08:54:43.905435   86053 main.go:141] libmachine: (newest-cni-384471) Calling .GetSSHUsername
	I0806 08:54:43.905632   86053 sshutil.go:53] new ssh client: &{IP:192.168.61.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/newest-cni-384471/id_rsa Username:docker}
	I0806 08:54:44.009104   86053 ssh_runner.go:195] Run: systemctl --version
	I0806 08:54:44.015505   86053 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0806 08:54:44.183289   86053 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0806 08:54:44.189674   86053 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0806 08:54:44.189754   86053 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0806 08:54:44.207069   86053 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
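
Before choosing a CNI, minikube sidelines any pre-existing bridge/podman CNI configs by renaming them to *.mk_disabled (the find/-exec mv command above, whose -printf format shows up as another %!p(MISSING) logging artifact). A sketch of the same renaming done directly in Go:

// cni_disable_sketch.go - illustrative: rename bridge/podman CNI configs
// under /etc/cni/net.d to *.mk_disabled, as the find/mv command does.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	dir := "/etc/cni/net.d"
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Println(err)
		return
	}
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				fmt.Println(err)
			} else {
				fmt.Println("disabled", src)
			}
		}
	}
}
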
	I0806 08:54:44.207099   86053 start.go:495] detecting cgroup driver to use...
	I0806 08:54:44.207170   86053 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0806 08:54:44.224836   86053 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0806 08:54:44.239977   86053 docker.go:217] disabling cri-docker service (if available) ...
	I0806 08:54:44.240045   86053 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0806 08:54:44.254247   86053 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0806 08:54:44.269177   86053 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0806 08:54:44.395619   86053 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0806 08:54:44.563816   86053 docker.go:233] disabling docker service ...
	I0806 08:54:44.563887   86053 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0806 08:54:44.579043   86053 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0806 08:54:44.593483   86053 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0806 08:54:44.734197   86053 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0806 08:54:44.881572   86053 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0806 08:54:44.896675   86053 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0806 08:54:44.916191   86053 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0806 08:54:44.916251   86053 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:54:44.927839   86053 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0806 08:54:44.927898   86053 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:54:44.939261   86053 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:54:44.951469   86053 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:54:44.962403   86053 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0806 08:54:44.973498   86053 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:54:44.984105   86053 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:54:45.001374   86053 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:54:45.013296   86053 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0806 08:54:45.023592   86053 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0806 08:54:45.023654   86053 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0806 08:54:45.037498   86053 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0806 08:54:45.047545   86053 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 08:54:45.173987   86053 ssh_runner.go:195] Run: sudo systemctl restart crio
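
CRI-O itself is configured with a handful of in-place sed edits to /etc/crio/crio.conf.d/02-crio.conf (pause image pinned to registry.k8s.io/pause:3.10, cgroup_manager set to cgroupfs, conmon moved into the pod cgroup), followed by a daemon-reload and a restart. A condensed sketch of that sequence; runCmd is a local stand-in for the ssh_runner the log uses:

// crio_config_sketch.go - illustrative list of the config edits applied in
// the log before restarting CRI-O; runCmd stands in for the ssh_runner.
package main

import (
	"fmt"
	"os/exec"
)

func runCmd(cmd string) error {
	out, err := exec.Command("sh", "-c", cmd).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%q: %v\n%s", cmd, err, out)
	}
	return nil
}

func main() {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	edits := []string{
		// Pin the pause image.
		fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' %s`, conf),
		// Use cgroupfs as the cgroup manager and put conmon in the pod cgroup.
		fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' %s`, conf),
		fmt.Sprintf(`sudo sed -i '/conmon_cgroup = .*/d' %s`, conf),
		fmt.Sprintf(`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' %s`, conf),
		"sudo systemctl daemon-reload",
		"sudo systemctl restart crio",
	}
	for _, e := range edits {
		if err := runCmd(e); err != nil {
			fmt.Println(err)
			return
		}
	}
}
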
	I0806 08:54:45.333188   86053 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0806 08:54:45.333269   86053 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0806 08:54:45.338135   86053 start.go:563] Will wait 60s for crictl version
	I0806 08:54:45.338193   86053 ssh_runner.go:195] Run: which crictl
	I0806 08:54:45.342440   86053 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0806 08:54:45.385472   86053 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0806 08:54:45.385587   86053 ssh_runner.go:195] Run: crio --version
	I0806 08:54:45.415555   86053 ssh_runner.go:195] Run: crio --version
	I0806 08:54:45.448393   86053 out.go:177] * Preparing Kubernetes v1.31.0-rc.0 on CRI-O 1.29.1 ...
	I0806 08:54:45.449408   86053 main.go:141] libmachine: (newest-cni-384471) Calling .GetIP
	I0806 08:54:45.451761   86053 main.go:141] libmachine: (newest-cni-384471) DBG | domain newest-cni-384471 has defined MAC address 52:54:00:95:c7:9e in network mk-newest-cni-384471
	I0806 08:54:45.452013   86053 main.go:141] libmachine: (newest-cni-384471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:c7:9e", ip: ""} in network mk-newest-cni-384471: {Iface:virbr1 ExpiryTime:2024-08-06 09:54:33 +0000 UTC Type:0 Mac:52:54:00:95:c7:9e Iaid: IPaddr:192.168.61.76 Prefix:24 Hostname:newest-cni-384471 Clientid:01:52:54:00:95:c7:9e}
	I0806 08:54:45.452032   86053 main.go:141] libmachine: (newest-cni-384471) DBG | domain newest-cni-384471 has defined IP address 192.168.61.76 and MAC address 52:54:00:95:c7:9e in network mk-newest-cni-384471
	I0806 08:54:45.452221   86053 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0806 08:54:45.456445   86053 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0806 08:54:45.472337   86053 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0806 08:54:45.473471   86053 kubeadm.go:883] updating cluster {Name:newest-cni-384471 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:newest-cni-384471 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.76 Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0806 08:54:45.473599   86053 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime crio
	I0806 08:54:45.473666   86053 ssh_runner.go:195] Run: sudo crictl images --output json
	I0806 08:54:45.507704   86053 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-rc.0". assuming images are not preloaded.
	I0806 08:54:45.507772   86053 ssh_runner.go:195] Run: which lz4
	I0806 08:54:45.511903   86053 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0806 08:54:45.516225   86053 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0806 08:54:45.516258   86053 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-rc.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389126804 bytes)
	I0806 08:54:46.901073   86053 crio.go:462] duration metric: took 1.389204547s to copy over tarball
	I0806 08:54:46.901151   86053 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0806 08:54:49.050773   86053 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.149598836s)
	I0806 08:54:49.050801   86053 crio.go:469] duration metric: took 2.149691637s to extract the tarball
	I0806 08:54:49.050809   86053 ssh_runner.go:146] rm: /preloaded.tar.lz4
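
Because the fresh VM has no preloaded images, the ~389 MB preloaded-images tarball is copied in over SSH and unpacked into /var with lz4, preserving security xattrs, then deleted. A sketch of the check/extract/remove portion, using the same paths as the log:

// preload_sketch.go - illustrative: check for the preloaded image tarball
// and extract it with lz4 into /var, as the tar command in the log does.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const tarball = "/preloaded.tar.lz4"
	if _, err := os.Stat(tarball); err != nil {
		fmt.Println("tarball not present yet; it would be copied from the host cache first")
		return
	}
	// Extract into /var, preserving security xattrs, exactly as the log shows.
	cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Println("extract failed:", err)
		return
	}
	os.Remove(tarball)
}
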
	I0806 08:54:49.089379   86053 ssh_runner.go:195] Run: sudo crictl images --output json
	I0806 08:54:49.140910   86053 crio.go:514] all images are preloaded for cri-o runtime.
	I0806 08:54:49.140934   86053 cache_images.go:84] Images are preloaded, skipping loading
	I0806 08:54:49.140942   86053 kubeadm.go:934] updating node { 192.168.61.76 8443 v1.31.0-rc.0 crio true true} ...
	I0806 08:54:49.141057   86053 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-rc.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --feature-gates=ServerSideApply=true --hostname-override=newest-cni-384471 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.76
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-rc.0 ClusterName:newest-cni-384471 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
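Once the drop-in above is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (see the scp step below), it can be inspected and applied with standard systemd tooling. A minimal sketch:

    sudo systemctl cat kubelet                 # unit plus the 10-kubeadm.conf drop-in
    sudo systemctl daemon-reload               # pick up the new ExecStart flags
    sudo systemctl restart kubelet
    journalctl -u kubelet --no-pager -n 50     # confirm kubelet started with --node-ip=192.168.61.76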
	I0806 08:54:49.141129   86053 ssh_runner.go:195] Run: crio config
	I0806 08:54:49.191798   86053 cni.go:84] Creating CNI manager for ""
	I0806 08:54:49.191821   86053 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0806 08:54:49.191831   86053 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0806 08:54:49.191860   86053 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.61.76 APIServerPort:8443 KubernetesVersion:v1.31.0-rc.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-384471 NodeName:newest-cni-384471 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.76"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs
:map[] NodeIP:192.168.61.76 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0806 08:54:49.192063   86053 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.76
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-384471"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.76
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.76"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-rc.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0806 08:54:49.192147   86053 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-rc.0
	I0806 08:54:49.202600   86053 binaries.go:44] Found k8s binaries, skipping transfer
	I0806 08:54:49.202670   86053 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0806 08:54:49.212756   86053 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (358 bytes)
	I0806 08:54:49.230696   86053 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0806 08:54:49.248166   86053 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2287 bytes)
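With the rendered config now at /var/tmp/minikube/kubeadm.yaml.new, it can be sanity-checked against the pinned kubeadm binary before init runs. A minimal sketch, using the binary and config paths taken from this log:

    KUBEADM=/var/lib/minikube/binaries/v1.31.0-rc.0/kubeadm
    # List the images this config implies; a mismatch here usually means a bad kubernetesVersion.
    sudo "$KUBEADM" config images list --config /var/tmp/minikube/kubeadm.yaml.new
    # Print the defaults kubeadm would otherwise use, for a side-by-side comparison.
    sudo "$KUBEADM" config print init-defaults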
	I0806 08:54:49.265927   86053 ssh_runner.go:195] Run: grep 192.168.61.76	control-plane.minikube.internal$ /etc/hosts
	I0806 08:54:49.270011   86053 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.76	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
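The one-liner above strips any stale control-plane.minikube.internal entry from /etc/hosts and appends the current address. A quick check that it took effect, assuming it is run on the same node:

    grep -n 'control-plane.minikube.internal' /etc/hosts
    getent hosts control-plane.minikube.internal    # should print 192.168.61.76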
	I0806 08:54:49.283176   86053 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 08:54:49.429714   86053 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0806 08:54:49.450995   86053 certs.go:68] Setting up /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/newest-cni-384471 for IP: 192.168.61.76
	I0806 08:54:49.451019   86053 certs.go:194] generating shared ca certs ...
	I0806 08:54:49.451042   86053 certs.go:226] acquiring lock for ca certs: {Name:mkf5c5aed91f4108054d9f2c742cad66c86afbed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 08:54:49.451211   86053 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19370-13057/.minikube/ca.key
	I0806 08:54:49.451275   86053 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.key
	I0806 08:54:49.451285   86053 certs.go:256] generating profile certs ...
	I0806 08:54:49.451381   86053 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/newest-cni-384471/client.key
	I0806 08:54:49.451401   86053 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/newest-cni-384471/client.crt with IP's: []
	I0806 08:54:49.545303   86053 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/newest-cni-384471/client.crt ...
	I0806 08:54:49.545328   86053 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/newest-cni-384471/client.crt: {Name:mkcecd4bc37d11d028587d7b563b696b36a7b0b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 08:54:49.545508   86053 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/newest-cni-384471/client.key ...
	I0806 08:54:49.545528   86053 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/newest-cni-384471/client.key: {Name:mk3b2123a736a76896efed58fcfb40fe669c8170 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 08:54:49.545623   86053 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/newest-cni-384471/apiserver.key.359ff024
	I0806 08:54:49.545638   86053 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/newest-cni-384471/apiserver.crt.359ff024 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.76]
	I0806 08:54:49.687984   86053 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/newest-cni-384471/apiserver.crt.359ff024 ...
	I0806 08:54:49.688025   86053 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/newest-cni-384471/apiserver.crt.359ff024: {Name:mk221e4ed25c3de27903b1c8d6baadfaf3539fb5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 08:54:49.688257   86053 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/newest-cni-384471/apiserver.key.359ff024 ...
	I0806 08:54:49.688281   86053 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/newest-cni-384471/apiserver.key.359ff024: {Name:mk618f06945aaddfd386754ec00dca8341a26e4c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 08:54:49.688414   86053 certs.go:381] copying /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/newest-cni-384471/apiserver.crt.359ff024 -> /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/newest-cni-384471/apiserver.crt
	I0806 08:54:49.688495   86053 certs.go:385] copying /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/newest-cni-384471/apiserver.key.359ff024 -> /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/newest-cni-384471/apiserver.key
	I0806 08:54:49.688547   86053 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/newest-cni-384471/proxy-client.key
	I0806 08:54:49.688563   86053 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/newest-cni-384471/proxy-client.crt with IP's: []
	I0806 08:54:49.801722   86053 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/newest-cni-384471/proxy-client.crt ...
	I0806 08:54:49.801753   86053 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/newest-cni-384471/proxy-client.crt: {Name:mkcdf097a15f444b30d69c1778af3c29f1e100cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 08:54:49.801920   86053 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/newest-cni-384471/proxy-client.key ...
	I0806 08:54:49.801933   86053 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/newest-cni-384471/proxy-client.key: {Name:mk111d06ea2e4c65f8a817caea899b480c540d57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
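minikube generates these profile certificates in-process (crypto.go); as an illustrative sketch only, a rough openssl equivalent for the apiserver certificate with the IP SANs logged above would be:

    # Key + CSR for the apiserver serving cert (the CN here is illustrative).
    openssl req -new -newkey rsa:2048 -nodes -subj "/CN=minikube" \
      -keyout apiserver.key -out apiserver.csr
    # Sign with the shared minikube CA, adding the same IP SANs the log reports.
    openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key -CAcreateserial -days 365 \
      -extfile <(printf "subjectAltName=IP:10.96.0.1,IP:127.0.0.1,IP:10.0.0.1,IP:192.168.61.76") \
      -out apiserver.crt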
	I0806 08:54:49.802093   86053 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/20246.pem (1338 bytes)
	W0806 08:54:49.802127   86053 certs.go:480] ignoring /home/jenkins/minikube-integration/19370-13057/.minikube/certs/20246_empty.pem, impossibly tiny 0 bytes
	I0806 08:54:49.802138   86053 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca-key.pem (1675 bytes)
	I0806 08:54:49.802162   86053 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem (1078 bytes)
	I0806 08:54:49.802182   86053 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/cert.pem (1123 bytes)
	I0806 08:54:49.802205   86053 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/key.pem (1675 bytes)
	I0806 08:54:49.802243   86053 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem (1708 bytes)
	I0806 08:54:49.802784   86053 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0806 08:54:49.834580   86053 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0806 08:54:49.860468   86053 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0806 08:54:49.886561   86053 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0806 08:54:49.913158   86053 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/newest-cni-384471/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0806 08:54:49.939295   86053 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/newest-cni-384471/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0806 08:54:49.964998   86053 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/newest-cni-384471/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0806 08:54:49.990340   86053 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/newest-cni-384471/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0806 08:54:50.017524   86053 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0806 08:54:50.043075   86053 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/certs/20246.pem --> /usr/share/ca-certificates/20246.pem (1338 bytes)
	I0806 08:54:50.072001   86053 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem --> /usr/share/ca-certificates/202462.pem (1708 bytes)
	I0806 08:54:50.099768   86053 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0806 08:54:50.119274   86053 ssh_runner.go:195] Run: openssl version
	I0806 08:54:50.125775   86053 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0806 08:54:50.137374   86053 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0806 08:54:50.142199   86053 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  6 07:07 /usr/share/ca-certificates/minikubeCA.pem
	I0806 08:54:50.142267   86053 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0806 08:54:50.149026   86053 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0806 08:54:50.161176   86053 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20246.pem && ln -fs /usr/share/ca-certificates/20246.pem /etc/ssl/certs/20246.pem"
	I0806 08:54:50.172912   86053 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20246.pem
	I0806 08:54:50.177945   86053 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  6 07:17 /usr/share/ca-certificates/20246.pem
	I0806 08:54:50.178013   86053 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20246.pem
	I0806 08:54:50.183911   86053 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20246.pem /etc/ssl/certs/51391683.0"
	I0806 08:54:50.195070   86053 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202462.pem && ln -fs /usr/share/ca-certificates/202462.pem /etc/ssl/certs/202462.pem"
	I0806 08:54:50.206401   86053 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202462.pem
	I0806 08:54:50.211359   86053 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  6 07:17 /usr/share/ca-certificates/202462.pem
	I0806 08:54:50.211423   86053 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202462.pem
	I0806 08:54:50.218069   86053 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202462.pem /etc/ssl/certs/3ec20f2e.0"
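The ln -fs targets above (b5213941.0, 51391683.0, 3ec20f2e.0) are OpenSSL subject-hash names, so the symlink farm can be rebuilt or verified by hand. A minimal sketch over the same three certificates:

    for pem in minikubeCA.pem 20246.pem 202462.pem; do
      hash=$(openssl x509 -hash -noout -in "/usr/share/ca-certificates/$pem")
      sudo ln -fs "/etc/ssl/certs/$pem" "/etc/ssl/certs/$hash.0"
    done
    # Any cert signed by minikubeCA should now verify against the system trust path.
    openssl verify -CApath /etc/ssl/certs /var/lib/minikube/certs/apiserver.crt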
	I0806 08:54:50.233698   86053 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0806 08:54:50.238429   86053 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0806 08:54:50.238491   86053 kubeadm.go:392] StartCluster: {Name:newest-cni-384471 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.0-rc.0 ClusterName:newest-cni-384471 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.76 Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Moun
t9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 08:54:50.238653   86053 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0806 08:54:50.238721   86053 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0806 08:54:50.295538   86053 cri.go:89] found id: ""
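The empty result above simply means no kube-system containers exist yet on this freshly provisioned node. The same query can be run directly with crictl; a minimal sketch:

    sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
    # With full output once the control plane is up:
    sudo crictl ps -a --label io.kubernetes.pod.namespace=kube-system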
	I0806 08:54:50.295612   86053 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0806 08:54:50.311919   86053 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0806 08:54:50.323651   86053 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0806 08:54:50.336465   86053 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0806 08:54:50.336482   86053 kubeadm.go:157] found existing configuration files:
	
	I0806 08:54:50.336544   86053 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0806 08:54:50.347748   86053 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0806 08:54:50.347820   86053 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0806 08:54:50.360637   86053 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0806 08:54:50.370570   86053 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0806 08:54:50.370654   86053 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0806 08:54:50.381563   86053 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0806 08:54:50.391182   86053 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0806 08:54:50.391249   86053 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0806 08:54:50.401150   86053 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0806 08:54:50.411201   86053 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0806 08:54:50.411261   86053 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0806 08:54:50.421063   86053 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0806 08:54:50.551836   86053 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0-rc.0
	I0806 08:54:50.551961   86053 kubeadm.go:310] [preflight] Running pre-flight checks
	I0806 08:54:50.667025   86053 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0806 08:54:50.667166   86053 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0806 08:54:50.667288   86053 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0806 08:54:50.677383   86053 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0806 08:54:50.823256   86053 out.go:204]   - Generating certificates and keys ...
	I0806 08:54:50.823384   86053 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0806 08:54:50.823485   86053 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0806 08:54:50.930962   86053 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0806 08:54:51.015870   86053 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0806 08:54:51.165275   86053 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0806 08:54:51.285799   86053 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0806 08:54:51.382158   86053 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0806 08:54:51.382403   86053 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-384471] and IPs [192.168.61.76 127.0.0.1 ::1]
	I0806 08:54:51.447149   86053 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0806 08:54:51.447527   86053 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-384471] and IPs [192.168.61.76 127.0.0.1 ::1]
	I0806 08:54:51.561456   86053 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0806 08:54:51.999900   86053 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0806 08:54:52.205467   86053 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0806 08:54:52.205889   86053 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0806 08:54:52.309915   86053 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0806 08:54:52.586574   86053 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0806 08:54:52.750011   86053 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0806 08:54:52.844306   86053 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0806 08:54:53.149010   86053 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0806 08:54:53.149658   86053 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0806 08:54:53.154816   86053 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
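If an init like the one above stalls, the same config can drive individual kubeadm phases for debugging. A minimal sketch with the binary and config paths used in this run:

    KUBEADM=/var/lib/minikube/binaries/v1.31.0-rc.0/kubeadm
    sudo "$KUBEADM" init phase certs all --config /var/tmp/minikube/kubeadm.yaml
    sudo "$KUBEADM" init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml
    sudo "$KUBEADM" init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml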
	
	
	==> CRI-O <==
	Aug 06 08:54:55 no-preload-703340 crio[736]: time="2024-08-06 08:54:55.524214626Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722934495524182760,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100728,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cb6ed185-c173-4a07-bdae-4509877810a5 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 06 08:54:55 no-preload-703340 crio[736]: time="2024-08-06 08:54:55.525970069Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f50a449e-bd47-45b8-be2e-fefadbfcca1f name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 08:54:55 no-preload-703340 crio[736]: time="2024-08-06 08:54:55.526245901Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f50a449e-bd47-45b8-be2e-fefadbfcca1f name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 08:54:55 no-preload-703340 crio[736]: time="2024-08-06 08:54:55.526533328Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bf2aa151d2bbbede89e9f9a6b18d232dc742f0ad4eed4af7b5ea89c55a07e5c3,PodSandboxId:2b37f216b0b7fb7d0e89be845a308953cace561fe743a959fddf0d0b431c9d01,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722933617063651584,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-9dt4q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a62f9c28-2e06-4072-b459-3a879cba65d6,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f6da332003bcdaa2beb8516e28bfe2efc402ea7d4a6a89443466dc4cd736391,PodSandboxId:8f44a08dd23f18c04924921c4899ce5b9fcaeefd99e63c9c2a39ad7eceab3cd4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722933616941294769,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: 5497ddbc-8ed5-4b42-80e8-81aa7c1b8fa7,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:444b1faae9279d26715d6c59922c216fd88e63eae27e052f1fc54d9df1c8e527,PodSandboxId:d0f2cea9718124e2190ee70e5f76291f3c583cffa143980ebad3cbd1c17b0eee,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722933616481159333,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-mpfwj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d
a40ee8-73c2-4479-a2aa-32a2dce7e21d,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5bb6477358a380e3f783c71af8369ac4a7486080f96c787d216d599bd79305e,PodSandboxId:d8468c5168848d0133f92b7c86bce34d6b90c819fb18142728d15df87f81f572,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,State:CONTAINER_RUNNING,CreatedAt:
1722933615374717381,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zsv9x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2859a7f2-f880-4be4-806e-163eb9caf535,},Annotations:map[string]string{io.kubernetes.container.hash: 16faf14d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6d85f228745f04a5dd5a2f2e6364404974369765bf98a1724000807552e8908,PodSandboxId:ea26f246b9ce0ade860b50e92c703bf25bfcd033909ab854efc3ef2d4d554c52,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1722933604266305532,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-703340,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7024dbf08cd3cd6cf045756262139cce,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db34d02027a316f2243ff3c556e335b6a7a27646bdca17424ca44a19186ce1b1,PodSandboxId:b3cb276472800f4d6a9a8ca1ca2ee35e89197a05a4f86b8688b7183ed1e93bb5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,State:CONTAINER_RUNNING,CreatedAt:1722933604308066021,Labels:map[string]string{io.kubernetes.container.nam
e: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-703340,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 901f74e47f1d9dec93c609776ac15986,},Annotations:map[string]string{io.kubernetes.container.hash: 380ae747,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e08881e193ba0b198a7a54b31142c5ab098dd8d790e6497956fed1247f2b42cf,PodSandboxId:3f46aa29d26fafd3f000eacce54ec236bc50b344b499ed9f2c75c305cd64ceca,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,State:CONTAINER_RUNNING,CreatedAt:1722933604236782799,Labels:map[string]string{io.kubernetes.container.name: k
ube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-703340,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 894abcb80f3ffdf09387a41fe5cd0fa5,},Annotations:map[string]string{io.kubernetes.container.hash: e1cacf85,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99d3e8306f574ba7867e4cd33f71afc4878be2a8d207bd0b7386fe63a6aa953a,PodSandboxId:e01e4089952e966eae8624a82c9856c5782be7081af7e47674d10a649c326695,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,State:CONTAINER_RUNNING,CreatedAt:1722933604230051050,Labels:map[string]string{io.kubernetes.container.na
me: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-703340,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1eff7ba90ca6a95ec01f7820a7d1aee0,},Annotations:map[string]string{io.kubernetes.container.hash: f48bc30b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a6840a59bdd44f77956af229841972da859383e685eea8895d93ba43633026e,PodSandboxId:0a16086bf612650c192041fbfc646877dcea2212beb723c7a8ba5da928b0c62e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,State:CONTAINER_EXITED,CreatedAt:1722933317212369718,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-703340,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 901f74e47f1d9dec93c609776ac15986,},Annotations:map[string]string{io.kubernetes.container.hash: 380ae747,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f50a449e-bd47-45b8-be2e-fefadbfcca1f name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 08:54:55 no-preload-703340 crio[736]: time="2024-08-06 08:54:55.569648344Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=35b2c2c7-3108-44a3-b4b0-396dc47b3090 name=/runtime.v1.RuntimeService/Version
	Aug 06 08:54:55 no-preload-703340 crio[736]: time="2024-08-06 08:54:55.569758588Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=35b2c2c7-3108-44a3-b4b0-396dc47b3090 name=/runtime.v1.RuntimeService/Version
	Aug 06 08:54:55 no-preload-703340 crio[736]: time="2024-08-06 08:54:55.571233056Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fefd05da-4e7a-422c-8fd1-000450cdbca4 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 06 08:54:55 no-preload-703340 crio[736]: time="2024-08-06 08:54:55.571657576Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722934495571633236,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100728,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fefd05da-4e7a-422c-8fd1-000450cdbca4 name=/runtime.v1.ImageService/ImageFsInfo
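The Version and ImageFsInfo requests above come from the kubelet polling CRI-O over its CRI socket; the same endpoints can be hit ad hoc with crictl on the node. A minimal sketch:

    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
    sudo crictl imagefsinfo    # mirrors the ImageFsInfoResponse logged above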
	Aug 06 08:54:55 no-preload-703340 crio[736]: time="2024-08-06 08:54:55.572736105Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=487c54d0-3e2a-4557-ab54-cdf5decb7d2c name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 08:54:55 no-preload-703340 crio[736]: time="2024-08-06 08:54:55.572806404Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=487c54d0-3e2a-4557-ab54-cdf5decb7d2c name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 08:54:55 no-preload-703340 crio[736]: time="2024-08-06 08:54:55.573004043Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bf2aa151d2bbbede89e9f9a6b18d232dc742f0ad4eed4af7b5ea89c55a07e5c3,PodSandboxId:2b37f216b0b7fb7d0e89be845a308953cace561fe743a959fddf0d0b431c9d01,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722933617063651584,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-9dt4q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a62f9c28-2e06-4072-b459-3a879cba65d6,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f6da332003bcdaa2beb8516e28bfe2efc402ea7d4a6a89443466dc4cd736391,PodSandboxId:8f44a08dd23f18c04924921c4899ce5b9fcaeefd99e63c9c2a39ad7eceab3cd4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722933616941294769,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: 5497ddbc-8ed5-4b42-80e8-81aa7c1b8fa7,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:444b1faae9279d26715d6c59922c216fd88e63eae27e052f1fc54d9df1c8e527,PodSandboxId:d0f2cea9718124e2190ee70e5f76291f3c583cffa143980ebad3cbd1c17b0eee,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722933616481159333,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-mpfwj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d
a40ee8-73c2-4479-a2aa-32a2dce7e21d,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5bb6477358a380e3f783c71af8369ac4a7486080f96c787d216d599bd79305e,PodSandboxId:d8468c5168848d0133f92b7c86bce34d6b90c819fb18142728d15df87f81f572,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,State:CONTAINER_RUNNING,CreatedAt:
1722933615374717381,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zsv9x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2859a7f2-f880-4be4-806e-163eb9caf535,},Annotations:map[string]string{io.kubernetes.container.hash: 16faf14d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6d85f228745f04a5dd5a2f2e6364404974369765bf98a1724000807552e8908,PodSandboxId:ea26f246b9ce0ade860b50e92c703bf25bfcd033909ab854efc3ef2d4d554c52,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1722933604266305532,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-703340,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7024dbf08cd3cd6cf045756262139cce,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db34d02027a316f2243ff3c556e335b6a7a27646bdca17424ca44a19186ce1b1,PodSandboxId:b3cb276472800f4d6a9a8ca1ca2ee35e89197a05a4f86b8688b7183ed1e93bb5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,State:CONTAINER_RUNNING,CreatedAt:1722933604308066021,Labels:map[string]string{io.kubernetes.container.nam
e: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-703340,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 901f74e47f1d9dec93c609776ac15986,},Annotations:map[string]string{io.kubernetes.container.hash: 380ae747,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e08881e193ba0b198a7a54b31142c5ab098dd8d790e6497956fed1247f2b42cf,PodSandboxId:3f46aa29d26fafd3f000eacce54ec236bc50b344b499ed9f2c75c305cd64ceca,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,State:CONTAINER_RUNNING,CreatedAt:1722933604236782799,Labels:map[string]string{io.kubernetes.container.name: k
ube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-703340,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 894abcb80f3ffdf09387a41fe5cd0fa5,},Annotations:map[string]string{io.kubernetes.container.hash: e1cacf85,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99d3e8306f574ba7867e4cd33f71afc4878be2a8d207bd0b7386fe63a6aa953a,PodSandboxId:e01e4089952e966eae8624a82c9856c5782be7081af7e47674d10a649c326695,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,State:CONTAINER_RUNNING,CreatedAt:1722933604230051050,Labels:map[string]string{io.kubernetes.container.na
me: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-703340,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1eff7ba90ca6a95ec01f7820a7d1aee0,},Annotations:map[string]string{io.kubernetes.container.hash: f48bc30b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a6840a59bdd44f77956af229841972da859383e685eea8895d93ba43633026e,PodSandboxId:0a16086bf612650c192041fbfc646877dcea2212beb723c7a8ba5da928b0c62e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,State:CONTAINER_EXITED,CreatedAt:1722933317212369718,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-703340,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 901f74e47f1d9dec93c609776ac15986,},Annotations:map[string]string{io.kubernetes.container.hash: 380ae747,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=487c54d0-3e2a-4557-ab54-cdf5decb7d2c name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 08:54:55 no-preload-703340 crio[736]: time="2024-08-06 08:54:55.614123963Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f0bc08f4-0e43-490d-b08a-6a80ea9d4bed name=/runtime.v1.RuntimeService/Version
	Aug 06 08:54:55 no-preload-703340 crio[736]: time="2024-08-06 08:54:55.614231292Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f0bc08f4-0e43-490d-b08a-6a80ea9d4bed name=/runtime.v1.RuntimeService/Version
	Aug 06 08:54:55 no-preload-703340 crio[736]: time="2024-08-06 08:54:55.616406925Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4e1cea73-779e-4b4d-942a-fec43e9e043b name=/runtime.v1.ImageService/ImageFsInfo
	Aug 06 08:54:55 no-preload-703340 crio[736]: time="2024-08-06 08:54:55.616858838Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722934495616830786,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100728,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4e1cea73-779e-4b4d-942a-fec43e9e043b name=/runtime.v1.ImageService/ImageFsInfo
	Aug 06 08:54:55 no-preload-703340 crio[736]: time="2024-08-06 08:54:55.617724455Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7b91e8b7-caa7-4cf8-b06e-e47bc83b02f3 name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 08:54:55 no-preload-703340 crio[736]: time="2024-08-06 08:54:55.617795790Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7b91e8b7-caa7-4cf8-b06e-e47bc83b02f3 name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 08:54:55 no-preload-703340 crio[736]: time="2024-08-06 08:54:55.618002008Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bf2aa151d2bbbede89e9f9a6b18d232dc742f0ad4eed4af7b5ea89c55a07e5c3,PodSandboxId:2b37f216b0b7fb7d0e89be845a308953cace561fe743a959fddf0d0b431c9d01,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722933617063651584,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-9dt4q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a62f9c28-2e06-4072-b459-3a879cba65d6,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f6da332003bcdaa2beb8516e28bfe2efc402ea7d4a6a89443466dc4cd736391,PodSandboxId:8f44a08dd23f18c04924921c4899ce5b9fcaeefd99e63c9c2a39ad7eceab3cd4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722933616941294769,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: 5497ddbc-8ed5-4b42-80e8-81aa7c1b8fa7,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:444b1faae9279d26715d6c59922c216fd88e63eae27e052f1fc54d9df1c8e527,PodSandboxId:d0f2cea9718124e2190ee70e5f76291f3c583cffa143980ebad3cbd1c17b0eee,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722933616481159333,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-mpfwj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d
a40ee8-73c2-4479-a2aa-32a2dce7e21d,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5bb6477358a380e3f783c71af8369ac4a7486080f96c787d216d599bd79305e,PodSandboxId:d8468c5168848d0133f92b7c86bce34d6b90c819fb18142728d15df87f81f572,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,State:CONTAINER_RUNNING,CreatedAt:
1722933615374717381,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zsv9x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2859a7f2-f880-4be4-806e-163eb9caf535,},Annotations:map[string]string{io.kubernetes.container.hash: 16faf14d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6d85f228745f04a5dd5a2f2e6364404974369765bf98a1724000807552e8908,PodSandboxId:ea26f246b9ce0ade860b50e92c703bf25bfcd033909ab854efc3ef2d4d554c52,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1722933604266305532,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-703340,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7024dbf08cd3cd6cf045756262139cce,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db34d02027a316f2243ff3c556e335b6a7a27646bdca17424ca44a19186ce1b1,PodSandboxId:b3cb276472800f4d6a9a8ca1ca2ee35e89197a05a4f86b8688b7183ed1e93bb5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,State:CONTAINER_RUNNING,CreatedAt:1722933604308066021,Labels:map[string]string{io.kubernetes.container.nam
e: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-703340,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 901f74e47f1d9dec93c609776ac15986,},Annotations:map[string]string{io.kubernetes.container.hash: 380ae747,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e08881e193ba0b198a7a54b31142c5ab098dd8d790e6497956fed1247f2b42cf,PodSandboxId:3f46aa29d26fafd3f000eacce54ec236bc50b344b499ed9f2c75c305cd64ceca,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,State:CONTAINER_RUNNING,CreatedAt:1722933604236782799,Labels:map[string]string{io.kubernetes.container.name: k
ube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-703340,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 894abcb80f3ffdf09387a41fe5cd0fa5,},Annotations:map[string]string{io.kubernetes.container.hash: e1cacf85,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99d3e8306f574ba7867e4cd33f71afc4878be2a8d207bd0b7386fe63a6aa953a,PodSandboxId:e01e4089952e966eae8624a82c9856c5782be7081af7e47674d10a649c326695,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,State:CONTAINER_RUNNING,CreatedAt:1722933604230051050,Labels:map[string]string{io.kubernetes.container.na
me: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-703340,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1eff7ba90ca6a95ec01f7820a7d1aee0,},Annotations:map[string]string{io.kubernetes.container.hash: f48bc30b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a6840a59bdd44f77956af229841972da859383e685eea8895d93ba43633026e,PodSandboxId:0a16086bf612650c192041fbfc646877dcea2212beb723c7a8ba5da928b0c62e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,State:CONTAINER_EXITED,CreatedAt:1722933317212369718,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-703340,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 901f74e47f1d9dec93c609776ac15986,},Annotations:map[string]string{io.kubernetes.container.hash: 380ae747,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7b91e8b7-caa7-4cf8-b06e-e47bc83b02f3 name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 08:54:55 no-preload-703340 crio[736]: time="2024-08-06 08:54:55.656190952Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=326b45be-6238-4bbc-b35a-43a6a6897b2f name=/runtime.v1.RuntimeService/Version
	Aug 06 08:54:55 no-preload-703340 crio[736]: time="2024-08-06 08:54:55.656452947Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=326b45be-6238-4bbc-b35a-43a6a6897b2f name=/runtime.v1.RuntimeService/Version
	Aug 06 08:54:55 no-preload-703340 crio[736]: time="2024-08-06 08:54:55.658160488Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=08800117-9e3a-4a0f-8e80-d82f09200841 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 06 08:54:55 no-preload-703340 crio[736]: time="2024-08-06 08:54:55.658485188Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722934495658465041,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100728,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=08800117-9e3a-4a0f-8e80-d82f09200841 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 06 08:54:55 no-preload-703340 crio[736]: time="2024-08-06 08:54:55.659334844Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0e93a6cd-0dbe-4fbb-8f07-914851888130 name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 08:54:55 no-preload-703340 crio[736]: time="2024-08-06 08:54:55.659400740Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0e93a6cd-0dbe-4fbb-8f07-914851888130 name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 08:54:55 no-preload-703340 crio[736]: time="2024-08-06 08:54:55.659659412Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bf2aa151d2bbbede89e9f9a6b18d232dc742f0ad4eed4af7b5ea89c55a07e5c3,PodSandboxId:2b37f216b0b7fb7d0e89be845a308953cace561fe743a959fddf0d0b431c9d01,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722933617063651584,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-9dt4q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a62f9c28-2e06-4072-b459-3a879cba65d6,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f6da332003bcdaa2beb8516e28bfe2efc402ea7d4a6a89443466dc4cd736391,PodSandboxId:8f44a08dd23f18c04924921c4899ce5b9fcaeefd99e63c9c2a39ad7eceab3cd4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722933616941294769,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: 5497ddbc-8ed5-4b42-80e8-81aa7c1b8fa7,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:444b1faae9279d26715d6c59922c216fd88e63eae27e052f1fc54d9df1c8e527,PodSandboxId:d0f2cea9718124e2190ee70e5f76291f3c583cffa143980ebad3cbd1c17b0eee,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722933616481159333,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-mpfwj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d
a40ee8-73c2-4479-a2aa-32a2dce7e21d,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5bb6477358a380e3f783c71af8369ac4a7486080f96c787d216d599bd79305e,PodSandboxId:d8468c5168848d0133f92b7c86bce34d6b90c819fb18142728d15df87f81f572,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,State:CONTAINER_RUNNING,CreatedAt:
1722933615374717381,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zsv9x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2859a7f2-f880-4be4-806e-163eb9caf535,},Annotations:map[string]string{io.kubernetes.container.hash: 16faf14d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6d85f228745f04a5dd5a2f2e6364404974369765bf98a1724000807552e8908,PodSandboxId:ea26f246b9ce0ade860b50e92c703bf25bfcd033909ab854efc3ef2d4d554c52,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1722933604266305532,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-703340,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7024dbf08cd3cd6cf045756262139cce,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db34d02027a316f2243ff3c556e335b6a7a27646bdca17424ca44a19186ce1b1,PodSandboxId:b3cb276472800f4d6a9a8ca1ca2ee35e89197a05a4f86b8688b7183ed1e93bb5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,State:CONTAINER_RUNNING,CreatedAt:1722933604308066021,Labels:map[string]string{io.kubernetes.container.nam
e: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-703340,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 901f74e47f1d9dec93c609776ac15986,},Annotations:map[string]string{io.kubernetes.container.hash: 380ae747,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e08881e193ba0b198a7a54b31142c5ab098dd8d790e6497956fed1247f2b42cf,PodSandboxId:3f46aa29d26fafd3f000eacce54ec236bc50b344b499ed9f2c75c305cd64ceca,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,State:CONTAINER_RUNNING,CreatedAt:1722933604236782799,Labels:map[string]string{io.kubernetes.container.name: k
ube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-703340,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 894abcb80f3ffdf09387a41fe5cd0fa5,},Annotations:map[string]string{io.kubernetes.container.hash: e1cacf85,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99d3e8306f574ba7867e4cd33f71afc4878be2a8d207bd0b7386fe63a6aa953a,PodSandboxId:e01e4089952e966eae8624a82c9856c5782be7081af7e47674d10a649c326695,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,State:CONTAINER_RUNNING,CreatedAt:1722933604230051050,Labels:map[string]string{io.kubernetes.container.na
me: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-703340,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1eff7ba90ca6a95ec01f7820a7d1aee0,},Annotations:map[string]string{io.kubernetes.container.hash: f48bc30b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a6840a59bdd44f77956af229841972da859383e685eea8895d93ba43633026e,PodSandboxId:0a16086bf612650c192041fbfc646877dcea2212beb723c7a8ba5da928b0c62e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,State:CONTAINER_EXITED,CreatedAt:1722933317212369718,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-703340,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 901f74e47f1d9dec93c609776ac15986,},Annotations:map[string]string{io.kubernetes.container.hash: 380ae747,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0e93a6cd-0dbe-4fbb-8f07-914851888130 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	bf2aa151d2bbb       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   14 minutes ago      Running             coredns                   0                   2b37f216b0b7f       coredns-6f6b679f8f-9dt4q
	7f6da332003bc       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   14 minutes ago      Running             storage-provisioner       0                   8f44a08dd23f1       storage-provisioner
	444b1faae9279       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   14 minutes ago      Running             coredns                   0                   d0f2cea971812       coredns-6f6b679f8f-mpfwj
	a5bb6477358a3       41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318   14 minutes ago      Running             kube-proxy                0                   d8468c5168848       kube-proxy-zsv9x
	db34d02027a31       c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0   14 minutes ago      Running             kube-apiserver            2                   b3cb276472800       kube-apiserver-no-preload-703340
	f6d85f228745f       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   14 minutes ago      Running             etcd                      2                   ea26f246b9ce0       etcd-no-preload-703340
	e08881e193ba0       fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c   14 minutes ago      Running             kube-controller-manager   2                   3f46aa29d26fa       kube-controller-manager-no-preload-703340
	99d3e8306f574       0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c   14 minutes ago      Running             kube-scheduler            2                   e01e4089952e9       kube-scheduler-no-preload-703340
	6a6840a59bdd4       c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0   19 minutes ago      Exited              kube-apiserver            1                   0a16086bf6126       kube-apiserver-no-preload-703340
	
	
	==> coredns [444b1faae9279d26715d6c59922c216fd88e63eae27e052f1fc54d9df1c8e527] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [bf2aa151d2bbbede89e9f9a6b18d232dc742f0ad4eed4af7b5ea89c55a07e5c3] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               no-preload-703340
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-703340
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e92cb06692f5ea1ba801d10d148e5e92e807f9c8
	                    minikube.k8s.io/name=no-preload-703340
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_06T08_40_10_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 06 Aug 2024 08:40:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-703340
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 06 Aug 2024 08:54:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 06 Aug 2024 08:50:34 +0000   Tue, 06 Aug 2024 08:40:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 06 Aug 2024 08:50:34 +0000   Tue, 06 Aug 2024 08:40:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 06 Aug 2024 08:50:34 +0000   Tue, 06 Aug 2024 08:40:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 06 Aug 2024 08:50:34 +0000   Tue, 06 Aug 2024 08:40:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.126
	  Hostname:    no-preload-703340
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 e76c8ce62eab460c8d415a9980721def
	  System UUID:                e76c8ce6-2eab-460c-8d41-5a9980721def
	  Boot ID:                    1356c0b6-2011-44d7-a124-f347f2da77e7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0-rc.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6f6b679f8f-9dt4q                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 coredns-6f6b679f8f-mpfwj                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 etcd-no-preload-703340                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kube-apiserver-no-preload-703340             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-no-preload-703340    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-zsv9x                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-no-preload-703340             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 metrics-server-6867b74b74-tqnjm              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         14m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 14m                kube-proxy       
	  Normal  NodeHasSufficientMemory  14m (x8 over 14m)  kubelet          Node no-preload-703340 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m (x8 over 14m)  kubelet          Node no-preload-703340 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m (x7 over 14m)  kubelet          Node no-preload-703340 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 14m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  14m                kubelet          Node no-preload-703340 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m                kubelet          Node no-preload-703340 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m                kubelet          Node no-preload-703340 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           14m                node-controller  Node no-preload-703340 event: Registered Node no-preload-703340 in Controller
	
	
	==> dmesg <==
	[  +0.057547] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.043410] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.211948] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.702215] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.620725] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.617040] systemd-fstab-generator[653]: Ignoring "noauto" option for root device
	[  +0.063694] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.062796] systemd-fstab-generator[665]: Ignoring "noauto" option for root device
	[  +0.186700] systemd-fstab-generator[679]: Ignoring "noauto" option for root device
	[  +0.160860] systemd-fstab-generator[691]: Ignoring "noauto" option for root device
	[  +0.287038] systemd-fstab-generator[721]: Ignoring "noauto" option for root device
	[Aug 6 08:35] systemd-fstab-generator[1260]: Ignoring "noauto" option for root device
	[  +0.063898] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.870826] systemd-fstab-generator[1382]: Ignoring "noauto" option for root device
	[  +3.988358] kauditd_printk_skb: 97 callbacks suppressed
	[  +7.201530] kauditd_printk_skb: 88 callbacks suppressed
	[Aug 6 08:40] systemd-fstab-generator[3009]: Ignoring "noauto" option for root device
	[  +0.070693] kauditd_printk_skb: 9 callbacks suppressed
	[  +6.495136] systemd-fstab-generator[3331]: Ignoring "noauto" option for root device
	[  +0.095814] kauditd_printk_skb: 54 callbacks suppressed
	[  +5.332552] systemd-fstab-generator[3446]: Ignoring "noauto" option for root device
	[  +0.096358] kauditd_printk_skb: 12 callbacks suppressed
	[  +6.277628] kauditd_printk_skb: 88 callbacks suppressed
	
	
	==> etcd [f6d85f228745f04a5dd5a2f2e6364404974369765bf98a1724000807552e8908] <==
	{"level":"info","ts":"2024-08-06T08:40:05.300926Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"78ee1f817cf53edb received MsgVoteResp from 78ee1f817cf53edb at term 2"}
	{"level":"info","ts":"2024-08-06T08:40:05.300957Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"78ee1f817cf53edb became leader at term 2"}
	{"level":"info","ts":"2024-08-06T08:40:05.300983Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 78ee1f817cf53edb elected leader 78ee1f817cf53edb at term 2"}
	{"level":"info","ts":"2024-08-06T08:40:05.305854Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-06T08:40:05.308871Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"78ee1f817cf53edb","local-member-attributes":"{Name:no-preload-703340 ClientURLs:[https://192.168.72.126:2379]}","request-path":"/0/members/78ee1f817cf53edb/attributes","cluster-id":"90d6a9cd2ff29577","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-06T08:40:05.308956Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-06T08:40:05.309497Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-06T08:40:05.311455Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-06T08:40:05.315341Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"90d6a9cd2ff29577","local-member-id":"78ee1f817cf53edb","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-06T08:40:05.317860Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-06T08:40:05.322962Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-06T08:40:05.323493Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-06T08:40:05.318758Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-06T08:40:05.327462Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.126:2379"}
	{"level":"info","ts":"2024-08-06T08:40:05.318817Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-06T08:40:05.368693Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-06T08:50:05.402767Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":725}
	{"level":"info","ts":"2024-08-06T08:50:05.414101Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":725,"took":"10.196278ms","hash":57638866,"current-db-size-bytes":2297856,"current-db-size":"2.3 MB","current-db-size-in-use-bytes":2297856,"current-db-size-in-use":"2.3 MB"}
	{"level":"info","ts":"2024-08-06T08:50:05.414239Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":57638866,"revision":725,"compact-revision":-1}
	{"level":"info","ts":"2024-08-06T08:54:52.367413Z","caller":"traceutil/trace.go:171","msg":"trace[1240117936] linearizableReadLoop","detail":"{readStateIndex:1393; appliedIndex:1392; }","duration":"301.002353ms","start":"2024-08-06T08:54:52.066356Z","end":"2024-08-06T08:54:52.367358Z","steps":["trace[1240117936] 'read index received'  (duration: 300.807896ms)","trace[1240117936] 'applied index is now lower than readState.Index'  (duration: 193.872µs)"],"step_count":2}
	{"level":"info","ts":"2024-08-06T08:54:52.367858Z","caller":"traceutil/trace.go:171","msg":"trace[494343633] transaction","detail":"{read_only:false; response_revision:1201; number_of_response:1; }","duration":"384.806455ms","start":"2024-08-06T08:54:51.983017Z","end":"2024-08-06T08:54:52.367823Z","steps":["trace[494343633] 'process raft request'  (duration: 384.210728ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-06T08:54:52.369014Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-06T08:54:51.982991Z","time spent":"385.292073ms","remote":"127.0.0.1:43416","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1102,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1200 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1029 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-08-06T08:54:52.368068Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"301.628783ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-06T08:54:52.369252Z","caller":"traceutil/trace.go:171","msg":"trace[212267385] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:1201; }","duration":"302.887005ms","start":"2024-08-06T08:54:52.066352Z","end":"2024-08-06T08:54:52.369239Z","steps":["trace[212267385] 'agreement among raft nodes before linearized reading'  (duration: 301.60034ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-06T08:54:52.369294Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-06T08:54:52.066317Z","time spent":"302.964195ms","remote":"127.0.0.1:43434","response type":"/etcdserverpb.KV/Range","request count":0,"request size":76,"response count":0,"response size":28,"request content":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" "}
	
	
	==> kernel <==
	 08:54:56 up 20 min,  0 users,  load average: 0.03, 0.11, 0.15
	Linux no-preload-703340 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [6a6840a59bdd44f77956af229841972da859383e685eea8895d93ba43633026e] <==
	W0806 08:39:57.212757       1 logging.go:55] [core] [Channel #9 SubChannel #10]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 08:39:57.214207       1 logging.go:55] [core] [Channel #49 SubChannel #50]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 08:39:57.215473       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 08:39:57.217895       1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 08:39:57.262702       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 08:39:57.297501       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 08:39:57.306041       1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 08:39:57.326771       1 logging.go:55] [core] [Channel #19 SubChannel #20]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 08:39:57.389782       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 08:39:57.425717       1 logging.go:55] [core] [Channel #55 SubChannel #56]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 08:39:57.434718       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 08:39:57.440272       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 08:39:57.465891       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 08:39:57.466241       1 logging.go:55] [core] [Channel #82 SubChannel #83]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 08:39:57.473340       1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 08:39:57.545736       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 08:39:57.596848       1 logging.go:55] [core] [Channel #97 SubChannel #98]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 08:39:57.604635       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 08:39:57.695061       1 logging.go:55] [core] [Channel #76 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 08:39:57.737879       1 logging.go:55] [core] [Channel #172 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 08:39:57.907355       1 logging.go:55] [core] [Channel #70 SubChannel #71]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 08:39:58.035548       1 logging.go:55] [core] [Channel #181 SubChannel #182]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 08:39:58.043967       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 08:39:58.184470       1 logging.go:55] [core] [Channel #151 SubChannel #152]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 08:39:58.237508       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [db34d02027a316f2243ff3c556e335b6a7a27646bdca17424ca44a19186ce1b1] <==
	E0806 08:50:07.978647       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0806 08:50:07.978506       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0806 08:50:07.979866       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0806 08:50:07.979931       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0806 08:51:07.980730       1 handler_proxy.go:99] no RequestInfo found in the context
	E0806 08:51:07.980838       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0806 08:51:07.980748       1 handler_proxy.go:99] no RequestInfo found in the context
	E0806 08:51:07.980935       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0806 08:51:07.982215       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0806 08:51:07.982237       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0806 08:53:07.982876       1 handler_proxy.go:99] no RequestInfo found in the context
	E0806 08:53:07.983210       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0806 08:53:07.983287       1 handler_proxy.go:99] no RequestInfo found in the context
	E0806 08:53:07.983466       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0806 08:53:07.984661       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0806 08:53:07.984752       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [e08881e193ba0b198a7a54b31142c5ab098dd8d790e6497956fed1247f2b42cf] <==
	E0806 08:49:43.938061       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0806 08:49:44.430785       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0806 08:50:13.945301       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0806 08:50:14.439655       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0806 08:50:34.248997       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-703340"
	E0806 08:50:43.951855       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0806 08:50:44.448342       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0806 08:51:08.943828       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="263.672µs"
	E0806 08:51:13.960022       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0806 08:51:14.457981       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0806 08:51:23.941852       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="130.285µs"
	E0806 08:51:43.966442       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0806 08:51:44.466169       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0806 08:52:13.973210       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0806 08:52:14.474935       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0806 08:52:43.981446       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0806 08:52:44.483530       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0806 08:53:13.989661       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0806 08:53:14.493149       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0806 08:53:43.997766       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0806 08:53:44.503395       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0806 08:54:14.006547       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0806 08:54:14.515933       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0806 08:54:44.013953       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0806 08:54:44.525435       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [a5bb6477358a380e3f783c71af8369ac4a7486080f96c787d216d599bd79305e] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0806 08:40:15.829827       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0806 08:40:15.849265       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.72.126"]
	E0806 08:40:15.849654       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0806 08:40:15.944391       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0806 08:40:15.944450       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0806 08:40:15.944482       1 server_linux.go:169] "Using iptables Proxier"
	I0806 08:40:15.949856       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0806 08:40:15.950130       1 server.go:483] "Version info" version="v1.31.0-rc.0"
	I0806 08:40:15.950141       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0806 08:40:15.951432       1 config.go:197] "Starting service config controller"
	I0806 08:40:15.951459       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0806 08:40:15.951480       1 config.go:104] "Starting endpoint slice config controller"
	I0806 08:40:15.951483       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0806 08:40:15.952034       1 config.go:326] "Starting node config controller"
	I0806 08:40:15.952040       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0806 08:40:16.053747       1 shared_informer.go:320] Caches are synced for node config
	I0806 08:40:16.053776       1 shared_informer.go:320] Caches are synced for service config
	I0806 08:40:16.053806       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [99d3e8306f574ba7867e4cd33f71afc4878be2a8d207bd0b7386fe63a6aa953a] <==
	W0806 08:40:07.850546       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0806 08:40:07.850652       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0806 08:40:07.864422       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0806 08:40:07.864550       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0806 08:40:07.907798       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0806 08:40:07.908273       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0806 08:40:07.990304       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0806 08:40:07.991144       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0806 08:40:08.067741       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0806 08:40:08.068044       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0806 08:40:08.105250       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0806 08:40:08.105302       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0806 08:40:08.117501       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0806 08:40:08.117631       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0806 08:40:08.227644       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0806 08:40:08.227693       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0806 08:40:08.267765       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0806 08:40:08.268503       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0806 08:40:08.284351       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0806 08:40:08.284543       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0806 08:40:08.336833       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0806 08:40:08.336956       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0806 08:40:08.406718       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0806 08:40:08.407105       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0806 08:40:09.813958       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 06 08:53:40 no-preload-703340 kubelet[3338]: E0806 08:53:40.924310    3338 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-tqnjm" podUID="4ab2bdd5-bd70-445a-84ac-6a45bdaf06a7"
	Aug 06 08:53:50 no-preload-703340 kubelet[3338]: E0806 08:53:50.181235    3338 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722934430180847761,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100728,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 06 08:53:50 no-preload-703340 kubelet[3338]: E0806 08:53:50.181297    3338 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722934430180847761,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100728,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 06 08:53:51 no-preload-703340 kubelet[3338]: E0806 08:53:51.927447    3338 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-tqnjm" podUID="4ab2bdd5-bd70-445a-84ac-6a45bdaf06a7"
	Aug 06 08:54:00 no-preload-703340 kubelet[3338]: E0806 08:54:00.182884    3338 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722934440182188798,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100728,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 06 08:54:00 no-preload-703340 kubelet[3338]: E0806 08:54:00.182994    3338 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722934440182188798,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100728,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 06 08:54:04 no-preload-703340 kubelet[3338]: E0806 08:54:04.924380    3338 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-tqnjm" podUID="4ab2bdd5-bd70-445a-84ac-6a45bdaf06a7"
	Aug 06 08:54:09 no-preload-703340 kubelet[3338]: E0806 08:54:09.942177    3338 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 06 08:54:09 no-preload-703340 kubelet[3338]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 06 08:54:09 no-preload-703340 kubelet[3338]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 06 08:54:09 no-preload-703340 kubelet[3338]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 06 08:54:09 no-preload-703340 kubelet[3338]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 06 08:54:10 no-preload-703340 kubelet[3338]: E0806 08:54:10.184699    3338 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722934450184169022,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100728,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 06 08:54:10 no-preload-703340 kubelet[3338]: E0806 08:54:10.184755    3338 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722934450184169022,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100728,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 06 08:54:18 no-preload-703340 kubelet[3338]: E0806 08:54:18.923883    3338 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-tqnjm" podUID="4ab2bdd5-bd70-445a-84ac-6a45bdaf06a7"
	Aug 06 08:54:20 no-preload-703340 kubelet[3338]: E0806 08:54:20.186930    3338 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722934460185982781,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100728,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 06 08:54:20 no-preload-703340 kubelet[3338]: E0806 08:54:20.186956    3338 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722934460185982781,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100728,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 06 08:54:30 no-preload-703340 kubelet[3338]: E0806 08:54:30.188944    3338 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722934470188383791,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100728,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 06 08:54:30 no-preload-703340 kubelet[3338]: E0806 08:54:30.188992    3338 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722934470188383791,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100728,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 06 08:54:33 no-preload-703340 kubelet[3338]: E0806 08:54:33.924148    3338 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-tqnjm" podUID="4ab2bdd5-bd70-445a-84ac-6a45bdaf06a7"
	Aug 06 08:54:40 no-preload-703340 kubelet[3338]: E0806 08:54:40.190343    3338 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722934480190063332,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100728,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 06 08:54:40 no-preload-703340 kubelet[3338]: E0806 08:54:40.190382    3338 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722934480190063332,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100728,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 06 08:54:45 no-preload-703340 kubelet[3338]: E0806 08:54:45.924836    3338 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-tqnjm" podUID="4ab2bdd5-bd70-445a-84ac-6a45bdaf06a7"
	Aug 06 08:54:50 no-preload-703340 kubelet[3338]: E0806 08:54:50.192158    3338 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722934490191690749,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100728,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 06 08:54:50 no-preload-703340 kubelet[3338]: E0806 08:54:50.192224    3338 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722934490191690749,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100728,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [7f6da332003bcdaa2beb8516e28bfe2efc402ea7d4a6a89443466dc4cd736391] <==
	I0806 08:40:17.188365       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0806 08:40:17.217614       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0806 08:40:17.217847       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0806 08:40:17.242948       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0806 08:40:17.245439       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-703340_2c16686c-c303-4327-91b0-e0d7bdf0d7e7!
	I0806 08:40:17.245286       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3469128c-ad60-4354-895e-64f1e00e6347", APIVersion:"v1", ResourceVersion:"438", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-703340_2c16686c-c303-4327-91b0-e0d7bdf0d7e7 became leader
	I0806 08:40:17.346662       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-703340_2c16686c-c303-4327-91b0-e0d7bdf0d7e7!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-703340 -n no-preload-703340
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-703340 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-tqnjm
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-703340 describe pod metrics-server-6867b74b74-tqnjm
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-703340 describe pod metrics-server-6867b74b74-tqnjm: exit status 1 (64.967463ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-tqnjm" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-703340 describe pod metrics-server-6867b74b74-tqnjm: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (328.79s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (145.28s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.158:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.158:8443: connect: connection refused
(last WARNING repeated 92 times in total while the API server at 192.168.61.158:8443 refused connections)
E0806 08:53:23.040222   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/calico-405163/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.158:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.158:8443: connect: connection refused
(last WARNING repeated 22 times in total)
E0806 08:53:45.266021   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/custom-flannel-405163/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.158:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.158:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.158:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.158:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.158:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.158:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.158:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.158:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.158:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.158:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.158:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.158:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.158:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.158:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.158:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.158:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.158:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.158:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.158:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.158:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.158:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.158:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.158:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.158:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.158:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.158:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.158:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.158:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.158:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.158:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.158:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.158:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.158:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.158:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.158:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.158:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.158:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.158:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.158:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.158:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.158:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.158:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.158:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.158:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.158:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.158:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.158:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.158:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.158:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.158:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.158:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.158:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.158:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.158:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.158:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.158:8443: connect: connection refused
E0806 08:54:13.203201   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/addons-505457/client.crt: no such file or directory
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-409903 -n old-k8s-version-409903
start_stop_delete_test.go:287: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-409903 -n old-k8s-version-409903: exit status 2 (233.779563ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:287: status error: exit status 2 (may be ok)
start_stop_delete_test.go:287: "old-k8s-version-409903" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-409903 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-409903 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.525µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-409903 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-409903 -n old-k8s-version-409903
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-409903 -n old-k8s-version-409903: exit status 2 (230.029395ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-409903 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-409903 logs -n 25: (1.788793772s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p flannel-405163 sudo cat                             | flannel-405163               | jenkins | v1.33.1 | 06 Aug 24 08:25 UTC | 06 Aug 24 08:25 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p flannel-405163 sudo                                 | flannel-405163               | jenkins | v1.33.1 | 06 Aug 24 08:25 UTC | 06 Aug 24 08:25 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p flannel-405163 sudo                                 | flannel-405163               | jenkins | v1.33.1 | 06 Aug 24 08:25 UTC | 06 Aug 24 08:25 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p flannel-405163 sudo                                 | flannel-405163               | jenkins | v1.33.1 | 06 Aug 24 08:25 UTC | 06 Aug 24 08:25 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p flannel-405163 sudo find                            | flannel-405163               | jenkins | v1.33.1 | 06 Aug 24 08:25 UTC | 06 Aug 24 08:25 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p flannel-405163 sudo crio                            | flannel-405163               | jenkins | v1.33.1 | 06 Aug 24 08:25 UTC | 06 Aug 24 08:25 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p flannel-405163                                      | flannel-405163               | jenkins | v1.33.1 | 06 Aug 24 08:25 UTC | 06 Aug 24 08:25 UTC |
	| delete  | -p                                                     | disable-driver-mounts-607762 | jenkins | v1.33.1 | 06 Aug 24 08:25 UTC | 06 Aug 24 08:25 UTC |
	|         | disable-driver-mounts-607762                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-990751 | jenkins | v1.33.1 | 06 Aug 24 08:25 UTC | 06 Aug 24 08:27 UTC |
	|         | default-k8s-diff-port-990751                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-703340             | no-preload-703340            | jenkins | v1.33.1 | 06 Aug 24 08:26 UTC | 06 Aug 24 08:26 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-703340                                   | no-preload-703340            | jenkins | v1.33.1 | 06 Aug 24 08:26 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-324808            | embed-certs-324808           | jenkins | v1.33.1 | 06 Aug 24 08:26 UTC | 06 Aug 24 08:26 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-324808                                  | embed-certs-324808           | jenkins | v1.33.1 | 06 Aug 24 08:26 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-990751  | default-k8s-diff-port-990751 | jenkins | v1.33.1 | 06 Aug 24 08:27 UTC | 06 Aug 24 08:27 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-990751 | jenkins | v1.33.1 | 06 Aug 24 08:27 UTC |                     |
	|         | default-k8s-diff-port-990751                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-703340                  | no-preload-703340            | jenkins | v1.33.1 | 06 Aug 24 08:28 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-703340                                   | no-preload-703340            | jenkins | v1.33.1 | 06 Aug 24 08:28 UTC | 06 Aug 24 08:40 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0                      |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-409903        | old-k8s-version-409903       | jenkins | v1.33.1 | 06 Aug 24 08:28 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-324808                 | embed-certs-324808           | jenkins | v1.33.1 | 06 Aug 24 08:29 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-324808                                  | embed-certs-324808           | jenkins | v1.33.1 | 06 Aug 24 08:29 UTC | 06 Aug 24 08:38 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-990751       | default-k8s-diff-port-990751 | jenkins | v1.33.1 | 06 Aug 24 08:30 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-990751 | jenkins | v1.33.1 | 06 Aug 24 08:30 UTC | 06 Aug 24 08:38 UTC |
	|         | default-k8s-diff-port-990751                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-409903                              | old-k8s-version-409903       | jenkins | v1.33.1 | 06 Aug 24 08:30 UTC | 06 Aug 24 08:30 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-409903             | old-k8s-version-409903       | jenkins | v1.33.1 | 06 Aug 24 08:30 UTC | 06 Aug 24 08:30 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-409903                              | old-k8s-version-409903       | jenkins | v1.33.1 | 06 Aug 24 08:30 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/06 08:30:58
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0806 08:30:58.757517   79927 out.go:291] Setting OutFile to fd 1 ...
	I0806 08:30:58.757772   79927 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 08:30:58.757780   79927 out.go:304] Setting ErrFile to fd 2...
	I0806 08:30:58.757784   79927 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 08:30:58.757992   79927 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19370-13057/.minikube/bin
	I0806 08:30:58.758527   79927 out.go:298] Setting JSON to false
	I0806 08:30:58.759405   79927 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":8005,"bootTime":1722925054,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0806 08:30:58.759463   79927 start.go:139] virtualization: kvm guest
	I0806 08:30:58.762200   79927 out.go:177] * [old-k8s-version-409903] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0806 08:30:58.763461   79927 out.go:177]   - MINIKUBE_LOCATION=19370
	I0806 08:30:58.763514   79927 notify.go:220] Checking for updates...
	I0806 08:30:58.766017   79927 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0806 08:30:58.767180   79927 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19370-13057/kubeconfig
	I0806 08:30:58.768270   79927 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19370-13057/.minikube
	I0806 08:30:58.769303   79927 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0806 08:30:58.770385   79927 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0806 08:30:58.771931   79927 config.go:182] Loaded profile config "old-k8s-version-409903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0806 08:30:58.772319   79927 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:30:58.772407   79927 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:30:58.787845   79927 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39563
	I0806 08:30:58.788270   79927 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:30:58.788863   79927 main.go:141] libmachine: Using API Version  1
	I0806 08:30:58.788879   79927 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:30:58.789169   79927 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:30:58.789314   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .DriverName
	I0806 08:30:58.791190   79927 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0806 08:30:58.792487   79927 driver.go:392] Setting default libvirt URI to qemu:///system
	I0806 08:30:58.792776   79927 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:30:58.792811   79927 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:30:58.807347   79927 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33093
	I0806 08:30:58.807788   79927 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:30:58.808260   79927 main.go:141] libmachine: Using API Version  1
	I0806 08:30:58.808276   79927 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:30:58.808633   79927 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:30:58.808849   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .DriverName
	I0806 08:30:58.845305   79927 out.go:177] * Using the kvm2 driver based on existing profile
	I0806 08:30:58.846560   79927 start.go:297] selected driver: kvm2
	I0806 08:30:58.846580   79927 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-409903 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-409903 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.158 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 08:30:58.846684   79927 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0806 08:30:58.847335   79927 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 08:30:58.847395   79927 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19370-13057/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0806 08:30:58.862013   79927 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0806 08:30:58.862398   79927 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0806 08:30:58.862431   79927 cni.go:84] Creating CNI manager for ""
	I0806 08:30:58.862441   79927 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0806 08:30:58.862493   79927 start.go:340] cluster config:
	{Name:old-k8s-version-409903 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-409903 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.158 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 08:30:58.862609   79927 iso.go:125] acquiring lock: {Name:mke58d71de28babe305c5eb0c31c274e9713bb3f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 08:30:58.864484   79927 out.go:177] * Starting "old-k8s-version-409903" primary control-plane node in "old-k8s-version-409903" cluster
	I0806 08:30:58.865763   79927 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0806 08:30:58.865796   79927 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19370-13057/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0806 08:30:58.865803   79927 cache.go:56] Caching tarball of preloaded images
	I0806 08:30:58.865912   79927 preload.go:172] Found /home/jenkins/minikube-integration/19370-13057/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0806 08:30:58.865928   79927 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0806 08:30:58.866013   79927 profile.go:143] Saving config to /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/old-k8s-version-409903/config.json ...
	I0806 08:30:58.866228   79927 start.go:360] acquireMachinesLock for old-k8s-version-409903: {Name:mkc602ac9c8cc3360dcc0fcb8104303837aaab61 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0806 08:31:01.700664   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:31:04.772608   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:31:10.852634   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:31:13.924605   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:31:20.004661   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:31:23.076602   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:31:29.156621   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:31:32.228616   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:31:38.308684   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:31:41.380622   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:31:47.460632   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:31:50.532638   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:31:56.612610   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:31:59.684696   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:32:05.764642   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:32:08.836560   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:32:14.916653   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:32:17.988720   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:32:24.068597   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:32:27.140686   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:32:33.220630   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:32:36.292609   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:32:42.372654   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:32:45.444619   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:32:51.524637   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:32:54.596677   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:33:00.676664   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:33:03.748647   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:33:09.828643   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:33:12.900674   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:33:18.980642   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:33:22.052616   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:33:28.132639   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:33:31.204678   78922 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.126:22: connect: no route to host
	I0806 08:33:34.209491   79219 start.go:364] duration metric: took 4m19.788239383s to acquireMachinesLock for "embed-certs-324808"
	I0806 08:33:34.209554   79219 start.go:96] Skipping create...Using existing machine configuration
	I0806 08:33:34.209563   79219 fix.go:54] fixHost starting: 
	I0806 08:33:34.209914   79219 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:33:34.209945   79219 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:33:34.225180   79219 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44037
	I0806 08:33:34.225607   79219 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:33:34.226014   79219 main.go:141] libmachine: Using API Version  1
	I0806 08:33:34.226033   79219 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:33:34.226323   79219 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:33:34.226522   79219 main.go:141] libmachine: (embed-certs-324808) Calling .DriverName
	I0806 08:33:34.226684   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetState
	I0806 08:33:34.228172   79219 fix.go:112] recreateIfNeeded on embed-certs-324808: state=Stopped err=<nil>
	I0806 08:33:34.228200   79219 main.go:141] libmachine: (embed-certs-324808) Calling .DriverName
	W0806 08:33:34.228355   79219 fix.go:138] unexpected machine state, will restart: <nil>
	I0806 08:33:34.231261   79219 out.go:177] * Restarting existing kvm2 VM for "embed-certs-324808" ...
	I0806 08:33:34.232543   79219 main.go:141] libmachine: (embed-certs-324808) Calling .Start
	I0806 08:33:34.232728   79219 main.go:141] libmachine: (embed-certs-324808) Ensuring networks are active...
	I0806 08:33:34.233475   79219 main.go:141] libmachine: (embed-certs-324808) Ensuring network default is active
	I0806 08:33:34.233819   79219 main.go:141] libmachine: (embed-certs-324808) Ensuring network mk-embed-certs-324808 is active
	I0806 08:33:34.234291   79219 main.go:141] libmachine: (embed-certs-324808) Getting domain xml...
	I0806 08:33:34.235129   79219 main.go:141] libmachine: (embed-certs-324808) Creating domain...
	I0806 08:33:34.206591   78922 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0806 08:33:34.206636   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetMachineName
	I0806 08:33:34.206950   78922 buildroot.go:166] provisioning hostname "no-preload-703340"
	I0806 08:33:34.206975   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetMachineName
	I0806 08:33:34.207187   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHHostname
	I0806 08:33:34.209379   78922 machine.go:97] duration metric: took 4m37.416826471s to provisionDockerMachine
	I0806 08:33:34.209416   78922 fix.go:56] duration metric: took 4m37.443014145s for fixHost
	I0806 08:33:34.209424   78922 start.go:83] releasing machines lock for "no-preload-703340", held for 4m37.443037097s
	W0806 08:33:34.209442   78922 start.go:714] error starting host: provision: host is not running
	W0806 08:33:34.209538   78922 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0806 08:33:34.209545   78922 start.go:729] Will try again in 5 seconds ...
	I0806 08:33:35.434516   79219 main.go:141] libmachine: (embed-certs-324808) Waiting to get IP...
	I0806 08:33:35.435546   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:35.435973   79219 main.go:141] libmachine: (embed-certs-324808) DBG | unable to find current IP address of domain embed-certs-324808 in network mk-embed-certs-324808
	I0806 08:33:35.436045   79219 main.go:141] libmachine: (embed-certs-324808) DBG | I0806 08:33:35.435966   80472 retry.go:31] will retry after 190.247547ms: waiting for machine to come up
	I0806 08:33:35.627362   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:35.627768   79219 main.go:141] libmachine: (embed-certs-324808) DBG | unable to find current IP address of domain embed-certs-324808 in network mk-embed-certs-324808
	I0806 08:33:35.627803   79219 main.go:141] libmachine: (embed-certs-324808) DBG | I0806 08:33:35.627742   80472 retry.go:31] will retry after 306.927129ms: waiting for machine to come up
	I0806 08:33:35.936242   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:35.936714   79219 main.go:141] libmachine: (embed-certs-324808) DBG | unable to find current IP address of domain embed-certs-324808 in network mk-embed-certs-324808
	I0806 08:33:35.936738   79219 main.go:141] libmachine: (embed-certs-324808) DBG | I0806 08:33:35.936661   80472 retry.go:31] will retry after 363.399025ms: waiting for machine to come up
	I0806 08:33:36.301454   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:36.301934   79219 main.go:141] libmachine: (embed-certs-324808) DBG | unable to find current IP address of domain embed-certs-324808 in network mk-embed-certs-324808
	I0806 08:33:36.301967   79219 main.go:141] libmachine: (embed-certs-324808) DBG | I0806 08:33:36.301879   80472 retry.go:31] will retry after 495.07719ms: waiting for machine to come up
	I0806 08:33:36.798489   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:36.798930   79219 main.go:141] libmachine: (embed-certs-324808) DBG | unable to find current IP address of domain embed-certs-324808 in network mk-embed-certs-324808
	I0806 08:33:36.798958   79219 main.go:141] libmachine: (embed-certs-324808) DBG | I0806 08:33:36.798870   80472 retry.go:31] will retry after 755.286107ms: waiting for machine to come up
	I0806 08:33:37.555799   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:37.556155   79219 main.go:141] libmachine: (embed-certs-324808) DBG | unable to find current IP address of domain embed-certs-324808 in network mk-embed-certs-324808
	I0806 08:33:37.556187   79219 main.go:141] libmachine: (embed-certs-324808) DBG | I0806 08:33:37.556094   80472 retry.go:31] will retry after 600.530399ms: waiting for machine to come up
	I0806 08:33:38.157817   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:38.158303   79219 main.go:141] libmachine: (embed-certs-324808) DBG | unable to find current IP address of domain embed-certs-324808 in network mk-embed-certs-324808
	I0806 08:33:38.158334   79219 main.go:141] libmachine: (embed-certs-324808) DBG | I0806 08:33:38.158241   80472 retry.go:31] will retry after 943.262354ms: waiting for machine to come up
	I0806 08:33:39.103226   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:39.103615   79219 main.go:141] libmachine: (embed-certs-324808) DBG | unable to find current IP address of domain embed-certs-324808 in network mk-embed-certs-324808
	I0806 08:33:39.103645   79219 main.go:141] libmachine: (embed-certs-324808) DBG | I0806 08:33:39.103563   80472 retry.go:31] will retry after 1.100862399s: waiting for machine to come up
	I0806 08:33:39.211204   78922 start.go:360] acquireMachinesLock for no-preload-703340: {Name:mkc602ac9c8cc3360dcc0fcb8104303837aaab61 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0806 08:33:40.205796   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:40.206130   79219 main.go:141] libmachine: (embed-certs-324808) DBG | unable to find current IP address of domain embed-certs-324808 in network mk-embed-certs-324808
	I0806 08:33:40.206160   79219 main.go:141] libmachine: (embed-certs-324808) DBG | I0806 08:33:40.206077   80472 retry.go:31] will retry after 1.405550913s: waiting for machine to come up
	I0806 08:33:41.613629   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:41.614060   79219 main.go:141] libmachine: (embed-certs-324808) DBG | unable to find current IP address of domain embed-certs-324808 in network mk-embed-certs-324808
	I0806 08:33:41.614092   79219 main.go:141] libmachine: (embed-certs-324808) DBG | I0806 08:33:41.614001   80472 retry.go:31] will retry after 2.190867783s: waiting for machine to come up
	I0806 08:33:43.807417   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:43.808017   79219 main.go:141] libmachine: (embed-certs-324808) DBG | unable to find current IP address of domain embed-certs-324808 in network mk-embed-certs-324808
	I0806 08:33:43.808049   79219 main.go:141] libmachine: (embed-certs-324808) DBG | I0806 08:33:43.807958   80472 retry.go:31] will retry after 2.233558066s: waiting for machine to come up
	I0806 08:33:46.043427   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:46.043803   79219 main.go:141] libmachine: (embed-certs-324808) DBG | unable to find current IP address of domain embed-certs-324808 in network mk-embed-certs-324808
	I0806 08:33:46.043830   79219 main.go:141] libmachine: (embed-certs-324808) DBG | I0806 08:33:46.043754   80472 retry.go:31] will retry after 3.171710196s: waiting for machine to come up
	I0806 08:33:49.218967   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:49.219319   79219 main.go:141] libmachine: (embed-certs-324808) DBG | unable to find current IP address of domain embed-certs-324808 in network mk-embed-certs-324808
	I0806 08:33:49.219349   79219 main.go:141] libmachine: (embed-certs-324808) DBG | I0806 08:33:49.219266   80472 retry.go:31] will retry after 4.141473633s: waiting for machine to come up
	I0806 08:33:53.361887   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:53.362300   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has current primary IP address 192.168.39.190 and MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:53.362315   79219 main.go:141] libmachine: (embed-certs-324808) Found IP for machine: 192.168.39.190
	I0806 08:33:53.362324   79219 main.go:141] libmachine: (embed-certs-324808) Reserving static IP address...
	I0806 08:33:53.362762   79219 main.go:141] libmachine: (embed-certs-324808) Reserved static IP address: 192.168.39.190
	I0806 08:33:53.362812   79219 main.go:141] libmachine: (embed-certs-324808) DBG | found host DHCP lease matching {name: "embed-certs-324808", mac: "52:54:00:b9:df:8c", ip: "192.168.39.190"} in network mk-embed-certs-324808: {Iface:virbr2 ExpiryTime:2024-08-06 09:33:45 +0000 UTC Type:0 Mac:52:54:00:b9:df:8c Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-324808 Clientid:01:52:54:00:b9:df:8c}
	I0806 08:33:53.362826   79219 main.go:141] libmachine: (embed-certs-324808) Waiting for SSH to be available...
	I0806 08:33:53.362858   79219 main.go:141] libmachine: (embed-certs-324808) DBG | skip adding static IP to network mk-embed-certs-324808 - found existing host DHCP lease matching {name: "embed-certs-324808", mac: "52:54:00:b9:df:8c", ip: "192.168.39.190"}
	I0806 08:33:53.362871   79219 main.go:141] libmachine: (embed-certs-324808) DBG | Getting to WaitForSSH function...
	I0806 08:33:53.364761   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:53.365170   79219 main.go:141] libmachine: (embed-certs-324808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:df:8c", ip: ""} in network mk-embed-certs-324808: {Iface:virbr2 ExpiryTime:2024-08-06 09:33:45 +0000 UTC Type:0 Mac:52:54:00:b9:df:8c Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-324808 Clientid:01:52:54:00:b9:df:8c}
	I0806 08:33:53.365204   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined IP address 192.168.39.190 and MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:53.365301   79219 main.go:141] libmachine: (embed-certs-324808) DBG | Using SSH client type: external
	I0806 08:33:53.365333   79219 main.go:141] libmachine: (embed-certs-324808) DBG | Using SSH private key: /home/jenkins/minikube-integration/19370-13057/.minikube/machines/embed-certs-324808/id_rsa (-rw-------)
	I0806 08:33:53.365366   79219 main.go:141] libmachine: (embed-certs-324808) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.190 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19370-13057/.minikube/machines/embed-certs-324808/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0806 08:33:53.365378   79219 main.go:141] libmachine: (embed-certs-324808) DBG | About to run SSH command:
	I0806 08:33:53.365386   79219 main.go:141] libmachine: (embed-certs-324808) DBG | exit 0
	I0806 08:33:53.492385   79219 main.go:141] libmachine: (embed-certs-324808) DBG | SSH cmd err, output: <nil>: 
	I0806 08:33:53.492739   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetConfigRaw
	I0806 08:33:53.493342   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetIP
	I0806 08:33:53.495371   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:53.495663   79219 main.go:141] libmachine: (embed-certs-324808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:df:8c", ip: ""} in network mk-embed-certs-324808: {Iface:virbr2 ExpiryTime:2024-08-06 09:33:45 +0000 UTC Type:0 Mac:52:54:00:b9:df:8c Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-324808 Clientid:01:52:54:00:b9:df:8c}
	I0806 08:33:53.495693   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined IP address 192.168.39.190 and MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:53.495982   79219 profile.go:143] Saving config to /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/embed-certs-324808/config.json ...
	I0806 08:33:53.496215   79219 machine.go:94] provisionDockerMachine start ...
	I0806 08:33:53.496238   79219 main.go:141] libmachine: (embed-certs-324808) Calling .DriverName
	I0806 08:33:53.496460   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHHostname
	I0806 08:33:53.498625   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:53.498924   79219 main.go:141] libmachine: (embed-certs-324808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:df:8c", ip: ""} in network mk-embed-certs-324808: {Iface:virbr2 ExpiryTime:2024-08-06 09:33:45 +0000 UTC Type:0 Mac:52:54:00:b9:df:8c Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-324808 Clientid:01:52:54:00:b9:df:8c}
	I0806 08:33:53.498945   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined IP address 192.168.39.190 and MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:53.499096   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHPort
	I0806 08:33:53.499265   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHKeyPath
	I0806 08:33:53.499394   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHKeyPath
	I0806 08:33:53.499593   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHUsername
	I0806 08:33:53.499815   79219 main.go:141] libmachine: Using SSH client type: native
	I0806 08:33:53.500050   79219 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.190 22 <nil> <nil>}
	I0806 08:33:53.500063   79219 main.go:141] libmachine: About to run SSH command:
	hostname
	I0806 08:33:53.612698   79219 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0806 08:33:53.612725   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetMachineName
	I0806 08:33:53.612979   79219 buildroot.go:166] provisioning hostname "embed-certs-324808"
	I0806 08:33:53.613002   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetMachineName
	I0806 08:33:53.613164   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHHostname
	I0806 08:33:53.615731   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:53.616123   79219 main.go:141] libmachine: (embed-certs-324808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:df:8c", ip: ""} in network mk-embed-certs-324808: {Iface:virbr2 ExpiryTime:2024-08-06 09:33:45 +0000 UTC Type:0 Mac:52:54:00:b9:df:8c Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-324808 Clientid:01:52:54:00:b9:df:8c}
	I0806 08:33:53.616157   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined IP address 192.168.39.190 and MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:53.616316   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHPort
	I0806 08:33:53.616530   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHKeyPath
	I0806 08:33:53.616676   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHKeyPath
	I0806 08:33:53.616810   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHUsername
	I0806 08:33:53.616961   79219 main.go:141] libmachine: Using SSH client type: native
	I0806 08:33:53.617124   79219 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.190 22 <nil> <nil>}
	I0806 08:33:53.617138   79219 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-324808 && echo "embed-certs-324808" | sudo tee /etc/hostname
	I0806 08:33:53.742677   79219 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-324808
	
	I0806 08:33:53.742722   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHHostname
	I0806 08:33:53.745670   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:53.745984   79219 main.go:141] libmachine: (embed-certs-324808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:df:8c", ip: ""} in network mk-embed-certs-324808: {Iface:virbr2 ExpiryTime:2024-08-06 09:33:45 +0000 UTC Type:0 Mac:52:54:00:b9:df:8c Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-324808 Clientid:01:52:54:00:b9:df:8c}
	I0806 08:33:53.746015   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined IP address 192.168.39.190 and MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:53.746195   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHPort
	I0806 08:33:53.746380   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHKeyPath
	I0806 08:33:53.746545   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHKeyPath
	I0806 08:33:53.746676   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHUsername
	I0806 08:33:53.746867   79219 main.go:141] libmachine: Using SSH client type: native
	I0806 08:33:53.747033   79219 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.190 22 <nil> <nil>}
	I0806 08:33:53.747048   79219 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-324808' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-324808/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-324808' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0806 08:33:53.869596   79219 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0806 08:33:53.869629   79219 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19370-13057/.minikube CaCertPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19370-13057/.minikube}
	I0806 08:33:53.869652   79219 buildroot.go:174] setting up certificates
	I0806 08:33:53.869663   79219 provision.go:84] configureAuth start
	I0806 08:33:53.869680   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetMachineName
	I0806 08:33:53.869953   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetIP
	I0806 08:33:53.872701   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:53.873071   79219 main.go:141] libmachine: (embed-certs-324808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:df:8c", ip: ""} in network mk-embed-certs-324808: {Iface:virbr2 ExpiryTime:2024-08-06 09:33:45 +0000 UTC Type:0 Mac:52:54:00:b9:df:8c Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-324808 Clientid:01:52:54:00:b9:df:8c}
	I0806 08:33:53.873105   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined IP address 192.168.39.190 and MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:53.873246   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHHostname
	I0806 08:33:53.875521   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:53.875928   79219 main.go:141] libmachine: (embed-certs-324808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:df:8c", ip: ""} in network mk-embed-certs-324808: {Iface:virbr2 ExpiryTime:2024-08-06 09:33:45 +0000 UTC Type:0 Mac:52:54:00:b9:df:8c Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-324808 Clientid:01:52:54:00:b9:df:8c}
	I0806 08:33:53.875959   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined IP address 192.168.39.190 and MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:53.876009   79219 provision.go:143] copyHostCerts
	I0806 08:33:53.876077   79219 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-13057/.minikube/cert.pem, removing ...
	I0806 08:33:53.876089   79219 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-13057/.minikube/cert.pem
	I0806 08:33:53.876170   79219 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19370-13057/.minikube/cert.pem (1123 bytes)
	I0806 08:33:53.876287   79219 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-13057/.minikube/key.pem, removing ...
	I0806 08:33:53.876297   79219 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-13057/.minikube/key.pem
	I0806 08:33:53.876337   79219 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19370-13057/.minikube/key.pem (1675 bytes)
	I0806 08:33:53.876450   79219 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-13057/.minikube/ca.pem, removing ...
	I0806 08:33:53.876460   79219 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-13057/.minikube/ca.pem
	I0806 08:33:53.876499   79219 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19370-13057/.minikube/ca.pem (1078 bytes)
	I0806 08:33:53.876578   79219 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19370-13057/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca-key.pem org=jenkins.embed-certs-324808 san=[127.0.0.1 192.168.39.190 embed-certs-324808 localhost minikube]
	I0806 08:33:53.961275   79219 provision.go:177] copyRemoteCerts
	I0806 08:33:53.961344   79219 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0806 08:33:53.961375   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHHostname
	I0806 08:33:53.964004   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:53.964356   79219 main.go:141] libmachine: (embed-certs-324808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:df:8c", ip: ""} in network mk-embed-certs-324808: {Iface:virbr2 ExpiryTime:2024-08-06 09:33:45 +0000 UTC Type:0 Mac:52:54:00:b9:df:8c Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-324808 Clientid:01:52:54:00:b9:df:8c}
	I0806 08:33:53.964404   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined IP address 192.168.39.190 and MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:53.964596   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHPort
	I0806 08:33:53.964829   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHKeyPath
	I0806 08:33:53.964983   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHUsername
	I0806 08:33:53.965115   79219 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/embed-certs-324808/id_rsa Username:docker}
	I0806 08:33:54.050335   79219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0806 08:33:54.074977   79219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0806 08:33:54.100201   79219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0806 08:33:54.123356   79219 provision.go:87] duration metric: took 253.6775ms to configureAuth
	I0806 08:33:54.123386   79219 buildroot.go:189] setting minikube options for container-runtime
	I0806 08:33:54.123556   79219 config.go:182] Loaded profile config "embed-certs-324808": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0806 08:33:54.123634   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHHostname
	I0806 08:33:54.126279   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:54.126721   79219 main.go:141] libmachine: (embed-certs-324808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:df:8c", ip: ""} in network mk-embed-certs-324808: {Iface:virbr2 ExpiryTime:2024-08-06 09:33:45 +0000 UTC Type:0 Mac:52:54:00:b9:df:8c Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-324808 Clientid:01:52:54:00:b9:df:8c}
	I0806 08:33:54.126746   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined IP address 192.168.39.190 and MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:54.126941   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHPort
	I0806 08:33:54.127143   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHKeyPath
	I0806 08:33:54.127309   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHKeyPath
	I0806 08:33:54.127444   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHUsername
	I0806 08:33:54.127611   79219 main.go:141] libmachine: Using SSH client type: native
	I0806 08:33:54.127785   79219 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.190 22 <nil> <nil>}
	I0806 08:33:54.127800   79219 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0806 08:33:54.653395   79614 start.go:364] duration metric: took 3m35.719137455s to acquireMachinesLock for "default-k8s-diff-port-990751"
	I0806 08:33:54.653444   79614 start.go:96] Skipping create...Using existing machine configuration
	I0806 08:33:54.653452   79614 fix.go:54] fixHost starting: 
	I0806 08:33:54.653812   79614 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:33:54.653844   79614 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:33:54.673686   79614 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35867
	I0806 08:33:54.674123   79614 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:33:54.674581   79614 main.go:141] libmachine: Using API Version  1
	I0806 08:33:54.674604   79614 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:33:54.674950   79614 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:33:54.675108   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .DriverName
	I0806 08:33:54.675246   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetState
	I0806 08:33:54.676908   79614 fix.go:112] recreateIfNeeded on default-k8s-diff-port-990751: state=Stopped err=<nil>
	I0806 08:33:54.676939   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .DriverName
	W0806 08:33:54.677097   79614 fix.go:138] unexpected machine state, will restart: <nil>
	I0806 08:33:54.678912   79614 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-990751" ...
	I0806 08:33:54.403758   79219 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0806 08:33:54.403785   79219 machine.go:97] duration metric: took 907.554493ms to provisionDockerMachine
	I0806 08:33:54.403799   79219 start.go:293] postStartSetup for "embed-certs-324808" (driver="kvm2")
	I0806 08:33:54.403814   79219 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0806 08:33:54.403854   79219 main.go:141] libmachine: (embed-certs-324808) Calling .DriverName
	I0806 08:33:54.404139   79219 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0806 08:33:54.404168   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHHostname
	I0806 08:33:54.406768   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:54.407114   79219 main.go:141] libmachine: (embed-certs-324808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:df:8c", ip: ""} in network mk-embed-certs-324808: {Iface:virbr2 ExpiryTime:2024-08-06 09:33:45 +0000 UTC Type:0 Mac:52:54:00:b9:df:8c Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-324808 Clientid:01:52:54:00:b9:df:8c}
	I0806 08:33:54.407142   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined IP address 192.168.39.190 and MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:54.407328   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHPort
	I0806 08:33:54.407510   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHKeyPath
	I0806 08:33:54.407647   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHUsername
	I0806 08:33:54.407765   79219 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/embed-certs-324808/id_rsa Username:docker}
	I0806 08:33:54.495482   79219 ssh_runner.go:195] Run: cat /etc/os-release
	I0806 08:33:54.499691   79219 info.go:137] Remote host: Buildroot 2023.02.9
	I0806 08:33:54.499725   79219 filesync.go:126] Scanning /home/jenkins/minikube-integration/19370-13057/.minikube/addons for local assets ...
	I0806 08:33:54.499915   79219 filesync.go:126] Scanning /home/jenkins/minikube-integration/19370-13057/.minikube/files for local assets ...
	I0806 08:33:54.500104   79219 filesync.go:149] local asset: /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem -> 202462.pem in /etc/ssl/certs
	I0806 08:33:54.500243   79219 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0806 08:33:54.509690   79219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem --> /etc/ssl/certs/202462.pem (1708 bytes)
	I0806 08:33:54.534233   79219 start.go:296] duration metric: took 130.420444ms for postStartSetup
	I0806 08:33:54.534282   79219 fix.go:56] duration metric: took 20.324719602s for fixHost
	I0806 08:33:54.534307   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHHostname
	I0806 08:33:54.536871   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:54.537202   79219 main.go:141] libmachine: (embed-certs-324808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:df:8c", ip: ""} in network mk-embed-certs-324808: {Iface:virbr2 ExpiryTime:2024-08-06 09:33:45 +0000 UTC Type:0 Mac:52:54:00:b9:df:8c Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-324808 Clientid:01:52:54:00:b9:df:8c}
	I0806 08:33:54.537234   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined IP address 192.168.39.190 and MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:54.537393   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHPort
	I0806 08:33:54.537562   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHKeyPath
	I0806 08:33:54.537746   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHKeyPath
	I0806 08:33:54.537911   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHUsername
	I0806 08:33:54.538105   79219 main.go:141] libmachine: Using SSH client type: native
	I0806 08:33:54.538285   79219 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.190 22 <nil> <nil>}
	I0806 08:33:54.538295   79219 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0806 08:33:54.653264   79219 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722933234.630593565
	
	I0806 08:33:54.653288   79219 fix.go:216] guest clock: 1722933234.630593565
	I0806 08:33:54.653298   79219 fix.go:229] Guest: 2024-08-06 08:33:54.630593565 +0000 UTC Remote: 2024-08-06 08:33:54.534287484 +0000 UTC m=+280.251105524 (delta=96.306081ms)
	I0806 08:33:54.653327   79219 fix.go:200] guest clock delta is within tolerance: 96.306081ms
	I0806 08:33:54.653337   79219 start.go:83] releasing machines lock for "embed-certs-324808", held for 20.443802529s
	I0806 08:33:54.653361   79219 main.go:141] libmachine: (embed-certs-324808) Calling .DriverName
	I0806 08:33:54.653640   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetIP
	I0806 08:33:54.656332   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:54.656741   79219 main.go:141] libmachine: (embed-certs-324808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:df:8c", ip: ""} in network mk-embed-certs-324808: {Iface:virbr2 ExpiryTime:2024-08-06 09:33:45 +0000 UTC Type:0 Mac:52:54:00:b9:df:8c Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-324808 Clientid:01:52:54:00:b9:df:8c}
	I0806 08:33:54.656766   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined IP address 192.168.39.190 and MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:54.656933   79219 main.go:141] libmachine: (embed-certs-324808) Calling .DriverName
	I0806 08:33:54.657443   79219 main.go:141] libmachine: (embed-certs-324808) Calling .DriverName
	I0806 08:33:54.657623   79219 main.go:141] libmachine: (embed-certs-324808) Calling .DriverName
	I0806 08:33:54.657705   79219 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0806 08:33:54.657748   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHHostname
	I0806 08:33:54.657857   79219 ssh_runner.go:195] Run: cat /version.json
	I0806 08:33:54.657877   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHHostname
	I0806 08:33:54.660302   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:54.660482   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:54.660694   79219 main.go:141] libmachine: (embed-certs-324808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:df:8c", ip: ""} in network mk-embed-certs-324808: {Iface:virbr2 ExpiryTime:2024-08-06 09:33:45 +0000 UTC Type:0 Mac:52:54:00:b9:df:8c Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-324808 Clientid:01:52:54:00:b9:df:8c}
	I0806 08:33:54.660729   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined IP address 192.168.39.190 and MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:54.660823   79219 main.go:141] libmachine: (embed-certs-324808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:df:8c", ip: ""} in network mk-embed-certs-324808: {Iface:virbr2 ExpiryTime:2024-08-06 09:33:45 +0000 UTC Type:0 Mac:52:54:00:b9:df:8c Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-324808 Clientid:01:52:54:00:b9:df:8c}
	I0806 08:33:54.660831   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHPort
	I0806 08:33:54.660844   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined IP address 192.168.39.190 and MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:54.661011   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHPort
	I0806 08:33:54.661094   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHKeyPath
	I0806 08:33:54.661165   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHKeyPath
	I0806 08:33:54.661223   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHUsername
	I0806 08:33:54.661280   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHUsername
	I0806 08:33:54.661338   79219 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/embed-certs-324808/id_rsa Username:docker}
	I0806 08:33:54.661386   79219 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/embed-certs-324808/id_rsa Username:docker}
	I0806 08:33:54.770935   79219 ssh_runner.go:195] Run: systemctl --version
	I0806 08:33:54.777166   79219 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0806 08:33:54.920252   79219 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0806 08:33:54.927740   79219 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0806 08:33:54.927829   79219 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0806 08:33:54.949691   79219 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0806 08:33:54.949720   79219 start.go:495] detecting cgroup driver to use...
	I0806 08:33:54.949794   79219 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0806 08:33:54.967575   79219 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0806 08:33:54.982019   79219 docker.go:217] disabling cri-docker service (if available) ...
	I0806 08:33:54.982094   79219 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0806 08:33:55.001629   79219 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0806 08:33:55.018332   79219 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0806 08:33:55.139207   79219 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0806 08:33:55.302225   79219 docker.go:233] disabling docker service ...
	I0806 08:33:55.302300   79219 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0806 08:33:55.321470   79219 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0806 08:33:55.334925   79219 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0806 08:33:55.456388   79219 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0806 08:33:55.579725   79219 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0806 08:33:55.594362   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0806 08:33:55.613084   79219 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0806 08:33:55.613165   79219 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:33:55.623414   79219 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0806 08:33:55.623487   79219 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:33:55.633837   79219 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:33:55.645893   79219 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:33:55.658623   79219 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0806 08:33:55.670016   79219 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:33:55.681227   79219 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:33:55.700047   79219 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:33:55.711409   79219 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0806 08:33:55.721076   79219 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0806 08:33:55.721138   79219 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0806 08:33:55.734470   79219 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0806 08:33:55.750919   79219 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 08:33:55.880306   79219 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0806 08:33:56.031323   79219 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0806 08:33:56.031401   79219 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0806 08:33:56.036456   79219 start.go:563] Will wait 60s for crictl version
	I0806 08:33:56.036522   79219 ssh_runner.go:195] Run: which crictl
	I0806 08:33:56.040542   79219 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0806 08:33:56.081166   79219 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0806 08:33:56.081258   79219 ssh_runner.go:195] Run: crio --version
	I0806 08:33:56.109963   79219 ssh_runner.go:195] Run: crio --version
	I0806 08:33:56.141118   79219 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0806 08:33:54.680267   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .Start
	I0806 08:33:54.680448   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Ensuring networks are active...
	I0806 08:33:54.681411   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Ensuring network default is active
	I0806 08:33:54.681778   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Ensuring network mk-default-k8s-diff-port-990751 is active
	I0806 08:33:54.682205   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Getting domain xml...
	I0806 08:33:54.683028   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Creating domain...
	I0806 08:33:55.964313   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Waiting to get IP...
	I0806 08:33:55.965483   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:33:55.965965   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | unable to find current IP address of domain default-k8s-diff-port-990751 in network mk-default-k8s-diff-port-990751
	I0806 08:33:55.966053   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | I0806 08:33:55.965953   80608 retry.go:31] will retry after 294.177981ms: waiting for machine to come up
	I0806 08:33:56.261404   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:33:56.262032   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | unable to find current IP address of domain default-k8s-diff-port-990751 in network mk-default-k8s-diff-port-990751
	I0806 08:33:56.262059   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | I0806 08:33:56.261969   80608 retry.go:31] will retry after 375.14393ms: waiting for machine to come up
	I0806 08:33:56.638714   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:33:56.639150   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | unable to find current IP address of domain default-k8s-diff-port-990751 in network mk-default-k8s-diff-port-990751
	I0806 08:33:56.639178   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | I0806 08:33:56.639118   80608 retry.go:31] will retry after 439.582122ms: waiting for machine to come up
	I0806 08:33:57.080785   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:33:57.081335   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | unable to find current IP address of domain default-k8s-diff-port-990751 in network mk-default-k8s-diff-port-990751
	I0806 08:33:57.081373   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | I0806 08:33:57.081299   80608 retry.go:31] will retry after 523.875054ms: waiting for machine to come up
	I0806 08:33:57.606731   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:33:57.607220   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | unable to find current IP address of domain default-k8s-diff-port-990751 in network mk-default-k8s-diff-port-990751
	I0806 08:33:57.607250   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | I0806 08:33:57.607178   80608 retry.go:31] will retry after 727.363483ms: waiting for machine to come up
	I0806 08:33:58.336352   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:33:58.336852   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | unable to find current IP address of domain default-k8s-diff-port-990751 in network mk-default-k8s-diff-port-990751
	I0806 08:33:58.336885   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | I0806 08:33:58.336782   80608 retry.go:31] will retry after 662.304027ms: waiting for machine to come up
	I0806 08:33:56.142523   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetIP
	I0806 08:33:56.145641   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:56.145997   79219 main.go:141] libmachine: (embed-certs-324808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:df:8c", ip: ""} in network mk-embed-certs-324808: {Iface:virbr2 ExpiryTime:2024-08-06 09:33:45 +0000 UTC Type:0 Mac:52:54:00:b9:df:8c Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-324808 Clientid:01:52:54:00:b9:df:8c}
	I0806 08:33:56.146023   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined IP address 192.168.39.190 and MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:33:56.146239   79219 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0806 08:33:56.150763   79219 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0806 08:33:56.165617   79219 kubeadm.go:883] updating cluster {Name:embed-certs-324808 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.3 ClusterName:embed-certs-324808 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.190 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0806 08:33:56.165775   79219 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0806 08:33:56.165837   79219 ssh_runner.go:195] Run: sudo crictl images --output json
	I0806 08:33:56.205302   79219 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0806 08:33:56.205393   79219 ssh_runner.go:195] Run: which lz4
	I0806 08:33:56.209791   79219 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0806 08:33:56.214338   79219 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0806 08:33:56.214369   79219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0806 08:33:57.735973   79219 crio.go:462] duration metric: took 1.526221921s to copy over tarball
	I0806 08:33:57.736049   79219 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0806 08:33:59.000478   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:33:59.000905   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | unable to find current IP address of domain default-k8s-diff-port-990751 in network mk-default-k8s-diff-port-990751
	I0806 08:33:59.000971   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | I0806 08:33:59.000829   80608 retry.go:31] will retry after 1.110230284s: waiting for machine to come up
	I0806 08:34:00.112658   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:00.113097   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | unable to find current IP address of domain default-k8s-diff-port-990751 in network mk-default-k8s-diff-port-990751
	I0806 08:34:00.113124   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | I0806 08:34:00.113053   80608 retry.go:31] will retry after 1.081213279s: waiting for machine to come up
	I0806 08:34:01.195646   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:01.196062   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | unable to find current IP address of domain default-k8s-diff-port-990751 in network mk-default-k8s-diff-port-990751
	I0806 08:34:01.196097   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | I0806 08:34:01.196014   80608 retry.go:31] will retry after 1.334553019s: waiting for machine to come up
	I0806 08:34:02.532009   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:02.532492   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | unable to find current IP address of domain default-k8s-diff-port-990751 in network mk-default-k8s-diff-port-990751
	I0806 08:34:02.532520   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | I0806 08:34:02.532449   80608 retry.go:31] will retry after 1.42743583s: waiting for machine to come up
	I0806 08:33:59.991340   79219 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.255258522s)
	I0806 08:33:59.991368   79219 crio.go:469] duration metric: took 2.255367027s to extract the tarball
	I0806 08:33:59.991378   79219 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0806 08:34:00.030261   79219 ssh_runner.go:195] Run: sudo crictl images --output json
	I0806 08:34:00.075217   79219 crio.go:514] all images are preloaded for cri-o runtime.
	I0806 08:34:00.075243   79219 cache_images.go:84] Images are preloaded, skipping loading
	I0806 08:34:00.075253   79219 kubeadm.go:934] updating node { 192.168.39.190 8443 v1.30.3 crio true true} ...
	I0806 08:34:00.075396   79219 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-324808 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.190
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:embed-certs-324808 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0806 08:34:00.075477   79219 ssh_runner.go:195] Run: crio config
	I0806 08:34:00.124134   79219 cni.go:84] Creating CNI manager for ""
	I0806 08:34:00.124160   79219 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0806 08:34:00.124174   79219 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0806 08:34:00.124195   79219 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.190 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-324808 NodeName:embed-certs-324808 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.190"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.190 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0806 08:34:00.124348   79219 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.190
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-324808"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.190
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.190"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0806 08:34:00.124446   79219 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0806 08:34:00.134782   79219 binaries.go:44] Found k8s binaries, skipping transfer
	I0806 08:34:00.134874   79219 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0806 08:34:00.144892   79219 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0806 08:34:00.164614   79219 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0806 08:34:00.183911   79219 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0806 08:34:00.203821   79219 ssh_runner.go:195] Run: grep 192.168.39.190	control-plane.minikube.internal$ /etc/hosts
	I0806 08:34:00.207980   79219 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.190	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0806 08:34:00.220566   79219 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 08:34:00.339428   79219 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0806 08:34:00.357441   79219 certs.go:68] Setting up /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/embed-certs-324808 for IP: 192.168.39.190
	I0806 08:34:00.357470   79219 certs.go:194] generating shared ca certs ...
	I0806 08:34:00.357486   79219 certs.go:226] acquiring lock for ca certs: {Name:mkf5c5aed91f4108054d9f2c742cad66c86afbed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 08:34:00.357648   79219 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19370-13057/.minikube/ca.key
	I0806 08:34:00.357694   79219 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.key
	I0806 08:34:00.357702   79219 certs.go:256] generating profile certs ...
	I0806 08:34:00.357788   79219 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/embed-certs-324808/client.key
	I0806 08:34:00.357870   79219 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/embed-certs-324808/apiserver.key.078f53b0
	I0806 08:34:00.357907   79219 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/embed-certs-324808/proxy-client.key
	I0806 08:34:00.358033   79219 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/20246.pem (1338 bytes)
	W0806 08:34:00.358070   79219 certs.go:480] ignoring /home/jenkins/minikube-integration/19370-13057/.minikube/certs/20246_empty.pem, impossibly tiny 0 bytes
	I0806 08:34:00.358085   79219 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca-key.pem (1675 bytes)
	I0806 08:34:00.358122   79219 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem (1078 bytes)
	I0806 08:34:00.358151   79219 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/cert.pem (1123 bytes)
	I0806 08:34:00.358174   79219 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/key.pem (1675 bytes)
	I0806 08:34:00.358231   79219 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem (1708 bytes)
	I0806 08:34:00.359196   79219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0806 08:34:00.397544   79219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0806 08:34:00.436824   79219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0806 08:34:00.478633   79219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0806 08:34:00.525641   79219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/embed-certs-324808/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0806 08:34:00.571003   79219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/embed-certs-324808/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0806 08:34:00.596423   79219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/embed-certs-324808/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0806 08:34:00.621129   79219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/embed-certs-324808/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0806 08:34:00.646297   79219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem --> /usr/share/ca-certificates/202462.pem (1708 bytes)
	I0806 08:34:00.672396   79219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0806 08:34:00.698468   79219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/certs/20246.pem --> /usr/share/ca-certificates/20246.pem (1338 bytes)
	I0806 08:34:00.724115   79219 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0806 08:34:00.742200   79219 ssh_runner.go:195] Run: openssl version
	I0806 08:34:00.748418   79219 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202462.pem && ln -fs /usr/share/ca-certificates/202462.pem /etc/ssl/certs/202462.pem"
	I0806 08:34:00.760234   79219 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202462.pem
	I0806 08:34:00.764947   79219 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  6 07:17 /usr/share/ca-certificates/202462.pem
	I0806 08:34:00.765013   79219 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202462.pem
	I0806 08:34:00.771065   79219 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202462.pem /etc/ssl/certs/3ec20f2e.0"
	I0806 08:34:00.782995   79219 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0806 08:34:00.795092   79219 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0806 08:34:00.800011   79219 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  6 07:07 /usr/share/ca-certificates/minikubeCA.pem
	I0806 08:34:00.800079   79219 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0806 08:34:00.806038   79219 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0806 08:34:00.817660   79219 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20246.pem && ln -fs /usr/share/ca-certificates/20246.pem /etc/ssl/certs/20246.pem"
	I0806 08:34:00.829045   79219 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20246.pem
	I0806 08:34:00.833640   79219 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  6 07:17 /usr/share/ca-certificates/20246.pem
	I0806 08:34:00.833704   79219 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20246.pem
	I0806 08:34:00.839618   79219 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20246.pem /etc/ssl/certs/51391683.0"
	I0806 08:34:00.850938   79219 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0806 08:34:00.855577   79219 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0806 08:34:00.861960   79219 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0806 08:34:00.868626   79219 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0806 08:34:00.875291   79219 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0806 08:34:00.881553   79219 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0806 08:34:00.887650   79219 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
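For reference only (this sketch is not part of the test log and is not minikube's own code): the run above validates each control-plane certificate with "openssl x509 -checkend 86400", i.e. it asks whether the certificate expires within the next 24 hours. The same check can be expressed in Go with the standard crypto/x509 package; the certificate path below is just the first file named in the log and is used purely as an example.

// expirycheck.go - minimal sketch of a 24-hour certificate expiry check,
// equivalent in spirit to `openssl x509 -checkend 86400`.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at certPath expires
// within the next duration d.
func expiresWithin(certPath string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(certPath)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", certPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// True when NotAfter falls inside the next d (mirrors -checkend semantics).
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	// Example path taken from the log above; adjust as needed.
	expiring, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", expiring)
}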
	I0806 08:34:00.894134   79219 kubeadm.go:392] StartCluster: {Name:embed-certs-324808 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-324808 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.190 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 08:34:00.894223   79219 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0806 08:34:00.894272   79219 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0806 08:34:00.938980   79219 cri.go:89] found id: ""
	I0806 08:34:00.939046   79219 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0806 08:34:00.951339   79219 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0806 08:34:00.951369   79219 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0806 08:34:00.951430   79219 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0806 08:34:00.962789   79219 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0806 08:34:00.964165   79219 kubeconfig.go:125] found "embed-certs-324808" server: "https://192.168.39.190:8443"
	I0806 08:34:00.967221   79219 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0806 08:34:00.977500   79219 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.190
	I0806 08:34:00.977535   79219 kubeadm.go:1160] stopping kube-system containers ...
	I0806 08:34:00.977547   79219 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0806 08:34:00.977603   79219 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0806 08:34:01.017218   79219 cri.go:89] found id: ""
	I0806 08:34:01.017315   79219 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0806 08:34:01.035305   79219 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0806 08:34:01.046210   79219 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0806 08:34:01.046233   79219 kubeadm.go:157] found existing configuration files:
	
	I0806 08:34:01.046290   79219 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0806 08:34:01.056904   79219 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0806 08:34:01.056974   79219 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0806 08:34:01.066640   79219 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0806 08:34:01.076042   79219 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0806 08:34:01.076114   79219 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0806 08:34:01.085735   79219 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0806 08:34:01.094602   79219 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0806 08:34:01.094661   79219 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0806 08:34:01.104160   79219 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0806 08:34:01.113176   79219 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0806 08:34:01.113249   79219 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0806 08:34:01.122679   79219 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0806 08:34:01.133032   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0806 08:34:01.259256   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0806 08:34:02.252011   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0806 08:34:02.481629   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0806 08:34:02.570215   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0806 08:34:02.710597   79219 api_server.go:52] waiting for apiserver process to appear ...
	I0806 08:34:02.710700   79219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:03.210803   79219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:03.711111   79219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:03.738971   79219 api_server.go:72] duration metric: took 1.028375323s to wait for apiserver process to appear ...
	I0806 08:34:03.739003   79219 api_server.go:88] waiting for apiserver healthz status ...
	I0806 08:34:03.739025   79219 api_server.go:253] Checking apiserver healthz at https://192.168.39.190:8443/healthz ...
	I0806 08:34:03.962232   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:03.962678   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | unable to find current IP address of domain default-k8s-diff-port-990751 in network mk-default-k8s-diff-port-990751
	I0806 08:34:03.962709   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | I0806 08:34:03.962638   80608 retry.go:31] will retry after 2.113116057s: waiting for machine to come up
	I0806 08:34:06.078010   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:06.078537   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | unable to find current IP address of domain default-k8s-diff-port-990751 in network mk-default-k8s-diff-port-990751
	I0806 08:34:06.078570   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | I0806 08:34:06.078503   80608 retry.go:31] will retry after 2.580076274s: waiting for machine to come up
	I0806 08:34:08.662313   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:08.662651   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | unable to find current IP address of domain default-k8s-diff-port-990751 in network mk-default-k8s-diff-port-990751
	I0806 08:34:08.662673   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | I0806 08:34:08.662609   80608 retry.go:31] will retry after 4.365982677s: waiting for machine to come up
	I0806 08:34:06.791233   79219 api_server.go:279] https://192.168.39.190:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0806 08:34:06.791272   79219 api_server.go:103] status: https://192.168.39.190:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0806 08:34:06.791311   79219 api_server.go:253] Checking apiserver healthz at https://192.168.39.190:8443/healthz ...
	I0806 08:34:06.804038   79219 api_server.go:279] https://192.168.39.190:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0806 08:34:06.804074   79219 api_server.go:103] status: https://192.168.39.190:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0806 08:34:07.239192   79219 api_server.go:253] Checking apiserver healthz at https://192.168.39.190:8443/healthz ...
	I0806 08:34:07.244973   79219 api_server.go:279] https://192.168.39.190:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0806 08:34:07.245005   79219 api_server.go:103] status: https://192.168.39.190:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0806 08:34:07.739556   79219 api_server.go:253] Checking apiserver healthz at https://192.168.39.190:8443/healthz ...
	I0806 08:34:07.754354   79219 api_server.go:279] https://192.168.39.190:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0806 08:34:07.754382   79219 api_server.go:103] status: https://192.168.39.190:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0806 08:34:08.240114   79219 api_server.go:253] Checking apiserver healthz at https://192.168.39.190:8443/healthz ...
	I0806 08:34:08.244799   79219 api_server.go:279] https://192.168.39.190:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0806 08:34:08.244829   79219 api_server.go:103] status: https://192.168.39.190:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0806 08:34:08.739407   79219 api_server.go:253] Checking apiserver healthz at https://192.168.39.190:8443/healthz ...
	I0806 08:34:08.743770   79219 api_server.go:279] https://192.168.39.190:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0806 08:34:08.743804   79219 api_server.go:103] status: https://192.168.39.190:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0806 08:34:09.239855   79219 api_server.go:253] Checking apiserver healthz at https://192.168.39.190:8443/healthz ...
	I0806 08:34:09.244078   79219 api_server.go:279] https://192.168.39.190:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0806 08:34:09.244135   79219 api_server.go:103] status: https://192.168.39.190:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0806 08:34:09.739859   79219 api_server.go:253] Checking apiserver healthz at https://192.168.39.190:8443/healthz ...
	I0806 08:34:09.744236   79219 api_server.go:279] https://192.168.39.190:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0806 08:34:09.744272   79219 api_server.go:103] status: https://192.168.39.190:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0806 08:34:10.240036   79219 api_server.go:253] Checking apiserver healthz at https://192.168.39.190:8443/healthz ...
	I0806 08:34:10.244217   79219 api_server.go:279] https://192.168.39.190:8443/healthz returned 200:
	ok
	I0806 08:34:10.250333   79219 api_server.go:141] control plane version: v1.30.3
	I0806 08:34:10.250361   79219 api_server.go:131] duration metric: took 6.511350709s to wait for apiserver health ...
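For reference only (this sketch is not part of the test log and is not how minikube itself is implemented): the retries above poll https://192.168.39.190:8443/healthz roughly every 500ms, treating 403 and 500 responses as "not ready yet" until the endpoint finally answers 200. A minimal Go sketch of such a polling loop is shown below; it skips TLS verification purely to stay short, whereas the real client authenticates against the cluster's CA and client certificates.

// healthzwait.go - minimal sketch of waiting for a kube-apiserver /healthz
// endpoint to report 200 OK, with a fixed retry cadence and overall timeout.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns 200 or the timeout elapses.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Illustration only: skips certificate verification.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // apiserver reported "ok"
			}
		}
		time.Sleep(500 * time.Millisecond) // roughly the retry cadence seen above
	}
	return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.190:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}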
	I0806 08:34:10.250370   79219 cni.go:84] Creating CNI manager for ""
	I0806 08:34:10.250376   79219 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0806 08:34:10.252472   79219 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0806 08:34:10.253779   79219 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0806 08:34:10.264772   79219 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0806 08:34:10.282904   79219 system_pods.go:43] waiting for kube-system pods to appear ...
	I0806 08:34:10.292343   79219 system_pods.go:59] 8 kube-system pods found
	I0806 08:34:10.292397   79219 system_pods.go:61] "coredns-7db6d8ff4d-4xxwm" [eeb47dea-32e9-458f-9ad9-2af6d4e68cc0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0806 08:34:10.292408   79219 system_pods.go:61] "etcd-embed-certs-324808" [1e0a82cd-6704-4da3-8992-15c68a7aaf70] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0806 08:34:10.292419   79219 system_pods.go:61] "kube-apiserver-embed-certs-324808" [a33c54d9-1fc5-4243-8fe9-d535c40908c3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0806 08:34:10.292426   79219 system_pods.go:61] "kube-controller-manager-embed-certs-324808" [b1c519b8-9c96-4c87-90f3-cc277cfe7763] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0806 08:34:10.292432   79219 system_pods.go:61] "kube-proxy-8lbzv" [0c45e6c0-9b7f-4c48-bff5-38d4a5949bec] Running
	I0806 08:34:10.292438   79219 system_pods.go:61] "kube-scheduler-embed-certs-324808" [78c2827f-341a-4441-bf7e-eff947fc3ea8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0806 08:34:10.292445   79219 system_pods.go:61] "metrics-server-569cc877fc-8xb9k" [7a5fb326-11a7-48fa-bcde-a22026a1dccb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0806 08:34:10.292452   79219 system_pods.go:61] "storage-provisioner" [15c7d89a-e994-4cca-931c-f646157a15e8] Running
	I0806 08:34:10.292465   79219 system_pods.go:74] duration metric: took 9.536183ms to wait for pod list to return data ...
	I0806 08:34:10.292474   79219 node_conditions.go:102] verifying NodePressure condition ...
	I0806 08:34:10.295584   79219 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0806 08:34:10.295616   79219 node_conditions.go:123] node cpu capacity is 2
	I0806 08:34:10.295632   79219 node_conditions.go:105] duration metric: took 3.152403ms to run NodePressure ...
	I0806 08:34:10.295654   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0806 08:34:10.566724   79219 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0806 08:34:10.571777   79219 kubeadm.go:739] kubelet initialised
	I0806 08:34:10.571801   79219 kubeadm.go:740] duration metric: took 5.042307ms waiting for restarted kubelet to initialise ...
	I0806 08:34:10.571812   79219 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0806 08:34:10.576738   79219 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-4xxwm" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:10.581943   79219 pod_ready.go:97] node "embed-certs-324808" hosting pod "coredns-7db6d8ff4d-4xxwm" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-324808" has status "Ready":"False"
	I0806 08:34:10.581976   79219 pod_ready.go:81] duration metric: took 5.207743ms for pod "coredns-7db6d8ff4d-4xxwm" in "kube-system" namespace to be "Ready" ...
	E0806 08:34:10.581989   79219 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-324808" hosting pod "coredns-7db6d8ff4d-4xxwm" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-324808" has status "Ready":"False"
	I0806 08:34:10.581997   79219 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-324808" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:10.586983   79219 pod_ready.go:97] node "embed-certs-324808" hosting pod "etcd-embed-certs-324808" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-324808" has status "Ready":"False"
	I0806 08:34:10.587007   79219 pod_ready.go:81] duration metric: took 5.000306ms for pod "etcd-embed-certs-324808" in "kube-system" namespace to be "Ready" ...
	E0806 08:34:10.587016   79219 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-324808" hosting pod "etcd-embed-certs-324808" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-324808" has status "Ready":"False"
	I0806 08:34:10.587022   79219 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-324808" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:10.592042   79219 pod_ready.go:97] node "embed-certs-324808" hosting pod "kube-apiserver-embed-certs-324808" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-324808" has status "Ready":"False"
	I0806 08:34:10.592066   79219 pod_ready.go:81] duration metric: took 5.035031ms for pod "kube-apiserver-embed-certs-324808" in "kube-system" namespace to be "Ready" ...
	E0806 08:34:10.592074   79219 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-324808" hosting pod "kube-apiserver-embed-certs-324808" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-324808" has status "Ready":"False"
	I0806 08:34:10.592080   79219 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-324808" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:10.687156   79219 pod_ready.go:97] node "embed-certs-324808" hosting pod "kube-controller-manager-embed-certs-324808" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-324808" has status "Ready":"False"
	I0806 08:34:10.687193   79219 pod_ready.go:81] duration metric: took 95.100916ms for pod "kube-controller-manager-embed-certs-324808" in "kube-system" namespace to be "Ready" ...
	E0806 08:34:10.687207   79219 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-324808" hosting pod "kube-controller-manager-embed-certs-324808" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-324808" has status "Ready":"False"
	I0806 08:34:10.687217   79219 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-8lbzv" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:11.086180   79219 pod_ready.go:97] node "embed-certs-324808" hosting pod "kube-proxy-8lbzv" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-324808" has status "Ready":"False"
	I0806 08:34:11.086213   79219 pod_ready.go:81] duration metric: took 398.984678ms for pod "kube-proxy-8lbzv" in "kube-system" namespace to be "Ready" ...
	E0806 08:34:11.086224   79219 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-324808" hosting pod "kube-proxy-8lbzv" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-324808" has status "Ready":"False"
	I0806 08:34:11.086230   79219 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-324808" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:11.486971   79219 pod_ready.go:97] node "embed-certs-324808" hosting pod "kube-scheduler-embed-certs-324808" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-324808" has status "Ready":"False"
	I0806 08:34:11.486995   79219 pod_ready.go:81] duration metric: took 400.759449ms for pod "kube-scheduler-embed-certs-324808" in "kube-system" namespace to be "Ready" ...
	E0806 08:34:11.487005   79219 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-324808" hosting pod "kube-scheduler-embed-certs-324808" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-324808" has status "Ready":"False"
	I0806 08:34:11.487011   79219 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:11.887030   79219 pod_ready.go:97] node "embed-certs-324808" hosting pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-324808" has status "Ready":"False"
	I0806 08:34:11.887052   79219 pod_ready.go:81] duration metric: took 400.033763ms for pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace to be "Ready" ...
	E0806 08:34:11.887061   79219 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-324808" hosting pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-324808" has status "Ready":"False"
	I0806 08:34:11.887068   79219 pod_ready.go:38] duration metric: took 1.315245982s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0806 08:34:11.887084   79219 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0806 08:34:11.899633   79219 ops.go:34] apiserver oom_adj: -16
	I0806 08:34:11.899649   79219 kubeadm.go:597] duration metric: took 10.948272769s to restartPrimaryControlPlane
	I0806 08:34:11.899657   79219 kubeadm.go:394] duration metric: took 11.005530169s to StartCluster
	I0806 08:34:11.899676   79219 settings.go:142] acquiring lock: {Name:mkb0bfbcae3b4205ef43487a9de84cfd8afd2e5e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 08:34:11.899745   79219 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19370-13057/kubeconfig
	I0806 08:34:11.901341   79219 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-13057/kubeconfig: {Name:mk5c066914acf8e36556b29792f75b98076ca34a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 08:34:11.901603   79219 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.190 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0806 08:34:11.901676   79219 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0806 08:34:11.901753   79219 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-324808"
	I0806 08:34:11.901785   79219 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-324808"
	W0806 08:34:11.901796   79219 addons.go:243] addon storage-provisioner should already be in state true
	I0806 08:34:11.901788   79219 addons.go:69] Setting default-storageclass=true in profile "embed-certs-324808"
	I0806 08:34:11.901834   79219 host.go:66] Checking if "embed-certs-324808" exists ...
	I0806 08:34:11.901851   79219 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-324808"
	I0806 08:34:11.901854   79219 config.go:182] Loaded profile config "embed-certs-324808": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0806 08:34:11.901842   79219 addons.go:69] Setting metrics-server=true in profile "embed-certs-324808"
	I0806 08:34:11.901921   79219 addons.go:234] Setting addon metrics-server=true in "embed-certs-324808"
	W0806 08:34:11.901940   79219 addons.go:243] addon metrics-server should already be in state true
	I0806 08:34:11.901998   79219 host.go:66] Checking if "embed-certs-324808" exists ...
	I0806 08:34:11.902216   79219 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:34:11.902252   79219 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:34:11.902280   79219 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:34:11.902308   79219 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:34:11.902383   79219 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:34:11.902411   79219 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:34:11.904343   79219 out.go:177] * Verifying Kubernetes components...
	I0806 08:34:11.905934   79219 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 08:34:11.917271   79219 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32971
	I0806 08:34:11.917375   79219 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38975
	I0806 08:34:11.917833   79219 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:34:11.917874   79219 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:34:11.917981   79219 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33569
	I0806 08:34:11.918394   79219 main.go:141] libmachine: Using API Version  1
	I0806 08:34:11.918416   79219 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:34:11.918453   79219 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:34:11.918530   79219 main.go:141] libmachine: Using API Version  1
	I0806 08:34:11.918552   79219 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:34:11.918761   79219 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:34:11.918819   79219 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:34:11.918913   79219 main.go:141] libmachine: Using API Version  1
	I0806 08:34:11.918939   79219 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:34:11.918965   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetState
	I0806 08:34:11.919280   79219 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:34:11.919322   79219 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:34:11.919367   79219 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:34:11.919873   79219 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:34:11.919901   79219 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:34:11.921815   79219 addons.go:234] Setting addon default-storageclass=true in "embed-certs-324808"
	W0806 08:34:11.921834   79219 addons.go:243] addon default-storageclass should already be in state true
	I0806 08:34:11.921856   79219 host.go:66] Checking if "embed-certs-324808" exists ...
	I0806 08:34:11.922132   79219 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:34:11.922168   79219 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:34:11.936191   79219 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40029
	I0806 08:34:11.936678   79219 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:34:11.937209   79219 main.go:141] libmachine: Using API Version  1
	I0806 08:34:11.937234   79219 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:34:11.940428   79219 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35087
	I0806 08:34:11.940508   79219 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38421
	I0806 08:34:11.940845   79219 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:34:11.941345   79219 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:34:11.941346   79219 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:34:11.941707   79219 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:34:11.941752   79219 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:34:11.941831   79219 main.go:141] libmachine: Using API Version  1
	I0806 08:34:11.941872   79219 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:34:11.941904   79219 main.go:141] libmachine: Using API Version  1
	I0806 08:34:11.941917   79219 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:34:11.942273   79219 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:34:11.942359   79219 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:34:11.942514   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetState
	I0806 08:34:11.942532   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetState
	I0806 08:34:11.944317   79219 main.go:141] libmachine: (embed-certs-324808) Calling .DriverName
	I0806 08:34:11.944503   79219 main.go:141] libmachine: (embed-certs-324808) Calling .DriverName
	I0806 08:34:11.946511   79219 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0806 08:34:11.946525   79219 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0806 08:34:11.947929   79219 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0806 08:34:11.947948   79219 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0806 08:34:11.947976   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHHostname
	I0806 08:34:11.948044   79219 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0806 08:34:11.948066   79219 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0806 08:34:11.948085   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHHostname
	I0806 08:34:11.951758   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:34:11.951988   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:34:11.952181   79219 main.go:141] libmachine: (embed-certs-324808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:df:8c", ip: ""} in network mk-embed-certs-324808: {Iface:virbr2 ExpiryTime:2024-08-06 09:33:45 +0000 UTC Type:0 Mac:52:54:00:b9:df:8c Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-324808 Clientid:01:52:54:00:b9:df:8c}
	I0806 08:34:11.952205   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined IP address 192.168.39.190 and MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:34:11.952530   79219 main.go:141] libmachine: (embed-certs-324808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:df:8c", ip: ""} in network mk-embed-certs-324808: {Iface:virbr2 ExpiryTime:2024-08-06 09:33:45 +0000 UTC Type:0 Mac:52:54:00:b9:df:8c Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-324808 Clientid:01:52:54:00:b9:df:8c}
	I0806 08:34:11.952565   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHPort
	I0806 08:34:11.952566   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined IP address 192.168.39.190 and MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:34:11.952693   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHPort
	I0806 08:34:11.952819   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHKeyPath
	I0806 08:34:11.952858   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHKeyPath
	I0806 08:34:11.952970   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHUsername
	I0806 08:34:11.952974   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHUsername
	I0806 08:34:11.953143   79219 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/embed-certs-324808/id_rsa Username:docker}
	I0806 08:34:11.953207   79219 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/embed-certs-324808/id_rsa Username:docker}
	I0806 08:34:11.959949   79219 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35275
	I0806 08:34:11.960353   79219 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:34:11.960900   79219 main.go:141] libmachine: Using API Version  1
	I0806 08:34:11.960929   79219 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:34:11.961241   79219 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:34:11.961451   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetState
	I0806 08:34:11.963031   79219 main.go:141] libmachine: (embed-certs-324808) Calling .DriverName
	I0806 08:34:11.963333   79219 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0806 08:34:11.963349   79219 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0806 08:34:11.963366   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHHostname
	I0806 08:34:11.966079   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:34:11.966448   79219 main.go:141] libmachine: (embed-certs-324808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:df:8c", ip: ""} in network mk-embed-certs-324808: {Iface:virbr2 ExpiryTime:2024-08-06 09:33:45 +0000 UTC Type:0 Mac:52:54:00:b9:df:8c Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-324808 Clientid:01:52:54:00:b9:df:8c}
	I0806 08:34:11.966481   79219 main.go:141] libmachine: (embed-certs-324808) DBG | domain embed-certs-324808 has defined IP address 192.168.39.190 and MAC address 52:54:00:b9:df:8c in network mk-embed-certs-324808
	I0806 08:34:11.966559   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHPort
	I0806 08:34:11.966737   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHKeyPath
	I0806 08:34:11.966881   79219 main.go:141] libmachine: (embed-certs-324808) Calling .GetSSHUsername
	I0806 08:34:11.967013   79219 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/embed-certs-324808/id_rsa Username:docker}
	I0806 08:34:12.081433   79219 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0806 08:34:12.100701   79219 node_ready.go:35] waiting up to 6m0s for node "embed-certs-324808" to be "Ready" ...
	I0806 08:34:12.196131   79219 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0806 08:34:12.196151   79219 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0806 08:34:12.201603   79219 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0806 08:34:12.217590   79219 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0806 08:34:12.217626   79219 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0806 08:34:12.239354   79219 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0806 08:34:12.239377   79219 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0806 08:34:12.254418   79219 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0806 08:34:12.266126   79219 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0806 08:34:13.441725   79219 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.187269206s)
	I0806 08:34:13.441780   79219 main.go:141] libmachine: Making call to close driver server
	I0806 08:34:13.441791   79219 main.go:141] libmachine: (embed-certs-324808) Calling .Close
	I0806 08:34:13.441856   79219 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.240222324s)
	I0806 08:34:13.441895   79219 main.go:141] libmachine: Making call to close driver server
	I0806 08:34:13.441910   79219 main.go:141] libmachine: (embed-certs-324808) Calling .Close
	I0806 08:34:13.442096   79219 main.go:141] libmachine: (embed-certs-324808) DBG | Closing plugin on server side
	I0806 08:34:13.442116   79219 main.go:141] libmachine: Successfully made call to close driver server
	I0806 08:34:13.442128   79219 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 08:34:13.442136   79219 main.go:141] libmachine: Making call to close driver server
	I0806 08:34:13.442147   79219 main.go:141] libmachine: (embed-certs-324808) Calling .Close
	I0806 08:34:13.442211   79219 main.go:141] libmachine: Successfully made call to close driver server
	I0806 08:34:13.442225   79219 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 08:34:13.442235   79219 main.go:141] libmachine: Making call to close driver server
	I0806 08:34:13.442245   79219 main.go:141] libmachine: (embed-certs-324808) Calling .Close
	I0806 08:34:13.442414   79219 main.go:141] libmachine: Successfully made call to close driver server
	I0806 08:34:13.442424   79219 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 08:34:13.443716   79219 main.go:141] libmachine: (embed-certs-324808) DBG | Closing plugin on server side
	I0806 08:34:13.443727   79219 main.go:141] libmachine: Successfully made call to close driver server
	I0806 08:34:13.443748   79219 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 08:34:13.448919   79219 main.go:141] libmachine: Making call to close driver server
	I0806 08:34:13.448945   79219 main.go:141] libmachine: (embed-certs-324808) Calling .Close
	I0806 08:34:13.449225   79219 main.go:141] libmachine: Successfully made call to close driver server
	I0806 08:34:13.449242   79219 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 08:34:13.521481   79219 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.255292404s)
	I0806 08:34:13.521542   79219 main.go:141] libmachine: Making call to close driver server
	I0806 08:34:13.521561   79219 main.go:141] libmachine: (embed-certs-324808) Calling .Close
	I0806 08:34:13.521875   79219 main.go:141] libmachine: Successfully made call to close driver server
	I0806 08:34:13.521898   79219 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 08:34:13.521899   79219 main.go:141] libmachine: (embed-certs-324808) DBG | Closing plugin on server side
	I0806 08:34:13.521916   79219 main.go:141] libmachine: Making call to close driver server
	I0806 08:34:13.521927   79219 main.go:141] libmachine: (embed-certs-324808) Calling .Close
	I0806 08:34:13.522147   79219 main.go:141] libmachine: Successfully made call to close driver server
	I0806 08:34:13.522165   79219 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 08:34:13.522175   79219 addons.go:475] Verifying addon metrics-server=true in "embed-certs-324808"
	I0806 08:34:13.522185   79219 main.go:141] libmachine: (embed-certs-324808) DBG | Closing plugin on server side
	I0806 08:34:13.524209   79219 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0806 08:34:13.032813   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:13.033366   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Found IP for machine: 192.168.50.235
	I0806 08:34:13.033394   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Reserving static IP address...
	I0806 08:34:13.033414   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has current primary IP address 192.168.50.235 and MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:13.033956   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-990751", mac: "52:54:00:fb:51:bd", ip: "192.168.50.235"} in network mk-default-k8s-diff-port-990751: {Iface:virbr3 ExpiryTime:2024-08-06 09:34:06 +0000 UTC Type:0 Mac:52:54:00:fb:51:bd Iaid: IPaddr:192.168.50.235 Prefix:24 Hostname:default-k8s-diff-port-990751 Clientid:01:52:54:00:fb:51:bd}
	I0806 08:34:13.033992   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Reserved static IP address: 192.168.50.235
	I0806 08:34:13.034012   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | skip adding static IP to network mk-default-k8s-diff-port-990751 - found existing host DHCP lease matching {name: "default-k8s-diff-port-990751", mac: "52:54:00:fb:51:bd", ip: "192.168.50.235"}
	I0806 08:34:13.034031   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | Getting to WaitForSSH function...
	I0806 08:34:13.034046   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Waiting for SSH to be available...
	I0806 08:34:13.036483   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:13.036843   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:51:bd", ip: ""} in network mk-default-k8s-diff-port-990751: {Iface:virbr3 ExpiryTime:2024-08-06 09:34:06 +0000 UTC Type:0 Mac:52:54:00:fb:51:bd Iaid: IPaddr:192.168.50.235 Prefix:24 Hostname:default-k8s-diff-port-990751 Clientid:01:52:54:00:fb:51:bd}
	I0806 08:34:13.036874   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined IP address 192.168.50.235 and MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:13.037013   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | Using SSH client type: external
	I0806 08:34:13.037042   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | Using SSH private key: /home/jenkins/minikube-integration/19370-13057/.minikube/machines/default-k8s-diff-port-990751/id_rsa (-rw-------)
	I0806 08:34:13.037066   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.235 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19370-13057/.minikube/machines/default-k8s-diff-port-990751/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0806 08:34:13.037078   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | About to run SSH command:
	I0806 08:34:13.037119   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | exit 0
	I0806 08:34:13.164932   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | SSH cmd err, output: <nil>: 
	I0806 08:34:13.165301   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetConfigRaw
	I0806 08:34:13.165892   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetIP
	I0806 08:34:13.169003   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:13.169408   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:51:bd", ip: ""} in network mk-default-k8s-diff-port-990751: {Iface:virbr3 ExpiryTime:2024-08-06 09:34:06 +0000 UTC Type:0 Mac:52:54:00:fb:51:bd Iaid: IPaddr:192.168.50.235 Prefix:24 Hostname:default-k8s-diff-port-990751 Clientid:01:52:54:00:fb:51:bd}
	I0806 08:34:13.169449   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined IP address 192.168.50.235 and MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:13.169783   79614 profile.go:143] Saving config to /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/default-k8s-diff-port-990751/config.json ...
	I0806 08:34:13.170431   79614 machine.go:94] provisionDockerMachine start ...
	I0806 08:34:13.170452   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .DriverName
	I0806 08:34:13.170703   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHHostname
	I0806 08:34:13.173536   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:13.173994   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:51:bd", ip: ""} in network mk-default-k8s-diff-port-990751: {Iface:virbr3 ExpiryTime:2024-08-06 09:34:06 +0000 UTC Type:0 Mac:52:54:00:fb:51:bd Iaid: IPaddr:192.168.50.235 Prefix:24 Hostname:default-k8s-diff-port-990751 Clientid:01:52:54:00:fb:51:bd}
	I0806 08:34:13.174022   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined IP address 192.168.50.235 and MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:13.174241   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHPort
	I0806 08:34:13.174456   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHKeyPath
	I0806 08:34:13.174632   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHKeyPath
	I0806 08:34:13.174789   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHUsername
	I0806 08:34:13.175056   79614 main.go:141] libmachine: Using SSH client type: native
	I0806 08:34:13.175329   79614 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.235 22 <nil> <nil>}
	I0806 08:34:13.175344   79614 main.go:141] libmachine: About to run SSH command:
	hostname
	I0806 08:34:13.281288   79614 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0806 08:34:13.281322   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetMachineName
	I0806 08:34:13.281573   79614 buildroot.go:166] provisioning hostname "default-k8s-diff-port-990751"
	I0806 08:34:13.281606   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetMachineName
	I0806 08:34:13.281808   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHHostname
	I0806 08:34:13.284463   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:13.284825   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:51:bd", ip: ""} in network mk-default-k8s-diff-port-990751: {Iface:virbr3 ExpiryTime:2024-08-06 09:34:06 +0000 UTC Type:0 Mac:52:54:00:fb:51:bd Iaid: IPaddr:192.168.50.235 Prefix:24 Hostname:default-k8s-diff-port-990751 Clientid:01:52:54:00:fb:51:bd}
	I0806 08:34:13.284860   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined IP address 192.168.50.235 and MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:13.284988   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHPort
	I0806 08:34:13.285178   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHKeyPath
	I0806 08:34:13.285350   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHKeyPath
	I0806 08:34:13.285468   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHUsername
	I0806 08:34:13.285614   79614 main.go:141] libmachine: Using SSH client type: native
	I0806 08:34:13.285807   79614 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.235 22 <nil> <nil>}
	I0806 08:34:13.285821   79614 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-990751 && echo "default-k8s-diff-port-990751" | sudo tee /etc/hostname
	I0806 08:34:13.413970   79614 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-990751
	
	I0806 08:34:13.414005   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHHostname
	I0806 08:34:13.417279   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:13.417708   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:51:bd", ip: ""} in network mk-default-k8s-diff-port-990751: {Iface:virbr3 ExpiryTime:2024-08-06 09:34:06 +0000 UTC Type:0 Mac:52:54:00:fb:51:bd Iaid: IPaddr:192.168.50.235 Prefix:24 Hostname:default-k8s-diff-port-990751 Clientid:01:52:54:00:fb:51:bd}
	I0806 08:34:13.417757   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined IP address 192.168.50.235 and MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:13.417947   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHPort
	I0806 08:34:13.418153   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHKeyPath
	I0806 08:34:13.418337   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHKeyPath
	I0806 08:34:13.418495   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHUsername
	I0806 08:34:13.418683   79614 main.go:141] libmachine: Using SSH client type: native
	I0806 08:34:13.418935   79614 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.235 22 <nil> <nil>}
	I0806 08:34:13.418970   79614 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-990751' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-990751/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-990751' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0806 08:34:13.534550   79614 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0806 08:34:13.534578   79614 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19370-13057/.minikube CaCertPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19370-13057/.minikube}
	I0806 08:34:13.534630   79614 buildroot.go:174] setting up certificates
	I0806 08:34:13.534647   79614 provision.go:84] configureAuth start
	I0806 08:34:13.534664   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetMachineName
	I0806 08:34:13.534962   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetIP
	I0806 08:34:13.537582   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:13.538144   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:51:bd", ip: ""} in network mk-default-k8s-diff-port-990751: {Iface:virbr3 ExpiryTime:2024-08-06 09:34:06 +0000 UTC Type:0 Mac:52:54:00:fb:51:bd Iaid: IPaddr:192.168.50.235 Prefix:24 Hostname:default-k8s-diff-port-990751 Clientid:01:52:54:00:fb:51:bd}
	I0806 08:34:13.538173   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined IP address 192.168.50.235 and MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:13.538329   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHHostname
	I0806 08:34:13.541132   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:13.541422   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:51:bd", ip: ""} in network mk-default-k8s-diff-port-990751: {Iface:virbr3 ExpiryTime:2024-08-06 09:34:06 +0000 UTC Type:0 Mac:52:54:00:fb:51:bd Iaid: IPaddr:192.168.50.235 Prefix:24 Hostname:default-k8s-diff-port-990751 Clientid:01:52:54:00:fb:51:bd}
	I0806 08:34:13.541447   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined IP address 192.168.50.235 and MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:13.541638   79614 provision.go:143] copyHostCerts
	I0806 08:34:13.541693   79614 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-13057/.minikube/ca.pem, removing ...
	I0806 08:34:13.541703   79614 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-13057/.minikube/ca.pem
	I0806 08:34:13.541761   79614 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19370-13057/.minikube/ca.pem (1078 bytes)
	I0806 08:34:13.541862   79614 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-13057/.minikube/cert.pem, removing ...
	I0806 08:34:13.541870   79614 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-13057/.minikube/cert.pem
	I0806 08:34:13.541891   79614 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19370-13057/.minikube/cert.pem (1123 bytes)
	I0806 08:34:13.541943   79614 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-13057/.minikube/key.pem, removing ...
	I0806 08:34:13.541950   79614 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-13057/.minikube/key.pem
	I0806 08:34:13.541966   79614 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19370-13057/.minikube/key.pem (1675 bytes)
	I0806 08:34:13.542013   79614 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19370-13057/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-990751 san=[127.0.0.1 192.168.50.235 default-k8s-diff-port-990751 localhost minikube]
	I0806 08:34:13.632290   79614 provision.go:177] copyRemoteCerts
	I0806 08:34:13.632345   79614 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0806 08:34:13.632389   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHHostname
	I0806 08:34:13.635332   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:13.635619   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:51:bd", ip: ""} in network mk-default-k8s-diff-port-990751: {Iface:virbr3 ExpiryTime:2024-08-06 09:34:06 +0000 UTC Type:0 Mac:52:54:00:fb:51:bd Iaid: IPaddr:192.168.50.235 Prefix:24 Hostname:default-k8s-diff-port-990751 Clientid:01:52:54:00:fb:51:bd}
	I0806 08:34:13.635644   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined IP address 192.168.50.235 and MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:13.635868   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHPort
	I0806 08:34:13.636069   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHKeyPath
	I0806 08:34:13.636208   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHUsername
	I0806 08:34:13.636381   79614 sshutil.go:53] new ssh client: &{IP:192.168.50.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/default-k8s-diff-port-990751/id_rsa Username:docker}
	I0806 08:34:13.719415   79614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0806 08:34:13.745981   79614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0806 08:34:13.775906   79614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0806 08:34:13.806948   79614 provision.go:87] duration metric: took 272.283009ms to configureAuth
	I0806 08:34:13.806986   79614 buildroot.go:189] setting minikube options for container-runtime
	I0806 08:34:13.807234   79614 config.go:182] Loaded profile config "default-k8s-diff-port-990751": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0806 08:34:13.807328   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHHostname
	I0806 08:34:13.810500   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:13.810895   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:51:bd", ip: ""} in network mk-default-k8s-diff-port-990751: {Iface:virbr3 ExpiryTime:2024-08-06 09:34:06 +0000 UTC Type:0 Mac:52:54:00:fb:51:bd Iaid: IPaddr:192.168.50.235 Prefix:24 Hostname:default-k8s-diff-port-990751 Clientid:01:52:54:00:fb:51:bd}
	I0806 08:34:13.810928   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined IP address 192.168.50.235 and MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:13.811074   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHPort
	I0806 08:34:13.811302   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHKeyPath
	I0806 08:34:13.811497   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHKeyPath
	I0806 08:34:13.811627   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHUsername
	I0806 08:34:13.811896   79614 main.go:141] libmachine: Using SSH client type: native
	I0806 08:34:13.812124   79614 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.235 22 <nil> <nil>}
	I0806 08:34:13.812148   79614 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0806 08:34:13.525425   79219 addons.go:510] duration metric: took 1.623751418s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0806 08:34:14.104863   79219 node_ready.go:53] node "embed-certs-324808" has status "Ready":"False"
	I0806 08:34:14.353523   79927 start.go:364] duration metric: took 3m15.487259208s to acquireMachinesLock for "old-k8s-version-409903"
	I0806 08:34:14.353614   79927 start.go:96] Skipping create...Using existing machine configuration
	I0806 08:34:14.353629   79927 fix.go:54] fixHost starting: 
	I0806 08:34:14.354059   79927 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:34:14.354096   79927 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:34:14.373696   79927 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33985
	I0806 08:34:14.374148   79927 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:34:14.374646   79927 main.go:141] libmachine: Using API Version  1
	I0806 08:34:14.374672   79927 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:34:14.374995   79927 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:34:14.375277   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .DriverName
	I0806 08:34:14.375472   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetState
	I0806 08:34:14.377209   79927 fix.go:112] recreateIfNeeded on old-k8s-version-409903: state=Stopped err=<nil>
	I0806 08:34:14.377251   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .DriverName
	W0806 08:34:14.377387   79927 fix.go:138] unexpected machine state, will restart: <nil>
	I0806 08:34:14.379029   79927 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-409903" ...
	I0806 08:34:14.113926   79614 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0806 08:34:14.113953   79614 machine.go:97] duration metric: took 943.50747ms to provisionDockerMachine
	I0806 08:34:14.113967   79614 start.go:293] postStartSetup for "default-k8s-diff-port-990751" (driver="kvm2")
	I0806 08:34:14.113982   79614 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0806 08:34:14.114004   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .DriverName
	I0806 08:34:14.114313   79614 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0806 08:34:14.114339   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHHostname
	I0806 08:34:14.116618   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:14.117028   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:51:bd", ip: ""} in network mk-default-k8s-diff-port-990751: {Iface:virbr3 ExpiryTime:2024-08-06 09:34:06 +0000 UTC Type:0 Mac:52:54:00:fb:51:bd Iaid: IPaddr:192.168.50.235 Prefix:24 Hostname:default-k8s-diff-port-990751 Clientid:01:52:54:00:fb:51:bd}
	I0806 08:34:14.117067   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined IP address 192.168.50.235 and MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:14.117162   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHPort
	I0806 08:34:14.117371   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHKeyPath
	I0806 08:34:14.117545   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHUsername
	I0806 08:34:14.117738   79614 sshutil.go:53] new ssh client: &{IP:192.168.50.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/default-k8s-diff-port-990751/id_rsa Username:docker}
	I0806 08:34:14.199514   79614 ssh_runner.go:195] Run: cat /etc/os-release
	I0806 08:34:14.204063   79614 info.go:137] Remote host: Buildroot 2023.02.9
	I0806 08:34:14.204084   79614 filesync.go:126] Scanning /home/jenkins/minikube-integration/19370-13057/.minikube/addons for local assets ...
	I0806 08:34:14.204139   79614 filesync.go:126] Scanning /home/jenkins/minikube-integration/19370-13057/.minikube/files for local assets ...
	I0806 08:34:14.204219   79614 filesync.go:149] local asset: /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem -> 202462.pem in /etc/ssl/certs
	I0806 08:34:14.204308   79614 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0806 08:34:14.213847   79614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem --> /etc/ssl/certs/202462.pem (1708 bytes)
	I0806 08:34:14.241524   79614 start.go:296] duration metric: took 127.543182ms for postStartSetup
	I0806 08:34:14.241570   79614 fix.go:56] duration metric: took 19.588116399s for fixHost
	I0806 08:34:14.241590   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHHostname
	I0806 08:34:14.244333   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:14.244702   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:51:bd", ip: ""} in network mk-default-k8s-diff-port-990751: {Iface:virbr3 ExpiryTime:2024-08-06 09:34:06 +0000 UTC Type:0 Mac:52:54:00:fb:51:bd Iaid: IPaddr:192.168.50.235 Prefix:24 Hostname:default-k8s-diff-port-990751 Clientid:01:52:54:00:fb:51:bd}
	I0806 08:34:14.244733   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined IP address 192.168.50.235 and MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:14.244936   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHPort
	I0806 08:34:14.245178   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHKeyPath
	I0806 08:34:14.245338   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHKeyPath
	I0806 08:34:14.245497   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHUsername
	I0806 08:34:14.245637   79614 main.go:141] libmachine: Using SSH client type: native
	I0806 08:34:14.245783   79614 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.235 22 <nil> <nil>}
	I0806 08:34:14.245793   79614 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0806 08:34:14.353301   79614 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722933254.326161993
	
	I0806 08:34:14.353325   79614 fix.go:216] guest clock: 1722933254.326161993
	I0806 08:34:14.353339   79614 fix.go:229] Guest: 2024-08-06 08:34:14.326161993 +0000 UTC Remote: 2024-08-06 08:34:14.2415744 +0000 UTC m=+235.446489438 (delta=84.587593ms)
	I0806 08:34:14.353389   79614 fix.go:200] guest clock delta is within tolerance: 84.587593ms
	I0806 08:34:14.353398   79614 start.go:83] releasing machines lock for "default-k8s-diff-port-990751", held for 19.699973637s
	I0806 08:34:14.353432   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .DriverName
	I0806 08:34:14.353751   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetIP
	I0806 08:34:14.356590   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:14.357031   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:51:bd", ip: ""} in network mk-default-k8s-diff-port-990751: {Iface:virbr3 ExpiryTime:2024-08-06 09:34:06 +0000 UTC Type:0 Mac:52:54:00:fb:51:bd Iaid: IPaddr:192.168.50.235 Prefix:24 Hostname:default-k8s-diff-port-990751 Clientid:01:52:54:00:fb:51:bd}
	I0806 08:34:14.357065   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined IP address 192.168.50.235 and MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:14.357349   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .DriverName
	I0806 08:34:14.357896   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .DriverName
	I0806 08:34:14.358125   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .DriverName
	I0806 08:34:14.358222   79614 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0806 08:34:14.358265   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHHostname
	I0806 08:34:14.358387   79614 ssh_runner.go:195] Run: cat /version.json
	I0806 08:34:14.358415   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHHostname
	I0806 08:34:14.361131   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:14.361444   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:14.361590   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:51:bd", ip: ""} in network mk-default-k8s-diff-port-990751: {Iface:virbr3 ExpiryTime:2024-08-06 09:34:06 +0000 UTC Type:0 Mac:52:54:00:fb:51:bd Iaid: IPaddr:192.168.50.235 Prefix:24 Hostname:default-k8s-diff-port-990751 Clientid:01:52:54:00:fb:51:bd}
	I0806 08:34:14.361629   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined IP address 192.168.50.235 and MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:14.361801   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHPort
	I0806 08:34:14.361905   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:51:bd", ip: ""} in network mk-default-k8s-diff-port-990751: {Iface:virbr3 ExpiryTime:2024-08-06 09:34:06 +0000 UTC Type:0 Mac:52:54:00:fb:51:bd Iaid: IPaddr:192.168.50.235 Prefix:24 Hostname:default-k8s-diff-port-990751 Clientid:01:52:54:00:fb:51:bd}
	I0806 08:34:14.361940   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined IP address 192.168.50.235 and MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:14.362025   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHKeyPath
	I0806 08:34:14.362228   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHUsername
	I0806 08:34:14.362242   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHPort
	I0806 08:34:14.362420   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHKeyPath
	I0806 08:34:14.362424   79614 sshutil.go:53] new ssh client: &{IP:192.168.50.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/default-k8s-diff-port-990751/id_rsa Username:docker}
	I0806 08:34:14.362600   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHUsername
	I0806 08:34:14.362717   79614 sshutil.go:53] new ssh client: &{IP:192.168.50.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/default-k8s-diff-port-990751/id_rsa Username:docker}
	I0806 08:34:14.469232   79614 ssh_runner.go:195] Run: systemctl --version
	I0806 08:34:14.476929   79614 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0806 08:34:14.629487   79614 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0806 08:34:14.637276   79614 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0806 08:34:14.637349   79614 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0806 08:34:14.654612   79614 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0806 08:34:14.654634   79614 start.go:495] detecting cgroup driver to use...
	I0806 08:34:14.654699   79614 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0806 08:34:14.672255   79614 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0806 08:34:14.689135   79614 docker.go:217] disabling cri-docker service (if available) ...
	I0806 08:34:14.689203   79614 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0806 08:34:14.705745   79614 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0806 08:34:14.721267   79614 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0806 08:34:14.841773   79614 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0806 08:34:14.982757   79614 docker.go:233] disabling docker service ...
	I0806 08:34:14.982837   79614 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0806 08:34:14.997637   79614 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0806 08:34:15.011404   79614 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0806 08:34:15.167108   79614 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0806 08:34:15.286749   79614 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0806 08:34:15.301818   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0806 08:34:15.320848   79614 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0806 08:34:15.320920   79614 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:34:15.331331   79614 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0806 08:34:15.331414   79614 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:34:15.342763   79614 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:34:15.353534   79614 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:34:15.365593   79614 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0806 08:34:15.379804   79614 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:34:15.390200   79614 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:34:15.410545   79614 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:34:15.423050   79614 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0806 08:34:15.432810   79614 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0806 08:34:15.432878   79614 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0806 08:34:15.447298   79614 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0806 08:34:15.457992   79614 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 08:34:15.606288   79614 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0806 08:34:15.792322   79614 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0806 08:34:15.792411   79614 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0806 08:34:15.798556   79614 start.go:563] Will wait 60s for crictl version
	I0806 08:34:15.798615   79614 ssh_runner.go:195] Run: which crictl
	I0806 08:34:15.802732   79614 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0806 08:34:15.856880   79614 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0806 08:34:15.856963   79614 ssh_runner.go:195] Run: crio --version
	I0806 08:34:15.890496   79614 ssh_runner.go:195] Run: crio --version
	I0806 08:34:15.924486   79614 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0806 08:34:14.380306   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .Start
	I0806 08:34:14.380486   79927 main.go:141] libmachine: (old-k8s-version-409903) Ensuring networks are active...
	I0806 08:34:14.381155   79927 main.go:141] libmachine: (old-k8s-version-409903) Ensuring network default is active
	I0806 08:34:14.381514   79927 main.go:141] libmachine: (old-k8s-version-409903) Ensuring network mk-old-k8s-version-409903 is active
	I0806 08:34:14.381820   79927 main.go:141] libmachine: (old-k8s-version-409903) Getting domain xml...
	I0806 08:34:14.382679   79927 main.go:141] libmachine: (old-k8s-version-409903) Creating domain...
	I0806 08:34:15.725416   79927 main.go:141] libmachine: (old-k8s-version-409903) Waiting to get IP...
	I0806 08:34:15.726492   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:15.727078   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | unable to find current IP address of domain old-k8s-version-409903 in network mk-old-k8s-version-409903
	I0806 08:34:15.727276   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | I0806 08:34:15.727179   80849 retry.go:31] will retry after 303.235731ms: waiting for machine to come up
	I0806 08:34:16.031996   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:16.032592   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | unable to find current IP address of domain old-k8s-version-409903 in network mk-old-k8s-version-409903
	I0806 08:34:16.032621   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | I0806 08:34:16.032542   80849 retry.go:31] will retry after 355.466914ms: waiting for machine to come up
	I0806 08:34:16.390212   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:16.390736   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | unable to find current IP address of domain old-k8s-version-409903 in network mk-old-k8s-version-409903
	I0806 08:34:16.390768   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | I0806 08:34:16.390674   80849 retry.go:31] will retry after 432.942387ms: waiting for machine to come up
	I0806 08:34:16.825739   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:16.826366   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | unable to find current IP address of domain old-k8s-version-409903 in network mk-old-k8s-version-409903
	I0806 08:34:16.826390   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | I0806 08:34:16.826317   80849 retry.go:31] will retry after 481.17687ms: waiting for machine to come up
	I0806 08:34:17.309005   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:17.309768   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | unable to find current IP address of domain old-k8s-version-409903 in network mk-old-k8s-version-409903
	I0806 08:34:17.309799   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | I0806 08:34:17.309745   80849 retry.go:31] will retry after 631.184181ms: waiting for machine to come up
	I0806 08:34:17.942220   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:17.942742   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | unable to find current IP address of domain old-k8s-version-409903 in network mk-old-k8s-version-409903
	I0806 08:34:17.942766   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | I0806 08:34:17.942701   80849 retry.go:31] will retry after 602.253362ms: waiting for machine to come up
	I0806 08:34:18.546333   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:18.546918   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | unable to find current IP address of domain old-k8s-version-409903 in network mk-old-k8s-version-409903
	I0806 08:34:18.546944   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | I0806 08:34:18.546855   80849 retry.go:31] will retry after 1.061939923s: waiting for machine to come up
	I0806 08:34:15.925709   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetIP
	I0806 08:34:15.929420   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:15.929901   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:51:bd", ip: ""} in network mk-default-k8s-diff-port-990751: {Iface:virbr3 ExpiryTime:2024-08-06 09:34:06 +0000 UTC Type:0 Mac:52:54:00:fb:51:bd Iaid: IPaddr:192.168.50.235 Prefix:24 Hostname:default-k8s-diff-port-990751 Clientid:01:52:54:00:fb:51:bd}
	I0806 08:34:15.929931   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined IP address 192.168.50.235 and MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:15.930159   79614 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0806 08:34:15.934916   79614 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0806 08:34:15.949815   79614 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-990751 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-990751 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.235 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0806 08:34:15.949971   79614 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0806 08:34:15.950054   79614 ssh_runner.go:195] Run: sudo crictl images --output json
	I0806 08:34:16.004551   79614 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0806 08:34:16.004631   79614 ssh_runner.go:195] Run: which lz4
	I0806 08:34:16.011388   79614 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0806 08:34:16.017475   79614 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0806 08:34:16.017511   79614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0806 08:34:17.564750   79614 crio.go:462] duration metric: took 1.553401521s to copy over tarball
	I0806 08:34:17.564840   79614 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0806 08:34:16.106922   79219 node_ready.go:53] node "embed-certs-324808" has status "Ready":"False"
	I0806 08:34:17.104735   79219 node_ready.go:49] node "embed-certs-324808" has status "Ready":"True"
	I0806 08:34:17.104778   79219 node_ready.go:38] duration metric: took 5.004030652s for node "embed-certs-324808" to be "Ready" ...
	I0806 08:34:17.104790   79219 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0806 08:34:17.113668   79219 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-4xxwm" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:17.120777   79219 pod_ready.go:92] pod "coredns-7db6d8ff4d-4xxwm" in "kube-system" namespace has status "Ready":"True"
	I0806 08:34:17.120801   79219 pod_ready.go:81] duration metric: took 7.097875ms for pod "coredns-7db6d8ff4d-4xxwm" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:17.120811   79219 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-324808" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:17.126581   79219 pod_ready.go:92] pod "etcd-embed-certs-324808" in "kube-system" namespace has status "Ready":"True"
	I0806 08:34:17.126606   79219 pod_ready.go:81] duration metric: took 5.788426ms for pod "etcd-embed-certs-324808" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:17.126619   79219 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-324808" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:18.136739   79219 pod_ready.go:92] pod "kube-apiserver-embed-certs-324808" in "kube-system" namespace has status "Ready":"True"
	I0806 08:34:18.136767   79219 pod_ready.go:81] duration metric: took 1.010139212s for pod "kube-apiserver-embed-certs-324808" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:18.136781   79219 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-324808" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:19.610165   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:19.610567   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | unable to find current IP address of domain old-k8s-version-409903 in network mk-old-k8s-version-409903
	I0806 08:34:19.610598   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | I0806 08:34:19.610542   80849 retry.go:31] will retry after 1.339464672s: waiting for machine to come up
	I0806 08:34:20.951505   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:20.952049   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | unable to find current IP address of domain old-k8s-version-409903 in network mk-old-k8s-version-409903
	I0806 08:34:20.952080   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | I0806 08:34:20.951975   80849 retry.go:31] will retry after 1.622574331s: waiting for machine to come up
	I0806 08:34:22.576150   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:22.576559   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | unable to find current IP address of domain old-k8s-version-409903 in network mk-old-k8s-version-409903
	I0806 08:34:22.576582   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | I0806 08:34:22.576508   80849 retry.go:31] will retry after 1.408827574s: waiting for machine to come up
	I0806 08:34:19.920581   79614 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.355705846s)
	I0806 08:34:19.920609   79614 crio.go:469] duration metric: took 2.355825462s to extract the tarball
	I0806 08:34:19.920619   79614 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0806 08:34:19.959118   79614 ssh_runner.go:195] Run: sudo crictl images --output json
	I0806 08:34:20.002413   79614 crio.go:514] all images are preloaded for cri-o runtime.
	I0806 08:34:20.002449   79614 cache_images.go:84] Images are preloaded, skipping loading
	I0806 08:34:20.002461   79614 kubeadm.go:934] updating node { 192.168.50.235 8444 v1.30.3 crio true true} ...
	I0806 08:34:20.002588   79614 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-990751 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.235
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-990751 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0806 08:34:20.002655   79614 ssh_runner.go:195] Run: crio config
	I0806 08:34:20.058943   79614 cni.go:84] Creating CNI manager for ""
	I0806 08:34:20.058970   79614 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0806 08:34:20.058981   79614 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0806 08:34:20.059010   79614 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.235 APIServerPort:8444 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-990751 NodeName:default-k8s-diff-port-990751 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.235"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.235 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0806 08:34:20.059247   79614 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.235
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-990751"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.235
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.235"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0806 08:34:20.059337   79614 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0806 08:34:20.072766   79614 binaries.go:44] Found k8s binaries, skipping transfer
	I0806 08:34:20.072876   79614 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0806 08:34:20.085730   79614 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0806 08:34:20.105230   79614 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0806 08:34:20.124800   79614 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0806 08:34:20.146547   79614 ssh_runner.go:195] Run: grep 192.168.50.235	control-plane.minikube.internal$ /etc/hosts
	I0806 08:34:20.150692   79614 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.235	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0806 08:34:20.164006   79614 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 08:34:20.286820   79614 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0806 08:34:20.304141   79614 certs.go:68] Setting up /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/default-k8s-diff-port-990751 for IP: 192.168.50.235
	I0806 08:34:20.304166   79614 certs.go:194] generating shared ca certs ...
	I0806 08:34:20.304187   79614 certs.go:226] acquiring lock for ca certs: {Name:mkf5c5aed91f4108054d9f2c742cad66c86afbed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 08:34:20.304386   79614 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19370-13057/.minikube/ca.key
	I0806 08:34:20.304448   79614 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.key
	I0806 08:34:20.304464   79614 certs.go:256] generating profile certs ...
	I0806 08:34:20.304567   79614 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/default-k8s-diff-port-990751/client.key
	I0806 08:34:20.304648   79614 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/default-k8s-diff-port-990751/apiserver.key.aa7eccb7
	I0806 08:34:20.304715   79614 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/default-k8s-diff-port-990751/proxy-client.key
	I0806 08:34:20.304872   79614 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/20246.pem (1338 bytes)
	W0806 08:34:20.304919   79614 certs.go:480] ignoring /home/jenkins/minikube-integration/19370-13057/.minikube/certs/20246_empty.pem, impossibly tiny 0 bytes
	I0806 08:34:20.304929   79614 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca-key.pem (1675 bytes)
	I0806 08:34:20.304960   79614 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem (1078 bytes)
	I0806 08:34:20.304989   79614 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/cert.pem (1123 bytes)
	I0806 08:34:20.305019   79614 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/key.pem (1675 bytes)
	I0806 08:34:20.305089   79614 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem (1708 bytes)
	I0806 08:34:20.305821   79614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0806 08:34:20.341201   79614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0806 08:34:20.375406   79614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0806 08:34:20.407409   79614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0806 08:34:20.449555   79614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/default-k8s-diff-port-990751/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0806 08:34:20.479479   79614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/default-k8s-diff-port-990751/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0806 08:34:20.518719   79614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/default-k8s-diff-port-990751/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0806 08:34:20.544212   79614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/default-k8s-diff-port-990751/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0806 08:34:20.569413   79614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0806 08:34:20.594573   79614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/certs/20246.pem --> /usr/share/ca-certificates/20246.pem (1338 bytes)
	I0806 08:34:20.620183   79614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem --> /usr/share/ca-certificates/202462.pem (1708 bytes)
	I0806 08:34:20.648311   79614 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0806 08:34:20.670845   79614 ssh_runner.go:195] Run: openssl version
	I0806 08:34:20.677437   79614 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202462.pem && ln -fs /usr/share/ca-certificates/202462.pem /etc/ssl/certs/202462.pem"
	I0806 08:34:20.688990   79614 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202462.pem
	I0806 08:34:20.694512   79614 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  6 07:17 /usr/share/ca-certificates/202462.pem
	I0806 08:34:20.694580   79614 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202462.pem
	I0806 08:34:20.701362   79614 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202462.pem /etc/ssl/certs/3ec20f2e.0"
	I0806 08:34:20.712966   79614 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0806 08:34:20.724455   79614 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0806 08:34:20.729111   79614 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  6 07:07 /usr/share/ca-certificates/minikubeCA.pem
	I0806 08:34:20.729165   79614 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0806 08:34:20.734973   79614 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0806 08:34:20.746387   79614 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20246.pem && ln -fs /usr/share/ca-certificates/20246.pem /etc/ssl/certs/20246.pem"
	I0806 08:34:20.757763   79614 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20246.pem
	I0806 08:34:20.762396   79614 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  6 07:17 /usr/share/ca-certificates/20246.pem
	I0806 08:34:20.762454   79614 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20246.pem
	I0806 08:34:20.768261   79614 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20246.pem /etc/ssl/certs/51391683.0"
	I0806 08:34:20.779553   79614 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0806 08:34:20.785854   79614 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0806 08:34:20.794054   79614 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0806 08:34:20.802652   79614 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0806 08:34:20.809929   79614 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0806 08:34:20.816886   79614 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0806 08:34:20.823667   79614 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0806 08:34:20.830355   79614 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-990751 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.30.3 ClusterName:default-k8s-diff-port-990751 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.235 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 08:34:20.830456   79614 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0806 08:34:20.830527   79614 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0806 08:34:20.878368   79614 cri.go:89] found id: ""
	I0806 08:34:20.878455   79614 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0806 08:34:20.889175   79614 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0806 08:34:20.889196   79614 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0806 08:34:20.889250   79614 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0806 08:34:20.899842   79614 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0806 08:34:20.901037   79614 kubeconfig.go:125] found "default-k8s-diff-port-990751" server: "https://192.168.50.235:8444"
	I0806 08:34:20.902781   79614 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0806 08:34:20.914810   79614 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.235
	I0806 08:34:20.914847   79614 kubeadm.go:1160] stopping kube-system containers ...
	I0806 08:34:20.914859   79614 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0806 08:34:20.914905   79614 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0806 08:34:20.956719   79614 cri.go:89] found id: ""
	I0806 08:34:20.956772   79614 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0806 08:34:20.977367   79614 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0806 08:34:20.989491   79614 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0806 08:34:20.989513   79614 kubeadm.go:157] found existing configuration files:
	
	I0806 08:34:20.989565   79614 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0806 08:34:21.000759   79614 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0806 08:34:21.000820   79614 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0806 08:34:21.012404   79614 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0806 08:34:21.023449   79614 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0806 08:34:21.023536   79614 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0806 08:34:21.035354   79614 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0806 08:34:21.045285   79614 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0806 08:34:21.045347   79614 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0806 08:34:21.057182   79614 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0806 08:34:21.067431   79614 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0806 08:34:21.067497   79614 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0806 08:34:21.077955   79614 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0806 08:34:21.088824   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0806 08:34:21.209627   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0806 08:34:21.772624   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0806 08:34:22.022615   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0806 08:34:22.107829   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0806 08:34:22.235446   79614 api_server.go:52] waiting for apiserver process to appear ...
	I0806 08:34:22.235552   79614 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:22.735843   79614 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:23.236385   79614 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:23.314994   79614 api_server.go:72] duration metric: took 1.079547638s to wait for apiserver process to appear ...
	I0806 08:34:23.315025   79614 api_server.go:88] waiting for apiserver healthz status ...
	I0806 08:34:23.315071   79614 api_server.go:253] Checking apiserver healthz at https://192.168.50.235:8444/healthz ...
	I0806 08:34:23.315581   79614 api_server.go:269] stopped: https://192.168.50.235:8444/healthz: Get "https://192.168.50.235:8444/healthz": dial tcp 192.168.50.235:8444: connect: connection refused
	I0806 08:34:23.815428   79614 api_server.go:253] Checking apiserver healthz at https://192.168.50.235:8444/healthz ...
	I0806 08:34:20.145118   79219 pod_ready.go:102] pod "kube-controller-manager-embed-certs-324808" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:23.051556   79219 pod_ready.go:102] pod "kube-controller-manager-embed-certs-324808" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:23.077556   79219 pod_ready.go:92] pod "kube-controller-manager-embed-certs-324808" in "kube-system" namespace has status "Ready":"True"
	I0806 08:34:23.077581   79219 pod_ready.go:81] duration metric: took 4.94079164s for pod "kube-controller-manager-embed-certs-324808" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:23.077594   79219 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8lbzv" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:23.091745   79219 pod_ready.go:92] pod "kube-proxy-8lbzv" in "kube-system" namespace has status "Ready":"True"
	I0806 08:34:23.091773   79219 pod_ready.go:81] duration metric: took 14.170363ms for pod "kube-proxy-8lbzv" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:23.091789   79219 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-324808" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:23.105567   79219 pod_ready.go:92] pod "kube-scheduler-embed-certs-324808" in "kube-system" namespace has status "Ready":"True"
	I0806 08:34:23.105597   79219 pod_ready.go:81] duration metric: took 13.798762ms for pod "kube-scheduler-embed-certs-324808" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:23.105612   79219 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:23.987256   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:23.987658   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | unable to find current IP address of domain old-k8s-version-409903 in network mk-old-k8s-version-409903
	I0806 08:34:23.987683   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | I0806 08:34:23.987617   80849 retry.go:31] will retry after 2.000071561s: waiting for machine to come up
	I0806 08:34:25.989501   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:25.989880   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | unable to find current IP address of domain old-k8s-version-409903 in network mk-old-k8s-version-409903
	I0806 08:34:25.989906   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | I0806 08:34:25.989838   80849 retry.go:31] will retry after 2.508269501s: waiting for machine to come up
	I0806 08:34:28.501449   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:28.501842   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | unable to find current IP address of domain old-k8s-version-409903 in network mk-old-k8s-version-409903
	I0806 08:34:28.501874   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | I0806 08:34:28.501802   80849 retry.go:31] will retry after 2.7583106s: waiting for machine to come up
	I0806 08:34:26.696273   79614 api_server.go:279] https://192.168.50.235:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0806 08:34:26.696325   79614 api_server.go:103] status: https://192.168.50.235:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0806 08:34:26.696342   79614 api_server.go:253] Checking apiserver healthz at https://192.168.50.235:8444/healthz ...
	I0806 08:34:26.704047   79614 api_server.go:279] https://192.168.50.235:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0806 08:34:26.704078   79614 api_server.go:103] status: https://192.168.50.235:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0806 08:34:26.815341   79614 api_server.go:253] Checking apiserver healthz at https://192.168.50.235:8444/healthz ...
	I0806 08:34:26.819686   79614 api_server.go:279] https://192.168.50.235:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0806 08:34:26.819718   79614 api_server.go:103] status: https://192.168.50.235:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0806 08:34:27.315242   79614 api_server.go:253] Checking apiserver healthz at https://192.168.50.235:8444/healthz ...
	I0806 08:34:27.320753   79614 api_server.go:279] https://192.168.50.235:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0806 08:34:27.320789   79614 api_server.go:103] status: https://192.168.50.235:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0806 08:34:27.815164   79614 api_server.go:253] Checking apiserver healthz at https://192.168.50.235:8444/healthz ...
	I0806 08:34:27.821943   79614 api_server.go:279] https://192.168.50.235:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0806 08:34:27.821967   79614 api_server.go:103] status: https://192.168.50.235:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0806 08:34:28.315169   79614 api_server.go:253] Checking apiserver healthz at https://192.168.50.235:8444/healthz ...
	I0806 08:34:28.321921   79614 api_server.go:279] https://192.168.50.235:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0806 08:34:28.321945   79614 api_server.go:103] status: https://192.168.50.235:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0806 08:34:28.815717   79614 api_server.go:253] Checking apiserver healthz at https://192.168.50.235:8444/healthz ...
	I0806 08:34:28.819946   79614 api_server.go:279] https://192.168.50.235:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0806 08:34:28.819973   79614 api_server.go:103] status: https://192.168.50.235:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0806 08:34:25.112333   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:27.112488   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:29.315643   79614 api_server.go:253] Checking apiserver healthz at https://192.168.50.235:8444/healthz ...
	I0806 08:34:29.319784   79614 api_server.go:279] https://192.168.50.235:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0806 08:34:29.319813   79614 api_server.go:103] status: https://192.168.50.235:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0806 08:34:29.815945   79614 api_server.go:253] Checking apiserver healthz at https://192.168.50.235:8444/healthz ...
	I0806 08:34:29.821441   79614 api_server.go:279] https://192.168.50.235:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0806 08:34:29.821476   79614 api_server.go:103] status: https://192.168.50.235:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0806 08:34:30.315997   79614 api_server.go:253] Checking apiserver healthz at https://192.168.50.235:8444/healthz ...
	I0806 08:34:30.321179   79614 api_server.go:279] https://192.168.50.235:8444/healthz returned 200:
	ok
	I0806 08:34:30.327083   79614 api_server.go:141] control plane version: v1.30.3
	I0806 08:34:30.327113   79614 api_server.go:131] duration metric: took 7.012080775s to wait for apiserver health ...
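
The lines above show the restart path polling the apiserver's /healthz endpoint roughly every 500ms: the 500 responses (the apiservice-discovery-controller post-start hook still failing) eventually give way to a 200 after about 7 seconds. A minimal Go sketch of that kind of poll, assuming the same URL and interval and skipping TLS verification purely for illustration (a real client would trust the cluster CA):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Assumed values for this sketch; not minikube's actual implementation.
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   2 * time.Second,
	}
	const url = "https://192.168.50.235:8444/healthz"
	for {
		resp, err := client.Get(url)
		if err == nil {
			code := resp.StatusCode
			resp.Body.Close()
			if code == http.StatusOK {
				fmt.Println("apiserver healthy")
				return
			}
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence visible in the log
	}
}

A bare loop like this never terminates if the apiserver stays unhealthy, which is why the real wait is bounded; here it completed after about 7s.
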
	I0806 08:34:30.327122   79614 cni.go:84] Creating CNI manager for ""
	I0806 08:34:30.327128   79614 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0806 08:34:30.328860   79614 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0806 08:34:30.330232   79614 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0806 08:34:30.341069   79614 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
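
With the kvm2 driver and the crio runtime, the bridge CNI is selected and a conflist is written to /etc/cni/net.d/1-k8s.conflist. The exact 496-byte file is not shown in the log; the sketch below writes an illustrative bridge + host-local configuration of the same general shape, with field values that are assumptions rather than the contents minikube actually generates:

package main

import "os"

func main() {
	// Illustrative bridge CNI conflist; values are assumptions for this sketch,
	// not the real 1-k8s.conflist that minikube copies over.
	conflist := `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}
    },
    {"type": "portmap", "capabilities": {"portMappings": true}}
  ]
}`
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0644); err != nil {
		panic(err)
	}
}
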
	I0806 08:34:30.359470   79614 system_pods.go:43] waiting for kube-system pods to appear ...
	I0806 08:34:30.369165   79614 system_pods.go:59] 8 kube-system pods found
	I0806 08:34:30.369209   79614 system_pods.go:61] "coredns-7db6d8ff4d-gx8tt" [48f83ba8-a238-4baf-94af-88370fefcb02] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0806 08:34:30.369227   79614 system_pods.go:61] "etcd-default-k8s-diff-port-990751" [bf342ba4-1e11-40bf-80fd-02b402043ecf] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0806 08:34:30.369246   79614 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-990751" [11ee42c4-c9e9-41cf-ace2-deac0f30153a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0806 08:34:30.369256   79614 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-990751" [f17fca30-0afc-4ed8-a8cc-cb64e69723f3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0806 08:34:30.369265   79614 system_pods.go:61] "kube-proxy-lpf88" [866c10f0-2c44-4c03-b460-ff2194bd1ec6] Running
	I0806 08:34:30.369272   79614 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-990751" [2e591ea7-07bd-464f-bf0e-bc24bfcca016] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0806 08:34:30.369282   79614 system_pods.go:61] "metrics-server-569cc877fc-hnxd4" [f369ac70-49b7-448a-9852-89232fb66da9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0806 08:34:30.369290   79614 system_pods.go:61] "storage-provisioner" [0faff3fc-d34b-418e-972b-95a2e36b577a] Running
	I0806 08:34:30.369299   79614 system_pods.go:74] duration metric: took 9.80217ms to wait for pod list to return data ...
	I0806 08:34:30.369311   79614 node_conditions.go:102] verifying NodePressure condition ...
	I0806 08:34:30.372222   79614 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0806 08:34:30.372250   79614 node_conditions.go:123] node cpu capacity is 2
	I0806 08:34:30.372261   79614 node_conditions.go:105] duration metric: took 2.944552ms to run NodePressure ...
	I0806 08:34:30.372279   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0806 08:34:30.639677   79614 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0806 08:34:30.643923   79614 kubeadm.go:739] kubelet initialised
	I0806 08:34:30.643952   79614 kubeadm.go:740] duration metric: took 4.248408ms waiting for restarted kubelet to initialise ...
	I0806 08:34:30.643962   79614 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0806 08:34:30.648766   79614 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-gx8tt" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:30.653672   79614 pod_ready.go:97] node "default-k8s-diff-port-990751" hosting pod "coredns-7db6d8ff4d-gx8tt" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-990751" has status "Ready":"False"
	I0806 08:34:30.653703   79614 pod_ready.go:81] duration metric: took 4.90961ms for pod "coredns-7db6d8ff4d-gx8tt" in "kube-system" namespace to be "Ready" ...
	E0806 08:34:30.653714   79614 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-990751" hosting pod "coredns-7db6d8ff4d-gx8tt" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-990751" has status "Ready":"False"
	I0806 08:34:30.653723   79614 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-990751" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:30.658080   79614 pod_ready.go:97] node "default-k8s-diff-port-990751" hosting pod "etcd-default-k8s-diff-port-990751" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-990751" has status "Ready":"False"
	I0806 08:34:30.658106   79614 pod_ready.go:81] duration metric: took 4.370083ms for pod "etcd-default-k8s-diff-port-990751" in "kube-system" namespace to be "Ready" ...
	E0806 08:34:30.658119   79614 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-990751" hosting pod "etcd-default-k8s-diff-port-990751" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-990751" has status "Ready":"False"
	I0806 08:34:30.658125   79614 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-990751" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:30.662246   79614 pod_ready.go:97] node "default-k8s-diff-port-990751" hosting pod "kube-apiserver-default-k8s-diff-port-990751" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-990751" has status "Ready":"False"
	I0806 08:34:30.662271   79614 pod_ready.go:81] duration metric: took 4.139553ms for pod "kube-apiserver-default-k8s-diff-port-990751" in "kube-system" namespace to be "Ready" ...
	E0806 08:34:30.662280   79614 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-990751" hosting pod "kube-apiserver-default-k8s-diff-port-990751" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-990751" has status "Ready":"False"
	I0806 08:34:30.662286   79614 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-990751" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:30.763246   79614 pod_ready.go:97] node "default-k8s-diff-port-990751" hosting pod "kube-controller-manager-default-k8s-diff-port-990751" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-990751" has status "Ready":"False"
	I0806 08:34:30.763286   79614 pod_ready.go:81] duration metric: took 100.991212ms for pod "kube-controller-manager-default-k8s-diff-port-990751" in "kube-system" namespace to be "Ready" ...
	E0806 08:34:30.763302   79614 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-990751" hosting pod "kube-controller-manager-default-k8s-diff-port-990751" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-990751" has status "Ready":"False"
	I0806 08:34:30.763311   79614 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-lpf88" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:31.162972   79614 pod_ready.go:97] node "default-k8s-diff-port-990751" hosting pod "kube-proxy-lpf88" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-990751" has status "Ready":"False"
	I0806 08:34:31.162998   79614 pod_ready.go:81] duration metric: took 399.678401ms for pod "kube-proxy-lpf88" in "kube-system" namespace to be "Ready" ...
	E0806 08:34:31.163006   79614 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-990751" hosting pod "kube-proxy-lpf88" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-990751" has status "Ready":"False"
	I0806 08:34:31.163012   79614 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-990751" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:31.562538   79614 pod_ready.go:97] node "default-k8s-diff-port-990751" hosting pod "kube-scheduler-default-k8s-diff-port-990751" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-990751" has status "Ready":"False"
	I0806 08:34:31.562564   79614 pod_ready.go:81] duration metric: took 399.544055ms for pod "kube-scheduler-default-k8s-diff-port-990751" in "kube-system" namespace to be "Ready" ...
	E0806 08:34:31.562575   79614 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-990751" hosting pod "kube-scheduler-default-k8s-diff-port-990751" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-990751" has status "Ready":"False"
	I0806 08:34:31.562582   79614 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:31.963410   79614 pod_ready.go:97] node "default-k8s-diff-port-990751" hosting pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-990751" has status "Ready":"False"
	I0806 08:34:31.963438   79614 pod_ready.go:81] duration metric: took 400.849931ms for pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace to be "Ready" ...
	E0806 08:34:31.963449   79614 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-990751" hosting pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-990751" has status "Ready":"False"
	I0806 08:34:31.963456   79614 pod_ready.go:38] duration metric: took 1.319483278s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
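
The pod_ready loop above looks at each system-critical pod but, because the hosting node still reports Ready=False, records the "(skipping!)" outcome for every one of them and moves on. The underlying check is the pod's Ready condition; a stand-alone sketch of that check with client-go is shown below (the kubeconfig path and pod name are taken from the log only for illustration):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the named pod has the Ready condition set to True.
func podReady(clientset *kubernetes.Clientset, namespace, name string) (bool, error) {
	pod, err := clientset.CoreV1().Pods(namespace).Get(context.Background(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	// Paths and names are assumptions for this sketch.
	config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	ready, err := podReady(clientset, "kube-system", "coredns-7db6d8ff4d-gx8tt")
	fmt.Println(ready, err)
}
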
	I0806 08:34:31.963472   79614 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0806 08:34:31.975982   79614 ops.go:34] apiserver oom_adj: -16
	I0806 08:34:31.976006   79614 kubeadm.go:597] duration metric: took 11.086801084s to restartPrimaryControlPlane
	I0806 08:34:31.976016   79614 kubeadm.go:394] duration metric: took 11.145668347s to StartCluster
	I0806 08:34:31.976035   79614 settings.go:142] acquiring lock: {Name:mkb0bfbcae3b4205ef43487a9de84cfd8afd2e5e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 08:34:31.976131   79614 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19370-13057/kubeconfig
	I0806 08:34:31.977626   79614 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-13057/kubeconfig: {Name:mk5c066914acf8e36556b29792f75b98076ca34a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 08:34:31.977871   79614 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.235 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0806 08:34:31.977927   79614 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0806 08:34:31.978004   79614 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-990751"
	I0806 08:34:31.978058   79614 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-990751"
	W0806 08:34:31.978070   79614 addons.go:243] addon storage-provisioner should already be in state true
	I0806 08:34:31.978060   79614 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-990751"
	I0806 08:34:31.978092   79614 config.go:182] Loaded profile config "default-k8s-diff-port-990751": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0806 08:34:31.978076   79614 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-990751"
	I0806 08:34:31.978124   79614 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-990751"
	I0806 08:34:31.978154   79614 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-990751"
	I0806 08:34:31.978111   79614 host.go:66] Checking if "default-k8s-diff-port-990751" exists ...
	W0806 08:34:31.978172   79614 addons.go:243] addon metrics-server should already be in state true
	I0806 08:34:31.978250   79614 host.go:66] Checking if "default-k8s-diff-port-990751" exists ...
	I0806 08:34:31.978595   79614 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:34:31.978651   79614 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:34:31.978602   79614 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:34:31.978691   79614 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:34:31.978650   79614 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:34:31.978806   79614 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:34:31.979749   79614 out.go:177] * Verifying Kubernetes components...
	I0806 08:34:31.981485   79614 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 08:34:31.994362   79614 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36289
	I0806 08:34:31.995043   79614 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:34:31.995594   79614 main.go:141] libmachine: Using API Version  1
	I0806 08:34:31.995610   79614 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:34:31.995940   79614 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:34:31.996152   79614 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36601
	I0806 08:34:31.996485   79614 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:34:31.996569   79614 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:34:31.996613   79614 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:34:31.996785   79614 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42273
	I0806 08:34:31.997043   79614 main.go:141] libmachine: Using API Version  1
	I0806 08:34:31.997062   79614 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:34:31.997335   79614 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:34:31.997424   79614 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:34:31.997601   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetState
	I0806 08:34:31.997759   79614 main.go:141] libmachine: Using API Version  1
	I0806 08:34:31.997775   79614 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:34:31.998047   79614 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:34:31.998511   79614 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:34:31.998546   79614 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:34:32.001381   79614 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-990751"
	W0806 08:34:32.001397   79614 addons.go:243] addon default-storageclass should already be in state true
	I0806 08:34:32.001427   79614 host.go:66] Checking if "default-k8s-diff-port-990751" exists ...
	I0806 08:34:32.001690   79614 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:34:32.001721   79614 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:34:32.015095   79614 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41633
	I0806 08:34:32.015597   79614 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:34:32.016197   79614 main.go:141] libmachine: Using API Version  1
	I0806 08:34:32.016220   79614 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:34:32.016245   79614 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33731
	I0806 08:34:32.016951   79614 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:34:32.016953   79614 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:34:32.017491   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetState
	I0806 08:34:32.017572   79614 main.go:141] libmachine: Using API Version  1
	I0806 08:34:32.017601   79614 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:34:32.017951   79614 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:34:32.018222   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetState
	I0806 08:34:32.020339   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .DriverName
	I0806 08:34:32.020347   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .DriverName
	I0806 08:34:32.021417   79614 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40949
	I0806 08:34:32.021814   79614 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:34:32.022232   79614 main.go:141] libmachine: Using API Version  1
	I0806 08:34:32.022254   79614 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:34:32.022609   79614 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:34:32.022690   79614 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0806 08:34:32.022690   79614 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0806 08:34:32.023365   79614 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:34:32.023410   79614 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:34:32.024304   79614 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0806 08:34:32.024326   79614 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0806 08:34:32.024343   79614 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0806 08:34:32.024358   79614 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0806 08:34:32.024387   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHHostname
	I0806 08:34:32.024346   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHHostname
	I0806 08:34:32.029046   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:32.029292   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:32.029490   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:51:bd", ip: ""} in network mk-default-k8s-diff-port-990751: {Iface:virbr3 ExpiryTime:2024-08-06 09:34:06 +0000 UTC Type:0 Mac:52:54:00:fb:51:bd Iaid: IPaddr:192.168.50.235 Prefix:24 Hostname:default-k8s-diff-port-990751 Clientid:01:52:54:00:fb:51:bd}
	I0806 08:34:32.029516   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined IP address 192.168.50.235 and MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:32.029672   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:51:bd", ip: ""} in network mk-default-k8s-diff-port-990751: {Iface:virbr3 ExpiryTime:2024-08-06 09:34:06 +0000 UTC Type:0 Mac:52:54:00:fb:51:bd Iaid: IPaddr:192.168.50.235 Prefix:24 Hostname:default-k8s-diff-port-990751 Clientid:01:52:54:00:fb:51:bd}
	I0806 08:34:32.029676   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHPort
	I0806 08:34:32.029694   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined IP address 192.168.50.235 and MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:32.029840   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHPort
	I0806 08:34:32.029866   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHKeyPath
	I0806 08:34:32.030000   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHKeyPath
	I0806 08:34:32.030122   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHUsername
	I0806 08:34:32.030124   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHUsername
	I0806 08:34:32.030265   79614 sshutil.go:53] new ssh client: &{IP:192.168.50.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/default-k8s-diff-port-990751/id_rsa Username:docker}
	I0806 08:34:32.030265   79614 sshutil.go:53] new ssh client: &{IP:192.168.50.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/default-k8s-diff-port-990751/id_rsa Username:docker}
	I0806 08:34:32.043523   79614 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40021
	I0806 08:34:32.043915   79614 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:34:32.044560   79614 main.go:141] libmachine: Using API Version  1
	I0806 08:34:32.044583   79614 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:34:32.044969   79614 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:34:32.045214   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetState
	I0806 08:34:32.046980   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .DriverName
	I0806 08:34:32.047261   79614 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0806 08:34:32.047277   79614 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0806 08:34:32.047296   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHHostname
	I0806 08:34:32.050654   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:32.050991   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:51:bd", ip: ""} in network mk-default-k8s-diff-port-990751: {Iface:virbr3 ExpiryTime:2024-08-06 09:34:06 +0000 UTC Type:0 Mac:52:54:00:fb:51:bd Iaid: IPaddr:192.168.50.235 Prefix:24 Hostname:default-k8s-diff-port-990751 Clientid:01:52:54:00:fb:51:bd}
	I0806 08:34:32.051020   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | domain default-k8s-diff-port-990751 has defined IP address 192.168.50.235 and MAC address 52:54:00:fb:51:bd in network mk-default-k8s-diff-port-990751
	I0806 08:34:32.051152   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHPort
	I0806 08:34:32.051329   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHKeyPath
	I0806 08:34:32.051495   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .GetSSHUsername
	I0806 08:34:32.051613   79614 sshutil.go:53] new ssh client: &{IP:192.168.50.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/default-k8s-diff-port-990751/id_rsa Username:docker}
	I0806 08:34:32.192056   79614 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0806 08:34:32.231428   79614 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-990751" to be "Ready" ...
	I0806 08:34:32.278049   79614 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0806 08:34:32.378290   79614 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0806 08:34:32.378317   79614 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0806 08:34:32.399728   79614 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0806 08:34:32.419412   79614 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0806 08:34:32.419441   79614 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0806 08:34:32.470691   79614 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0806 08:34:32.470726   79614 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0806 08:34:32.514191   79614 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0806 08:34:32.590737   79614 main.go:141] libmachine: Making call to close driver server
	I0806 08:34:32.590772   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .Close
	I0806 08:34:32.591110   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | Closing plugin on server side
	I0806 08:34:32.591177   79614 main.go:141] libmachine: Successfully made call to close driver server
	I0806 08:34:32.591190   79614 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 08:34:32.591202   79614 main.go:141] libmachine: Making call to close driver server
	I0806 08:34:32.591212   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .Close
	I0806 08:34:32.591465   79614 main.go:141] libmachine: Successfully made call to close driver server
	I0806 08:34:32.591492   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | Closing plugin on server side
	I0806 08:34:32.591505   79614 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 08:34:32.597276   79614 main.go:141] libmachine: Making call to close driver server
	I0806 08:34:32.597297   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .Close
	I0806 08:34:32.597547   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | Closing plugin on server side
	I0806 08:34:32.597564   79614 main.go:141] libmachine: Successfully made call to close driver server
	I0806 08:34:32.597579   79614 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 08:34:33.401000   79614 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.001229332s)
	I0806 08:34:33.401045   79614 main.go:141] libmachine: Making call to close driver server
	I0806 08:34:33.401062   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .Close
	I0806 08:34:33.401346   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | Closing plugin on server side
	I0806 08:34:33.401402   79614 main.go:141] libmachine: Successfully made call to close driver server
	I0806 08:34:33.401426   79614 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 08:34:33.401443   79614 main.go:141] libmachine: Making call to close driver server
	I0806 08:34:33.401452   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .Close
	I0806 08:34:33.401692   79614 main.go:141] libmachine: Successfully made call to close driver server
	I0806 08:34:33.401720   79614 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 08:34:33.455633   79614 main.go:141] libmachine: Making call to close driver server
	I0806 08:34:33.455663   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .Close
	I0806 08:34:33.455947   79614 main.go:141] libmachine: Successfully made call to close driver server
	I0806 08:34:33.455964   79614 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 08:34:33.455964   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | Closing plugin on server side
	I0806 08:34:33.455972   79614 main.go:141] libmachine: Making call to close driver server
	I0806 08:34:33.455980   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) Calling .Close
	I0806 08:34:33.456288   79614 main.go:141] libmachine: (default-k8s-diff-port-990751) DBG | Closing plugin on server side
	I0806 08:34:33.456286   79614 main.go:141] libmachine: Successfully made call to close driver server
	I0806 08:34:33.456328   79614 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 08:34:33.456347   79614 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-990751"
	I0806 08:34:33.459411   79614 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0806 08:34:31.262960   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:31.263282   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | unable to find current IP address of domain old-k8s-version-409903 in network mk-old-k8s-version-409903
	I0806 08:34:31.263304   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | I0806 08:34:31.263238   80849 retry.go:31] will retry after 4.76596973s: waiting for machine to come up
	I0806 08:34:33.460780   79614 addons.go:510] duration metric: took 1.482853001s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
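
Once the metrics-server manifests are applied, the surrounding log keeps reporting the metrics-server pod as Pending / not Ready, which is what the later waits time out on. One way to inspect this state by hand is to query the aggregated APIService and the metrics API; the sketch below shells out to kubectl with the context name taken from the log, while the APIService name (v1beta1.metrics.k8s.io) and the pod label selector are the conventional metrics-server values and are assumptions here:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Commands are illustrative checks, not part of the test itself.
	cmds := [][]string{
		{"kubectl", "--context", "default-k8s-diff-port-990751", "get", "apiservice", "v1beta1.metrics.k8s.io"},
		{"kubectl", "--context", "default-k8s-diff-port-990751", "-n", "kube-system", "get", "pods", "-l", "k8s-app=metrics-server"},
		{"kubectl", "--context", "default-k8s-diff-port-990751", "top", "nodes"},
	}
	for _, c := range cmds {
		out, err := exec.Command(c[0], c[1:]...).CombinedOutput()
		fmt.Printf("$ %v\n%s(err: %v)\n", c, out, err)
	}
}
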
	I0806 08:34:29.611714   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:31.612385   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:34.111936   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:37.353538   78922 start.go:364] duration metric: took 58.142260904s to acquireMachinesLock for "no-preload-703340"
	I0806 08:34:37.353587   78922 start.go:96] Skipping create...Using existing machine configuration
	I0806 08:34:37.353598   78922 fix.go:54] fixHost starting: 
	I0806 08:34:37.354016   78922 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:34:37.354050   78922 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:34:37.373838   78922 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37209
	I0806 08:34:37.374283   78922 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:34:37.374874   78922 main.go:141] libmachine: Using API Version  1
	I0806 08:34:37.374910   78922 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:34:37.375250   78922 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:34:37.375448   78922 main.go:141] libmachine: (no-preload-703340) Calling .DriverName
	I0806 08:34:37.375584   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetState
	I0806 08:34:37.377251   78922 fix.go:112] recreateIfNeeded on no-preload-703340: state=Stopped err=<nil>
	I0806 08:34:37.377275   78922 main.go:141] libmachine: (no-preload-703340) Calling .DriverName
	W0806 08:34:37.377421   78922 fix.go:138] unexpected machine state, will restart: <nil>
	I0806 08:34:37.379343   78922 out.go:177] * Restarting existing kvm2 VM for "no-preload-703340" ...
	I0806 08:34:36.030512   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:36.030950   79927 main.go:141] libmachine: (old-k8s-version-409903) Found IP for machine: 192.168.61.158
	I0806 08:34:36.030972   79927 main.go:141] libmachine: (old-k8s-version-409903) Reserving static IP address...
	I0806 08:34:36.030989   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has current primary IP address 192.168.61.158 and MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:36.031349   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | found host DHCP lease matching {name: "old-k8s-version-409903", mac: "52:54:00:12:30:88", ip: "192.168.61.158"} in network mk-old-k8s-version-409903: {Iface:virbr1 ExpiryTime:2024-08-06 09:34:26 +0000 UTC Type:0 Mac:52:54:00:12:30:88 Iaid: IPaddr:192.168.61.158 Prefix:24 Hostname:old-k8s-version-409903 Clientid:01:52:54:00:12:30:88}
	I0806 08:34:36.031376   79927 main.go:141] libmachine: (old-k8s-version-409903) Reserved static IP address: 192.168.61.158
	I0806 08:34:36.031398   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | skip adding static IP to network mk-old-k8s-version-409903 - found existing host DHCP lease matching {name: "old-k8s-version-409903", mac: "52:54:00:12:30:88", ip: "192.168.61.158"}
	I0806 08:34:36.031416   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | Getting to WaitForSSH function...
	I0806 08:34:36.031428   79927 main.go:141] libmachine: (old-k8s-version-409903) Waiting for SSH to be available...
	I0806 08:34:36.033684   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:36.034094   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:30:88", ip: ""} in network mk-old-k8s-version-409903: {Iface:virbr1 ExpiryTime:2024-08-06 09:34:26 +0000 UTC Type:0 Mac:52:54:00:12:30:88 Iaid: IPaddr:192.168.61.158 Prefix:24 Hostname:old-k8s-version-409903 Clientid:01:52:54:00:12:30:88}
	I0806 08:34:36.034124   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined IP address 192.168.61.158 and MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:36.034327   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | Using SSH client type: external
	I0806 08:34:36.034366   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | Using SSH private key: /home/jenkins/minikube-integration/19370-13057/.minikube/machines/old-k8s-version-409903/id_rsa (-rw-------)
	I0806 08:34:36.034395   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.158 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19370-13057/.minikube/machines/old-k8s-version-409903/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0806 08:34:36.034407   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | About to run SSH command:
	I0806 08:34:36.034415   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | exit 0
	I0806 08:34:36.164605   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | SSH cmd err, output: <nil>: 
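
WaitForSSH, as the lines above show, simply retries running "exit 0" over SSH with the external ssh client and the machine's id_rsa key until the command succeeds. A minimal equivalent using golang.org/x/crypto/ssh, with the address, user, and key path taken from the log and the retry interval assumed:

package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

func main() {
	keyPEM, err := os.ReadFile("/home/jenkins/minikube-integration/19370-13057/.minikube/machines/old-k8s-version-409903/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(keyPEM)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // mirrors StrictHostKeyChecking=no in the logged command
		Timeout:         10 * time.Second,
	}
	for {
		client, err := ssh.Dial("tcp", "192.168.61.158:22", cfg)
		if err == nil {
			session, serr := client.NewSession()
			if serr == nil {
				if rerr := session.Run("exit 0"); rerr == nil {
					session.Close()
					client.Close()
					fmt.Println("ssh is available")
					return
				}
				session.Close()
			}
			client.Close()
		}
		time.Sleep(2 * time.Second) // retry interval is an assumption for this sketch
	}
}
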
	I0806 08:34:36.164952   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetConfigRaw
	I0806 08:34:36.165509   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetIP
	I0806 08:34:36.168135   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:36.168471   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:30:88", ip: ""} in network mk-old-k8s-version-409903: {Iface:virbr1 ExpiryTime:2024-08-06 09:34:26 +0000 UTC Type:0 Mac:52:54:00:12:30:88 Iaid: IPaddr:192.168.61.158 Prefix:24 Hostname:old-k8s-version-409903 Clientid:01:52:54:00:12:30:88}
	I0806 08:34:36.168499   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined IP address 192.168.61.158 and MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:36.168756   79927 profile.go:143] Saving config to /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/old-k8s-version-409903/config.json ...
	I0806 08:34:36.168956   79927 machine.go:94] provisionDockerMachine start ...
	I0806 08:34:36.168974   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .DriverName
	I0806 08:34:36.169186   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHHostname
	I0806 08:34:36.171367   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:36.171702   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:30:88", ip: ""} in network mk-old-k8s-version-409903: {Iface:virbr1 ExpiryTime:2024-08-06 09:34:26 +0000 UTC Type:0 Mac:52:54:00:12:30:88 Iaid: IPaddr:192.168.61.158 Prefix:24 Hostname:old-k8s-version-409903 Clientid:01:52:54:00:12:30:88}
	I0806 08:34:36.171723   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined IP address 192.168.61.158 and MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:36.171902   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHPort
	I0806 08:34:36.172080   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHKeyPath
	I0806 08:34:36.172240   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHKeyPath
	I0806 08:34:36.172400   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHUsername
	I0806 08:34:36.172569   79927 main.go:141] libmachine: Using SSH client type: native
	I0806 08:34:36.172798   79927 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.158 22 <nil> <nil>}
	I0806 08:34:36.172812   79927 main.go:141] libmachine: About to run SSH command:
	hostname
	I0806 08:34:36.288738   79927 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0806 08:34:36.288770   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetMachineName
	I0806 08:34:36.289073   79927 buildroot.go:166] provisioning hostname "old-k8s-version-409903"
	I0806 08:34:36.289088   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetMachineName
	I0806 08:34:36.289272   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHHostname
	I0806 08:34:36.291958   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:36.292326   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:30:88", ip: ""} in network mk-old-k8s-version-409903: {Iface:virbr1 ExpiryTime:2024-08-06 09:34:26 +0000 UTC Type:0 Mac:52:54:00:12:30:88 Iaid: IPaddr:192.168.61.158 Prefix:24 Hostname:old-k8s-version-409903 Clientid:01:52:54:00:12:30:88}
	I0806 08:34:36.292354   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined IP address 192.168.61.158 and MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:36.292462   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHPort
	I0806 08:34:36.292662   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHKeyPath
	I0806 08:34:36.292807   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHKeyPath
	I0806 08:34:36.292953   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHUsername
	I0806 08:34:36.293122   79927 main.go:141] libmachine: Using SSH client type: native
	I0806 08:34:36.293287   79927 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.158 22 <nil> <nil>}
	I0806 08:34:36.293300   79927 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-409903 && echo "old-k8s-version-409903" | sudo tee /etc/hostname
	I0806 08:34:36.427591   79927 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-409903
	
	I0806 08:34:36.427620   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHHostname
	I0806 08:34:36.430395   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:36.430724   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:30:88", ip: ""} in network mk-old-k8s-version-409903: {Iface:virbr1 ExpiryTime:2024-08-06 09:34:26 +0000 UTC Type:0 Mac:52:54:00:12:30:88 Iaid: IPaddr:192.168.61.158 Prefix:24 Hostname:old-k8s-version-409903 Clientid:01:52:54:00:12:30:88}
	I0806 08:34:36.430758   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined IP address 192.168.61.158 and MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:36.430900   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHPort
	I0806 08:34:36.431133   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHKeyPath
	I0806 08:34:36.431311   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHKeyPath
	I0806 08:34:36.431476   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHUsername
	I0806 08:34:36.431662   79927 main.go:141] libmachine: Using SSH client type: native
	I0806 08:34:36.431876   79927 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.158 22 <nil> <nil>}
	I0806 08:34:36.431895   79927 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-409903' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-409903/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-409903' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0806 08:34:36.554542   79927 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0806 08:34:36.554574   79927 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19370-13057/.minikube CaCertPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19370-13057/.minikube}
	I0806 08:34:36.554613   79927 buildroot.go:174] setting up certificates
	I0806 08:34:36.554629   79927 provision.go:84] configureAuth start
	I0806 08:34:36.554646   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetMachineName
	I0806 08:34:36.554932   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetIP
	I0806 08:34:36.557899   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:36.558351   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:30:88", ip: ""} in network mk-old-k8s-version-409903: {Iface:virbr1 ExpiryTime:2024-08-06 09:34:26 +0000 UTC Type:0 Mac:52:54:00:12:30:88 Iaid: IPaddr:192.168.61.158 Prefix:24 Hostname:old-k8s-version-409903 Clientid:01:52:54:00:12:30:88}
	I0806 08:34:36.558373   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined IP address 192.168.61.158 and MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:36.558530   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHHostname
	I0806 08:34:36.560878   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:36.561153   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:30:88", ip: ""} in network mk-old-k8s-version-409903: {Iface:virbr1 ExpiryTime:2024-08-06 09:34:26 +0000 UTC Type:0 Mac:52:54:00:12:30:88 Iaid: IPaddr:192.168.61.158 Prefix:24 Hostname:old-k8s-version-409903 Clientid:01:52:54:00:12:30:88}
	I0806 08:34:36.561182   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined IP address 192.168.61.158 and MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:36.561339   79927 provision.go:143] copyHostCerts
	I0806 08:34:36.561400   79927 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-13057/.minikube/ca.pem, removing ...
	I0806 08:34:36.561410   79927 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-13057/.minikube/ca.pem
	I0806 08:34:36.561462   79927 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19370-13057/.minikube/ca.pem (1078 bytes)
	I0806 08:34:36.561563   79927 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-13057/.minikube/cert.pem, removing ...
	I0806 08:34:36.561572   79927 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-13057/.minikube/cert.pem
	I0806 08:34:36.561594   79927 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19370-13057/.minikube/cert.pem (1123 bytes)
	I0806 08:34:36.561647   79927 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-13057/.minikube/key.pem, removing ...
	I0806 08:34:36.561653   79927 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-13057/.minikube/key.pem
	I0806 08:34:36.561670   79927 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19370-13057/.minikube/key.pem (1675 bytes)
	I0806 08:34:36.561752   79927 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19370-13057/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-409903 san=[127.0.0.1 192.168.61.158 localhost minikube old-k8s-version-409903]
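
The provisioner then issues a machine server certificate signed by the CA under .minikube/certs, with exactly the SANs listed above. The following is a stand-alone sketch of issuing such a certificate with crypto/x509; it assumes RSA keys in PKCS#1 PEM form and simplified error handling, so it is not minikube's actual helper:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Paths are illustrative stand-ins for the CA files under .minikube/certs.
	caCertPEM, _ := os.ReadFile("ca.pem")
	caKeyPEM, _ := os.ReadFile("ca-key.pem")
	caBlock, _ := pem.Decode(caCertPEM)
	caCert, err := x509.ParseCertificate(caBlock.Bytes)
	if err != nil {
		panic(err)
	}
	keyBlock, _ := pem.Decode(caKeyPEM)
	caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes) // assumes an RSA PKCS#1 CA key
	if err != nil {
		panic(err)
	}

	serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().Unix()),
		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-409903"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs taken from the log line above.
		DNSNames:    []string{"localhost", "minikube", "old-k8s-version-409903"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.158")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	os.WriteFile("server.pem", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0644)
	os.WriteFile("server-key.pem", pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(serverKey)}), 0600)
}

The resulting server.pem and server-key.pem are what the copyRemoteCerts step just below pushes to /etc/docker on the machine.
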
	I0806 08:34:36.646180   79927 provision.go:177] copyRemoteCerts
	I0806 08:34:36.646233   79927 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0806 08:34:36.646257   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHHostname
	I0806 08:34:36.649020   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:36.649380   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:30:88", ip: ""} in network mk-old-k8s-version-409903: {Iface:virbr1 ExpiryTime:2024-08-06 09:34:26 +0000 UTC Type:0 Mac:52:54:00:12:30:88 Iaid: IPaddr:192.168.61.158 Prefix:24 Hostname:old-k8s-version-409903 Clientid:01:52:54:00:12:30:88}
	I0806 08:34:36.649405   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined IP address 192.168.61.158 and MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:36.649564   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHPort
	I0806 08:34:36.649791   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHKeyPath
	I0806 08:34:36.649962   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHUsername
	I0806 08:34:36.650092   79927 sshutil.go:53] new ssh client: &{IP:192.168.61.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/old-k8s-version-409903/id_rsa Username:docker}
	I0806 08:34:36.738988   79927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0806 08:34:36.765104   79927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0806 08:34:36.791946   79927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0806 08:34:36.817128   79927 provision.go:87] duration metric: took 262.482885ms to configureAuth
	I0806 08:34:36.817157   79927 buildroot.go:189] setting minikube options for container-runtime
	I0806 08:34:36.817352   79927 config.go:182] Loaded profile config "old-k8s-version-409903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0806 08:34:36.817435   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHHostname
	I0806 08:34:36.820142   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:36.820480   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:30:88", ip: ""} in network mk-old-k8s-version-409903: {Iface:virbr1 ExpiryTime:2024-08-06 09:34:26 +0000 UTC Type:0 Mac:52:54:00:12:30:88 Iaid: IPaddr:192.168.61.158 Prefix:24 Hostname:old-k8s-version-409903 Clientid:01:52:54:00:12:30:88}
	I0806 08:34:36.820520   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined IP address 192.168.61.158 and MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:36.820659   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHPort
	I0806 08:34:36.820892   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHKeyPath
	I0806 08:34:36.821095   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHKeyPath
	I0806 08:34:36.821289   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHUsername
	I0806 08:34:36.821459   79927 main.go:141] libmachine: Using SSH client type: native
	I0806 08:34:36.821703   79927 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.158 22 <nil> <nil>}
	I0806 08:34:36.821722   79927 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0806 08:34:37.099949   79927 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0806 08:34:37.099979   79927 machine.go:97] duration metric: took 931.010538ms to provisionDockerMachine
	I0806 08:34:37.099993   79927 start.go:293] postStartSetup for "old-k8s-version-409903" (driver="kvm2")
	I0806 08:34:37.100007   79927 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0806 08:34:37.100028   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .DriverName
	I0806 08:34:37.100379   79927 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0806 08:34:37.100414   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHHostname
	I0806 08:34:37.103297   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:37.103691   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:30:88", ip: ""} in network mk-old-k8s-version-409903: {Iface:virbr1 ExpiryTime:2024-08-06 09:34:26 +0000 UTC Type:0 Mac:52:54:00:12:30:88 Iaid: IPaddr:192.168.61.158 Prefix:24 Hostname:old-k8s-version-409903 Clientid:01:52:54:00:12:30:88}
	I0806 08:34:37.103717   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined IP address 192.168.61.158 and MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:37.103887   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHPort
	I0806 08:34:37.104106   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHKeyPath
	I0806 08:34:37.104263   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHUsername
	I0806 08:34:37.104402   79927 sshutil.go:53] new ssh client: &{IP:192.168.61.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/old-k8s-version-409903/id_rsa Username:docker}
	I0806 08:34:37.192035   79927 ssh_runner.go:195] Run: cat /etc/os-release
	I0806 08:34:37.196693   79927 info.go:137] Remote host: Buildroot 2023.02.9
	I0806 08:34:37.196712   79927 filesync.go:126] Scanning /home/jenkins/minikube-integration/19370-13057/.minikube/addons for local assets ...
	I0806 08:34:37.196776   79927 filesync.go:126] Scanning /home/jenkins/minikube-integration/19370-13057/.minikube/files for local assets ...
	I0806 08:34:37.196861   79927 filesync.go:149] local asset: /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem -> 202462.pem in /etc/ssl/certs
	I0806 08:34:37.196957   79927 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0806 08:34:37.206906   79927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem --> /etc/ssl/certs/202462.pem (1708 bytes)
	I0806 08:34:37.232270   79927 start.go:296] duration metric: took 132.262362ms for postStartSetup
	I0806 08:34:37.232311   79927 fix.go:56] duration metric: took 22.878683419s for fixHost
	I0806 08:34:37.232332   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHHostname
	I0806 08:34:37.235278   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:37.235721   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:30:88", ip: ""} in network mk-old-k8s-version-409903: {Iface:virbr1 ExpiryTime:2024-08-06 09:34:26 +0000 UTC Type:0 Mac:52:54:00:12:30:88 Iaid: IPaddr:192.168.61.158 Prefix:24 Hostname:old-k8s-version-409903 Clientid:01:52:54:00:12:30:88}
	I0806 08:34:37.235751   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined IP address 192.168.61.158 and MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:37.235946   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHPort
	I0806 08:34:37.236168   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHKeyPath
	I0806 08:34:37.236337   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHKeyPath
	I0806 08:34:37.236491   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHUsername
	I0806 08:34:37.236640   79927 main.go:141] libmachine: Using SSH client type: native
	I0806 08:34:37.236793   79927 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.158 22 <nil> <nil>}
	I0806 08:34:37.236803   79927 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0806 08:34:37.353370   79927 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722933277.327882532
	
	I0806 08:34:37.353397   79927 fix.go:216] guest clock: 1722933277.327882532
	I0806 08:34:37.353408   79927 fix.go:229] Guest: 2024-08-06 08:34:37.327882532 +0000 UTC Remote: 2024-08-06 08:34:37.232315481 +0000 UTC m=+218.506631326 (delta=95.567051ms)
	I0806 08:34:37.353439   79927 fix.go:200] guest clock delta is within tolerance: 95.567051ms
	I0806 08:34:37.353451   79927 start.go:83] releasing machines lock for "old-k8s-version-409903", held for 22.999876259s
	I0806 08:34:37.353490   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .DriverName
	I0806 08:34:37.353775   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetIP
	I0806 08:34:37.356788   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:37.357197   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:30:88", ip: ""} in network mk-old-k8s-version-409903: {Iface:virbr1 ExpiryTime:2024-08-06 09:34:26 +0000 UTC Type:0 Mac:52:54:00:12:30:88 Iaid: IPaddr:192.168.61.158 Prefix:24 Hostname:old-k8s-version-409903 Clientid:01:52:54:00:12:30:88}
	I0806 08:34:37.357224   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined IP address 192.168.61.158 and MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:37.357434   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .DriverName
	I0806 08:34:37.358035   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .DriverName
	I0806 08:34:37.358238   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .DriverName
	I0806 08:34:37.358320   79927 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0806 08:34:37.358360   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHHostname
	I0806 08:34:37.358478   79927 ssh_runner.go:195] Run: cat /version.json
	I0806 08:34:37.358505   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHHostname
	I0806 08:34:37.361125   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:37.361429   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:37.361464   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:30:88", ip: ""} in network mk-old-k8s-version-409903: {Iface:virbr1 ExpiryTime:2024-08-06 09:34:26 +0000 UTC Type:0 Mac:52:54:00:12:30:88 Iaid: IPaddr:192.168.61.158 Prefix:24 Hostname:old-k8s-version-409903 Clientid:01:52:54:00:12:30:88}
	I0806 08:34:37.361490   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined IP address 192.168.61.158 and MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:37.361681   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHPort
	I0806 08:34:37.361846   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHKeyPath
	I0806 08:34:37.361889   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:30:88", ip: ""} in network mk-old-k8s-version-409903: {Iface:virbr1 ExpiryTime:2024-08-06 09:34:26 +0000 UTC Type:0 Mac:52:54:00:12:30:88 Iaid: IPaddr:192.168.61.158 Prefix:24 Hostname:old-k8s-version-409903 Clientid:01:52:54:00:12:30:88}
	I0806 08:34:37.361913   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined IP address 192.168.61.158 and MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:37.362018   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHUsername
	I0806 08:34:37.362078   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHPort
	I0806 08:34:37.362188   79927 sshutil.go:53] new ssh client: &{IP:192.168.61.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/old-k8s-version-409903/id_rsa Username:docker}
	I0806 08:34:37.362319   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHKeyPath
	I0806 08:34:37.362452   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetSSHUsername
	I0806 08:34:37.362596   79927 sshutil.go:53] new ssh client: &{IP:192.168.61.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/old-k8s-version-409903/id_rsa Username:docker}
	I0806 08:34:37.468465   79927 ssh_runner.go:195] Run: systemctl --version
	I0806 08:34:37.475436   79927 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0806 08:34:37.627725   79927 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0806 08:34:37.635346   79927 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0806 08:34:37.635432   79927 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0806 08:34:37.652546   79927 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0806 08:34:37.652571   79927 start.go:495] detecting cgroup driver to use...
	I0806 08:34:37.652642   79927 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0806 08:34:37.671386   79927 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0806 08:34:37.686408   79927 docker.go:217] disabling cri-docker service (if available) ...
	I0806 08:34:37.686473   79927 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0806 08:34:37.701731   79927 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0806 08:34:37.717528   79927 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0806 08:34:37.845160   79927 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0806 08:34:37.999278   79927 docker.go:233] disabling docker service ...
	I0806 08:34:37.999361   79927 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0806 08:34:38.017101   79927 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0806 08:34:38.033189   79927 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0806 08:34:38.213620   79927 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0806 08:34:38.374790   79927 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0806 08:34:38.393394   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0806 08:34:38.419807   79927 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0806 08:34:38.419878   79927 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:34:38.434870   79927 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0806 08:34:38.434946   79927 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:34:38.449029   79927 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:34:38.460440   79927 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:34:38.473654   79927 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0806 08:34:38.486032   79927 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0806 08:34:38.495861   79927 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0806 08:34:38.495927   79927 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0806 08:34:38.511021   79927 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0806 08:34:38.521985   79927 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 08:34:38.651919   79927 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0806 08:34:34.235598   79614 node_ready.go:53] node "default-k8s-diff-port-990751" has status "Ready":"False"
	I0806 08:34:36.734741   79614 node_ready.go:53] node "default-k8s-diff-port-990751" has status "Ready":"False"
	I0806 08:34:37.235469   79614 node_ready.go:49] node "default-k8s-diff-port-990751" has status "Ready":"True"
	I0806 08:34:37.235496   79614 node_ready.go:38] duration metric: took 5.004029432s for node "default-k8s-diff-port-990751" to be "Ready" ...
	I0806 08:34:37.235506   79614 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0806 08:34:37.242299   79614 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-gx8tt" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:37.247917   79614 pod_ready.go:92] pod "coredns-7db6d8ff4d-gx8tt" in "kube-system" namespace has status "Ready":"True"
	I0806 08:34:37.247936   79614 pod_ready.go:81] duration metric: took 5.613775ms for pod "coredns-7db6d8ff4d-gx8tt" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:37.247944   79614 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-990751" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:38.822367   79927 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0806 08:34:38.822444   79927 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0806 08:34:38.828148   79927 start.go:563] Will wait 60s for crictl version
	I0806 08:34:38.828211   79927 ssh_runner.go:195] Run: which crictl
	I0806 08:34:38.832084   79927 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0806 08:34:38.872571   79927 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0806 08:34:38.872658   79927 ssh_runner.go:195] Run: crio --version
	I0806 08:34:38.903507   79927 ssh_runner.go:195] Run: crio --version
	I0806 08:34:38.934955   79927 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0806 08:34:36.612552   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:38.614644   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:37.380808   78922 main.go:141] libmachine: (no-preload-703340) Calling .Start
	I0806 08:34:37.381005   78922 main.go:141] libmachine: (no-preload-703340) Ensuring networks are active...
	I0806 08:34:37.381749   78922 main.go:141] libmachine: (no-preload-703340) Ensuring network default is active
	I0806 08:34:37.382052   78922 main.go:141] libmachine: (no-preload-703340) Ensuring network mk-no-preload-703340 is active
	I0806 08:34:37.382556   78922 main.go:141] libmachine: (no-preload-703340) Getting domain xml...
	I0806 08:34:37.383386   78922 main.go:141] libmachine: (no-preload-703340) Creating domain...
	I0806 08:34:38.779397   78922 main.go:141] libmachine: (no-preload-703340) Waiting to get IP...
	I0806 08:34:38.780262   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:38.780737   78922 main.go:141] libmachine: (no-preload-703340) DBG | unable to find current IP address of domain no-preload-703340 in network mk-no-preload-703340
	I0806 08:34:38.780809   78922 main.go:141] libmachine: (no-preload-703340) DBG | I0806 08:34:38.780723   81046 retry.go:31] will retry after 194.059215ms: waiting for machine to come up
	I0806 08:34:38.976254   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:38.976734   78922 main.go:141] libmachine: (no-preload-703340) DBG | unable to find current IP address of domain no-preload-703340 in network mk-no-preload-703340
	I0806 08:34:38.976769   78922 main.go:141] libmachine: (no-preload-703340) DBG | I0806 08:34:38.976653   81046 retry.go:31] will retry after 288.140112ms: waiting for machine to come up
	I0806 08:34:39.266297   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:39.267031   78922 main.go:141] libmachine: (no-preload-703340) DBG | unable to find current IP address of domain no-preload-703340 in network mk-no-preload-703340
	I0806 08:34:39.267060   78922 main.go:141] libmachine: (no-preload-703340) DBG | I0806 08:34:39.266948   81046 retry.go:31] will retry after 354.804756ms: waiting for machine to come up
	I0806 08:34:39.623548   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:39.624083   78922 main.go:141] libmachine: (no-preload-703340) DBG | unable to find current IP address of domain no-preload-703340 in network mk-no-preload-703340
	I0806 08:34:39.624111   78922 main.go:141] libmachine: (no-preload-703340) DBG | I0806 08:34:39.623994   81046 retry.go:31] will retry after 418.851823ms: waiting for machine to come up
	I0806 08:34:40.044734   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:40.045142   78922 main.go:141] libmachine: (no-preload-703340) DBG | unable to find current IP address of domain no-preload-703340 in network mk-no-preload-703340
	I0806 08:34:40.045165   78922 main.go:141] libmachine: (no-preload-703340) DBG | I0806 08:34:40.045103   81046 retry.go:31] will retry after 525.29917ms: waiting for machine to come up
	I0806 08:34:40.571928   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:40.572395   78922 main.go:141] libmachine: (no-preload-703340) DBG | unable to find current IP address of domain no-preload-703340 in network mk-no-preload-703340
	I0806 08:34:40.572431   78922 main.go:141] libmachine: (no-preload-703340) DBG | I0806 08:34:40.572332   81046 retry.go:31] will retry after 609.965809ms: waiting for machine to come up
	I0806 08:34:41.184209   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:41.184793   78922 main.go:141] libmachine: (no-preload-703340) DBG | unable to find current IP address of domain no-preload-703340 in network mk-no-preload-703340
	I0806 08:34:41.184820   78922 main.go:141] libmachine: (no-preload-703340) DBG | I0806 08:34:41.184739   81046 retry.go:31] will retry after 1.120596s: waiting for machine to come up
	I0806 08:34:38.936206   79927 main.go:141] libmachine: (old-k8s-version-409903) Calling .GetIP
	I0806 08:34:38.939581   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:38.940049   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:30:88", ip: ""} in network mk-old-k8s-version-409903: {Iface:virbr1 ExpiryTime:2024-08-06 09:34:26 +0000 UTC Type:0 Mac:52:54:00:12:30:88 Iaid: IPaddr:192.168.61.158 Prefix:24 Hostname:old-k8s-version-409903 Clientid:01:52:54:00:12:30:88}
	I0806 08:34:38.940076   79927 main.go:141] libmachine: (old-k8s-version-409903) DBG | domain old-k8s-version-409903 has defined IP address 192.168.61.158 and MAC address 52:54:00:12:30:88 in network mk-old-k8s-version-409903
	I0806 08:34:38.940336   79927 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0806 08:34:38.944801   79927 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0806 08:34:38.959454   79927 kubeadm.go:883] updating cluster {Name:old-k8s-version-409903 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.20.0 ClusterName:old-k8s-version-409903 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.158 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fa
lse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0806 08:34:38.959581   79927 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0806 08:34:38.959647   79927 ssh_runner.go:195] Run: sudo crictl images --output json
	I0806 08:34:39.014798   79927 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0806 08:34:39.014908   79927 ssh_runner.go:195] Run: which lz4
	I0806 08:34:39.019642   79927 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0806 08:34:39.025125   79927 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0806 08:34:39.025165   79927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0806 08:34:40.772902   79927 crio.go:462] duration metric: took 1.753273519s to copy over tarball
	I0806 08:34:40.772994   79927 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0806 08:34:39.256627   79614 pod_ready.go:102] pod "etcd-default-k8s-diff-port-990751" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:41.257158   79614 pod_ready.go:102] pod "etcd-default-k8s-diff-port-990751" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:42.754699   79614 pod_ready.go:92] pod "etcd-default-k8s-diff-port-990751" in "kube-system" namespace has status "Ready":"True"
	I0806 08:34:42.754729   79614 pod_ready.go:81] duration metric: took 5.506777292s for pod "etcd-default-k8s-diff-port-990751" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:42.754742   79614 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-990751" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:42.761260   79614 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-990751" in "kube-system" namespace has status "Ready":"True"
	I0806 08:34:42.761282   79614 pod_ready.go:81] duration metric: took 6.531767ms for pod "kube-apiserver-default-k8s-diff-port-990751" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:42.761292   79614 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-990751" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:42.769860   79614 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-990751" in "kube-system" namespace has status "Ready":"True"
	I0806 08:34:42.769884   79614 pod_ready.go:81] duration metric: took 8.585216ms for pod "kube-controller-manager-default-k8s-diff-port-990751" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:42.769898   79614 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-lpf88" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:42.775420   79614 pod_ready.go:92] pod "kube-proxy-lpf88" in "kube-system" namespace has status "Ready":"True"
	I0806 08:34:42.775445   79614 pod_ready.go:81] duration metric: took 5.540587ms for pod "kube-proxy-lpf88" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:42.775453   79614 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-990751" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:42.781317   79614 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-990751" in "kube-system" namespace has status "Ready":"True"
	I0806 08:34:42.781341   79614 pod_ready.go:81] duration metric: took 5.88019ms for pod "kube-scheduler-default-k8s-diff-port-990751" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:42.781353   79614 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace to be "Ready" ...
	I0806 08:34:40.614686   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:43.114736   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:42.306869   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:42.307378   78922 main.go:141] libmachine: (no-preload-703340) DBG | unable to find current IP address of domain no-preload-703340 in network mk-no-preload-703340
	I0806 08:34:42.307408   78922 main.go:141] libmachine: (no-preload-703340) DBG | I0806 08:34:42.307341   81046 retry.go:31] will retry after 1.26489185s: waiting for machine to come up
	I0806 08:34:43.574072   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:43.574494   78922 main.go:141] libmachine: (no-preload-703340) DBG | unable to find current IP address of domain no-preload-703340 in network mk-no-preload-703340
	I0806 08:34:43.574517   78922 main.go:141] libmachine: (no-preload-703340) DBG | I0806 08:34:43.574440   81046 retry.go:31] will retry after 1.547737608s: waiting for machine to come up
	I0806 08:34:45.124236   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:45.124794   78922 main.go:141] libmachine: (no-preload-703340) DBG | unable to find current IP address of domain no-preload-703340 in network mk-no-preload-703340
	I0806 08:34:45.124822   78922 main.go:141] libmachine: (no-preload-703340) DBG | I0806 08:34:45.124745   81046 retry.go:31] will retry after 1.491615644s: waiting for machine to come up
	I0806 08:34:46.617663   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:46.618159   78922 main.go:141] libmachine: (no-preload-703340) DBG | unable to find current IP address of domain no-preload-703340 in network mk-no-preload-703340
	I0806 08:34:46.618183   78922 main.go:141] libmachine: (no-preload-703340) DBG | I0806 08:34:46.618117   81046 retry.go:31] will retry after 2.284217057s: waiting for machine to come up
	I0806 08:34:43.977375   79927 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.204351244s)
	I0806 08:34:43.977404   79927 crio.go:469] duration metric: took 3.204457213s to extract the tarball
	I0806 08:34:43.977412   79927 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0806 08:34:44.023819   79927 ssh_runner.go:195] Run: sudo crictl images --output json
	I0806 08:34:44.070991   79927 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0806 08:34:44.071016   79927 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0806 08:34:44.071089   79927 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0806 08:34:44.071129   79927 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0806 08:34:44.071138   79927 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0806 08:34:44.071090   79927 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0806 08:34:44.071184   79927 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0806 08:34:44.071192   79927 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0806 08:34:44.071210   79927 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0806 08:34:44.071255   79927 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0806 08:34:44.072641   79927 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0806 08:34:44.072728   79927 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0806 08:34:44.072743   79927 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0806 08:34:44.072642   79927 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0806 08:34:44.072749   79927 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0806 08:34:44.072811   79927 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0806 08:34:44.073085   79927 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0806 08:34:44.073298   79927 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0806 08:34:44.239872   79927 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0806 08:34:44.287412   79927 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0806 08:34:44.287458   79927 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0806 08:34:44.287504   79927 ssh_runner.go:195] Run: which crictl
	I0806 08:34:44.292050   79927 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0806 08:34:44.309229   79927 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0806 08:34:44.336611   79927 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0806 08:34:44.371389   79927 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0806 08:34:44.371447   79927 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0806 08:34:44.371500   79927 ssh_runner.go:195] Run: which crictl
	I0806 08:34:44.375821   79927 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0806 08:34:44.412139   79927 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0806 08:34:44.415159   79927 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0806 08:34:44.415339   79927 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0806 08:34:44.418296   79927 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0806 08:34:44.423718   79927 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0806 08:34:44.437866   79927 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0806 08:34:44.494716   79927 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0806 08:34:44.494765   79927 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0806 08:34:44.494819   79927 ssh_runner.go:195] Run: which crictl
	I0806 08:34:44.526849   79927 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0806 08:34:44.526898   79927 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0806 08:34:44.526961   79927 ssh_runner.go:195] Run: which crictl
	I0806 08:34:44.555530   79927 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0806 08:34:44.555587   79927 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0806 08:34:44.555620   79927 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0806 08:34:44.555634   79927 ssh_runner.go:195] Run: which crictl
	I0806 08:34:44.555655   79927 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0806 08:34:44.555692   79927 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0806 08:34:44.555707   79927 ssh_runner.go:195] Run: which crictl
	I0806 08:34:44.555714   79927 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0806 08:34:44.555743   79927 ssh_runner.go:195] Run: which crictl
	I0806 08:34:44.555748   79927 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0806 08:34:44.555792   79927 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0806 08:34:44.561072   79927 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0806 08:34:44.624216   79927 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0806 08:34:44.639877   79927 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0806 08:34:44.639895   79927 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0806 08:34:44.639979   79927 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0806 08:34:44.653684   79927 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0806 08:34:44.711774   79927 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0806 08:34:44.711804   79927 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0806 08:34:44.964733   79927 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0806 08:34:45.112727   79927 cache_images.go:92] duration metric: took 1.041682516s to LoadCachedImages
	W0806 08:34:45.112813   79927 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I0806 08:34:45.112837   79927 kubeadm.go:934] updating node { 192.168.61.158 8443 v1.20.0 crio true true} ...
	I0806 08:34:45.112964   79927 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-409903 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.158
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-409903 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0806 08:34:45.113047   79927 ssh_runner.go:195] Run: crio config
	I0806 08:34:45.165383   79927 cni.go:84] Creating CNI manager for ""
	I0806 08:34:45.165413   79927 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0806 08:34:45.165429   79927 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0806 08:34:45.165455   79927 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.158 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-409903 NodeName:old-k8s-version-409903 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.158"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.158 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt St
aticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0806 08:34:45.165687   79927 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.158
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-409903"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.158
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.158"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0806 08:34:45.165760   79927 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0806 08:34:45.177296   79927 binaries.go:44] Found k8s binaries, skipping transfer
	I0806 08:34:45.177371   79927 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0806 08:34:45.187299   79927 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0806 08:34:45.205986   79927 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0806 08:34:45.225072   79927 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0806 08:34:45.244222   79927 ssh_runner.go:195] Run: grep 192.168.61.158	control-plane.minikube.internal$ /etc/hosts
	I0806 08:34:45.248728   79927 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.158	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0806 08:34:45.263094   79927 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 08:34:45.399106   79927 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0806 08:34:45.417589   79927 certs.go:68] Setting up /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/old-k8s-version-409903 for IP: 192.168.61.158
	I0806 08:34:45.417611   79927 certs.go:194] generating shared ca certs ...
	I0806 08:34:45.417630   79927 certs.go:226] acquiring lock for ca certs: {Name:mkf5c5aed91f4108054d9f2c742cad66c86afbed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 08:34:45.417779   79927 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19370-13057/.minikube/ca.key
	I0806 08:34:45.417820   79927 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.key
	I0806 08:34:45.417829   79927 certs.go:256] generating profile certs ...
	I0806 08:34:45.417913   79927 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/old-k8s-version-409903/client.key
	I0806 08:34:45.417970   79927 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/old-k8s-version-409903/apiserver.key.1ffd6785
	I0806 08:34:45.418060   79927 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/old-k8s-version-409903/proxy-client.key
	I0806 08:34:45.418242   79927 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/20246.pem (1338 bytes)
	W0806 08:34:45.418273   79927 certs.go:480] ignoring /home/jenkins/minikube-integration/19370-13057/.minikube/certs/20246_empty.pem, impossibly tiny 0 bytes
	I0806 08:34:45.418282   79927 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca-key.pem (1675 bytes)
	I0806 08:34:45.418302   79927 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem (1078 bytes)
	I0806 08:34:45.418322   79927 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/cert.pem (1123 bytes)
	I0806 08:34:45.418350   79927 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/key.pem (1675 bytes)
	I0806 08:34:45.418389   79927 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem (1708 bytes)
	I0806 08:34:45.419018   79927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0806 08:34:45.464550   79927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0806 08:34:45.500764   79927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0806 08:34:45.549225   79927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0806 08:34:45.597480   79927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/old-k8s-version-409903/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0806 08:34:45.639834   79927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/old-k8s-version-409903/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0806 08:34:45.680789   79927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/old-k8s-version-409903/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0806 08:34:45.737986   79927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/old-k8s-version-409903/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0806 08:34:45.766471   79927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem --> /usr/share/ca-certificates/202462.pem (1708 bytes)
	I0806 08:34:45.797047   79927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0806 08:34:45.825789   79927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/certs/20246.pem --> /usr/share/ca-certificates/20246.pem (1338 bytes)
	I0806 08:34:45.860217   79927 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0806 08:34:45.883841   79927 ssh_runner.go:195] Run: openssl version
	I0806 08:34:45.891103   79927 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202462.pem && ln -fs /usr/share/ca-certificates/202462.pem /etc/ssl/certs/202462.pem"
	I0806 08:34:45.905260   79927 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202462.pem
	I0806 08:34:45.910759   79927 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  6 07:17 /usr/share/ca-certificates/202462.pem
	I0806 08:34:45.910837   79927 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202462.pem
	I0806 08:34:45.918440   79927 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202462.pem /etc/ssl/certs/3ec20f2e.0"
	I0806 08:34:45.932479   79927 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0806 08:34:45.945678   79927 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0806 08:34:45.951612   79927 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  6 07:07 /usr/share/ca-certificates/minikubeCA.pem
	I0806 08:34:45.951691   79927 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0806 08:34:45.957749   79927 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0806 08:34:45.969987   79927 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20246.pem && ln -fs /usr/share/ca-certificates/20246.pem /etc/ssl/certs/20246.pem"
	I0806 08:34:45.982878   79927 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20246.pem
	I0806 08:34:45.987953   79927 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  6 07:17 /usr/share/ca-certificates/20246.pem
	I0806 08:34:45.988032   79927 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20246.pem
	I0806 08:34:45.994219   79927 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20246.pem /etc/ssl/certs/51391683.0"
	I0806 08:34:46.006666   79927 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0806 08:34:46.011624   79927 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0806 08:34:46.018416   79927 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0806 08:34:46.025220   79927 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0806 08:34:46.031854   79927 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0806 08:34:46.038570   79927 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0806 08:34:46.045142   79927 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0806 08:34:46.052198   79927 kubeadm.go:392] StartCluster: {Name:old-k8s-version-409903 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-409903 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.158 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 08:34:46.052313   79927 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0806 08:34:46.052372   79927 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0806 08:34:46.098382   79927 cri.go:89] found id: ""
	I0806 08:34:46.098454   79927 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0806 08:34:46.110861   79927 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0806 08:34:46.110888   79927 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0806 08:34:46.110956   79927 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0806 08:34:46.123420   79927 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0806 08:34:46.124617   79927 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-409903" does not appear in /home/jenkins/minikube-integration/19370-13057/kubeconfig
	I0806 08:34:46.125562   79927 kubeconfig.go:62] /home/jenkins/minikube-integration/19370-13057/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-409903" cluster setting kubeconfig missing "old-k8s-version-409903" context setting]
	I0806 08:34:46.127017   79927 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-13057/kubeconfig: {Name:mk5c066914acf8e36556b29792f75b98076ca34a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 08:34:46.173767   79927 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0806 08:34:46.186472   79927 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.158
	I0806 08:34:46.186517   79927 kubeadm.go:1160] stopping kube-system containers ...
	I0806 08:34:46.186532   79927 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0806 08:34:46.186592   79927 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0806 08:34:46.227817   79927 cri.go:89] found id: ""
	I0806 08:34:46.227914   79927 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0806 08:34:46.248248   79927 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0806 08:34:46.259555   79927 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0806 08:34:46.259578   79927 kubeadm.go:157] found existing configuration files:
	
	I0806 08:34:46.259647   79927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0806 08:34:46.271246   79927 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0806 08:34:46.271311   79927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0806 08:34:46.283559   79927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0806 08:34:46.295584   79927 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0806 08:34:46.295660   79927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0806 08:34:46.307223   79927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0806 08:34:46.318046   79927 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0806 08:34:46.318136   79927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0806 08:34:46.328827   79927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0806 08:34:46.339577   79927 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0806 08:34:46.339665   79927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
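The grep/rm sequence above is a staleness check: each kubeconfig under /etc/kubernetes is kept only if it already points at the expected control-plane endpoint, and is otherwise deleted so the "kubeadm init phase kubeconfig" step below can regenerate it. A condensed sketch of that pattern (paths and endpoint are taken from the log; the loop itself is illustrative, not minikube source):

	endpoint="https://control-plane.minikube.internal:8443"
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  # keep the file only if it already targets the expected endpoint
	  sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
	done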
	I0806 08:34:46.352107   79927 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0806 08:34:46.369776   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0806 08:34:46.505082   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0806 08:34:47.036627   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0806 08:34:47.285779   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0806 08:34:47.397777   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0806 08:34:47.517913   79927 api_server.go:52] waiting for apiserver process to appear ...
	I0806 08:34:47.518028   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:48.018399   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:48.518708   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:44.788404   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:46.789746   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:45.115046   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:47.611343   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:48.905056   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:48.905522   78922 main.go:141] libmachine: (no-preload-703340) DBG | unable to find current IP address of domain no-preload-703340 in network mk-no-preload-703340
	I0806 08:34:48.905548   78922 main.go:141] libmachine: (no-preload-703340) DBG | I0806 08:34:48.905484   81046 retry.go:31] will retry after 2.647234658s: waiting for machine to come up
	I0806 08:34:51.556304   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:51.556689   78922 main.go:141] libmachine: (no-preload-703340) DBG | unable to find current IP address of domain no-preload-703340 in network mk-no-preload-703340
	I0806 08:34:51.556712   78922 main.go:141] libmachine: (no-preload-703340) DBG | I0806 08:34:51.556663   81046 retry.go:31] will retry after 4.275379711s: waiting for machine to come up
	I0806 08:34:49.018572   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:49.518975   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:50.018355   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:50.518866   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:51.018156   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:51.518855   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:52.018270   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:52.518584   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:53.018606   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:53.518118   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:49.288691   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:51.787435   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:53.787824   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:49.612621   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:52.112274   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:55.834423   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:55.834961   78922 main.go:141] libmachine: (no-preload-703340) Found IP for machine: 192.168.72.126
	I0806 08:34:55.834980   78922 main.go:141] libmachine: (no-preload-703340) Reserving static IP address...
	I0806 08:34:55.834991   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has current primary IP address 192.168.72.126 and MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:55.835314   78922 main.go:141] libmachine: (no-preload-703340) DBG | found host DHCP lease matching {name: "no-preload-703340", mac: "52:54:00:a1:fa:d3", ip: "192.168.72.126"} in network mk-no-preload-703340: {Iface:virbr4 ExpiryTime:2024-08-06 09:34:49 +0000 UTC Type:0 Mac:52:54:00:a1:fa:d3 Iaid: IPaddr:192.168.72.126 Prefix:24 Hostname:no-preload-703340 Clientid:01:52:54:00:a1:fa:d3}
	I0806 08:34:55.835348   78922 main.go:141] libmachine: (no-preload-703340) Reserved static IP address: 192.168.72.126
	I0806 08:34:55.835363   78922 main.go:141] libmachine: (no-preload-703340) DBG | skip adding static IP to network mk-no-preload-703340 - found existing host DHCP lease matching {name: "no-preload-703340", mac: "52:54:00:a1:fa:d3", ip: "192.168.72.126"}
	I0806 08:34:55.835376   78922 main.go:141] libmachine: (no-preload-703340) DBG | Getting to WaitForSSH function...
	I0806 08:34:55.835388   78922 main.go:141] libmachine: (no-preload-703340) Waiting for SSH to be available...
	I0806 08:34:55.837645   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:55.838001   78922 main.go:141] libmachine: (no-preload-703340) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fa:d3", ip: ""} in network mk-no-preload-703340: {Iface:virbr4 ExpiryTime:2024-08-06 09:34:49 +0000 UTC Type:0 Mac:52:54:00:a1:fa:d3 Iaid: IPaddr:192.168.72.126 Prefix:24 Hostname:no-preload-703340 Clientid:01:52:54:00:a1:fa:d3}
	I0806 08:34:55.838030   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined IP address 192.168.72.126 and MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:55.838167   78922 main.go:141] libmachine: (no-preload-703340) DBG | Using SSH client type: external
	I0806 08:34:55.838193   78922 main.go:141] libmachine: (no-preload-703340) DBG | Using SSH private key: /home/jenkins/minikube-integration/19370-13057/.minikube/machines/no-preload-703340/id_rsa (-rw-------)
	I0806 08:34:55.838234   78922 main.go:141] libmachine: (no-preload-703340) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.126 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19370-13057/.minikube/machines/no-preload-703340/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0806 08:34:55.838256   78922 main.go:141] libmachine: (no-preload-703340) DBG | About to run SSH command:
	I0806 08:34:55.838272   78922 main.go:141] libmachine: (no-preload-703340) DBG | exit 0
	I0806 08:34:55.968557   78922 main.go:141] libmachine: (no-preload-703340) DBG | SSH cmd err, output: <nil>: 
	I0806 08:34:55.968893   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetConfigRaw
	I0806 08:34:55.969485   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetIP
	I0806 08:34:55.971998   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:55.972457   78922 main.go:141] libmachine: (no-preload-703340) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fa:d3", ip: ""} in network mk-no-preload-703340: {Iface:virbr4 ExpiryTime:2024-08-06 09:34:49 +0000 UTC Type:0 Mac:52:54:00:a1:fa:d3 Iaid: IPaddr:192.168.72.126 Prefix:24 Hostname:no-preload-703340 Clientid:01:52:54:00:a1:fa:d3}
	I0806 08:34:55.972487   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined IP address 192.168.72.126 and MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:55.972694   78922 profile.go:143] Saving config to /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/no-preload-703340/config.json ...
	I0806 08:34:55.972889   78922 machine.go:94] provisionDockerMachine start ...
	I0806 08:34:55.972908   78922 main.go:141] libmachine: (no-preload-703340) Calling .DriverName
	I0806 08:34:55.973115   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHHostname
	I0806 08:34:55.975361   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:55.975734   78922 main.go:141] libmachine: (no-preload-703340) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fa:d3", ip: ""} in network mk-no-preload-703340: {Iface:virbr4 ExpiryTime:2024-08-06 09:34:49 +0000 UTC Type:0 Mac:52:54:00:a1:fa:d3 Iaid: IPaddr:192.168.72.126 Prefix:24 Hostname:no-preload-703340 Clientid:01:52:54:00:a1:fa:d3}
	I0806 08:34:55.975753   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined IP address 192.168.72.126 and MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:55.975905   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHPort
	I0806 08:34:55.976051   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHKeyPath
	I0806 08:34:55.976265   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHKeyPath
	I0806 08:34:55.976424   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHUsername
	I0806 08:34:55.976635   78922 main.go:141] libmachine: Using SSH client type: native
	I0806 08:34:55.976891   78922 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.126 22 <nil> <nil>}
	I0806 08:34:55.976905   78922 main.go:141] libmachine: About to run SSH command:
	hostname
	I0806 08:34:56.084844   78922 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0806 08:34:56.084905   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetMachineName
	I0806 08:34:56.085226   78922 buildroot.go:166] provisioning hostname "no-preload-703340"
	I0806 08:34:56.085253   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetMachineName
	I0806 08:34:56.085477   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHHostname
	I0806 08:34:56.088358   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:56.088775   78922 main.go:141] libmachine: (no-preload-703340) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fa:d3", ip: ""} in network mk-no-preload-703340: {Iface:virbr4 ExpiryTime:2024-08-06 09:34:49 +0000 UTC Type:0 Mac:52:54:00:a1:fa:d3 Iaid: IPaddr:192.168.72.126 Prefix:24 Hostname:no-preload-703340 Clientid:01:52:54:00:a1:fa:d3}
	I0806 08:34:56.088806   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined IP address 192.168.72.126 and MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:56.088904   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHPort
	I0806 08:34:56.089070   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHKeyPath
	I0806 08:34:56.089262   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHKeyPath
	I0806 08:34:56.089417   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHUsername
	I0806 08:34:56.089600   78922 main.go:141] libmachine: Using SSH client type: native
	I0806 08:34:56.089810   78922 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.126 22 <nil> <nil>}
	I0806 08:34:56.089840   78922 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-703340 && echo "no-preload-703340" | sudo tee /etc/hostname
	I0806 08:34:56.216217   78922 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-703340
	
	I0806 08:34:56.216257   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHHostname
	I0806 08:34:56.218949   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:56.219274   78922 main.go:141] libmachine: (no-preload-703340) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fa:d3", ip: ""} in network mk-no-preload-703340: {Iface:virbr4 ExpiryTime:2024-08-06 09:34:49 +0000 UTC Type:0 Mac:52:54:00:a1:fa:d3 Iaid: IPaddr:192.168.72.126 Prefix:24 Hostname:no-preload-703340 Clientid:01:52:54:00:a1:fa:d3}
	I0806 08:34:56.219309   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined IP address 192.168.72.126 and MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:56.219488   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHPort
	I0806 08:34:56.219699   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHKeyPath
	I0806 08:34:56.219899   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHKeyPath
	I0806 08:34:56.220054   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHUsername
	I0806 08:34:56.220231   78922 main.go:141] libmachine: Using SSH client type: native
	I0806 08:34:56.220513   78922 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.126 22 <nil> <nil>}
	I0806 08:34:56.220544   78922 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-703340' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-703340/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-703340' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0806 08:34:56.337357   78922 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0806 08:34:56.337390   78922 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19370-13057/.minikube CaCertPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19370-13057/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19370-13057/.minikube}
	I0806 08:34:56.337428   78922 buildroot.go:174] setting up certificates
	I0806 08:34:56.337439   78922 provision.go:84] configureAuth start
	I0806 08:34:56.337454   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetMachineName
	I0806 08:34:56.337736   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetIP
	I0806 08:34:56.340297   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:56.340655   78922 main.go:141] libmachine: (no-preload-703340) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fa:d3", ip: ""} in network mk-no-preload-703340: {Iface:virbr4 ExpiryTime:2024-08-06 09:34:49 +0000 UTC Type:0 Mac:52:54:00:a1:fa:d3 Iaid: IPaddr:192.168.72.126 Prefix:24 Hostname:no-preload-703340 Clientid:01:52:54:00:a1:fa:d3}
	I0806 08:34:56.340680   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined IP address 192.168.72.126 and MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:56.340898   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHHostname
	I0806 08:34:56.343013   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:56.343356   78922 main.go:141] libmachine: (no-preload-703340) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fa:d3", ip: ""} in network mk-no-preload-703340: {Iface:virbr4 ExpiryTime:2024-08-06 09:34:49 +0000 UTC Type:0 Mac:52:54:00:a1:fa:d3 Iaid: IPaddr:192.168.72.126 Prefix:24 Hostname:no-preload-703340 Clientid:01:52:54:00:a1:fa:d3}
	I0806 08:34:56.343395   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined IP address 192.168.72.126 and MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:56.343544   78922 provision.go:143] copyHostCerts
	I0806 08:34:56.343613   78922 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-13057/.minikube/ca.pem, removing ...
	I0806 08:34:56.343626   78922 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-13057/.minikube/ca.pem
	I0806 08:34:56.343692   78922 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19370-13057/.minikube/ca.pem (1078 bytes)
	I0806 08:34:56.343806   78922 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-13057/.minikube/cert.pem, removing ...
	I0806 08:34:56.343815   78922 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-13057/.minikube/cert.pem
	I0806 08:34:56.343838   78922 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19370-13057/.minikube/cert.pem (1123 bytes)
	I0806 08:34:56.343915   78922 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-13057/.minikube/key.pem, removing ...
	I0806 08:34:56.343929   78922 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-13057/.minikube/key.pem
	I0806 08:34:56.343948   78922 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19370-13057/.minikube/key.pem (1675 bytes)
	I0806 08:34:56.344044   78922 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19370-13057/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca-key.pem org=jenkins.no-preload-703340 san=[127.0.0.1 192.168.72.126 localhost minikube no-preload-703340]
	I0806 08:34:56.441531   78922 provision.go:177] copyRemoteCerts
	I0806 08:34:56.441588   78922 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0806 08:34:56.441610   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHHostname
	I0806 08:34:56.444202   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:56.444549   78922 main.go:141] libmachine: (no-preload-703340) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fa:d3", ip: ""} in network mk-no-preload-703340: {Iface:virbr4 ExpiryTime:2024-08-06 09:34:49 +0000 UTC Type:0 Mac:52:54:00:a1:fa:d3 Iaid: IPaddr:192.168.72.126 Prefix:24 Hostname:no-preload-703340 Clientid:01:52:54:00:a1:fa:d3}
	I0806 08:34:56.444574   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined IP address 192.168.72.126 and MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:56.444786   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHPort
	I0806 08:34:56.444977   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHKeyPath
	I0806 08:34:56.445125   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHUsername
	I0806 08:34:56.445327   78922 sshutil.go:53] new ssh client: &{IP:192.168.72.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/no-preload-703340/id_rsa Username:docker}
	I0806 08:34:56.530980   78922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0806 08:34:56.557880   78922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0806 08:34:56.584108   78922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0806 08:34:56.608415   78922 provision.go:87] duration metric: took 270.961494ms to configureAuth
	I0806 08:34:56.608449   78922 buildroot.go:189] setting minikube options for container-runtime
	I0806 08:34:56.608675   78922 config.go:182] Loaded profile config "no-preload-703340": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-rc.0
	I0806 08:34:56.608794   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHHostname
	I0806 08:34:56.611937   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:56.612383   78922 main.go:141] libmachine: (no-preload-703340) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fa:d3", ip: ""} in network mk-no-preload-703340: {Iface:virbr4 ExpiryTime:2024-08-06 09:34:49 +0000 UTC Type:0 Mac:52:54:00:a1:fa:d3 Iaid: IPaddr:192.168.72.126 Prefix:24 Hostname:no-preload-703340 Clientid:01:52:54:00:a1:fa:d3}
	I0806 08:34:56.612409   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined IP address 192.168.72.126 and MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:56.612585   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHPort
	I0806 08:34:56.612774   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHKeyPath
	I0806 08:34:56.612953   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHKeyPath
	I0806 08:34:56.613134   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHUsername
	I0806 08:34:56.613282   78922 main.go:141] libmachine: Using SSH client type: native
	I0806 08:34:56.613436   78922 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.126 22 <nil> <nil>}
	I0806 08:34:56.613450   78922 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0806 08:34:56.904446   78922 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0806 08:34:56.904474   78922 machine.go:97] duration metric: took 931.572346ms to provisionDockerMachine
	I0806 08:34:56.904484   78922 start.go:293] postStartSetup for "no-preload-703340" (driver="kvm2")
	I0806 08:34:56.904496   78922 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0806 08:34:56.904517   78922 main.go:141] libmachine: (no-preload-703340) Calling .DriverName
	I0806 08:34:56.905043   78922 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0806 08:34:56.905076   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHHostname
	I0806 08:34:56.907707   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:56.908101   78922 main.go:141] libmachine: (no-preload-703340) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fa:d3", ip: ""} in network mk-no-preload-703340: {Iface:virbr4 ExpiryTime:2024-08-06 09:34:49 +0000 UTC Type:0 Mac:52:54:00:a1:fa:d3 Iaid: IPaddr:192.168.72.126 Prefix:24 Hostname:no-preload-703340 Clientid:01:52:54:00:a1:fa:d3}
	I0806 08:34:56.908124   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined IP address 192.168.72.126 and MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:56.908356   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHPort
	I0806 08:34:56.908586   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHKeyPath
	I0806 08:34:56.908753   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHUsername
	I0806 08:34:56.908924   78922 sshutil.go:53] new ssh client: &{IP:192.168.72.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/no-preload-703340/id_rsa Username:docker}
	I0806 08:34:56.995555   78922 ssh_runner.go:195] Run: cat /etc/os-release
	I0806 08:34:56.999971   78922 info.go:137] Remote host: Buildroot 2023.02.9
	I0806 08:34:57.000004   78922 filesync.go:126] Scanning /home/jenkins/minikube-integration/19370-13057/.minikube/addons for local assets ...
	I0806 08:34:57.000079   78922 filesync.go:126] Scanning /home/jenkins/minikube-integration/19370-13057/.minikube/files for local assets ...
	I0806 08:34:57.000150   78922 filesync.go:149] local asset: /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem -> 202462.pem in /etc/ssl/certs
	I0806 08:34:57.000243   78922 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0806 08:34:57.010178   78922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem --> /etc/ssl/certs/202462.pem (1708 bytes)
	I0806 08:34:57.035704   78922 start.go:296] duration metric: took 131.206204ms for postStartSetup
	I0806 08:34:57.035751   78922 fix.go:56] duration metric: took 19.682153204s for fixHost
	I0806 08:34:57.035775   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHHostname
	I0806 08:34:57.038230   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:57.038565   78922 main.go:141] libmachine: (no-preload-703340) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fa:d3", ip: ""} in network mk-no-preload-703340: {Iface:virbr4 ExpiryTime:2024-08-06 09:34:49 +0000 UTC Type:0 Mac:52:54:00:a1:fa:d3 Iaid: IPaddr:192.168.72.126 Prefix:24 Hostname:no-preload-703340 Clientid:01:52:54:00:a1:fa:d3}
	I0806 08:34:57.038596   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined IP address 192.168.72.126 and MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:57.038809   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHPort
	I0806 08:34:57.039016   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHKeyPath
	I0806 08:34:57.039197   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHKeyPath
	I0806 08:34:57.039308   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHUsername
	I0806 08:34:57.039486   78922 main.go:141] libmachine: Using SSH client type: native
	I0806 08:34:57.039697   78922 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.126 22 <nil> <nil>}
	I0806 08:34:57.039712   78922 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0806 08:34:57.149272   78922 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722933297.121300330
	
	I0806 08:34:57.149291   78922 fix.go:216] guest clock: 1722933297.121300330
	I0806 08:34:57.149298   78922 fix.go:229] Guest: 2024-08-06 08:34:57.12130033 +0000 UTC Remote: 2024-08-06 08:34:57.035756139 +0000 UTC m=+360.415626547 (delta=85.544191ms)
	I0806 08:34:57.149319   78922 fix.go:200] guest clock delta is within tolerance: 85.544191ms
	I0806 08:34:57.149326   78922 start.go:83] releasing machines lock for "no-preload-703340", held for 19.795759576s
	I0806 08:34:57.149348   78922 main.go:141] libmachine: (no-preload-703340) Calling .DriverName
	I0806 08:34:57.149624   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetIP
	I0806 08:34:57.152286   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:57.152699   78922 main.go:141] libmachine: (no-preload-703340) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fa:d3", ip: ""} in network mk-no-preload-703340: {Iface:virbr4 ExpiryTime:2024-08-06 09:34:49 +0000 UTC Type:0 Mac:52:54:00:a1:fa:d3 Iaid: IPaddr:192.168.72.126 Prefix:24 Hostname:no-preload-703340 Clientid:01:52:54:00:a1:fa:d3}
	I0806 08:34:57.152731   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined IP address 192.168.72.126 and MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:57.152905   78922 main.go:141] libmachine: (no-preload-703340) Calling .DriverName
	I0806 08:34:57.153488   78922 main.go:141] libmachine: (no-preload-703340) Calling .DriverName
	I0806 08:34:57.153680   78922 main.go:141] libmachine: (no-preload-703340) Calling .DriverName
	I0806 08:34:57.153764   78922 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0806 08:34:57.153803   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHHostname
	I0806 08:34:57.154068   78922 ssh_runner.go:195] Run: cat /version.json
	I0806 08:34:57.154090   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHHostname
	I0806 08:34:57.156678   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:57.157026   78922 main.go:141] libmachine: (no-preload-703340) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fa:d3", ip: ""} in network mk-no-preload-703340: {Iface:virbr4 ExpiryTime:2024-08-06 09:34:49 +0000 UTC Type:0 Mac:52:54:00:a1:fa:d3 Iaid: IPaddr:192.168.72.126 Prefix:24 Hostname:no-preload-703340 Clientid:01:52:54:00:a1:fa:d3}
	I0806 08:34:57.157053   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined IP address 192.168.72.126 and MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:57.157123   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:57.157327   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHPort
	I0806 08:34:57.157492   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHKeyPath
	I0806 08:34:57.157580   78922 main.go:141] libmachine: (no-preload-703340) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fa:d3", ip: ""} in network mk-no-preload-703340: {Iface:virbr4 ExpiryTime:2024-08-06 09:34:49 +0000 UTC Type:0 Mac:52:54:00:a1:fa:d3 Iaid: IPaddr:192.168.72.126 Prefix:24 Hostname:no-preload-703340 Clientid:01:52:54:00:a1:fa:d3}
	I0806 08:34:57.157621   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHUsername
	I0806 08:34:57.157602   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined IP address 192.168.72.126 and MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:57.157701   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHPort
	I0806 08:34:57.157919   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHKeyPath
	I0806 08:34:57.157933   78922 sshutil.go:53] new ssh client: &{IP:192.168.72.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/no-preload-703340/id_rsa Username:docker}
	I0806 08:34:57.158089   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHUsername
	I0806 08:34:57.158243   78922 sshutil.go:53] new ssh client: &{IP:192.168.72.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/no-preload-703340/id_rsa Username:docker}
	I0806 08:34:57.237574   78922 ssh_runner.go:195] Run: systemctl --version
	I0806 08:34:57.260498   78922 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0806 08:34:57.403619   78922 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0806 08:34:57.410198   78922 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0806 08:34:57.410267   78922 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0806 08:34:57.426321   78922 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0806 08:34:57.426345   78922 start.go:495] detecting cgroup driver to use...
	I0806 08:34:57.426414   78922 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0806 08:34:57.442340   78922 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0806 08:34:57.458008   78922 docker.go:217] disabling cri-docker service (if available) ...
	I0806 08:34:57.458061   78922 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0806 08:34:57.471516   78922 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0806 08:34:57.484996   78922 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0806 08:34:57.604605   78922 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0806 08:34:57.753590   78922 docker.go:233] disabling docker service ...
	I0806 08:34:57.753651   78922 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0806 08:34:57.768534   78922 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0806 08:34:57.782187   78922 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0806 08:34:57.936046   78922 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0806 08:34:58.076402   78922 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0806 08:34:58.091386   78922 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0806 08:34:58.112810   78922 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0806 08:34:58.112873   78922 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:34:58.123376   78922 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0806 08:34:58.123438   78922 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:34:58.133981   78922 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:34:58.144891   78922 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:34:58.156276   78922 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0806 08:34:58.166927   78922 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:34:58.176956   78922 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 08:34:58.193969   78922 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
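The sed edits above modify CRI-O's drop-in config rather than its main configuration file. Assuming the standard CRI-O TOML layout (the section headers below are an assumption; the key/value pairs mirror the commands in the log), the resulting fragment of /etc/crio/crio.conf.d/02-crio.conf would look roughly like:

	# hypothetical excerpt of /etc/crio/crio.conf.d/02-crio.conf after the edits above
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]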
	I0806 08:34:58.204122   78922 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0806 08:34:58.213745   78922 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0806 08:34:58.213859   78922 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0806 08:34:58.226556   78922 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
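The failed sysctl probe followed by modprobe above is the usual bridge-netfilter ordering issue: /proc/sys/net/bridge/ only exists once the br_netfilter kernel module is loaded, so the first check can fail harmlessly and then succeed after the module load. A minimal sketch of the prerequisite setup (equivalent to the commands in the log; the verification line is illustrative):

	sudo modprobe br_netfilter                           # creates /proc/sys/net/bridge/*
	sysctl net.bridge.bridge-nf-call-iptables            # now resolvable
	sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'  # pod networking needs IP forwarding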
	I0806 08:34:58.236413   78922 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 08:34:58.359776   78922 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0806 08:34:58.505947   78922 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0806 08:34:58.506016   78922 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0806 08:34:58.511591   78922 start.go:563] Will wait 60s for crictl version
	I0806 08:34:58.511649   78922 ssh_runner.go:195] Run: which crictl
	I0806 08:34:58.516259   78922 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0806 08:34:58.568580   78922 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0806 08:34:58.568658   78922 ssh_runner.go:195] Run: crio --version
	I0806 08:34:58.607964   78922 ssh_runner.go:195] Run: crio --version
	I0806 08:34:58.642564   78922 out.go:177] * Preparing Kubernetes v1.31.0-rc.0 on CRI-O 1.29.1 ...
	I0806 08:34:54.018818   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:54.518987   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:55.018584   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:55.518848   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:56.018094   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:56.518052   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:57.018765   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:57.518128   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:58.018977   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:58.518233   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:55.789183   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:58.288863   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:54.612520   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:56.614634   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:59.115966   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:34:58.643802   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetIP
	I0806 08:34:58.646395   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:58.646714   78922 main.go:141] libmachine: (no-preload-703340) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fa:d3", ip: ""} in network mk-no-preload-703340: {Iface:virbr4 ExpiryTime:2024-08-06 09:34:49 +0000 UTC Type:0 Mac:52:54:00:a1:fa:d3 Iaid: IPaddr:192.168.72.126 Prefix:24 Hostname:no-preload-703340 Clientid:01:52:54:00:a1:fa:d3}
	I0806 08:34:58.646744   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined IP address 192.168.72.126 and MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:34:58.646948   78922 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0806 08:34:58.651136   78922 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0806 08:34:58.664347   78922 kubeadm.go:883] updating cluster {Name:no-preload-703340 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:no-preload-703340 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.126 Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0806 08:34:58.664479   78922 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime crio
	I0806 08:34:58.664511   78922 ssh_runner.go:195] Run: sudo crictl images --output json
	I0806 08:34:58.705043   78922 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-rc.0". assuming images are not preloaded.
	I0806 08:34:58.705080   78922 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0-rc.0 registry.k8s.io/kube-controller-manager:v1.31.0-rc.0 registry.k8s.io/kube-scheduler:v1.31.0-rc.0 registry.k8s.io/kube-proxy:v1.31.0-rc.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0806 08:34:58.705157   78922 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0806 08:34:58.705170   78922 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0806 08:34:58.705177   78922 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-rc.0
	I0806 08:34:58.705245   78922 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-rc.0
	I0806 08:34:58.705272   78922 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-rc.0
	I0806 08:34:58.705281   78922 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0806 08:34:58.705160   78922 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-rc.0
	I0806 08:34:58.705469   78922 image.go:134] retrieving image: registry.k8s.io/pause:3.10
	I0806 08:34:58.706962   78922 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-rc.0
	I0806 08:34:58.706993   78922 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0806 08:34:58.706986   78922 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-rc.0
	I0806 08:34:58.707018   78922 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-rc.0
	I0806 08:34:58.706968   78922 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0806 08:34:58.706989   78922 image.go:177] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0806 08:34:58.707066   78922 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0806 08:34:58.707315   78922 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-rc.0
	I0806 08:34:58.835957   78922 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0-rc.0
	I0806 08:34:58.839572   78922 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0806 08:34:58.846955   78922 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0806 08:34:58.865707   78922 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0-rc.0
	I0806 08:34:58.872729   78922 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0806 08:34:58.878484   78922 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0-rc.0
	I0806 08:34:58.898935   78922 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0-rc.0
	I0806 08:34:58.919621   78922 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0-rc.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0-rc.0" does not exist at hash "fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c" in container runtime
	I0806 08:34:58.919654   78922 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0-rc.0
	I0806 08:34:58.919691   78922 ssh_runner.go:195] Run: which crictl
	I0806 08:34:58.949493   78922 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0806 08:34:58.949544   78922 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0806 08:34:58.949597   78922 ssh_runner.go:195] Run: which crictl
	I0806 08:34:58.971429   78922 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0806 08:34:58.971476   78922 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0806 08:34:58.971522   78922 ssh_runner.go:195] Run: which crictl
	I0806 08:34:59.015321   78922 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0-rc.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0-rc.0" does not exist at hash "0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c" in container runtime
	I0806 08:34:59.015359   78922 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0-rc.0
	I0806 08:34:59.015397   78922 ssh_runner.go:195] Run: which crictl
	I0806 08:34:59.107718   78922 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0-rc.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0-rc.0" does not exist at hash "41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318" in container runtime
	I0806 08:34:59.107761   78922 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0-rc.0
	I0806 08:34:59.107797   78922 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0-rc.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0-rc.0" does not exist at hash "c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0" in container runtime
	I0806 08:34:59.107867   78922 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0-rc.0
	I0806 08:34:59.107904   78922 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0806 08:34:59.107987   78922 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0-rc.0
	I0806 08:34:59.107809   78922 ssh_runner.go:195] Run: which crictl
	I0806 08:34:59.108037   78922 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-rc.0
	I0806 08:34:59.107994   78922 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0806 08:34:59.108037   78922 ssh_runner.go:195] Run: which crictl
	I0806 08:34:59.120323   78922 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-rc.0
	I0806 08:34:59.218864   78922 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-rc.0
	I0806 08:34:59.218982   78922 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.31.0-rc.0
	I0806 08:34:59.222696   78922 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0-rc.0
	I0806 08:34:59.222735   78922 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0806 08:34:59.222794   78922 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-rc.0
	I0806 08:34:59.222820   78922 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0806 08:34:59.222854   78922 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0806 08:34:59.222882   78922 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.31.0-rc.0
	I0806 08:34:59.222900   78922 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-rc.0
	I0806 08:34:59.222933   78922 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.15-0
	I0806 08:34:59.222964   78922 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.31.0-rc.0
	I0806 08:34:59.227260   78922 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0-rc.0 (exists)
	I0806 08:34:59.227280   78922 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0-rc.0
	I0806 08:34:59.227319   78922 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-rc.0
	I0806 08:34:59.268220   78922 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0-rc.0 (exists)
	I0806 08:34:59.268240   78922 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-rc.0
	I0806 08:34:59.268291   78922 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0806 08:34:59.268325   78922 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0806 08:34:59.268332   78922 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.31.0-rc.0
	I0806 08:34:59.268349   78922 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0-rc.0 (exists)
	I0806 08:34:59.613734   78922 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0806 08:35:01.623115   78922 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-rc.0: (2.395772878s)
	I0806 08:35:01.623140   78922 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.31.0-rc.0: (2.354795538s)
	I0806 08:35:01.623152   78922 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-rc.0 from cache
	I0806 08:35:01.623161   78922 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0-rc.0 (exists)
	I0806 08:35:01.623168   78922 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0-rc.0
	I0806 08:35:01.623217   78922 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-rc.0
	I0806 08:35:01.623218   78922 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.009459709s)
	I0806 08:35:01.623310   78922 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0806 08:35:01.623340   78922 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0806 08:35:01.623366   78922 ssh_runner.go:195] Run: which crictl
	I0806 08:34:59.019117   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:34:59.518121   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:00.018102   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:00.518402   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:01.019125   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:01.518141   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:02.018063   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:02.518834   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:03.018074   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:03.518904   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:00.789181   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:02.790704   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:01.612449   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:03.655402   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:03.781865   78922 ssh_runner.go:235] Completed: which crictl: (2.158478864s)
	I0806 08:35:03.781942   78922 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0806 08:35:03.782003   78922 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-rc.0: (2.158736373s)
	I0806 08:35:03.782023   78922 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-rc.0 from cache
	I0806 08:35:03.782042   78922 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0-rc.0
	I0806 08:35:03.782112   78922 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-rc.0
	I0806 08:35:05.247976   78922 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.466007397s)
	I0806 08:35:05.248028   78922 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-rc.0: (1.465889339s)
	I0806 08:35:05.248049   78922 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0806 08:35:05.248054   78922 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-rc.0 from cache
	I0806 08:35:05.248065   78922 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0806 08:35:05.248117   78922 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0806 08:35:05.248166   78922 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0806 08:35:04.018047   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:04.518891   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:05.018447   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:05.518506   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:06.019133   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:06.518070   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:07.018362   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:07.518320   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:08.019034   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:08.518740   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:05.288137   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:07.288969   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:06.113473   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:08.612266   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:09.135551   78922 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.887407478s)
	I0806 08:35:09.135591   78922 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0806 08:35:09.135592   78922 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (3.887405091s)
	I0806 08:35:09.135605   78922 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0806 08:35:09.135618   78922 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0806 08:35:09.135645   78922 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0806 08:35:11.131984   78922 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.996319459s)
	I0806 08:35:11.132013   78922 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0806 08:35:11.132024   78922 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0-rc.0
	I0806 08:35:11.132070   78922 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-rc.0
	I0806 08:35:09.018349   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:09.518189   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:10.018824   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:10.518260   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:11.018914   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:11.518709   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:12.018349   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:12.518719   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:13.019022   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:13.518318   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:09.787576   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:11.789165   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:10.614185   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:13.112391   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:13.395512   78922 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-rc.0: (2.263417409s)
	I0806 08:35:13.395548   78922 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-rc.0 from cache
	I0806 08:35:13.395560   78922 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0806 08:35:13.395608   78922 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0806 08:35:14.150250   78922 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19370-13057/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0806 08:35:14.150301   78922 cache_images.go:123] Successfully loaded all cached images
	I0806 08:35:14.150309   78922 cache_images.go:92] duration metric: took 15.445214072s to LoadCachedImages
	I0806 08:35:14.150322   78922 kubeadm.go:934] updating node { 192.168.72.126 8443 v1.31.0-rc.0 crio true true} ...
	I0806 08:35:14.150540   78922 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-rc.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-703340 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.126
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-rc.0 ClusterName:no-preload-703340 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0806 08:35:14.150640   78922 ssh_runner.go:195] Run: crio config
	I0806 08:35:14.210763   78922 cni.go:84] Creating CNI manager for ""
	I0806 08:35:14.210794   78922 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0806 08:35:14.210806   78922 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0806 08:35:14.210842   78922 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.126 APIServerPort:8443 KubernetesVersion:v1.31.0-rc.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-703340 NodeName:no-preload-703340 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.126"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.126 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticP
odPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0806 08:35:14.210979   78922 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.126
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-703340"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.126
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.126"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-rc.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0806 08:35:14.211036   78922 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-rc.0
	I0806 08:35:14.222106   78922 binaries.go:44] Found k8s binaries, skipping transfer
	I0806 08:35:14.222189   78922 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0806 08:35:14.231944   78922 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I0806 08:35:14.250491   78922 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0806 08:35:14.268955   78922 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2166 bytes)
	I0806 08:35:14.289661   78922 ssh_runner.go:195] Run: grep 192.168.72.126	control-plane.minikube.internal$ /etc/hosts
	I0806 08:35:14.294005   78922 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.126	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0806 08:35:14.307700   78922 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 08:35:14.451462   78922 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0806 08:35:14.478155   78922 certs.go:68] Setting up /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/no-preload-703340 for IP: 192.168.72.126
	I0806 08:35:14.478177   78922 certs.go:194] generating shared ca certs ...
	I0806 08:35:14.478191   78922 certs.go:226] acquiring lock for ca certs: {Name:mkf5c5aed91f4108054d9f2c742cad66c86afbed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 08:35:14.478365   78922 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19370-13057/.minikube/ca.key
	I0806 08:35:14.478420   78922 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.key
	I0806 08:35:14.478434   78922 certs.go:256] generating profile certs ...
	I0806 08:35:14.478529   78922 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/no-preload-703340/client.key
	I0806 08:35:14.478615   78922 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/no-preload-703340/apiserver.key.0b170e14
	I0806 08:35:14.478665   78922 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/no-preload-703340/proxy-client.key
	I0806 08:35:14.478822   78922 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/20246.pem (1338 bytes)
	W0806 08:35:14.478871   78922 certs.go:480] ignoring /home/jenkins/minikube-integration/19370-13057/.minikube/certs/20246_empty.pem, impossibly tiny 0 bytes
	I0806 08:35:14.478885   78922 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca-key.pem (1675 bytes)
	I0806 08:35:14.478941   78922 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/ca.pem (1078 bytes)
	I0806 08:35:14.478978   78922 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/cert.pem (1123 bytes)
	I0806 08:35:14.479014   78922 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/certs/key.pem (1675 bytes)
	I0806 08:35:14.479078   78922 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem (1708 bytes)
	I0806 08:35:14.479773   78922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0806 08:35:14.527838   78922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0806 08:35:14.579788   78922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0806 08:35:14.614739   78922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0806 08:35:14.657559   78922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/no-preload-703340/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0806 08:35:14.699354   78922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/no-preload-703340/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0806 08:35:14.724875   78922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/no-preload-703340/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0806 08:35:14.751020   78922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/no-preload-703340/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0806 08:35:14.778300   78922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/ssl/certs/202462.pem --> /usr/share/ca-certificates/202462.pem (1708 bytes)
	I0806 08:35:14.805018   78922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0806 08:35:14.830791   78922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-13057/.minikube/certs/20246.pem --> /usr/share/ca-certificates/20246.pem (1338 bytes)
	I0806 08:35:14.856120   78922 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0806 08:35:14.874909   78922 ssh_runner.go:195] Run: openssl version
	I0806 08:35:14.880931   78922 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202462.pem && ln -fs /usr/share/ca-certificates/202462.pem /etc/ssl/certs/202462.pem"
	I0806 08:35:14.892741   78922 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202462.pem
	I0806 08:35:14.897430   78922 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  6 07:17 /usr/share/ca-certificates/202462.pem
	I0806 08:35:14.897489   78922 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202462.pem
	I0806 08:35:14.903324   78922 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202462.pem /etc/ssl/certs/3ec20f2e.0"
	I0806 08:35:14.914505   78922 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0806 08:35:14.925465   78922 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0806 08:35:14.930490   78922 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  6 07:07 /usr/share/ca-certificates/minikubeCA.pem
	I0806 08:35:14.930559   78922 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0806 08:35:14.936384   78922 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0806 08:35:14.947836   78922 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20246.pem && ln -fs /usr/share/ca-certificates/20246.pem /etc/ssl/certs/20246.pem"
	I0806 08:35:14.959376   78922 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20246.pem
	I0806 08:35:14.964118   78922 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  6 07:17 /usr/share/ca-certificates/20246.pem
	I0806 08:35:14.964168   78922 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20246.pem
	I0806 08:35:14.970266   78922 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20246.pem /etc/ssl/certs/51391683.0"
	I0806 08:35:14.982244   78922 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0806 08:35:14.987367   78922 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0806 08:35:14.993508   78922 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0806 08:35:14.999400   78922 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0806 08:35:15.005751   78922 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0806 08:35:15.011588   78922 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0806 08:35:15.017766   78922 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0806 08:35:15.023853   78922 kubeadm.go:392] StartCluster: {Name:no-preload-703340 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.0-rc.0 ClusterName:no-preload-703340 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.126 Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s
Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 08:35:15.024060   78922 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0806 08:35:15.024113   78922 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0806 08:35:15.072318   78922 cri.go:89] found id: ""
	I0806 08:35:15.072415   78922 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0806 08:35:15.083158   78922 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0806 08:35:15.083182   78922 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0806 08:35:15.083225   78922 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0806 08:35:15.093301   78922 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0806 08:35:15.094369   78922 kubeconfig.go:125] found "no-preload-703340" server: "https://192.168.72.126:8443"
	I0806 08:35:15.096545   78922 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0806 08:35:15.106190   78922 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.126
	I0806 08:35:15.106226   78922 kubeadm.go:1160] stopping kube-system containers ...
	I0806 08:35:15.106237   78922 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0806 08:35:15.106288   78922 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0806 08:35:15.145974   78922 cri.go:89] found id: ""
	I0806 08:35:15.146060   78922 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0806 08:35:15.163821   78922 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0806 08:35:15.174473   78922 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0806 08:35:15.174498   78922 kubeadm.go:157] found existing configuration files:
	
	I0806 08:35:15.174562   78922 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0806 08:35:15.185227   78922 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0806 08:35:15.185309   78922 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0806 08:35:15.195614   78922 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0806 08:35:15.205197   78922 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0806 08:35:15.205261   78922 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0806 08:35:15.215511   78922 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0806 08:35:15.225354   78922 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0806 08:35:15.225412   78922 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0806 08:35:15.234912   78922 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0806 08:35:15.244638   78922 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0806 08:35:15.244703   78922 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0806 08:35:15.255080   78922 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0806 08:35:15.265861   78922 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0806 08:35:15.399192   78922 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0806 08:35:16.182816   78922 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0806 08:35:16.397386   78922 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0806 08:35:16.463980   78922 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0806 08:35:16.550229   78922 api_server.go:52] waiting for apiserver process to appear ...
	I0806 08:35:16.550347   78922 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:14.018930   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:14.518251   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:15.018981   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:15.518764   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:16.018114   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:16.518157   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:17.018124   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:17.518604   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:18.018742   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:18.518755   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:14.290050   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:16.788472   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:18.788526   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:15.112664   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:17.113214   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:17.050447   78922 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:17.551220   78922 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:17.603538   78922 api_server.go:72] duration metric: took 1.053308623s to wait for apiserver process to appear ...
	I0806 08:35:17.603574   78922 api_server.go:88] waiting for apiserver healthz status ...
	I0806 08:35:17.603595   78922 api_server.go:253] Checking apiserver healthz at https://192.168.72.126:8443/healthz ...
	I0806 08:35:17.604064   78922 api_server.go:269] stopped: https://192.168.72.126:8443/healthz: Get "https://192.168.72.126:8443/healthz": dial tcp 192.168.72.126:8443: connect: connection refused
	I0806 08:35:18.104696   78922 api_server.go:253] Checking apiserver healthz at https://192.168.72.126:8443/healthz ...
	I0806 08:35:20.458841   78922 api_server.go:279] https://192.168.72.126:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0806 08:35:20.458876   78922 api_server.go:103] status: https://192.168.72.126:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0806 08:35:20.458893   78922 api_server.go:253] Checking apiserver healthz at https://192.168.72.126:8443/healthz ...
	I0806 08:35:20.495570   78922 api_server.go:279] https://192.168.72.126:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0806 08:35:20.495605   78922 api_server.go:103] status: https://192.168.72.126:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0806 08:35:20.603728   78922 api_server.go:253] Checking apiserver healthz at https://192.168.72.126:8443/healthz ...
	I0806 08:35:20.628351   78922 api_server.go:279] https://192.168.72.126:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0806 08:35:20.628409   78922 api_server.go:103] status: https://192.168.72.126:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0806 08:35:21.103717   78922 api_server.go:253] Checking apiserver healthz at https://192.168.72.126:8443/healthz ...
	I0806 08:35:21.109031   78922 api_server.go:279] https://192.168.72.126:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0806 08:35:21.109064   78922 api_server.go:103] status: https://192.168.72.126:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0806 08:35:21.604400   78922 api_server.go:253] Checking apiserver healthz at https://192.168.72.126:8443/healthz ...
	I0806 08:35:21.613667   78922 api_server.go:279] https://192.168.72.126:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0806 08:35:21.613706   78922 api_server.go:103] status: https://192.168.72.126:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0806 08:35:22.103921   78922 api_server.go:253] Checking apiserver healthz at https://192.168.72.126:8443/healthz ...
	I0806 08:35:22.111933   78922 api_server.go:279] https://192.168.72.126:8443/healthz returned 200:
	ok
	I0806 08:35:22.118406   78922 api_server.go:141] control plane version: v1.31.0-rc.0
	I0806 08:35:22.118430   78922 api_server.go:131] duration metric: took 4.514848902s to wait for apiserver health ...
	I0806 08:35:22.118438   78922 cni.go:84] Creating CNI manager for ""
	I0806 08:35:22.118444   78922 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0806 08:35:22.119689   78922 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0806 08:35:19.018302   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:19.518693   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:20.018804   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:20.518878   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:21.018974   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:21.518747   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:22.018530   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:22.519082   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:23.019088   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:23.518932   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:20.788766   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:23.288382   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:19.614878   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:22.113841   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:22.121321   78922 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0806 08:35:22.132760   78922 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0806 08:35:22.151770   78922 system_pods.go:43] waiting for kube-system pods to appear ...
	I0806 08:35:22.161775   78922 system_pods.go:59] 8 kube-system pods found
	I0806 08:35:22.161806   78922 system_pods.go:61] "coredns-6f6b679f8f-ft9vg" [0f9ea981-764b-4777-bf78-8798570b20d8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0806 08:35:22.161818   78922 system_pods.go:61] "etcd-no-preload-703340" [6adb80b2-0d33-4d53-a868-f83226cc7c26] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0806 08:35:22.161826   78922 system_pods.go:61] "kube-apiserver-no-preload-703340" [4a415305-2f94-42c5-901d-49f0b0114cf6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0806 08:35:22.161832   78922 system_pods.go:61] "kube-controller-manager-no-preload-703340" [0351f8f4-5046-4068-ada3-103394ee9ae2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0806 08:35:22.161837   78922 system_pods.go:61] "kube-proxy-7t2sd" [27f3aa37-eca2-450b-a98d-b3246a7e48ac] Running
	I0806 08:35:22.161841   78922 system_pods.go:61] "kube-scheduler-no-preload-703340" [ad4f6239-f5e1-4a76-91ea-8da5478e8992] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0806 08:35:22.161845   78922 system_pods.go:61] "metrics-server-6867b74b74-qz4vx" [800676b3-4940-4ae2-ad37-7adbd67ea8fa] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0806 08:35:22.161849   78922 system_pods.go:61] "storage-provisioner" [8fa414bf-3923-4c20-acd7-a48fba6f4044] Running
	I0806 08:35:22.161855   78922 system_pods.go:74] duration metric: took 10.062633ms to wait for pod list to return data ...
	I0806 08:35:22.161865   78922 node_conditions.go:102] verifying NodePressure condition ...
	I0806 08:35:22.165604   78922 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0806 08:35:22.165626   78922 node_conditions.go:123] node cpu capacity is 2
	I0806 08:35:22.165637   78922 node_conditions.go:105] duration metric: took 3.767899ms to run NodePressure ...
	I0806 08:35:22.165656   78922 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0806 08:35:22.445769   78922 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0806 08:35:22.451056   78922 kubeadm.go:739] kubelet initialised
	I0806 08:35:22.451078   78922 kubeadm.go:740] duration metric: took 5.279835ms waiting for restarted kubelet to initialise ...
	I0806 08:35:22.451085   78922 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0806 08:35:22.458199   78922 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6f6b679f8f-ft9vg" in "kube-system" namespace to be "Ready" ...
	I0806 08:35:22.463913   78922 pod_ready.go:97] node "no-preload-703340" hosting pod "coredns-6f6b679f8f-ft9vg" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-703340" has status "Ready":"False"
	I0806 08:35:22.463936   78922 pod_ready.go:81] duration metric: took 5.712258ms for pod "coredns-6f6b679f8f-ft9vg" in "kube-system" namespace to be "Ready" ...
	E0806 08:35:22.463945   78922 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-703340" hosting pod "coredns-6f6b679f8f-ft9vg" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-703340" has status "Ready":"False"
	I0806 08:35:22.463952   78922 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-703340" in "kube-system" namespace to be "Ready" ...
	I0806 08:35:22.471896   78922 pod_ready.go:97] node "no-preload-703340" hosting pod "etcd-no-preload-703340" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-703340" has status "Ready":"False"
	I0806 08:35:22.471921   78922 pod_ready.go:81] duration metric: took 7.962236ms for pod "etcd-no-preload-703340" in "kube-system" namespace to be "Ready" ...
	E0806 08:35:22.471930   78922 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-703340" hosting pod "etcd-no-preload-703340" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-703340" has status "Ready":"False"
	I0806 08:35:22.471936   78922 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-703340" in "kube-system" namespace to be "Ready" ...
	I0806 08:35:22.478365   78922 pod_ready.go:97] node "no-preload-703340" hosting pod "kube-apiserver-no-preload-703340" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-703340" has status "Ready":"False"
	I0806 08:35:22.478394   78922 pod_ready.go:81] duration metric: took 6.451038ms for pod "kube-apiserver-no-preload-703340" in "kube-system" namespace to be "Ready" ...
	E0806 08:35:22.478405   78922 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-703340" hosting pod "kube-apiserver-no-preload-703340" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-703340" has status "Ready":"False"
	I0806 08:35:22.478414   78922 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-703340" in "kube-system" namespace to be "Ready" ...
	I0806 08:35:22.559022   78922 pod_ready.go:97] node "no-preload-703340" hosting pod "kube-controller-manager-no-preload-703340" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-703340" has status "Ready":"False"
	I0806 08:35:22.559058   78922 pod_ready.go:81] duration metric: took 80.632686ms for pod "kube-controller-manager-no-preload-703340" in "kube-system" namespace to be "Ready" ...
	E0806 08:35:22.559071   78922 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-703340" hosting pod "kube-controller-manager-no-preload-703340" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-703340" has status "Ready":"False"
	I0806 08:35:22.559080   78922 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-7t2sd" in "kube-system" namespace to be "Ready" ...
	I0806 08:35:22.955121   78922 pod_ready.go:92] pod "kube-proxy-7t2sd" in "kube-system" namespace has status "Ready":"True"
	I0806 08:35:22.955142   78922 pod_ready.go:81] duration metric: took 396.053517ms for pod "kube-proxy-7t2sd" in "kube-system" namespace to be "Ready" ...
	I0806 08:35:22.955152   78922 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-703340" in "kube-system" namespace to be "Ready" ...
	I0806 08:35:24.961266   78922 pod_ready.go:102] pod "kube-scheduler-no-preload-703340" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:24.018764   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:24.519010   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:25.019090   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:25.518260   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:26.018784   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:26.518903   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:27.019011   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:27.518740   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:28.018299   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:28.518391   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:25.787771   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:27.788719   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:24.612790   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:26.612992   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:29.112518   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:26.962028   78922 pod_ready.go:102] pod "kube-scheduler-no-preload-703340" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:29.461941   78922 pod_ready.go:102] pod "kube-scheduler-no-preload-703340" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:29.018449   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:29.518446   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:30.018179   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:30.518270   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:31.018913   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:31.518140   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:32.018621   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:32.518573   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:33.018512   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:33.518604   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:29.790220   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:32.287668   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:31.112680   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:33.612484   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:31.961716   78922 pod_ready.go:102] pod "kube-scheduler-no-preload-703340" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:34.461368   78922 pod_ready.go:102] pod "kube-scheduler-no-preload-703340" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:34.961460   78922 pod_ready.go:92] pod "kube-scheduler-no-preload-703340" in "kube-system" namespace has status "Ready":"True"
	I0806 08:35:34.961483   78922 pod_ready.go:81] duration metric: took 12.006324097s for pod "kube-scheduler-no-preload-703340" in "kube-system" namespace to be "Ready" ...
	I0806 08:35:34.961495   78922 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace to be "Ready" ...
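For context on the pod_ready.go waits in the lines above: they repeatedly check whether a pod's Ready condition has become True before a timeout expires. Below is a minimal client-go sketch of that kind of poll, not minikube's actual implementation; the kubeconfig path, pod name, and timeout are taken from the log purely for illustration and are assumptions about where such a check would run.

// Hypothetical sketch of a Ready-condition poll similar to the waits logged above.
// Not minikube's code; kubeconfig path, pod name, and timeout are illustrative.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Path as used by the log's kubectl invocations; adjust for your environment.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(4 * time.Minute) // mirrors the "up to 4m0s" wait above
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
			"metrics-server-6867b74b74-qz4vx", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to be Ready")
}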
	I0806 08:35:34.018132   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:34.518476   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:35.018797   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:35.518507   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:36.018976   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:36.518062   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:37.018463   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:37.518648   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:38.018861   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:38.518236   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:34.288032   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:36.787530   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:38.789235   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:35.612530   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:38.114833   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:36.967646   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:38.969106   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:41.469056   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:39.018424   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:39.518065   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:40.018381   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:40.518979   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:41.018310   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:41.518277   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:42.018881   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:42.518157   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:43.018166   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:43.518740   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:41.287534   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:43.288666   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:40.115187   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:42.611895   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:43.967648   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:45.968277   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:44.018075   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:44.518545   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:45.018342   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:45.519066   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:46.018796   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:46.519031   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:47.018876   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:47.518649   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:35:47.518746   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:35:47.559141   79927 cri.go:89] found id: ""
	I0806 08:35:47.559175   79927 logs.go:276] 0 containers: []
	W0806 08:35:47.559188   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:35:47.559195   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:35:47.559253   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:35:47.596840   79927 cri.go:89] found id: ""
	I0806 08:35:47.596900   79927 logs.go:276] 0 containers: []
	W0806 08:35:47.596913   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:35:47.596921   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:35:47.596995   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:35:47.634932   79927 cri.go:89] found id: ""
	I0806 08:35:47.634961   79927 logs.go:276] 0 containers: []
	W0806 08:35:47.634969   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:35:47.634975   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:35:47.635031   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:35:47.672170   79927 cri.go:89] found id: ""
	I0806 08:35:47.672202   79927 logs.go:276] 0 containers: []
	W0806 08:35:47.672213   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:35:47.672220   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:35:47.672284   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:35:47.709643   79927 cri.go:89] found id: ""
	I0806 08:35:47.709672   79927 logs.go:276] 0 containers: []
	W0806 08:35:47.709683   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:35:47.709690   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:35:47.709752   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:35:47.750036   79927 cri.go:89] found id: ""
	I0806 08:35:47.750067   79927 logs.go:276] 0 containers: []
	W0806 08:35:47.750076   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:35:47.750082   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:35:47.750149   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:35:47.791837   79927 cri.go:89] found id: ""
	I0806 08:35:47.791862   79927 logs.go:276] 0 containers: []
	W0806 08:35:47.791872   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:35:47.791879   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:35:47.791938   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:35:47.831972   79927 cri.go:89] found id: ""
	I0806 08:35:47.831997   79927 logs.go:276] 0 containers: []
	W0806 08:35:47.832005   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:35:47.832014   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:35:47.832028   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:35:47.962228   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:35:47.962252   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:35:47.962267   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:35:48.031276   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:35:48.031314   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:35:48.075869   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:35:48.075912   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:35:48.131817   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:35:48.131859   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
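The repeated probes in the preceding cycle follow a simple pattern: look for a running kube-apiserver process first, and only then ask the CRI runtime whether any kube-apiserver container exists before gathering kubelet, dmesg, CRI-O, and container-status logs. A hypothetical local sketch of that probe is shown below; the command names mirror the log, availability of sudo and crictl on the node is assumed, and this is not minikube's implementation.

// Hypothetical probe mirroring the pgrep/crictl checks logged above.
// Runs locally rather than over SSH; not minikube's code.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// apiserverPresent reports whether a kube-apiserver process or container is visible.
func apiserverPresent() (bool, error) {
	// Is a kube-apiserver process running? pgrep exits non-zero when nothing matches.
	if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
		return true, nil
	}
	// No process: ask the container runtime whether any such container exists at all.
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name=kube-apiserver").Output()
	if err != nil {
		return false, err
	}
	return strings.TrimSpace(string(out)) != "", nil
}

func main() {
	ok, err := apiserverPresent()
	if err != nil {
		fmt.Println("probe failed:", err)
		return
	}
	fmt.Println("kube-apiserver present:", ok)
}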
	I0806 08:35:45.788042   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:47.789501   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:44.612796   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:47.112009   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:48.468289   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:50.469101   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:50.647351   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:50.661916   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:35:50.662002   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:35:50.707294   79927 cri.go:89] found id: ""
	I0806 08:35:50.707323   79927 logs.go:276] 0 containers: []
	W0806 08:35:50.707335   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:35:50.707342   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:35:50.707396   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:35:50.744111   79927 cri.go:89] found id: ""
	I0806 08:35:50.744133   79927 logs.go:276] 0 containers: []
	W0806 08:35:50.744149   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:35:50.744160   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:35:50.744205   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:35:50.781482   79927 cri.go:89] found id: ""
	I0806 08:35:50.781513   79927 logs.go:276] 0 containers: []
	W0806 08:35:50.781524   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:35:50.781531   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:35:50.781591   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:35:50.828788   79927 cri.go:89] found id: ""
	I0806 08:35:50.828817   79927 logs.go:276] 0 containers: []
	W0806 08:35:50.828827   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:35:50.828834   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:35:50.828896   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:35:50.865168   79927 cri.go:89] found id: ""
	I0806 08:35:50.865199   79927 logs.go:276] 0 containers: []
	W0806 08:35:50.865211   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:35:50.865219   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:35:50.865273   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:35:50.902814   79927 cri.go:89] found id: ""
	I0806 08:35:50.902842   79927 logs.go:276] 0 containers: []
	W0806 08:35:50.902851   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:35:50.902856   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:35:50.902907   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:35:50.939861   79927 cri.go:89] found id: ""
	I0806 08:35:50.939891   79927 logs.go:276] 0 containers: []
	W0806 08:35:50.939899   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:35:50.939905   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:35:50.939961   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:35:50.984743   79927 cri.go:89] found id: ""
	I0806 08:35:50.984773   79927 logs.go:276] 0 containers: []
	W0806 08:35:50.984782   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:35:50.984793   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:35:50.984809   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:35:50.998476   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:35:50.998515   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:35:51.081100   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:35:51.081126   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:35:51.081139   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:35:51.157634   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:35:51.157679   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:35:51.210685   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:35:51.210716   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:35:50.288343   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:52.804477   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:49.612661   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:52.113950   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:54.114200   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:52.969251   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:55.468041   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:53.778620   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:53.793090   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:35:53.793154   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:35:53.837558   79927 cri.go:89] found id: ""
	I0806 08:35:53.837584   79927 logs.go:276] 0 containers: []
	W0806 08:35:53.837594   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:35:53.837599   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:35:53.837670   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:35:53.878855   79927 cri.go:89] found id: ""
	I0806 08:35:53.878884   79927 logs.go:276] 0 containers: []
	W0806 08:35:53.878893   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:35:53.878898   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:35:53.878945   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:35:53.918330   79927 cri.go:89] found id: ""
	I0806 08:35:53.918357   79927 logs.go:276] 0 containers: []
	W0806 08:35:53.918366   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:35:53.918371   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:35:53.918422   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:35:53.959156   79927 cri.go:89] found id: ""
	I0806 08:35:53.959186   79927 logs.go:276] 0 containers: []
	W0806 08:35:53.959196   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:35:53.959204   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:35:53.959309   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:35:53.996775   79927 cri.go:89] found id: ""
	I0806 08:35:53.996802   79927 logs.go:276] 0 containers: []
	W0806 08:35:53.996810   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:35:53.996816   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:35:53.996874   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:35:54.033047   79927 cri.go:89] found id: ""
	I0806 08:35:54.033072   79927 logs.go:276] 0 containers: []
	W0806 08:35:54.033082   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:35:54.033089   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:35:54.033152   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:35:54.069426   79927 cri.go:89] found id: ""
	I0806 08:35:54.069458   79927 logs.go:276] 0 containers: []
	W0806 08:35:54.069468   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:35:54.069475   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:35:54.069536   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:35:54.109114   79927 cri.go:89] found id: ""
	I0806 08:35:54.109143   79927 logs.go:276] 0 containers: []
	W0806 08:35:54.109153   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:35:54.109164   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:35:54.109179   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:35:54.161264   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:35:54.161298   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:35:54.175216   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:35:54.175247   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:35:54.254970   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:35:54.254999   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:35:54.255014   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:35:54.331561   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:35:54.331596   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:35:56.873110   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:35:56.889302   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:35:56.889378   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:35:56.931875   79927 cri.go:89] found id: ""
	I0806 08:35:56.931904   79927 logs.go:276] 0 containers: []
	W0806 08:35:56.931913   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:35:56.931919   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:35:56.931975   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:35:56.995549   79927 cri.go:89] found id: ""
	I0806 08:35:56.995587   79927 logs.go:276] 0 containers: []
	W0806 08:35:56.995601   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:35:56.995608   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:35:56.995675   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:35:57.042708   79927 cri.go:89] found id: ""
	I0806 08:35:57.042735   79927 logs.go:276] 0 containers: []
	W0806 08:35:57.042743   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:35:57.042749   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:35:57.042833   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:35:57.079160   79927 cri.go:89] found id: ""
	I0806 08:35:57.079187   79927 logs.go:276] 0 containers: []
	W0806 08:35:57.079197   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:35:57.079203   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:35:57.079260   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:35:57.116358   79927 cri.go:89] found id: ""
	I0806 08:35:57.116409   79927 logs.go:276] 0 containers: []
	W0806 08:35:57.116421   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:35:57.116429   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:35:57.116489   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:35:57.151051   79927 cri.go:89] found id: ""
	I0806 08:35:57.151079   79927 logs.go:276] 0 containers: []
	W0806 08:35:57.151087   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:35:57.151096   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:35:57.151148   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:35:57.189747   79927 cri.go:89] found id: ""
	I0806 08:35:57.189791   79927 logs.go:276] 0 containers: []
	W0806 08:35:57.189800   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:35:57.189806   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:35:57.189871   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:35:57.225251   79927 cri.go:89] found id: ""
	I0806 08:35:57.225277   79927 logs.go:276] 0 containers: []
	W0806 08:35:57.225287   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:35:57.225298   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:35:57.225314   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:35:57.238746   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:35:57.238773   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:35:57.312931   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:35:57.312959   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:35:57.312976   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:35:57.394104   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:35:57.394139   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:35:57.435993   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:35:57.436028   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:35:55.288712   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:57.789512   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:56.612875   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:58.613571   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:57.468429   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:59.969618   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:35:59.989556   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:36:00.004129   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:36:00.004205   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:36:00.042192   79927 cri.go:89] found id: ""
	I0806 08:36:00.042215   79927 logs.go:276] 0 containers: []
	W0806 08:36:00.042223   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:36:00.042228   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:36:00.042273   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:36:00.080921   79927 cri.go:89] found id: ""
	I0806 08:36:00.080956   79927 logs.go:276] 0 containers: []
	W0806 08:36:00.080966   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:36:00.080973   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:36:00.081033   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:36:00.123186   79927 cri.go:89] found id: ""
	I0806 08:36:00.123206   79927 logs.go:276] 0 containers: []
	W0806 08:36:00.123216   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:36:00.123223   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:36:00.123274   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:36:00.165252   79927 cri.go:89] found id: ""
	I0806 08:36:00.165281   79927 logs.go:276] 0 containers: []
	W0806 08:36:00.165289   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:36:00.165294   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:36:00.165340   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:36:00.201601   79927 cri.go:89] found id: ""
	I0806 08:36:00.201639   79927 logs.go:276] 0 containers: []
	W0806 08:36:00.201653   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:36:00.201662   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:36:00.201730   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:36:00.236933   79927 cri.go:89] found id: ""
	I0806 08:36:00.236964   79927 logs.go:276] 0 containers: []
	W0806 08:36:00.236975   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:36:00.236983   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:36:00.237043   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:36:00.273215   79927 cri.go:89] found id: ""
	I0806 08:36:00.273247   79927 logs.go:276] 0 containers: []
	W0806 08:36:00.273256   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:36:00.273261   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:36:00.273319   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:36:00.308075   79927 cri.go:89] found id: ""
	I0806 08:36:00.308105   79927 logs.go:276] 0 containers: []
	W0806 08:36:00.308116   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:36:00.308126   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:36:00.308141   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:36:00.360096   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:36:00.360146   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:36:00.374484   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:36:00.374521   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:36:00.454297   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:36:00.454318   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:36:00.454334   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:36:00.542665   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:36:00.542702   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:36:03.084297   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:36:03.098661   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:36:03.098737   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:36:03.134751   79927 cri.go:89] found id: ""
	I0806 08:36:03.134786   79927 logs.go:276] 0 containers: []
	W0806 08:36:03.134797   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:36:03.134805   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:36:03.134873   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:36:03.177412   79927 cri.go:89] found id: ""
	I0806 08:36:03.177442   79927 logs.go:276] 0 containers: []
	W0806 08:36:03.177453   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:36:03.177461   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:36:03.177516   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:36:03.213177   79927 cri.go:89] found id: ""
	I0806 08:36:03.213209   79927 logs.go:276] 0 containers: []
	W0806 08:36:03.213221   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:36:03.213229   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:36:03.213291   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:36:03.254569   79927 cri.go:89] found id: ""
	I0806 08:36:03.254594   79927 logs.go:276] 0 containers: []
	W0806 08:36:03.254603   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:36:03.254608   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:36:03.254656   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:36:03.292564   79927 cri.go:89] found id: ""
	I0806 08:36:03.292590   79927 logs.go:276] 0 containers: []
	W0806 08:36:03.292597   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:36:03.292610   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:36:03.292671   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:36:03.331572   79927 cri.go:89] found id: ""
	I0806 08:36:03.331596   79927 logs.go:276] 0 containers: []
	W0806 08:36:03.331605   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:36:03.331610   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:36:03.331658   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:36:03.366368   79927 cri.go:89] found id: ""
	I0806 08:36:03.366398   79927 logs.go:276] 0 containers: []
	W0806 08:36:03.366409   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:36:03.366417   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:36:03.366469   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:36:03.404345   79927 cri.go:89] found id: ""
	I0806 08:36:03.404387   79927 logs.go:276] 0 containers: []
	W0806 08:36:03.404399   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:36:03.404411   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:36:03.404425   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:36:03.454825   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:36:03.454864   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:36:03.469437   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:36:03.469466   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:36:03.553247   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:36:03.553274   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:36:03.553290   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:36:03.635461   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:36:03.635509   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:36:00.287469   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:02.787634   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:01.111573   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:03.613750   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:02.468084   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:04.468169   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:06.469036   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:06.175216   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:36:06.190083   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:36:06.190158   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:36:06.226428   79927 cri.go:89] found id: ""
	I0806 08:36:06.226459   79927 logs.go:276] 0 containers: []
	W0806 08:36:06.226468   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:36:06.226473   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:36:06.226529   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:36:06.263781   79927 cri.go:89] found id: ""
	I0806 08:36:06.263805   79927 logs.go:276] 0 containers: []
	W0806 08:36:06.263813   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:36:06.263819   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:36:06.263895   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:36:06.304335   79927 cri.go:89] found id: ""
	I0806 08:36:06.304386   79927 logs.go:276] 0 containers: []
	W0806 08:36:06.304397   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:36:06.304404   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:36:06.304466   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:36:06.341138   79927 cri.go:89] found id: ""
	I0806 08:36:06.341166   79927 logs.go:276] 0 containers: []
	W0806 08:36:06.341177   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:36:06.341184   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:36:06.341248   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:36:06.381547   79927 cri.go:89] found id: ""
	I0806 08:36:06.381577   79927 logs.go:276] 0 containers: []
	W0806 08:36:06.381587   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:36:06.381594   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:36:06.381662   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:36:06.417606   79927 cri.go:89] found id: ""
	I0806 08:36:06.417630   79927 logs.go:276] 0 containers: []
	W0806 08:36:06.417637   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:36:06.417643   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:36:06.417700   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:36:06.456258   79927 cri.go:89] found id: ""
	I0806 08:36:06.456287   79927 logs.go:276] 0 containers: []
	W0806 08:36:06.456298   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:36:06.456307   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:36:06.456404   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:36:06.494268   79927 cri.go:89] found id: ""
	I0806 08:36:06.494309   79927 logs.go:276] 0 containers: []
	W0806 08:36:06.494321   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:36:06.494332   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:36:06.494346   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:36:06.547670   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:36:06.547710   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:36:06.562020   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:36:06.562049   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:36:06.632691   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:36:06.632717   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:36:06.632729   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:36:06.721737   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:36:06.721780   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:36:04.787931   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:07.287463   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:06.111545   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:08.113637   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:08.968767   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:11.468847   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:09.265154   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:36:09.278664   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:36:09.278743   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:36:09.313464   79927 cri.go:89] found id: ""
	I0806 08:36:09.313494   79927 logs.go:276] 0 containers: []
	W0806 08:36:09.313506   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:36:09.313513   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:36:09.313561   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:36:09.352107   79927 cri.go:89] found id: ""
	I0806 08:36:09.352133   79927 logs.go:276] 0 containers: []
	W0806 08:36:09.352146   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:36:09.352153   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:36:09.352215   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:36:09.387547   79927 cri.go:89] found id: ""
	I0806 08:36:09.387581   79927 logs.go:276] 0 containers: []
	W0806 08:36:09.387594   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:36:09.387601   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:36:09.387684   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:36:09.422157   79927 cri.go:89] found id: ""
	I0806 08:36:09.422193   79927 logs.go:276] 0 containers: []
	W0806 08:36:09.422202   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:36:09.422207   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:36:09.422275   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:36:09.463065   79927 cri.go:89] found id: ""
	I0806 08:36:09.463106   79927 logs.go:276] 0 containers: []
	W0806 08:36:09.463122   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:36:09.463145   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:36:09.463209   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:36:09.499450   79927 cri.go:89] found id: ""
	I0806 08:36:09.499474   79927 logs.go:276] 0 containers: []
	W0806 08:36:09.499484   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:36:09.499491   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:36:09.499550   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:36:09.536411   79927 cri.go:89] found id: ""
	I0806 08:36:09.536442   79927 logs.go:276] 0 containers: []
	W0806 08:36:09.536453   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:36:09.536460   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:36:09.536533   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:36:09.572044   79927 cri.go:89] found id: ""
	I0806 08:36:09.572070   79927 logs.go:276] 0 containers: []
	W0806 08:36:09.572079   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:36:09.572087   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:36:09.572138   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:36:09.612385   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:36:09.612418   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:36:09.666790   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:36:09.666852   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:36:09.680707   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:36:09.680753   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:36:09.753446   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:36:09.753473   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:36:09.753491   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:36:12.336824   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:36:12.352545   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:36:12.352626   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:36:12.393158   79927 cri.go:89] found id: ""
	I0806 08:36:12.393188   79927 logs.go:276] 0 containers: []
	W0806 08:36:12.393199   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:36:12.393206   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:36:12.393255   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:36:12.428501   79927 cri.go:89] found id: ""
	I0806 08:36:12.428525   79927 logs.go:276] 0 containers: []
	W0806 08:36:12.428534   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:36:12.428539   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:36:12.428596   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:36:12.463033   79927 cri.go:89] found id: ""
	I0806 08:36:12.463062   79927 logs.go:276] 0 containers: []
	W0806 08:36:12.463072   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:36:12.463079   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:36:12.463136   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:36:12.500350   79927 cri.go:89] found id: ""
	I0806 08:36:12.500402   79927 logs.go:276] 0 containers: []
	W0806 08:36:12.500413   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:36:12.500421   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:36:12.500477   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:36:12.537323   79927 cri.go:89] found id: ""
	I0806 08:36:12.537350   79927 logs.go:276] 0 containers: []
	W0806 08:36:12.537360   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:36:12.537366   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:36:12.537414   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:36:12.575525   79927 cri.go:89] found id: ""
	I0806 08:36:12.575551   79927 logs.go:276] 0 containers: []
	W0806 08:36:12.575561   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:36:12.575568   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:36:12.575629   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:36:12.615375   79927 cri.go:89] found id: ""
	I0806 08:36:12.615436   79927 logs.go:276] 0 containers: []
	W0806 08:36:12.615451   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:36:12.615459   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:36:12.615531   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:36:12.655269   79927 cri.go:89] found id: ""
	I0806 08:36:12.655300   79927 logs.go:276] 0 containers: []
	W0806 08:36:12.655311   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:36:12.655324   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:36:12.655342   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:36:12.709097   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:36:12.709134   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:36:12.724330   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:36:12.724380   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:36:12.801075   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:36:12.801099   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:36:12.801111   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:36:12.885670   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:36:12.885704   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:36:09.288192   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:11.789794   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:10.613121   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:12.613812   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:13.969367   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:16.468276   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:15.427222   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:36:15.442092   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:36:15.442206   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:36:15.485634   79927 cri.go:89] found id: ""
	I0806 08:36:15.485657   79927 logs.go:276] 0 containers: []
	W0806 08:36:15.485665   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:36:15.485671   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:36:15.485715   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:36:15.520791   79927 cri.go:89] found id: ""
	I0806 08:36:15.520820   79927 logs.go:276] 0 containers: []
	W0806 08:36:15.520830   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:36:15.520837   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:36:15.520897   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:36:15.554041   79927 cri.go:89] found id: ""
	I0806 08:36:15.554065   79927 logs.go:276] 0 containers: []
	W0806 08:36:15.554073   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:36:15.554079   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:36:15.554143   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:36:15.587550   79927 cri.go:89] found id: ""
	I0806 08:36:15.587579   79927 logs.go:276] 0 containers: []
	W0806 08:36:15.587591   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:36:15.587598   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:36:15.587659   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:36:15.628657   79927 cri.go:89] found id: ""
	I0806 08:36:15.628681   79927 logs.go:276] 0 containers: []
	W0806 08:36:15.628689   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:36:15.628695   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:36:15.628750   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:36:15.665212   79927 cri.go:89] found id: ""
	I0806 08:36:15.665239   79927 logs.go:276] 0 containers: []
	W0806 08:36:15.665247   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:36:15.665252   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:36:15.665298   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:36:15.705675   79927 cri.go:89] found id: ""
	I0806 08:36:15.705701   79927 logs.go:276] 0 containers: []
	W0806 08:36:15.705709   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:36:15.705714   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:36:15.705762   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:36:15.739835   79927 cri.go:89] found id: ""
	I0806 08:36:15.739862   79927 logs.go:276] 0 containers: []
	W0806 08:36:15.739871   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:36:15.739879   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:36:15.739892   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:36:15.795719   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:36:15.795749   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:36:15.809940   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:36:15.809967   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:36:15.888710   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:36:15.888737   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:36:15.888753   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:36:15.972728   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:36:15.972772   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:36:18.513565   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:36:18.527480   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:36:18.527538   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:36:18.562783   79927 cri.go:89] found id: ""
	I0806 08:36:18.562812   79927 logs.go:276] 0 containers: []
	W0806 08:36:18.562824   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:36:18.562832   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:36:18.562896   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:36:18.600613   79927 cri.go:89] found id: ""
	I0806 08:36:18.600645   79927 logs.go:276] 0 containers: []
	W0806 08:36:18.600659   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:36:18.600668   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:36:18.600735   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:36:18.638074   79927 cri.go:89] found id: ""
	I0806 08:36:18.638104   79927 logs.go:276] 0 containers: []
	W0806 08:36:18.638114   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:36:18.638119   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:36:18.638181   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:36:18.679795   79927 cri.go:89] found id: ""
	I0806 08:36:18.679825   79927 logs.go:276] 0 containers: []
	W0806 08:36:18.679836   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:36:18.679844   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:36:18.679907   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:36:18.715535   79927 cri.go:89] found id: ""
	I0806 08:36:18.715568   79927 logs.go:276] 0 containers: []
	W0806 08:36:18.715577   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:36:18.715585   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:36:18.715650   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:36:18.752705   79927 cri.go:89] found id: ""
	I0806 08:36:18.752736   79927 logs.go:276] 0 containers: []
	W0806 08:36:18.752748   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:36:18.752759   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:36:18.752821   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:36:14.288048   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:16.288422   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:18.788928   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:15.112857   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:17.114585   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:18.468697   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:20.967985   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:18.791653   79927 cri.go:89] found id: ""
	I0806 08:36:18.791678   79927 logs.go:276] 0 containers: []
	W0806 08:36:18.791689   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:36:18.791696   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:36:18.791763   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:36:18.828328   79927 cri.go:89] found id: ""
	I0806 08:36:18.828357   79927 logs.go:276] 0 containers: []
	W0806 08:36:18.828378   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:36:18.828389   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:36:18.828408   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:36:18.888111   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:36:18.888148   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:36:18.902718   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:36:18.902750   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:36:18.986327   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:36:18.986352   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:36:18.986365   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:36:19.072858   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:36:19.072899   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:36:21.615497   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:36:21.631434   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:36:21.631492   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:36:21.671446   79927 cri.go:89] found id: ""
	I0806 08:36:21.671471   79927 logs.go:276] 0 containers: []
	W0806 08:36:21.671479   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:36:21.671484   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:36:21.671540   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:36:21.707598   79927 cri.go:89] found id: ""
	I0806 08:36:21.707624   79927 logs.go:276] 0 containers: []
	W0806 08:36:21.707634   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:36:21.707640   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:36:21.707686   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:36:21.753647   79927 cri.go:89] found id: ""
	I0806 08:36:21.753675   79927 logs.go:276] 0 containers: []
	W0806 08:36:21.753686   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:36:21.753692   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:36:21.753749   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:36:21.795284   79927 cri.go:89] found id: ""
	I0806 08:36:21.795311   79927 logs.go:276] 0 containers: []
	W0806 08:36:21.795321   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:36:21.795329   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:36:21.795388   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:36:21.832436   79927 cri.go:89] found id: ""
	I0806 08:36:21.832466   79927 logs.go:276] 0 containers: []
	W0806 08:36:21.832476   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:36:21.832483   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:36:21.832550   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:36:21.870295   79927 cri.go:89] found id: ""
	I0806 08:36:21.870327   79927 logs.go:276] 0 containers: []
	W0806 08:36:21.870338   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:36:21.870345   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:36:21.870412   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:36:21.908210   79927 cri.go:89] found id: ""
	I0806 08:36:21.908242   79927 logs.go:276] 0 containers: []
	W0806 08:36:21.908253   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:36:21.908261   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:36:21.908322   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:36:21.944055   79927 cri.go:89] found id: ""
	I0806 08:36:21.944084   79927 logs.go:276] 0 containers: []
	W0806 08:36:21.944095   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:36:21.944104   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:36:21.944125   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:36:22.000467   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:36:22.000506   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:36:22.014590   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:36:22.014629   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:36:22.083873   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:36:22.083907   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:36:22.083927   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:36:22.175993   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:36:22.176056   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:36:21.288435   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:23.788424   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:19.612021   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:21.612092   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:23.612557   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:22.968059   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:24.968111   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:24.720476   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:36:24.734469   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:36:24.734551   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:36:24.770663   79927 cri.go:89] found id: ""
	I0806 08:36:24.770692   79927 logs.go:276] 0 containers: []
	W0806 08:36:24.770705   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:36:24.770712   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:36:24.770768   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:36:24.807861   79927 cri.go:89] found id: ""
	I0806 08:36:24.807889   79927 logs.go:276] 0 containers: []
	W0806 08:36:24.807897   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:36:24.807908   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:36:24.807957   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:36:24.849827   79927 cri.go:89] found id: ""
	I0806 08:36:24.849857   79927 logs.go:276] 0 containers: []
	W0806 08:36:24.849869   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:36:24.849877   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:36:24.849937   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:36:24.885260   79927 cri.go:89] found id: ""
	I0806 08:36:24.885291   79927 logs.go:276] 0 containers: []
	W0806 08:36:24.885303   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:36:24.885315   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:36:24.885381   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:36:24.919918   79927 cri.go:89] found id: ""
	I0806 08:36:24.919949   79927 logs.go:276] 0 containers: []
	W0806 08:36:24.919957   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:36:24.919964   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:36:24.920026   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:36:24.956010   79927 cri.go:89] found id: ""
	I0806 08:36:24.956036   79927 logs.go:276] 0 containers: []
	W0806 08:36:24.956045   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:36:24.956050   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:36:24.956109   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:36:24.993126   79927 cri.go:89] found id: ""
	I0806 08:36:24.993154   79927 logs.go:276] 0 containers: []
	W0806 08:36:24.993165   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:36:24.993173   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:36:24.993240   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:36:25.026732   79927 cri.go:89] found id: ""
	I0806 08:36:25.026756   79927 logs.go:276] 0 containers: []
	W0806 08:36:25.026763   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:36:25.026771   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:36:25.026781   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:36:25.064076   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:36:25.064113   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:36:25.118820   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:36:25.118856   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:36:25.133640   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:36:25.133671   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:36:25.201686   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:36:25.201714   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:36:25.201729   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:36:27.801947   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:36:27.815472   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:36:27.815539   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:36:27.851073   79927 cri.go:89] found id: ""
	I0806 08:36:27.851103   79927 logs.go:276] 0 containers: []
	W0806 08:36:27.851119   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:36:27.851127   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:36:27.851184   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:36:27.886121   79927 cri.go:89] found id: ""
	I0806 08:36:27.886151   79927 logs.go:276] 0 containers: []
	W0806 08:36:27.886162   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:36:27.886169   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:36:27.886231   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:36:27.920950   79927 cri.go:89] found id: ""
	I0806 08:36:27.920979   79927 logs.go:276] 0 containers: []
	W0806 08:36:27.920990   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:36:27.920997   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:36:27.921055   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:36:27.956529   79927 cri.go:89] found id: ""
	I0806 08:36:27.956562   79927 logs.go:276] 0 containers: []
	W0806 08:36:27.956574   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:36:27.956581   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:36:27.956631   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:36:27.992272   79927 cri.go:89] found id: ""
	I0806 08:36:27.992298   79927 logs.go:276] 0 containers: []
	W0806 08:36:27.992306   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:36:27.992322   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:36:27.992405   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:36:28.027242   79927 cri.go:89] found id: ""
	I0806 08:36:28.027269   79927 logs.go:276] 0 containers: []
	W0806 08:36:28.027277   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:36:28.027283   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:36:28.027335   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:36:28.063043   79927 cri.go:89] found id: ""
	I0806 08:36:28.063068   79927 logs.go:276] 0 containers: []
	W0806 08:36:28.063076   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:36:28.063083   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:36:28.063148   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:36:28.099743   79927 cri.go:89] found id: ""
	I0806 08:36:28.099769   79927 logs.go:276] 0 containers: []
	W0806 08:36:28.099780   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:36:28.099791   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:36:28.099806   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:36:28.151354   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:36:28.151392   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:36:28.166704   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:36:28.166731   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:36:28.242621   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:36:28.242642   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:36:28.242656   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:36:28.324352   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:36:28.324405   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:36:25.789745   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:28.290656   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:26.112264   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:28.112769   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:27.466228   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:29.469998   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:30.869063   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:36:30.882309   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:36:30.882377   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:36:30.916264   79927 cri.go:89] found id: ""
	I0806 08:36:30.916296   79927 logs.go:276] 0 containers: []
	W0806 08:36:30.916306   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:36:30.916313   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:36:30.916390   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:36:30.951995   79927 cri.go:89] found id: ""
	I0806 08:36:30.952021   79927 logs.go:276] 0 containers: []
	W0806 08:36:30.952028   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:36:30.952034   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:36:30.952111   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:36:31.005229   79927 cri.go:89] found id: ""
	I0806 08:36:31.005270   79927 logs.go:276] 0 containers: []
	W0806 08:36:31.005281   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:36:31.005288   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:36:31.005350   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:36:31.040439   79927 cri.go:89] found id: ""
	I0806 08:36:31.040465   79927 logs.go:276] 0 containers: []
	W0806 08:36:31.040476   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:36:31.040482   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:36:31.040534   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:36:31.078880   79927 cri.go:89] found id: ""
	I0806 08:36:31.078910   79927 logs.go:276] 0 containers: []
	W0806 08:36:31.078920   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:36:31.078934   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:36:31.078996   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:36:31.115520   79927 cri.go:89] found id: ""
	I0806 08:36:31.115551   79927 logs.go:276] 0 containers: []
	W0806 08:36:31.115562   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:36:31.115569   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:36:31.115615   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:36:31.150450   79927 cri.go:89] found id: ""
	I0806 08:36:31.150483   79927 logs.go:276] 0 containers: []
	W0806 08:36:31.150497   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:36:31.150506   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:36:31.150565   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:36:31.187449   79927 cri.go:89] found id: ""
	I0806 08:36:31.187476   79927 logs.go:276] 0 containers: []
	W0806 08:36:31.187485   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:36:31.187493   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:36:31.187507   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:36:31.246232   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:36:31.246272   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:36:31.260691   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:36:31.260720   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:36:31.342356   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:36:31.342383   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:36:31.342401   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:36:31.426820   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:36:31.426865   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:36:30.787370   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:32.787711   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:30.611466   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:32.611550   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:31.968048   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:34.469062   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:33.974768   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:36:33.988377   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:36:33.988442   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:36:34.023284   79927 cri.go:89] found id: ""
	I0806 08:36:34.023316   79927 logs.go:276] 0 containers: []
	W0806 08:36:34.023328   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:36:34.023336   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:36:34.023397   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:36:34.058633   79927 cri.go:89] found id: ""
	I0806 08:36:34.058676   79927 logs.go:276] 0 containers: []
	W0806 08:36:34.058695   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:36:34.058704   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:36:34.058767   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:36:34.098343   79927 cri.go:89] found id: ""
	I0806 08:36:34.098379   79927 logs.go:276] 0 containers: []
	W0806 08:36:34.098390   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:36:34.098398   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:36:34.098471   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:36:34.134638   79927 cri.go:89] found id: ""
	I0806 08:36:34.134667   79927 logs.go:276] 0 containers: []
	W0806 08:36:34.134677   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:36:34.134683   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:36:34.134729   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:36:34.170118   79927 cri.go:89] found id: ""
	I0806 08:36:34.170144   79927 logs.go:276] 0 containers: []
	W0806 08:36:34.170153   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:36:34.170159   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:36:34.170207   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:36:34.204789   79927 cri.go:89] found id: ""
	I0806 08:36:34.204815   79927 logs.go:276] 0 containers: []
	W0806 08:36:34.204823   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:36:34.204829   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:36:34.204877   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:36:34.245170   79927 cri.go:89] found id: ""
	I0806 08:36:34.245194   79927 logs.go:276] 0 containers: []
	W0806 08:36:34.245203   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:36:34.245208   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:36:34.245264   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:36:34.280026   79927 cri.go:89] found id: ""
	I0806 08:36:34.280063   79927 logs.go:276] 0 containers: []
	W0806 08:36:34.280076   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:36:34.280086   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:36:34.280101   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:36:34.335363   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:36:34.335408   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:36:34.350201   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:36:34.350226   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:36:34.425205   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:36:34.425231   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:36:34.425244   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:36:34.506385   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:36:34.506419   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:36:37.047263   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:36:37.060179   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:36:37.060254   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:36:37.093168   79927 cri.go:89] found id: ""
	I0806 08:36:37.093201   79927 logs.go:276] 0 containers: []
	W0806 08:36:37.093209   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:36:37.093216   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:36:37.093264   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:36:37.131315   79927 cri.go:89] found id: ""
	I0806 08:36:37.131343   79927 logs.go:276] 0 containers: []
	W0806 08:36:37.131353   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:36:37.131360   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:36:37.131426   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:36:37.169799   79927 cri.go:89] found id: ""
	I0806 08:36:37.169829   79927 logs.go:276] 0 containers: []
	W0806 08:36:37.169839   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:36:37.169845   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:36:37.169898   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:36:37.205852   79927 cri.go:89] found id: ""
	I0806 08:36:37.205878   79927 logs.go:276] 0 containers: []
	W0806 08:36:37.205885   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:36:37.205891   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:36:37.205938   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:36:37.242469   79927 cri.go:89] found id: ""
	I0806 08:36:37.242493   79927 logs.go:276] 0 containers: []
	W0806 08:36:37.242501   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:36:37.242507   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:36:37.242554   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:36:37.278515   79927 cri.go:89] found id: ""
	I0806 08:36:37.278545   79927 logs.go:276] 0 containers: []
	W0806 08:36:37.278556   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:36:37.278563   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:36:37.278627   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:36:37.316157   79927 cri.go:89] found id: ""
	I0806 08:36:37.316181   79927 logs.go:276] 0 containers: []
	W0806 08:36:37.316189   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:36:37.316195   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:36:37.316249   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:36:37.353752   79927 cri.go:89] found id: ""
	I0806 08:36:37.353778   79927 logs.go:276] 0 containers: []
	W0806 08:36:37.353785   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:36:37.353796   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:36:37.353807   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:36:37.439435   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:36:37.439471   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:36:37.488866   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:36:37.488903   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:36:37.540031   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:36:37.540072   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:36:37.556109   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:36:37.556140   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:36:37.629023   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:36:35.288565   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:37.788336   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:35.113182   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:37.615523   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:36.967665   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:38.973884   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:41.467445   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:40.129579   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:36:40.143971   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:36:40.144050   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:36:40.180490   79927 cri.go:89] found id: ""
	I0806 08:36:40.180515   79927 logs.go:276] 0 containers: []
	W0806 08:36:40.180524   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:36:40.180530   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:36:40.180586   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:36:40.219812   79927 cri.go:89] found id: ""
	I0806 08:36:40.219860   79927 logs.go:276] 0 containers: []
	W0806 08:36:40.219869   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:36:40.219877   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:36:40.219939   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:36:40.256048   79927 cri.go:89] found id: ""
	I0806 08:36:40.256076   79927 logs.go:276] 0 containers: []
	W0806 08:36:40.256088   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:36:40.256106   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:36:40.256166   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:36:40.294693   79927 cri.go:89] found id: ""
	I0806 08:36:40.294720   79927 logs.go:276] 0 containers: []
	W0806 08:36:40.294729   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:36:40.294734   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:36:40.294792   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:36:40.331246   79927 cri.go:89] found id: ""
	I0806 08:36:40.331276   79927 logs.go:276] 0 containers: []
	W0806 08:36:40.331287   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:36:40.331295   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:36:40.331351   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:36:40.373039   79927 cri.go:89] found id: ""
	I0806 08:36:40.373068   79927 logs.go:276] 0 containers: []
	W0806 08:36:40.373095   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:36:40.373103   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:36:40.373163   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:36:40.408137   79927 cri.go:89] found id: ""
	I0806 08:36:40.408167   79927 logs.go:276] 0 containers: []
	W0806 08:36:40.408177   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:36:40.408184   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:36:40.408249   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:36:40.442996   79927 cri.go:89] found id: ""
	I0806 08:36:40.443035   79927 logs.go:276] 0 containers: []
	W0806 08:36:40.443043   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:36:40.443051   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:36:40.443063   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:36:40.499559   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:36:40.499599   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:36:40.514687   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:36:40.514716   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:36:40.586034   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:36:40.586059   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:36:40.586071   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:36:40.669756   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:36:40.669796   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:36:43.213555   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:36:43.229683   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:36:43.229751   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:36:43.267107   79927 cri.go:89] found id: ""
	I0806 08:36:43.267140   79927 logs.go:276] 0 containers: []
	W0806 08:36:43.267153   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:36:43.267161   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:36:43.267235   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:36:43.303405   79927 cri.go:89] found id: ""
	I0806 08:36:43.303435   79927 logs.go:276] 0 containers: []
	W0806 08:36:43.303443   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:36:43.303449   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:36:43.303509   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:36:43.338583   79927 cri.go:89] found id: ""
	I0806 08:36:43.338610   79927 logs.go:276] 0 containers: []
	W0806 08:36:43.338619   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:36:43.338628   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:36:43.338692   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:36:43.377413   79927 cri.go:89] found id: ""
	I0806 08:36:43.377441   79927 logs.go:276] 0 containers: []
	W0806 08:36:43.377449   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:36:43.377454   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:36:43.377501   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:36:43.413036   79927 cri.go:89] found id: ""
	I0806 08:36:43.413133   79927 logs.go:276] 0 containers: []
	W0806 08:36:43.413152   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:36:43.413160   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:36:43.413221   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:36:43.450216   79927 cri.go:89] found id: ""
	I0806 08:36:43.450245   79927 logs.go:276] 0 containers: []
	W0806 08:36:43.450253   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:36:43.450259   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:36:43.450305   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:36:43.485811   79927 cri.go:89] found id: ""
	I0806 08:36:43.485856   79927 logs.go:276] 0 containers: []
	W0806 08:36:43.485864   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:36:43.485870   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:36:43.485919   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:36:43.521586   79927 cri.go:89] found id: ""
	I0806 08:36:43.521617   79927 logs.go:276] 0 containers: []
	W0806 08:36:43.521628   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:36:43.521650   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:36:43.521673   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:36:43.561842   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:36:43.561889   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:36:43.617652   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:36:43.617683   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:36:43.631239   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:36:43.631267   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:36:43.707297   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:36:43.707317   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:36:43.707330   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:36:39.789019   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:42.287817   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:40.111420   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:42.112503   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:43.468602   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:45.974699   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:46.287309   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:36:46.301917   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:36:46.301991   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:36:46.335040   79927 cri.go:89] found id: ""
	I0806 08:36:46.335069   79927 logs.go:276] 0 containers: []
	W0806 08:36:46.335080   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:36:46.335088   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:36:46.335177   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:36:46.371902   79927 cri.go:89] found id: ""
	I0806 08:36:46.371930   79927 logs.go:276] 0 containers: []
	W0806 08:36:46.371939   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:36:46.371944   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:36:46.372000   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:36:46.411945   79927 cri.go:89] found id: ""
	I0806 08:36:46.412119   79927 logs.go:276] 0 containers: []
	W0806 08:36:46.412134   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:36:46.412145   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:36:46.412248   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:36:46.448578   79927 cri.go:89] found id: ""
	I0806 08:36:46.448612   79927 logs.go:276] 0 containers: []
	W0806 08:36:46.448624   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:36:46.448631   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:36:46.448691   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:36:46.492231   79927 cri.go:89] found id: ""
	I0806 08:36:46.492263   79927 logs.go:276] 0 containers: []
	W0806 08:36:46.492276   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:36:46.492283   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:36:46.492338   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:36:46.538502   79927 cri.go:89] found id: ""
	I0806 08:36:46.538528   79927 logs.go:276] 0 containers: []
	W0806 08:36:46.538547   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:36:46.538555   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:36:46.538610   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:36:46.577750   79927 cri.go:89] found id: ""
	I0806 08:36:46.577778   79927 logs.go:276] 0 containers: []
	W0806 08:36:46.577785   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:36:46.577790   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:36:46.577872   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:36:46.611717   79927 cri.go:89] found id: ""
	I0806 08:36:46.611742   79927 logs.go:276] 0 containers: []
	W0806 08:36:46.611750   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:36:46.611758   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:36:46.611770   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:36:46.629144   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:36:46.629178   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:36:46.714703   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:36:46.714723   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:36:46.714736   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:36:46.798978   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:36:46.799025   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:36:46.837373   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:36:46.837407   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:36:44.288387   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:46.288470   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:48.788762   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:44.611739   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:46.615083   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:49.113041   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:48.469024   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:50.967906   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:49.393231   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:36:49.407372   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:36:49.407436   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:36:49.444210   79927 cri.go:89] found id: ""
	I0806 08:36:49.444240   79927 logs.go:276] 0 containers: []
	W0806 08:36:49.444249   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:36:49.444254   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:36:49.444313   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:36:49.484165   79927 cri.go:89] found id: ""
	I0806 08:36:49.484190   79927 logs.go:276] 0 containers: []
	W0806 08:36:49.484200   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:36:49.484206   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:36:49.484278   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:36:49.519880   79927 cri.go:89] found id: ""
	I0806 08:36:49.519921   79927 logs.go:276] 0 containers: []
	W0806 08:36:49.519932   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:36:49.519939   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:36:49.519999   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:36:49.554954   79927 cri.go:89] found id: ""
	I0806 08:36:49.554979   79927 logs.go:276] 0 containers: []
	W0806 08:36:49.554988   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:36:49.554995   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:36:49.555045   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:36:49.593332   79927 cri.go:89] found id: ""
	I0806 08:36:49.593359   79927 logs.go:276] 0 containers: []
	W0806 08:36:49.593367   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:36:49.593373   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:36:49.593432   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:36:49.630736   79927 cri.go:89] found id: ""
	I0806 08:36:49.630766   79927 logs.go:276] 0 containers: []
	W0806 08:36:49.630774   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:36:49.630780   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:36:49.630839   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:36:49.667460   79927 cri.go:89] found id: ""
	I0806 08:36:49.667487   79927 logs.go:276] 0 containers: []
	W0806 08:36:49.667499   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:36:49.667505   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:36:49.667566   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:36:49.700161   79927 cri.go:89] found id: ""
	I0806 08:36:49.700190   79927 logs.go:276] 0 containers: []
	W0806 08:36:49.700199   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:36:49.700207   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:36:49.700218   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:36:49.786656   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:36:49.786704   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:36:49.825711   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:36:49.825745   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:36:49.879696   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:36:49.879731   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:36:49.893336   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:36:49.893369   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:36:49.969462   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:36:52.469939   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:36:52.483906   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:36:52.483974   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:36:52.523575   79927 cri.go:89] found id: ""
	I0806 08:36:52.523603   79927 logs.go:276] 0 containers: []
	W0806 08:36:52.523615   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:36:52.523622   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:36:52.523684   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:36:52.561590   79927 cri.go:89] found id: ""
	I0806 08:36:52.561620   79927 logs.go:276] 0 containers: []
	W0806 08:36:52.561630   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:36:52.561639   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:36:52.561698   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:36:52.598204   79927 cri.go:89] found id: ""
	I0806 08:36:52.598227   79927 logs.go:276] 0 containers: []
	W0806 08:36:52.598236   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:36:52.598241   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:36:52.598302   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:36:52.636993   79927 cri.go:89] found id: ""
	I0806 08:36:52.637019   79927 logs.go:276] 0 containers: []
	W0806 08:36:52.637027   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:36:52.637034   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:36:52.637085   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:36:52.675090   79927 cri.go:89] found id: ""
	I0806 08:36:52.675118   79927 logs.go:276] 0 containers: []
	W0806 08:36:52.675130   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:36:52.675138   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:36:52.675199   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:36:52.711045   79927 cri.go:89] found id: ""
	I0806 08:36:52.711072   79927 logs.go:276] 0 containers: []
	W0806 08:36:52.711081   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:36:52.711086   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:36:52.711151   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:36:52.749001   79927 cri.go:89] found id: ""
	I0806 08:36:52.749029   79927 logs.go:276] 0 containers: []
	W0806 08:36:52.749039   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:36:52.749046   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:36:52.749113   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:36:52.783692   79927 cri.go:89] found id: ""
	I0806 08:36:52.783720   79927 logs.go:276] 0 containers: []
	W0806 08:36:52.783737   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:36:52.783748   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:36:52.783762   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:36:52.863240   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:36:52.863281   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:36:52.902471   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:36:52.902496   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:36:52.953299   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:36:52.953341   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:36:52.968786   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:36:52.968818   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:36:53.045904   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:36:51.288280   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:53.289276   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:51.612945   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:54.112574   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:52.968486   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:55.468458   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:55.546407   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:36:55.560166   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:36:55.560243   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:36:55.592961   79927 cri.go:89] found id: ""
	I0806 08:36:55.592992   79927 logs.go:276] 0 containers: []
	W0806 08:36:55.593004   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:36:55.593012   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:36:55.593075   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:36:55.629546   79927 cri.go:89] found id: ""
	I0806 08:36:55.629568   79927 logs.go:276] 0 containers: []
	W0806 08:36:55.629578   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:36:55.629585   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:36:55.629639   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:36:55.668677   79927 cri.go:89] found id: ""
	I0806 08:36:55.668707   79927 logs.go:276] 0 containers: []
	W0806 08:36:55.668718   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:36:55.668726   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:36:55.668784   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:36:55.704141   79927 cri.go:89] found id: ""
	I0806 08:36:55.704173   79927 logs.go:276] 0 containers: []
	W0806 08:36:55.704183   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:36:55.704188   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:36:55.704257   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:36:55.740125   79927 cri.go:89] found id: ""
	I0806 08:36:55.740152   79927 logs.go:276] 0 containers: []
	W0806 08:36:55.740160   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:36:55.740166   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:36:55.740210   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:36:55.773709   79927 cri.go:89] found id: ""
	I0806 08:36:55.773739   79927 logs.go:276] 0 containers: []
	W0806 08:36:55.773750   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:36:55.773759   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:36:55.773827   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:36:55.813913   79927 cri.go:89] found id: ""
	I0806 08:36:55.813940   79927 logs.go:276] 0 containers: []
	W0806 08:36:55.813953   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:36:55.813959   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:36:55.814004   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:36:55.850807   79927 cri.go:89] found id: ""
	I0806 08:36:55.850841   79927 logs.go:276] 0 containers: []
	W0806 08:36:55.850853   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:36:55.850864   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:36:55.850878   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:36:55.908202   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:36:55.908247   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:36:55.922278   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:36:55.922311   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:36:55.996448   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:36:55.996476   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:36:55.996489   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:36:56.076612   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:36:56.076647   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:36:58.625610   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:36:58.640939   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:36:58.641008   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:36:58.675638   79927 cri.go:89] found id: ""
	I0806 08:36:58.675666   79927 logs.go:276] 0 containers: []
	W0806 08:36:58.675674   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:36:58.675680   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:36:58.675728   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:36:58.712020   79927 cri.go:89] found id: ""
	I0806 08:36:58.712055   79927 logs.go:276] 0 containers: []
	W0806 08:36:58.712065   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:36:58.712072   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:36:58.712126   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:36:58.749683   79927 cri.go:89] found id: ""
	I0806 08:36:58.749712   79927 logs.go:276] 0 containers: []
	W0806 08:36:58.749724   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:36:58.749731   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:36:58.749792   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:36:55.788883   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:58.288825   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:56.613051   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:59.113176   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:57.968553   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:59.968580   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:36:58.790005   79927 cri.go:89] found id: ""
	I0806 08:36:58.790026   79927 logs.go:276] 0 containers: []
	W0806 08:36:58.790033   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:36:58.790039   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:36:58.790089   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:36:58.828781   79927 cri.go:89] found id: ""
	I0806 08:36:58.828806   79927 logs.go:276] 0 containers: []
	W0806 08:36:58.828815   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:36:58.828821   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:36:58.828873   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:36:58.867039   79927 cri.go:89] found id: ""
	I0806 08:36:58.867067   79927 logs.go:276] 0 containers: []
	W0806 08:36:58.867075   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:36:58.867081   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:36:58.867125   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:36:58.901362   79927 cri.go:89] found id: ""
	I0806 08:36:58.901396   79927 logs.go:276] 0 containers: []
	W0806 08:36:58.901415   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:36:58.901423   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:36:58.901472   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:36:58.940351   79927 cri.go:89] found id: ""
	I0806 08:36:58.940388   79927 logs.go:276] 0 containers: []
	W0806 08:36:58.940399   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:36:58.940410   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:36:58.940425   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:36:58.995737   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:36:58.995776   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:36:59.009191   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:36:59.009222   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:36:59.085109   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:36:59.085133   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:36:59.085150   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:36:59.164778   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:36:59.164819   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:37:01.712282   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:37:01.725701   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:37:01.725759   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:37:01.762005   79927 cri.go:89] found id: ""
	I0806 08:37:01.762029   79927 logs.go:276] 0 containers: []
	W0806 08:37:01.762038   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:37:01.762044   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:37:01.762092   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:37:01.799277   79927 cri.go:89] found id: ""
	I0806 08:37:01.799304   79927 logs.go:276] 0 containers: []
	W0806 08:37:01.799316   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:37:01.799322   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:37:01.799391   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:37:01.845100   79927 cri.go:89] found id: ""
	I0806 08:37:01.845133   79927 logs.go:276] 0 containers: []
	W0806 08:37:01.845147   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:37:01.845154   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:37:01.845214   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:37:01.882019   79927 cri.go:89] found id: ""
	I0806 08:37:01.882050   79927 logs.go:276] 0 containers: []
	W0806 08:37:01.882061   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:37:01.882069   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:37:01.882160   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:37:01.919224   79927 cri.go:89] found id: ""
	I0806 08:37:01.919251   79927 logs.go:276] 0 containers: []
	W0806 08:37:01.919262   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:37:01.919269   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:37:01.919327   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:37:01.957300   79927 cri.go:89] found id: ""
	I0806 08:37:01.957327   79927 logs.go:276] 0 containers: []
	W0806 08:37:01.957335   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:37:01.957342   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:37:01.957389   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:37:02.000717   79927 cri.go:89] found id: ""
	I0806 08:37:02.000740   79927 logs.go:276] 0 containers: []
	W0806 08:37:02.000749   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:37:02.000754   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:37:02.000801   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:37:02.034710   79927 cri.go:89] found id: ""
	I0806 08:37:02.034739   79927 logs.go:276] 0 containers: []
	W0806 08:37:02.034753   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:37:02.034763   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:37:02.034776   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:37:02.090553   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:37:02.090593   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:37:02.104247   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:37:02.104276   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:37:02.181990   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:37:02.182015   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:37:02.182030   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:37:02.268127   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:37:02.268167   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:37:00.789157   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:03.288510   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:01.611956   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:04.111772   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:02.467718   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:04.968898   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:04.814144   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:37:04.829120   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:37:04.829194   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:37:04.873503   79927 cri.go:89] found id: ""
	I0806 08:37:04.873537   79927 logs.go:276] 0 containers: []
	W0806 08:37:04.873548   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:37:04.873557   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:37:04.873614   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:37:04.913878   79927 cri.go:89] found id: ""
	I0806 08:37:04.913910   79927 logs.go:276] 0 containers: []
	W0806 08:37:04.913918   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:37:04.913924   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:37:04.913984   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:37:04.950273   79927 cri.go:89] found id: ""
	I0806 08:37:04.950303   79927 logs.go:276] 0 containers: []
	W0806 08:37:04.950316   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:37:04.950323   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:37:04.950388   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:37:04.985108   79927 cri.go:89] found id: ""
	I0806 08:37:04.985135   79927 logs.go:276] 0 containers: []
	W0806 08:37:04.985145   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:37:04.985151   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:37:04.985215   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:37:05.021033   79927 cri.go:89] found id: ""
	I0806 08:37:05.021056   79927 logs.go:276] 0 containers: []
	W0806 08:37:05.021064   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:37:05.021069   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:37:05.021120   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:37:05.058416   79927 cri.go:89] found id: ""
	I0806 08:37:05.058440   79927 logs.go:276] 0 containers: []
	W0806 08:37:05.058448   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:37:05.058454   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:37:05.058510   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:37:05.093709   79927 cri.go:89] found id: ""
	I0806 08:37:05.093737   79927 logs.go:276] 0 containers: []
	W0806 08:37:05.093745   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:37:05.093751   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:37:05.093815   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:37:05.129391   79927 cri.go:89] found id: ""
	I0806 08:37:05.129416   79927 logs.go:276] 0 containers: []
	W0806 08:37:05.129424   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:37:05.129432   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:37:05.129444   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:37:05.172149   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:37:05.172176   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:37:05.224745   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:37:05.224781   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:37:05.238967   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:37:05.238997   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:37:05.314834   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:37:05.314868   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:37:05.314883   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:37:07.903732   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:37:07.918551   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:37:07.918619   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:37:07.966068   79927 cri.go:89] found id: ""
	I0806 08:37:07.966093   79927 logs.go:276] 0 containers: []
	W0806 08:37:07.966104   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:37:07.966112   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:37:07.966180   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:37:08.001557   79927 cri.go:89] found id: ""
	I0806 08:37:08.001586   79927 logs.go:276] 0 containers: []
	W0806 08:37:08.001595   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:37:08.001602   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:37:08.001663   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:37:08.034214   79927 cri.go:89] found id: ""
	I0806 08:37:08.034239   79927 logs.go:276] 0 containers: []
	W0806 08:37:08.034249   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:37:08.034256   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:37:08.034316   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:37:08.070645   79927 cri.go:89] found id: ""
	I0806 08:37:08.070673   79927 logs.go:276] 0 containers: []
	W0806 08:37:08.070683   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:37:08.070689   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:37:08.070761   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:37:08.110537   79927 cri.go:89] found id: ""
	I0806 08:37:08.110565   79927 logs.go:276] 0 containers: []
	W0806 08:37:08.110574   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:37:08.110581   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:37:08.110635   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:37:08.146179   79927 cri.go:89] found id: ""
	I0806 08:37:08.146203   79927 logs.go:276] 0 containers: []
	W0806 08:37:08.146211   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:37:08.146224   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:37:08.146272   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:37:08.182924   79927 cri.go:89] found id: ""
	I0806 08:37:08.182953   79927 logs.go:276] 0 containers: []
	W0806 08:37:08.182962   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:37:08.182968   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:37:08.183024   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:37:08.219547   79927 cri.go:89] found id: ""
	I0806 08:37:08.219570   79927 logs.go:276] 0 containers: []
	W0806 08:37:08.219576   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:37:08.219585   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:37:08.219599   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:37:08.298919   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:37:08.298956   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:37:08.338475   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:37:08.338512   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:37:08.390859   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:37:08.390898   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:37:08.404854   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:37:08.404891   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:37:08.473553   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:37:05.788534   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:08.288635   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:06.112204   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:08.612960   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:06.969444   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:09.468423   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:11.469280   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:10.973824   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:37:10.987720   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:37:10.987777   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:37:11.022637   79927 cri.go:89] found id: ""
	I0806 08:37:11.022667   79927 logs.go:276] 0 containers: []
	W0806 08:37:11.022677   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:37:11.022683   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:37:11.022739   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:37:11.062625   79927 cri.go:89] found id: ""
	I0806 08:37:11.062655   79927 logs.go:276] 0 containers: []
	W0806 08:37:11.062673   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:37:11.062681   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:37:11.062736   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:37:11.099906   79927 cri.go:89] found id: ""
	I0806 08:37:11.099937   79927 logs.go:276] 0 containers: []
	W0806 08:37:11.099945   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:37:11.099951   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:37:11.100019   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:37:11.139461   79927 cri.go:89] found id: ""
	I0806 08:37:11.139485   79927 logs.go:276] 0 containers: []
	W0806 08:37:11.139493   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:37:11.139498   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:37:11.139541   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:37:11.177916   79927 cri.go:89] found id: ""
	I0806 08:37:11.177940   79927 logs.go:276] 0 containers: []
	W0806 08:37:11.177948   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:37:11.177954   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:37:11.178007   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:37:11.216335   79927 cri.go:89] found id: ""
	I0806 08:37:11.216377   79927 logs.go:276] 0 containers: []
	W0806 08:37:11.216389   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:37:11.216398   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:37:11.216453   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:37:11.278457   79927 cri.go:89] found id: ""
	I0806 08:37:11.278479   79927 logs.go:276] 0 containers: []
	W0806 08:37:11.278487   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:37:11.278493   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:37:11.278541   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:37:11.316325   79927 cri.go:89] found id: ""
	I0806 08:37:11.316348   79927 logs.go:276] 0 containers: []
	W0806 08:37:11.316356   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:37:11.316381   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:37:11.316397   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:37:11.357435   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:37:11.357469   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:37:11.413151   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:37:11.413184   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:37:11.427149   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:37:11.427182   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:37:11.500691   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:37:11.500723   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:37:11.500738   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:37:10.787717   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:12.788263   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:11.113471   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:13.612216   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:13.968959   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:16.468099   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:14.077817   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:37:14.090888   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:37:14.090960   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:37:14.126454   79927 cri.go:89] found id: ""
	I0806 08:37:14.126476   79927 logs.go:276] 0 containers: []
	W0806 08:37:14.126484   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:37:14.126489   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:37:14.126535   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:37:14.163910   79927 cri.go:89] found id: ""
	I0806 08:37:14.163939   79927 logs.go:276] 0 containers: []
	W0806 08:37:14.163950   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:37:14.163957   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:37:14.164025   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:37:14.202918   79927 cri.go:89] found id: ""
	I0806 08:37:14.202942   79927 logs.go:276] 0 containers: []
	W0806 08:37:14.202950   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:37:14.202956   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:37:14.202999   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:37:14.240997   79927 cri.go:89] found id: ""
	I0806 08:37:14.241021   79927 logs.go:276] 0 containers: []
	W0806 08:37:14.241030   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:37:14.241036   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:37:14.241083   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:37:14.277424   79927 cri.go:89] found id: ""
	I0806 08:37:14.277448   79927 logs.go:276] 0 containers: []
	W0806 08:37:14.277465   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:37:14.277473   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:37:14.277545   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:37:14.317402   79927 cri.go:89] found id: ""
	I0806 08:37:14.317426   79927 logs.go:276] 0 containers: []
	W0806 08:37:14.317435   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:37:14.317440   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:37:14.317487   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:37:14.364310   79927 cri.go:89] found id: ""
	I0806 08:37:14.364341   79927 logs.go:276] 0 containers: []
	W0806 08:37:14.364351   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:37:14.364358   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:37:14.364445   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:37:14.398836   79927 cri.go:89] found id: ""
	I0806 08:37:14.398866   79927 logs.go:276] 0 containers: []
	W0806 08:37:14.398874   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:37:14.398891   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:37:14.398904   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:37:14.450550   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:37:14.450586   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:37:14.467190   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:37:14.467220   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:37:14.538786   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:37:14.538809   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:37:14.538824   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:37:14.615049   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:37:14.615080   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:37:17.155906   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:37:17.170401   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:37:17.170479   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:37:17.208276   79927 cri.go:89] found id: ""
	I0806 08:37:17.208311   79927 logs.go:276] 0 containers: []
	W0806 08:37:17.208322   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:37:17.208330   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:37:17.208403   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:37:17.258671   79927 cri.go:89] found id: ""
	I0806 08:37:17.258696   79927 logs.go:276] 0 containers: []
	W0806 08:37:17.258705   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:37:17.258711   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:37:17.258771   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:37:17.296999   79927 cri.go:89] found id: ""
	I0806 08:37:17.297027   79927 logs.go:276] 0 containers: []
	W0806 08:37:17.297035   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:37:17.297041   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:37:17.297098   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:37:17.335818   79927 cri.go:89] found id: ""
	I0806 08:37:17.335841   79927 logs.go:276] 0 containers: []
	W0806 08:37:17.335849   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:37:17.335855   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:37:17.335903   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:37:17.371650   79927 cri.go:89] found id: ""
	I0806 08:37:17.371674   79927 logs.go:276] 0 containers: []
	W0806 08:37:17.371682   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:37:17.371688   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:37:17.371737   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:37:17.408413   79927 cri.go:89] found id: ""
	I0806 08:37:17.408440   79927 logs.go:276] 0 containers: []
	W0806 08:37:17.408448   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:37:17.408454   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:37:17.408502   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:37:17.443877   79927 cri.go:89] found id: ""
	I0806 08:37:17.443909   79927 logs.go:276] 0 containers: []
	W0806 08:37:17.443920   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:37:17.443928   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:37:17.443988   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:37:17.484292   79927 cri.go:89] found id: ""
	I0806 08:37:17.484321   79927 logs.go:276] 0 containers: []
	W0806 08:37:17.484331   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:37:17.484341   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:37:17.484356   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:37:17.541365   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:37:17.541418   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:37:17.555961   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:37:17.555993   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:37:17.625904   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:37:17.625923   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:37:17.625941   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:37:17.708997   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:37:17.709045   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:37:14.788822   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:17.288567   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:16.112034   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:18.612595   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:18.468580   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:20.967605   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:20.250351   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:37:20.264560   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:37:20.264625   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:37:20.301216   79927 cri.go:89] found id: ""
	I0806 08:37:20.301244   79927 logs.go:276] 0 containers: []
	W0806 08:37:20.301252   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:37:20.301257   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:37:20.301317   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:37:20.345820   79927 cri.go:89] found id: ""
	I0806 08:37:20.345849   79927 logs.go:276] 0 containers: []
	W0806 08:37:20.345860   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:37:20.345867   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:37:20.345923   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:37:20.382333   79927 cri.go:89] found id: ""
	I0806 08:37:20.382357   79927 logs.go:276] 0 containers: []
	W0806 08:37:20.382365   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:37:20.382371   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:37:20.382426   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:37:20.420193   79927 cri.go:89] found id: ""
	I0806 08:37:20.420228   79927 logs.go:276] 0 containers: []
	W0806 08:37:20.420238   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:37:20.420249   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:37:20.420316   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:37:20.456200   79927 cri.go:89] found id: ""
	I0806 08:37:20.456227   79927 logs.go:276] 0 containers: []
	W0806 08:37:20.456235   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:37:20.456241   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:37:20.456298   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:37:20.493321   79927 cri.go:89] found id: ""
	I0806 08:37:20.493349   79927 logs.go:276] 0 containers: []
	W0806 08:37:20.493356   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:37:20.493362   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:37:20.493414   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:37:20.531007   79927 cri.go:89] found id: ""
	I0806 08:37:20.531034   79927 logs.go:276] 0 containers: []
	W0806 08:37:20.531044   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:37:20.531051   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:37:20.531118   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:37:20.572014   79927 cri.go:89] found id: ""
	I0806 08:37:20.572044   79927 logs.go:276] 0 containers: []
	W0806 08:37:20.572062   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:37:20.572071   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:37:20.572083   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:37:20.623368   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:37:20.623399   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:37:20.638941   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:37:20.638984   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:37:20.717287   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:37:20.717313   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:37:20.717324   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:37:20.800157   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:37:20.800192   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:37:23.342061   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:37:23.356954   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:37:23.357026   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:37:23.391415   79927 cri.go:89] found id: ""
	I0806 08:37:23.391445   79927 logs.go:276] 0 containers: []
	W0806 08:37:23.391456   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:37:23.391463   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:37:23.391526   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:37:23.426460   79927 cri.go:89] found id: ""
	I0806 08:37:23.426482   79927 logs.go:276] 0 containers: []
	W0806 08:37:23.426491   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:37:23.426496   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:37:23.426541   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:37:23.461078   79927 cri.go:89] found id: ""
	I0806 08:37:23.461111   79927 logs.go:276] 0 containers: []
	W0806 08:37:23.461122   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:37:23.461129   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:37:23.461183   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:37:23.496566   79927 cri.go:89] found id: ""
	I0806 08:37:23.496594   79927 logs.go:276] 0 containers: []
	W0806 08:37:23.496602   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:37:23.496607   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:37:23.496665   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:37:23.529097   79927 cri.go:89] found id: ""
	I0806 08:37:23.529123   79927 logs.go:276] 0 containers: []
	W0806 08:37:23.529134   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:37:23.529153   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:37:23.529243   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:37:23.563690   79927 cri.go:89] found id: ""
	I0806 08:37:23.563720   79927 logs.go:276] 0 containers: []
	W0806 08:37:23.563730   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:37:23.563737   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:37:23.563796   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:37:23.598787   79927 cri.go:89] found id: ""
	I0806 08:37:23.598861   79927 logs.go:276] 0 containers: []
	W0806 08:37:23.598875   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:37:23.598884   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:37:23.598942   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:37:23.655675   79927 cri.go:89] found id: ""
	I0806 08:37:23.655705   79927 logs.go:276] 0 containers: []
	W0806 08:37:23.655715   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:37:23.655723   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:37:23.655734   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:37:23.715853   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:37:23.715887   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:37:23.735805   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:37:23.735844   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 08:37:19.288768   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:21.788292   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:23.788808   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:20.612865   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:22.613432   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:22.968064   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:24.968171   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	W0806 08:37:23.815238   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:37:23.815261   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:37:23.815274   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:37:23.899436   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:37:23.899479   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:37:26.441837   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:37:26.456410   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:37:26.456472   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:37:26.497707   79927 cri.go:89] found id: ""
	I0806 08:37:26.497733   79927 logs.go:276] 0 containers: []
	W0806 08:37:26.497741   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:37:26.497746   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:37:26.497795   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:37:26.536101   79927 cri.go:89] found id: ""
	I0806 08:37:26.536126   79927 logs.go:276] 0 containers: []
	W0806 08:37:26.536133   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:37:26.536145   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:37:26.536190   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:37:26.573067   79927 cri.go:89] found id: ""
	I0806 08:37:26.573091   79927 logs.go:276] 0 containers: []
	W0806 08:37:26.573102   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:37:26.573110   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:37:26.573162   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:37:26.609820   79927 cri.go:89] found id: ""
	I0806 08:37:26.609846   79927 logs.go:276] 0 containers: []
	W0806 08:37:26.609854   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:37:26.609860   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:37:26.609914   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:37:26.645721   79927 cri.go:89] found id: ""
	I0806 08:37:26.645748   79927 logs.go:276] 0 containers: []
	W0806 08:37:26.645758   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:37:26.645764   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:37:26.645825   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:37:26.679941   79927 cri.go:89] found id: ""
	I0806 08:37:26.679988   79927 logs.go:276] 0 containers: []
	W0806 08:37:26.679999   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:37:26.680006   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:37:26.680068   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:37:26.717247   79927 cri.go:89] found id: ""
	I0806 08:37:26.717277   79927 logs.go:276] 0 containers: []
	W0806 08:37:26.717288   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:37:26.717295   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:37:26.717382   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:37:26.760157   79927 cri.go:89] found id: ""
	I0806 08:37:26.760184   79927 logs.go:276] 0 containers: []
	W0806 08:37:26.760194   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:37:26.760205   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:37:26.760221   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:37:26.816034   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:37:26.816071   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:37:26.830141   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:37:26.830170   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:37:26.900078   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:37:26.900103   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:37:26.900121   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:37:26.984460   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:37:26.984501   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:37:26.287692   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:28.288990   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:25.112666   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:27.611170   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:27.466925   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:29.469989   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:29.527184   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:37:29.541974   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:37:29.542052   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:37:29.576062   79927 cri.go:89] found id: ""
	I0806 08:37:29.576095   79927 logs.go:276] 0 containers: []
	W0806 08:37:29.576107   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:37:29.576114   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:37:29.576176   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:37:29.610841   79927 cri.go:89] found id: ""
	I0806 08:37:29.610865   79927 logs.go:276] 0 containers: []
	W0806 08:37:29.610881   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:37:29.610892   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:37:29.610948   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:37:29.650326   79927 cri.go:89] found id: ""
	I0806 08:37:29.650350   79927 logs.go:276] 0 containers: []
	W0806 08:37:29.650360   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:37:29.650367   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:37:29.650434   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:37:29.689382   79927 cri.go:89] found id: ""
	I0806 08:37:29.689418   79927 logs.go:276] 0 containers: []
	W0806 08:37:29.689429   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:37:29.689436   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:37:29.689495   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:37:29.725893   79927 cri.go:89] found id: ""
	I0806 08:37:29.725918   79927 logs.go:276] 0 containers: []
	W0806 08:37:29.725926   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:37:29.725932   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:37:29.725980   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:37:29.767354   79927 cri.go:89] found id: ""
	I0806 08:37:29.767379   79927 logs.go:276] 0 containers: []
	W0806 08:37:29.767386   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:37:29.767392   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:37:29.767441   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:37:29.811139   79927 cri.go:89] found id: ""
	I0806 08:37:29.811161   79927 logs.go:276] 0 containers: []
	W0806 08:37:29.811169   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:37:29.811174   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:37:29.811219   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:37:29.848252   79927 cri.go:89] found id: ""
	I0806 08:37:29.848285   79927 logs.go:276] 0 containers: []
	W0806 08:37:29.848294   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:37:29.848303   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:37:29.848315   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:37:29.899850   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:37:29.899884   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:37:29.913744   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:37:29.913774   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:37:29.988016   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:37:29.988039   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:37:29.988054   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:37:30.069701   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:37:30.069735   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:37:32.606958   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:37:32.622621   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:37:32.622684   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:37:32.658816   79927 cri.go:89] found id: ""
	I0806 08:37:32.658844   79927 logs.go:276] 0 containers: []
	W0806 08:37:32.658855   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:37:32.658862   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:37:32.658928   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:37:32.697734   79927 cri.go:89] found id: ""
	I0806 08:37:32.697769   79927 logs.go:276] 0 containers: []
	W0806 08:37:32.697780   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:37:32.697786   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:37:32.697868   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:37:32.735540   79927 cri.go:89] found id: ""
	I0806 08:37:32.735572   79927 logs.go:276] 0 containers: []
	W0806 08:37:32.735584   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:37:32.735591   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:37:32.735658   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:37:32.771319   79927 cri.go:89] found id: ""
	I0806 08:37:32.771343   79927 logs.go:276] 0 containers: []
	W0806 08:37:32.771357   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:37:32.771365   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:37:32.771421   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:37:32.808137   79927 cri.go:89] found id: ""
	I0806 08:37:32.808162   79927 logs.go:276] 0 containers: []
	W0806 08:37:32.808172   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:37:32.808179   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:37:32.808234   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:37:32.843307   79927 cri.go:89] found id: ""
	I0806 08:37:32.843333   79927 logs.go:276] 0 containers: []
	W0806 08:37:32.843343   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:37:32.843350   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:37:32.843404   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:37:32.883855   79927 cri.go:89] found id: ""
	I0806 08:37:32.883922   79927 logs.go:276] 0 containers: []
	W0806 08:37:32.883935   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:37:32.883944   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:37:32.884014   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:37:32.920965   79927 cri.go:89] found id: ""
	I0806 08:37:32.921008   79927 logs.go:276] 0 containers: []
	W0806 08:37:32.921019   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:37:32.921027   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:37:32.921039   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:37:32.974933   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:37:32.974966   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:37:32.989326   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:37:32.989352   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:37:33.058274   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:37:33.058302   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:37:33.058314   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:37:33.136632   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:37:33.136670   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:37:30.789771   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:33.289001   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:29.613189   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:32.111086   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:34.112398   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:31.973697   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:34.469358   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:35.680315   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:37:35.694438   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:37:35.694497   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:37:35.729541   79927 cri.go:89] found id: ""
	I0806 08:37:35.729569   79927 logs.go:276] 0 containers: []
	W0806 08:37:35.729580   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:37:35.729587   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:37:35.729650   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:37:35.765783   79927 cri.go:89] found id: ""
	I0806 08:37:35.765813   79927 logs.go:276] 0 containers: []
	W0806 08:37:35.765822   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:37:35.765827   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:37:35.765885   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:37:35.802975   79927 cri.go:89] found id: ""
	I0806 08:37:35.803002   79927 logs.go:276] 0 containers: []
	W0806 08:37:35.803010   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:37:35.803016   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:37:35.803062   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:37:35.838588   79927 cri.go:89] found id: ""
	I0806 08:37:35.838614   79927 logs.go:276] 0 containers: []
	W0806 08:37:35.838623   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:37:35.838629   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:37:35.838675   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:37:35.874413   79927 cri.go:89] found id: ""
	I0806 08:37:35.874442   79927 logs.go:276] 0 containers: []
	W0806 08:37:35.874452   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:37:35.874460   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:37:35.874528   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:37:35.913816   79927 cri.go:89] found id: ""
	I0806 08:37:35.913852   79927 logs.go:276] 0 containers: []
	W0806 08:37:35.913871   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:37:35.913885   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:37:35.913963   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:37:35.951719   79927 cri.go:89] found id: ""
	I0806 08:37:35.951750   79927 logs.go:276] 0 containers: []
	W0806 08:37:35.951761   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:37:35.951770   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:37:35.951828   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:37:35.987720   79927 cri.go:89] found id: ""
	I0806 08:37:35.987749   79927 logs.go:276] 0 containers: []
	W0806 08:37:35.987760   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:37:35.987769   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:37:35.987780   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:37:36.043300   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:37:36.043339   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:37:36.057138   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:37:36.057169   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:37:36.126382   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:37:36.126399   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:37:36.126415   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:37:36.205243   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:37:36.205284   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:37:38.747978   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:37:35.289346   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:37.788785   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:36.114392   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:38.612174   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:36.967654   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:38.969141   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:40.969752   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:38.762123   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:37:38.762190   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:37:38.805074   79927 cri.go:89] found id: ""
	I0806 08:37:38.805101   79927 logs.go:276] 0 containers: []
	W0806 08:37:38.805111   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:37:38.805117   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:37:38.805185   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:37:38.839305   79927 cri.go:89] found id: ""
	I0806 08:37:38.839330   79927 logs.go:276] 0 containers: []
	W0806 08:37:38.839338   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:37:38.839344   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:37:38.839406   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:37:38.874174   79927 cri.go:89] found id: ""
	I0806 08:37:38.874203   79927 logs.go:276] 0 containers: []
	W0806 08:37:38.874214   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:37:38.874221   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:37:38.874277   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:37:38.909607   79927 cri.go:89] found id: ""
	I0806 08:37:38.909630   79927 logs.go:276] 0 containers: []
	W0806 08:37:38.909639   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:37:38.909645   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:37:38.909702   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:37:38.945584   79927 cri.go:89] found id: ""
	I0806 08:37:38.945615   79927 logs.go:276] 0 containers: []
	W0806 08:37:38.945626   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:37:38.945634   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:37:38.945696   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:37:38.988993   79927 cri.go:89] found id: ""
	I0806 08:37:38.989022   79927 logs.go:276] 0 containers: []
	W0806 08:37:38.989033   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:37:38.989041   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:37:38.989100   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:37:39.028902   79927 cri.go:89] found id: ""
	I0806 08:37:39.028934   79927 logs.go:276] 0 containers: []
	W0806 08:37:39.028942   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:37:39.028948   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:37:39.029005   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:37:39.070163   79927 cri.go:89] found id: ""
	I0806 08:37:39.070192   79927 logs.go:276] 0 containers: []
	W0806 08:37:39.070202   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:37:39.070213   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:37:39.070229   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:37:39.124891   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:37:39.124927   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:37:39.139898   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:37:39.139926   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:37:39.216157   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:37:39.216181   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:37:39.216196   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:37:39.301089   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:37:39.301136   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:37:41.840165   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:37:41.855659   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:37:41.855792   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:37:41.892620   79927 cri.go:89] found id: ""
	I0806 08:37:41.892648   79927 logs.go:276] 0 containers: []
	W0806 08:37:41.892660   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:37:41.892668   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:37:41.892728   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:37:41.929064   79927 cri.go:89] found id: ""
	I0806 08:37:41.929097   79927 logs.go:276] 0 containers: []
	W0806 08:37:41.929107   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:37:41.929114   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:37:41.929165   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:37:41.968334   79927 cri.go:89] found id: ""
	I0806 08:37:41.968371   79927 logs.go:276] 0 containers: []
	W0806 08:37:41.968382   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:37:41.968389   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:37:41.968450   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:37:42.008566   79927 cri.go:89] found id: ""
	I0806 08:37:42.008587   79927 logs.go:276] 0 containers: []
	W0806 08:37:42.008595   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:37:42.008601   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:37:42.008649   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:37:42.044281   79927 cri.go:89] found id: ""
	I0806 08:37:42.044308   79927 logs.go:276] 0 containers: []
	W0806 08:37:42.044315   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:37:42.044321   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:37:42.044392   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:37:42.081987   79927 cri.go:89] found id: ""
	I0806 08:37:42.082012   79927 logs.go:276] 0 containers: []
	W0806 08:37:42.082020   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:37:42.082026   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:37:42.082072   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:37:42.120818   79927 cri.go:89] found id: ""
	I0806 08:37:42.120847   79927 logs.go:276] 0 containers: []
	W0806 08:37:42.120857   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:37:42.120865   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:37:42.120926   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:37:42.160685   79927 cri.go:89] found id: ""
	I0806 08:37:42.160719   79927 logs.go:276] 0 containers: []
	W0806 08:37:42.160731   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:37:42.160742   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:37:42.160759   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:37:42.236322   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:37:42.236344   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:37:42.236357   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:37:42.328488   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:37:42.328526   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:37:42.369520   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:37:42.369545   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:37:42.425449   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:37:42.425490   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:37:39.788990   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:41.789236   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:43.789318   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:40.614326   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:43.112061   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:43.469284   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:45.469725   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:44.941557   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:37:44.954984   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:37:44.955043   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:37:44.992186   79927 cri.go:89] found id: ""
	I0806 08:37:44.992216   79927 logs.go:276] 0 containers: []
	W0806 08:37:44.992228   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:37:44.992234   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:37:44.992285   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:37:45.027114   79927 cri.go:89] found id: ""
	I0806 08:37:45.027143   79927 logs.go:276] 0 containers: []
	W0806 08:37:45.027154   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:37:45.027161   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:37:45.027222   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:37:45.062629   79927 cri.go:89] found id: ""
	I0806 08:37:45.062660   79927 logs.go:276] 0 containers: []
	W0806 08:37:45.062672   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:37:45.062679   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:37:45.062741   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:37:45.097499   79927 cri.go:89] found id: ""
	I0806 08:37:45.097530   79927 logs.go:276] 0 containers: []
	W0806 08:37:45.097541   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:37:45.097548   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:37:45.097605   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:37:45.137130   79927 cri.go:89] found id: ""
	I0806 08:37:45.137162   79927 logs.go:276] 0 containers: []
	W0806 08:37:45.137184   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:37:45.137191   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:37:45.137264   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:37:45.170617   79927 cri.go:89] found id: ""
	I0806 08:37:45.170642   79927 logs.go:276] 0 containers: []
	W0806 08:37:45.170659   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:37:45.170667   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:37:45.170726   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:37:45.205729   79927 cri.go:89] found id: ""
	I0806 08:37:45.205758   79927 logs.go:276] 0 containers: []
	W0806 08:37:45.205768   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:37:45.205774   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:37:45.205827   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:37:45.242345   79927 cri.go:89] found id: ""
	I0806 08:37:45.242373   79927 logs.go:276] 0 containers: []
	W0806 08:37:45.242384   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:37:45.242397   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:37:45.242414   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:37:45.283778   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:37:45.283810   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:37:45.341421   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:37:45.341457   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:37:45.356581   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:37:45.356606   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:37:45.428512   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:37:45.428541   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:37:45.428559   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:37:48.011744   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:37:48.025452   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:37:48.025533   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:37:48.064315   79927 cri.go:89] found id: ""
	I0806 08:37:48.064339   79927 logs.go:276] 0 containers: []
	W0806 08:37:48.064348   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:37:48.064353   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:37:48.064430   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:37:48.103855   79927 cri.go:89] found id: ""
	I0806 08:37:48.103891   79927 logs.go:276] 0 containers: []
	W0806 08:37:48.103902   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:37:48.103909   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:37:48.103968   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:37:48.141402   79927 cri.go:89] found id: ""
	I0806 08:37:48.141427   79927 logs.go:276] 0 containers: []
	W0806 08:37:48.141435   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:37:48.141440   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:37:48.141491   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:37:48.177449   79927 cri.go:89] found id: ""
	I0806 08:37:48.177479   79927 logs.go:276] 0 containers: []
	W0806 08:37:48.177490   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:37:48.177497   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:37:48.177566   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:37:48.214613   79927 cri.go:89] found id: ""
	I0806 08:37:48.214637   79927 logs.go:276] 0 containers: []
	W0806 08:37:48.214644   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:37:48.214650   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:37:48.214695   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:37:48.254049   79927 cri.go:89] found id: ""
	I0806 08:37:48.254075   79927 logs.go:276] 0 containers: []
	W0806 08:37:48.254083   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:37:48.254089   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:37:48.254147   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:37:48.296206   79927 cri.go:89] found id: ""
	I0806 08:37:48.296230   79927 logs.go:276] 0 containers: []
	W0806 08:37:48.296240   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:37:48.296246   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:37:48.296306   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:37:48.337614   79927 cri.go:89] found id: ""
	I0806 08:37:48.337641   79927 logs.go:276] 0 containers: []
	W0806 08:37:48.337651   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:37:48.337662   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:37:48.337679   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:37:48.391932   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:37:48.391971   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:37:48.405422   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:37:48.405450   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:37:48.477374   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:37:48.477402   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:37:48.477419   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:37:48.559252   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:37:48.559288   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:37:46.287167   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:48.288652   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:45.113308   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:47.611780   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:47.968109   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:50.469370   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:51.104881   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:37:51.120113   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:37:51.120181   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:37:51.155703   79927 cri.go:89] found id: ""
	I0806 08:37:51.155733   79927 logs.go:276] 0 containers: []
	W0806 08:37:51.155741   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:37:51.155747   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:37:51.155792   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:37:51.191211   79927 cri.go:89] found id: ""
	I0806 08:37:51.191240   79927 logs.go:276] 0 containers: []
	W0806 08:37:51.191250   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:37:51.191257   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:37:51.191324   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:37:51.226740   79927 cri.go:89] found id: ""
	I0806 08:37:51.226771   79927 logs.go:276] 0 containers: []
	W0806 08:37:51.226783   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:37:51.226790   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:37:51.226852   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:37:51.269101   79927 cri.go:89] found id: ""
	I0806 08:37:51.269132   79927 logs.go:276] 0 containers: []
	W0806 08:37:51.269141   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:37:51.269146   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:37:51.269196   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:37:51.307176   79927 cri.go:89] found id: ""
	I0806 08:37:51.307206   79927 logs.go:276] 0 containers: []
	W0806 08:37:51.307216   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:37:51.307224   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:37:51.307290   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:37:51.342824   79927 cri.go:89] found id: ""
	I0806 08:37:51.342854   79927 logs.go:276] 0 containers: []
	W0806 08:37:51.342864   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:37:51.342871   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:37:51.342935   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:37:51.383309   79927 cri.go:89] found id: ""
	I0806 08:37:51.383337   79927 logs.go:276] 0 containers: []
	W0806 08:37:51.383348   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:37:51.383355   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:37:51.383420   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:37:51.424851   79927 cri.go:89] found id: ""
	I0806 08:37:51.424896   79927 logs.go:276] 0 containers: []
	W0806 08:37:51.424908   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:37:51.424918   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:37:51.424950   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:37:51.507286   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:37:51.507307   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:37:51.507318   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:37:51.590512   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:37:51.590556   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:37:51.640453   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:37:51.640482   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:37:51.694699   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:37:51.694740   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:37:50.788163   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:52.788396   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:49.616347   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:52.112323   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:54.112911   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:52.470121   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:54.473264   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:54.211106   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:37:54.225858   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:37:54.225936   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:37:54.267219   79927 cri.go:89] found id: ""
	I0806 08:37:54.267247   79927 logs.go:276] 0 containers: []
	W0806 08:37:54.267258   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:37:54.267266   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:37:54.267323   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:37:54.306734   79927 cri.go:89] found id: ""
	I0806 08:37:54.306763   79927 logs.go:276] 0 containers: []
	W0806 08:37:54.306774   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:37:54.306781   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:37:54.306839   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:37:54.342257   79927 cri.go:89] found id: ""
	I0806 08:37:54.342294   79927 logs.go:276] 0 containers: []
	W0806 08:37:54.342306   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:37:54.342314   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:37:54.342383   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:37:54.380661   79927 cri.go:89] found id: ""
	I0806 08:37:54.380685   79927 logs.go:276] 0 containers: []
	W0806 08:37:54.380693   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:37:54.380699   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:37:54.380743   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:37:54.416503   79927 cri.go:89] found id: ""
	I0806 08:37:54.416525   79927 logs.go:276] 0 containers: []
	W0806 08:37:54.416533   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:37:54.416539   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:37:54.416589   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:37:54.453470   79927 cri.go:89] found id: ""
	I0806 08:37:54.453508   79927 logs.go:276] 0 containers: []
	W0806 08:37:54.453520   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:37:54.453528   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:37:54.453591   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:37:54.497630   79927 cri.go:89] found id: ""
	I0806 08:37:54.497671   79927 logs.go:276] 0 containers: []
	W0806 08:37:54.497682   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:37:54.497691   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:37:54.497750   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:37:54.536516   79927 cri.go:89] found id: ""
	I0806 08:37:54.536546   79927 logs.go:276] 0 containers: []
	W0806 08:37:54.536557   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:37:54.536568   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:37:54.536587   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:37:54.592997   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:37:54.593032   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:37:54.607568   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:37:54.607598   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:37:54.679369   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:37:54.679402   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:37:54.679423   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:37:54.760001   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:37:54.760041   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:37:57.305405   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:37:57.319427   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:37:57.319493   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:37:57.357465   79927 cri.go:89] found id: ""
	I0806 08:37:57.357493   79927 logs.go:276] 0 containers: []
	W0806 08:37:57.357504   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:37:57.357513   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:37:57.357571   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:37:57.390131   79927 cri.go:89] found id: ""
	I0806 08:37:57.390155   79927 logs.go:276] 0 containers: []
	W0806 08:37:57.390165   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:37:57.390172   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:37:57.390238   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:37:57.428571   79927 cri.go:89] found id: ""
	I0806 08:37:57.428600   79927 logs.go:276] 0 containers: []
	W0806 08:37:57.428608   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:37:57.428613   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:37:57.428675   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:37:57.462688   79927 cri.go:89] found id: ""
	I0806 08:37:57.462717   79927 logs.go:276] 0 containers: []
	W0806 08:37:57.462727   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:37:57.462733   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:37:57.462791   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:37:57.525008   79927 cri.go:89] found id: ""
	I0806 08:37:57.525033   79927 logs.go:276] 0 containers: []
	W0806 08:37:57.525041   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:37:57.525046   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:37:57.525108   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:37:57.565992   79927 cri.go:89] found id: ""
	I0806 08:37:57.566024   79927 logs.go:276] 0 containers: []
	W0806 08:37:57.566035   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:37:57.566045   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:37:57.566123   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:37:57.610170   79927 cri.go:89] found id: ""
	I0806 08:37:57.610198   79927 logs.go:276] 0 containers: []
	W0806 08:37:57.610208   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:37:57.610215   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:37:57.610262   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:37:57.647054   79927 cri.go:89] found id: ""
	I0806 08:37:57.647085   79927 logs.go:276] 0 containers: []
	W0806 08:37:57.647096   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:37:57.647106   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:37:57.647121   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:37:57.698177   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:37:57.698214   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:37:57.712244   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:37:57.712271   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:37:57.781683   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:37:57.781707   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:37:57.781723   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:37:57.861097   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:37:57.861133   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:37:54.789102   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:57.287564   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:56.613662   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:59.112248   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:56.967534   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:37:59.469122   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:00.401186   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:38:00.414546   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:38:00.414602   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:38:00.454163   79927 cri.go:89] found id: ""
	I0806 08:38:00.454190   79927 logs.go:276] 0 containers: []
	W0806 08:38:00.454201   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:38:00.454208   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:38:00.454269   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:38:00.497427   79927 cri.go:89] found id: ""
	I0806 08:38:00.497455   79927 logs.go:276] 0 containers: []
	W0806 08:38:00.497466   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:38:00.497473   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:38:00.497538   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:38:00.537590   79927 cri.go:89] found id: ""
	I0806 08:38:00.537619   79927 logs.go:276] 0 containers: []
	W0806 08:38:00.537627   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:38:00.537633   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:38:00.537695   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:38:00.572337   79927 cri.go:89] found id: ""
	I0806 08:38:00.572383   79927 logs.go:276] 0 containers: []
	W0806 08:38:00.572396   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:38:00.572403   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:38:00.572460   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:38:00.607729   79927 cri.go:89] found id: ""
	I0806 08:38:00.607772   79927 logs.go:276] 0 containers: []
	W0806 08:38:00.607783   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:38:00.607791   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:38:00.607852   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:38:00.644777   79927 cri.go:89] found id: ""
	I0806 08:38:00.644843   79927 logs.go:276] 0 containers: []
	W0806 08:38:00.644856   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:38:00.644864   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:38:00.644935   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:38:00.684698   79927 cri.go:89] found id: ""
	I0806 08:38:00.684727   79927 logs.go:276] 0 containers: []
	W0806 08:38:00.684739   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:38:00.684747   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:38:00.684863   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:38:00.718472   79927 cri.go:89] found id: ""
	I0806 08:38:00.718506   79927 logs.go:276] 0 containers: []
	W0806 08:38:00.718518   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:38:00.718529   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:38:00.718546   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:38:00.770265   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:38:00.770309   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:38:00.787438   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:38:00.787469   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:38:00.854538   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:38:00.854558   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:38:00.854570   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:38:00.936810   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:38:00.936847   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:38:03.477109   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:38:03.494379   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:38:03.494439   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:38:03.536166   79927 cri.go:89] found id: ""
	I0806 08:38:03.536193   79927 logs.go:276] 0 containers: []
	W0806 08:38:03.536202   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:38:03.536208   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:38:03.536255   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:38:03.577497   79927 cri.go:89] found id: ""
	I0806 08:38:03.577522   79927 logs.go:276] 0 containers: []
	W0806 08:38:03.577531   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:38:03.577536   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:38:03.577585   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:38:03.614534   79927 cri.go:89] found id: ""
	I0806 08:38:03.614560   79927 logs.go:276] 0 containers: []
	W0806 08:38:03.614569   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:38:03.614576   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:38:03.614634   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:38:03.655824   79927 cri.go:89] found id: ""
	I0806 08:38:03.655860   79927 logs.go:276] 0 containers: []
	W0806 08:38:03.655871   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:38:03.655878   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:38:03.655952   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:38:03.699843   79927 cri.go:89] found id: ""
	I0806 08:38:03.699875   79927 logs.go:276] 0 containers: []
	W0806 08:38:03.699888   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:38:03.699896   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:38:03.699957   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:38:03.740131   79927 cri.go:89] found id: ""
	I0806 08:38:03.740159   79927 logs.go:276] 0 containers: []
	W0806 08:38:03.740168   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:38:03.740174   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:38:03.740223   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:37:59.787634   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:01.788942   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:03.790134   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:01.112582   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:03.113004   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:01.972073   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:04.468917   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:03.774103   79927 cri.go:89] found id: ""
	I0806 08:38:03.774149   79927 logs.go:276] 0 containers: []
	W0806 08:38:03.774163   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:38:03.774170   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:38:03.774230   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:38:03.814051   79927 cri.go:89] found id: ""
	I0806 08:38:03.814075   79927 logs.go:276] 0 containers: []
	W0806 08:38:03.814083   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:38:03.814091   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:38:03.814102   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:38:03.854869   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:38:03.854898   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:38:03.908284   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:38:03.908318   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:38:03.923265   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:38:03.923303   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:38:04.001785   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:38:04.001806   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:38:04.001819   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:38:06.583809   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:38:06.596764   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:38:06.596827   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:38:06.636611   79927 cri.go:89] found id: ""
	I0806 08:38:06.636639   79927 logs.go:276] 0 containers: []
	W0806 08:38:06.636650   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:38:06.636657   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:38:06.636707   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:38:06.675392   79927 cri.go:89] found id: ""
	I0806 08:38:06.675419   79927 logs.go:276] 0 containers: []
	W0806 08:38:06.675430   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:38:06.675438   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:38:06.675492   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:38:06.711270   79927 cri.go:89] found id: ""
	I0806 08:38:06.711302   79927 logs.go:276] 0 containers: []
	W0806 08:38:06.711313   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:38:06.711320   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:38:06.711388   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:38:06.747252   79927 cri.go:89] found id: ""
	I0806 08:38:06.747280   79927 logs.go:276] 0 containers: []
	W0806 08:38:06.747291   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:38:06.747297   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:38:06.747360   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:38:06.781999   79927 cri.go:89] found id: ""
	I0806 08:38:06.782028   79927 logs.go:276] 0 containers: []
	W0806 08:38:06.782038   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:38:06.782043   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:38:06.782095   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:38:06.818810   79927 cri.go:89] found id: ""
	I0806 08:38:06.818847   79927 logs.go:276] 0 containers: []
	W0806 08:38:06.818860   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:38:06.818869   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:38:06.818935   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:38:06.857937   79927 cri.go:89] found id: ""
	I0806 08:38:06.857972   79927 logs.go:276] 0 containers: []
	W0806 08:38:06.857983   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:38:06.857989   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:38:06.858049   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:38:06.897920   79927 cri.go:89] found id: ""
	I0806 08:38:06.897949   79927 logs.go:276] 0 containers: []
	W0806 08:38:06.897957   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:38:06.897965   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:38:06.897979   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:38:06.912199   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:38:06.912228   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:38:06.983129   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:38:06.983157   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:38:06.983173   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:38:07.067783   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:38:07.067853   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:38:07.107756   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:38:07.107787   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:38:06.286856   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:08.287070   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:05.113664   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:07.611129   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:06.973447   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:09.468163   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:11.469075   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:09.657958   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:38:09.670726   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:38:09.670807   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:38:09.709184   79927 cri.go:89] found id: ""
	I0806 08:38:09.709220   79927 logs.go:276] 0 containers: []
	W0806 08:38:09.709232   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:38:09.709239   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:38:09.709299   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:38:09.747032   79927 cri.go:89] found id: ""
	I0806 08:38:09.747061   79927 logs.go:276] 0 containers: []
	W0806 08:38:09.747073   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:38:09.747080   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:38:09.747141   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:38:09.785134   79927 cri.go:89] found id: ""
	I0806 08:38:09.785170   79927 logs.go:276] 0 containers: []
	W0806 08:38:09.785180   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:38:09.785188   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:38:09.785248   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:38:09.825136   79927 cri.go:89] found id: ""
	I0806 08:38:09.825168   79927 logs.go:276] 0 containers: []
	W0806 08:38:09.825179   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:38:09.825186   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:38:09.825249   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:38:09.865292   79927 cri.go:89] found id: ""
	I0806 08:38:09.865327   79927 logs.go:276] 0 containers: []
	W0806 08:38:09.865336   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:38:09.865345   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:38:09.865415   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:38:09.911145   79927 cri.go:89] found id: ""
	I0806 08:38:09.911172   79927 logs.go:276] 0 containers: []
	W0806 08:38:09.911183   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:38:09.911190   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:38:09.911247   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:38:09.961259   79927 cri.go:89] found id: ""
	I0806 08:38:09.961289   79927 logs.go:276] 0 containers: []
	W0806 08:38:09.961299   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:38:09.961307   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:38:09.961364   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:38:10.009572   79927 cri.go:89] found id: ""
	I0806 08:38:10.009594   79927 logs.go:276] 0 containers: []
	W0806 08:38:10.009602   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:38:10.009610   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:38:10.009623   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:38:10.063733   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:38:10.063770   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:38:10.080004   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:38:10.080044   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:38:10.158792   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:38:10.158823   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:38:10.158840   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:38:10.242126   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:38:10.242162   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:38:12.781497   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:38:12.796715   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:38:12.796804   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:38:12.831223   79927 cri.go:89] found id: ""
	I0806 08:38:12.831254   79927 logs.go:276] 0 containers: []
	W0806 08:38:12.831265   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:38:12.831273   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:38:12.831332   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:38:12.872202   79927 cri.go:89] found id: ""
	I0806 08:38:12.872224   79927 logs.go:276] 0 containers: []
	W0806 08:38:12.872231   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:38:12.872236   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:38:12.872284   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:38:12.913457   79927 cri.go:89] found id: ""
	I0806 08:38:12.913486   79927 logs.go:276] 0 containers: []
	W0806 08:38:12.913497   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:38:12.913505   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:38:12.913563   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:38:12.950382   79927 cri.go:89] found id: ""
	I0806 08:38:12.950413   79927 logs.go:276] 0 containers: []
	W0806 08:38:12.950424   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:38:12.950432   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:38:12.950497   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:38:12.988956   79927 cri.go:89] found id: ""
	I0806 08:38:12.988984   79927 logs.go:276] 0 containers: []
	W0806 08:38:12.988992   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:38:12.988998   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:38:12.989054   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:38:13.025308   79927 cri.go:89] found id: ""
	I0806 08:38:13.025334   79927 logs.go:276] 0 containers: []
	W0806 08:38:13.025342   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:38:13.025348   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:38:13.025401   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:38:13.060156   79927 cri.go:89] found id: ""
	I0806 08:38:13.060184   79927 logs.go:276] 0 containers: []
	W0806 08:38:13.060196   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:38:13.060204   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:38:13.060268   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:38:13.096302   79927 cri.go:89] found id: ""
	I0806 08:38:13.096331   79927 logs.go:276] 0 containers: []
	W0806 08:38:13.096344   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:38:13.096360   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:38:13.096387   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:38:13.149567   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:38:13.149603   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:38:13.164680   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:38:13.164715   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:38:13.236695   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:38:13.236718   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:38:13.236733   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:38:13.323043   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:38:13.323081   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:38:10.288582   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:12.788556   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:09.611677   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:12.111364   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:14.112265   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:13.469940   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:15.969381   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:15.867128   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:38:15.885504   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:38:15.885579   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:38:15.929060   79927 cri.go:89] found id: ""
	I0806 08:38:15.929087   79927 logs.go:276] 0 containers: []
	W0806 08:38:15.929098   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:38:15.929106   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:38:15.929157   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:38:15.996919   79927 cri.go:89] found id: ""
	I0806 08:38:15.996946   79927 logs.go:276] 0 containers: []
	W0806 08:38:15.996954   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:38:15.996959   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:38:15.997013   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:38:16.033299   79927 cri.go:89] found id: ""
	I0806 08:38:16.033323   79927 logs.go:276] 0 containers: []
	W0806 08:38:16.033331   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:38:16.033338   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:38:16.033399   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:38:16.070754   79927 cri.go:89] found id: ""
	I0806 08:38:16.070781   79927 logs.go:276] 0 containers: []
	W0806 08:38:16.070789   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:38:16.070795   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:38:16.070853   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:38:16.107906   79927 cri.go:89] found id: ""
	I0806 08:38:16.107931   79927 logs.go:276] 0 containers: []
	W0806 08:38:16.107943   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:38:16.107951   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:38:16.108011   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:38:16.145814   79927 cri.go:89] found id: ""
	I0806 08:38:16.145844   79927 logs.go:276] 0 containers: []
	W0806 08:38:16.145853   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:38:16.145861   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:38:16.145921   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:38:16.183752   79927 cri.go:89] found id: ""
	I0806 08:38:16.183782   79927 logs.go:276] 0 containers: []
	W0806 08:38:16.183793   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:38:16.183828   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:38:16.183890   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:38:16.220973   79927 cri.go:89] found id: ""
	I0806 08:38:16.221000   79927 logs.go:276] 0 containers: []
	W0806 08:38:16.221010   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:38:16.221020   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:38:16.221034   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:38:16.274514   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:38:16.274555   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:38:16.291387   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:38:16.291417   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:38:16.365737   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:38:16.365769   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:38:16.365785   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:38:16.446578   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:38:16.446617   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:38:14.789440   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:17.288354   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:16.112334   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:18.612150   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:18.468700   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:20.469520   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:18.989406   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:38:19.003113   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:38:19.003174   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:38:19.038794   79927 cri.go:89] found id: ""
	I0806 08:38:19.038821   79927 logs.go:276] 0 containers: []
	W0806 08:38:19.038832   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:38:19.038839   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:38:19.038899   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:38:19.072521   79927 cri.go:89] found id: ""
	I0806 08:38:19.072546   79927 logs.go:276] 0 containers: []
	W0806 08:38:19.072554   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:38:19.072561   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:38:19.072606   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:38:19.105949   79927 cri.go:89] found id: ""
	I0806 08:38:19.105982   79927 logs.go:276] 0 containers: []
	W0806 08:38:19.105994   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:38:19.106002   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:38:19.106053   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:38:19.141277   79927 cri.go:89] found id: ""
	I0806 08:38:19.141303   79927 logs.go:276] 0 containers: []
	W0806 08:38:19.141311   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:38:19.141317   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:38:19.141373   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:38:19.179144   79927 cri.go:89] found id: ""
	I0806 08:38:19.179173   79927 logs.go:276] 0 containers: []
	W0806 08:38:19.179184   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:38:19.179191   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:38:19.179252   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:38:19.218970   79927 cri.go:89] found id: ""
	I0806 08:38:19.218998   79927 logs.go:276] 0 containers: []
	W0806 08:38:19.219010   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:38:19.219022   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:38:19.219084   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:38:19.257623   79927 cri.go:89] found id: ""
	I0806 08:38:19.257652   79927 logs.go:276] 0 containers: []
	W0806 08:38:19.257663   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:38:19.257671   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:38:19.257740   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:38:19.294222   79927 cri.go:89] found id: ""
	I0806 08:38:19.294250   79927 logs.go:276] 0 containers: []
	W0806 08:38:19.294258   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:38:19.294267   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:38:19.294279   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:38:19.374327   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:38:19.374352   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:38:19.374368   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:38:19.458570   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:38:19.458604   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:38:19.504329   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:38:19.504382   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:38:19.554199   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:38:19.554237   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:38:22.069574   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:38:22.082919   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:38:22.082981   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:38:22.115952   79927 cri.go:89] found id: ""
	I0806 08:38:22.115974   79927 logs.go:276] 0 containers: []
	W0806 08:38:22.115981   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:38:22.115987   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:38:22.116031   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:38:22.158322   79927 cri.go:89] found id: ""
	I0806 08:38:22.158349   79927 logs.go:276] 0 containers: []
	W0806 08:38:22.158359   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:38:22.158366   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:38:22.158425   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:38:22.203375   79927 cri.go:89] found id: ""
	I0806 08:38:22.203403   79927 logs.go:276] 0 containers: []
	W0806 08:38:22.203412   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:38:22.203418   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:38:22.203473   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:38:22.240808   79927 cri.go:89] found id: ""
	I0806 08:38:22.240837   79927 logs.go:276] 0 containers: []
	W0806 08:38:22.240849   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:38:22.240857   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:38:22.240913   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:38:22.277885   79927 cri.go:89] found id: ""
	I0806 08:38:22.277914   79927 logs.go:276] 0 containers: []
	W0806 08:38:22.277925   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:38:22.277932   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:38:22.277993   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:38:22.314990   79927 cri.go:89] found id: ""
	I0806 08:38:22.315020   79927 logs.go:276] 0 containers: []
	W0806 08:38:22.315031   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:38:22.315039   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:38:22.315098   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:38:22.352159   79927 cri.go:89] found id: ""
	I0806 08:38:22.352189   79927 logs.go:276] 0 containers: []
	W0806 08:38:22.352198   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:38:22.352204   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:38:22.352267   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:38:22.387349   79927 cri.go:89] found id: ""
	I0806 08:38:22.387382   79927 logs.go:276] 0 containers: []
	W0806 08:38:22.387393   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:38:22.387404   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:38:22.387416   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:38:22.425834   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:38:22.425871   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:38:22.479680   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:38:22.479712   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:38:22.495104   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:38:22.495140   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:38:22.567013   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:38:22.567030   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:38:22.567043   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:38:19.288896   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:21.291455   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:23.789282   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:21.111745   79219 pod_ready.go:102] pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:23.111513   79219 pod_ready.go:81] duration metric: took 4m0.005888372s for pod "metrics-server-569cc877fc-8xb9k" in "kube-system" namespace to be "Ready" ...
	E0806 08:38:23.111545   79219 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0806 08:38:23.111555   79219 pod_ready.go:38] duration metric: took 4m6.006750463s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0806 08:38:23.111579   79219 api_server.go:52] waiting for apiserver process to appear ...
	I0806 08:38:23.111610   79219 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:38:23.111669   79219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:38:23.182880   79219 cri.go:89] found id: "f7edb6bece3347d88a263663a1d6549165dafdcb67ad723494b937766cba6da6"
	I0806 08:38:23.182907   79219 cri.go:89] found id: ""
	I0806 08:38:23.182918   79219 logs.go:276] 1 containers: [f7edb6bece3347d88a263663a1d6549165dafdcb67ad723494b937766cba6da6]
	I0806 08:38:23.182974   79219 ssh_runner.go:195] Run: which crictl
	I0806 08:38:23.188039   79219 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:38:23.188103   79219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:38:23.235090   79219 cri.go:89] found id: "df2efdb62a4af10f89a1b37599d4258ac3b72660353795f8c2141f3ef70c1f65"
	I0806 08:38:23.235113   79219 cri.go:89] found id: ""
	I0806 08:38:23.235135   79219 logs.go:276] 1 containers: [df2efdb62a4af10f89a1b37599d4258ac3b72660353795f8c2141f3ef70c1f65]
	I0806 08:38:23.235195   79219 ssh_runner.go:195] Run: which crictl
	I0806 08:38:23.240260   79219 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:38:23.240335   79219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:38:23.280619   79219 cri.go:89] found id: "9e1c91bc3a920b6f7db4477b8219f13fa092c68ff53abe2390325a68dad253e5"
	I0806 08:38:23.280645   79219 cri.go:89] found id: ""
	I0806 08:38:23.280655   79219 logs.go:276] 1 containers: [9e1c91bc3a920b6f7db4477b8219f13fa092c68ff53abe2390325a68dad253e5]
	I0806 08:38:23.280715   79219 ssh_runner.go:195] Run: which crictl
	I0806 08:38:23.286155   79219 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:38:23.286234   79219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:38:23.329583   79219 cri.go:89] found id: "a25c5660a3a2b7226bf308b44307a3912efa7a9a73e79ec63f83133b80a44683"
	I0806 08:38:23.329609   79219 cri.go:89] found id: ""
	I0806 08:38:23.329618   79219 logs.go:276] 1 containers: [a25c5660a3a2b7226bf308b44307a3912efa7a9a73e79ec63f83133b80a44683]
	I0806 08:38:23.329678   79219 ssh_runner.go:195] Run: which crictl
	I0806 08:38:23.334342   79219 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:38:23.334407   79219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:38:23.372955   79219 cri.go:89] found id: "612fc6a8f2c1208097a6d6287c5fecb20026608a278af5d0d4efc9662e065eb9"
	I0806 08:38:23.372974   79219 cri.go:89] found id: ""
	I0806 08:38:23.372981   79219 logs.go:276] 1 containers: [612fc6a8f2c1208097a6d6287c5fecb20026608a278af5d0d4efc9662e065eb9]
	I0806 08:38:23.373037   79219 ssh_runner.go:195] Run: which crictl
	I0806 08:38:23.377637   79219 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:38:23.377692   79219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:38:23.416165   79219 cri.go:89] found id: "d8651a7ed6615e826b950eab003841c2ec872b48160f207e77dd32a684125a23"
	I0806 08:38:23.416188   79219 cri.go:89] found id: ""
	I0806 08:38:23.416197   79219 logs.go:276] 1 containers: [d8651a7ed6615e826b950eab003841c2ec872b48160f207e77dd32a684125a23]
	I0806 08:38:23.416248   79219 ssh_runner.go:195] Run: which crictl
	I0806 08:38:23.420970   79219 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:38:23.421020   79219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:38:23.459247   79219 cri.go:89] found id: ""
	I0806 08:38:23.459277   79219 logs.go:276] 0 containers: []
	W0806 08:38:23.459287   79219 logs.go:278] No container was found matching "kindnet"
	I0806 08:38:23.459295   79219 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0806 08:38:23.459354   79219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0806 08:38:23.503329   79219 cri.go:89] found id: "367eb6487187a37acafb1fa74364a2d9ec5d9241888ed2e871fb14f2af161975"
	I0806 08:38:23.503354   79219 cri.go:89] found id: "78abe2efa92d2d6c9171ed65bd4d941693be3207554f9318a7cefb91544eeb20"
	I0806 08:38:23.503360   79219 cri.go:89] found id: ""
	I0806 08:38:23.503369   79219 logs.go:276] 2 containers: [367eb6487187a37acafb1fa74364a2d9ec5d9241888ed2e871fb14f2af161975 78abe2efa92d2d6c9171ed65bd4d941693be3207554f9318a7cefb91544eeb20]
	I0806 08:38:23.503431   79219 ssh_runner.go:195] Run: which crictl
	I0806 08:38:23.508375   79219 ssh_runner.go:195] Run: which crictl
	I0806 08:38:23.512355   79219 logs.go:123] Gathering logs for kube-scheduler [a25c5660a3a2b7226bf308b44307a3912efa7a9a73e79ec63f83133b80a44683] ...
	I0806 08:38:23.512401   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a25c5660a3a2b7226bf308b44307a3912efa7a9a73e79ec63f83133b80a44683"
	I0806 08:38:23.548545   79219 logs.go:123] Gathering logs for kube-controller-manager [d8651a7ed6615e826b950eab003841c2ec872b48160f207e77dd32a684125a23] ...
	I0806 08:38:23.548574   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8651a7ed6615e826b950eab003841c2ec872b48160f207e77dd32a684125a23"
	I0806 08:38:23.607970   79219 logs.go:123] Gathering logs for storage-provisioner [78abe2efa92d2d6c9171ed65bd4d941693be3207554f9318a7cefb91544eeb20] ...
	I0806 08:38:23.608004   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 78abe2efa92d2d6c9171ed65bd4d941693be3207554f9318a7cefb91544eeb20"
	I0806 08:38:23.646553   79219 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:38:23.646582   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:38:24.180521   79219 logs.go:123] Gathering logs for container status ...
	I0806 08:38:24.180558   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:38:24.228240   79219 logs.go:123] Gathering logs for kubelet ...
	I0806 08:38:24.228273   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:38:24.285975   79219 logs.go:123] Gathering logs for dmesg ...
	I0806 08:38:24.286007   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:38:24.301117   79219 logs.go:123] Gathering logs for etcd [df2efdb62a4af10f89a1b37599d4258ac3b72660353795f8c2141f3ef70c1f65] ...
	I0806 08:38:24.301152   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 df2efdb62a4af10f89a1b37599d4258ac3b72660353795f8c2141f3ef70c1f65"
	I0806 08:38:22.968463   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:25.469138   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:25.146383   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:38:25.160013   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:38:25.160078   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:38:25.200981   79927 cri.go:89] found id: ""
	I0806 08:38:25.201006   79927 logs.go:276] 0 containers: []
	W0806 08:38:25.201018   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:38:25.201026   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:38:25.201091   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:38:25.239852   79927 cri.go:89] found id: ""
	I0806 08:38:25.239895   79927 logs.go:276] 0 containers: []
	W0806 08:38:25.239907   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:38:25.239914   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:38:25.239975   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:38:25.275521   79927 cri.go:89] found id: ""
	I0806 08:38:25.275549   79927 logs.go:276] 0 containers: []
	W0806 08:38:25.275561   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:38:25.275568   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:38:25.275630   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:38:25.311921   79927 cri.go:89] found id: ""
	I0806 08:38:25.311952   79927 logs.go:276] 0 containers: []
	W0806 08:38:25.311962   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:38:25.311969   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:38:25.312036   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:38:25.353047   79927 cri.go:89] found id: ""
	I0806 08:38:25.353081   79927 logs.go:276] 0 containers: []
	W0806 08:38:25.353093   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:38:25.353102   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:38:25.353177   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:38:25.389442   79927 cri.go:89] found id: ""
	I0806 08:38:25.389470   79927 logs.go:276] 0 containers: []
	W0806 08:38:25.389479   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:38:25.389485   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:38:25.389529   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:38:25.426435   79927 cri.go:89] found id: ""
	I0806 08:38:25.426464   79927 logs.go:276] 0 containers: []
	W0806 08:38:25.426474   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:38:25.426481   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:38:25.426541   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:38:25.460095   79927 cri.go:89] found id: ""
	I0806 08:38:25.460132   79927 logs.go:276] 0 containers: []
	W0806 08:38:25.460157   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:38:25.460169   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:38:25.460192   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:38:25.511363   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:38:25.511398   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:38:25.525953   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:38:25.525980   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:38:25.598690   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:38:25.598711   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:38:25.598723   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:38:25.674996   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:38:25.675034   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:38:28.216533   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:38:28.230080   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:38:28.230171   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:38:28.269239   79927 cri.go:89] found id: ""
	I0806 08:38:28.269274   79927 logs.go:276] 0 containers: []
	W0806 08:38:28.269288   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:38:28.269296   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:38:28.269361   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:38:28.311773   79927 cri.go:89] found id: ""
	I0806 08:38:28.311809   79927 logs.go:276] 0 containers: []
	W0806 08:38:28.311820   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:38:28.311828   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:38:28.311896   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:38:28.351492   79927 cri.go:89] found id: ""
	I0806 08:38:28.351522   79927 logs.go:276] 0 containers: []
	W0806 08:38:28.351530   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:38:28.351535   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:38:28.351581   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:38:28.389039   79927 cri.go:89] found id: ""
	I0806 08:38:28.389067   79927 logs.go:276] 0 containers: []
	W0806 08:38:28.389079   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:38:28.389086   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:38:28.389175   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:38:28.432249   79927 cri.go:89] found id: ""
	I0806 08:38:28.432277   79927 logs.go:276] 0 containers: []
	W0806 08:38:28.432287   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:38:28.432296   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:38:28.432354   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:38:28.478277   79927 cri.go:89] found id: ""
	I0806 08:38:28.478303   79927 logs.go:276] 0 containers: []
	W0806 08:38:28.478313   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:38:28.478321   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:38:28.478380   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:38:28.516167   79927 cri.go:89] found id: ""
	I0806 08:38:28.516199   79927 logs.go:276] 0 containers: []
	W0806 08:38:28.516211   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:38:28.516219   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:38:28.516286   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:38:28.555493   79927 cri.go:89] found id: ""
	I0806 08:38:28.555523   79927 logs.go:276] 0 containers: []
	W0806 08:38:28.555534   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:38:28.555545   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:38:28.555560   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:38:28.627101   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:38:28.627124   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:38:28.627147   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:38:28.716655   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:38:28.716709   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:38:28.755469   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:38:28.755510   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:38:26.286925   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:28.288654   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:24.355686   79219 logs.go:123] Gathering logs for coredns [9e1c91bc3a920b6f7db4477b8219f13fa092c68ff53abe2390325a68dad253e5] ...
	I0806 08:38:24.355729   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e1c91bc3a920b6f7db4477b8219f13fa092c68ff53abe2390325a68dad253e5"
	I0806 08:38:24.396350   79219 logs.go:123] Gathering logs for kube-proxy [612fc6a8f2c1208097a6d6287c5fecb20026608a278af5d0d4efc9662e065eb9] ...
	I0806 08:38:24.396402   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 612fc6a8f2c1208097a6d6287c5fecb20026608a278af5d0d4efc9662e065eb9"
	I0806 08:38:24.435066   79219 logs.go:123] Gathering logs for storage-provisioner [367eb6487187a37acafb1fa74364a2d9ec5d9241888ed2e871fb14f2af161975] ...
	I0806 08:38:24.435098   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 367eb6487187a37acafb1fa74364a2d9ec5d9241888ed2e871fb14f2af161975"
	I0806 08:38:24.474159   79219 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:38:24.474188   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 08:38:24.601081   79219 logs.go:123] Gathering logs for kube-apiserver [f7edb6bece3347d88a263663a1d6549165dafdcb67ad723494b937766cba6da6] ...
	I0806 08:38:24.601112   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f7edb6bece3347d88a263663a1d6549165dafdcb67ad723494b937766cba6da6"
	I0806 08:38:27.170209   79219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:38:27.187573   79219 api_server.go:72] duration metric: took 4m15.285938182s to wait for apiserver process to appear ...
	I0806 08:38:27.187597   79219 api_server.go:88] waiting for apiserver healthz status ...
	I0806 08:38:27.187626   79219 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:38:27.187673   79219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:38:27.224204   79219 cri.go:89] found id: "f7edb6bece3347d88a263663a1d6549165dafdcb67ad723494b937766cba6da6"
	I0806 08:38:27.224227   79219 cri.go:89] found id: ""
	I0806 08:38:27.224237   79219 logs.go:276] 1 containers: [f7edb6bece3347d88a263663a1d6549165dafdcb67ad723494b937766cba6da6]
	I0806 08:38:27.224284   79219 ssh_runner.go:195] Run: which crictl
	I0806 08:38:27.228883   79219 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:38:27.228957   79219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:38:27.275914   79219 cri.go:89] found id: "df2efdb62a4af10f89a1b37599d4258ac3b72660353795f8c2141f3ef70c1f65"
	I0806 08:38:27.275939   79219 cri.go:89] found id: ""
	I0806 08:38:27.275948   79219 logs.go:276] 1 containers: [df2efdb62a4af10f89a1b37599d4258ac3b72660353795f8c2141f3ef70c1f65]
	I0806 08:38:27.276009   79219 ssh_runner.go:195] Run: which crictl
	I0806 08:38:27.280979   79219 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:38:27.281037   79219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:38:27.326292   79219 cri.go:89] found id: "9e1c91bc3a920b6f7db4477b8219f13fa092c68ff53abe2390325a68dad253e5"
	I0806 08:38:27.326312   79219 cri.go:89] found id: ""
	I0806 08:38:27.326319   79219 logs.go:276] 1 containers: [9e1c91bc3a920b6f7db4477b8219f13fa092c68ff53abe2390325a68dad253e5]
	I0806 08:38:27.326365   79219 ssh_runner.go:195] Run: which crictl
	I0806 08:38:27.331249   79219 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:38:27.331326   79219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:38:27.374229   79219 cri.go:89] found id: "a25c5660a3a2b7226bf308b44307a3912efa7a9a73e79ec63f83133b80a44683"
	I0806 08:38:27.374250   79219 cri.go:89] found id: ""
	I0806 08:38:27.374257   79219 logs.go:276] 1 containers: [a25c5660a3a2b7226bf308b44307a3912efa7a9a73e79ec63f83133b80a44683]
	I0806 08:38:27.374300   79219 ssh_runner.go:195] Run: which crictl
	I0806 08:38:27.378875   79219 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:38:27.378957   79219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:38:27.416159   79219 cri.go:89] found id: "612fc6a8f2c1208097a6d6287c5fecb20026608a278af5d0d4efc9662e065eb9"
	I0806 08:38:27.416183   79219 cri.go:89] found id: ""
	I0806 08:38:27.416192   79219 logs.go:276] 1 containers: [612fc6a8f2c1208097a6d6287c5fecb20026608a278af5d0d4efc9662e065eb9]
	I0806 08:38:27.416243   79219 ssh_runner.go:195] Run: which crictl
	I0806 08:38:27.420463   79219 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:38:27.420531   79219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:38:27.461673   79219 cri.go:89] found id: "d8651a7ed6615e826b950eab003841c2ec872b48160f207e77dd32a684125a23"
	I0806 08:38:27.461700   79219 cri.go:89] found id: ""
	I0806 08:38:27.461710   79219 logs.go:276] 1 containers: [d8651a7ed6615e826b950eab003841c2ec872b48160f207e77dd32a684125a23]
	I0806 08:38:27.461762   79219 ssh_runner.go:195] Run: which crictl
	I0806 08:38:27.466404   79219 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:38:27.466474   79219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:38:27.507684   79219 cri.go:89] found id: ""
	I0806 08:38:27.507708   79219 logs.go:276] 0 containers: []
	W0806 08:38:27.507716   79219 logs.go:278] No container was found matching "kindnet"
	I0806 08:38:27.507722   79219 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0806 08:38:27.507787   79219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0806 08:38:27.547812   79219 cri.go:89] found id: "367eb6487187a37acafb1fa74364a2d9ec5d9241888ed2e871fb14f2af161975"
	I0806 08:38:27.547844   79219 cri.go:89] found id: "78abe2efa92d2d6c9171ed65bd4d941693be3207554f9318a7cefb91544eeb20"
	I0806 08:38:27.547850   79219 cri.go:89] found id: ""
	I0806 08:38:27.547859   79219 logs.go:276] 2 containers: [367eb6487187a37acafb1fa74364a2d9ec5d9241888ed2e871fb14f2af161975 78abe2efa92d2d6c9171ed65bd4d941693be3207554f9318a7cefb91544eeb20]
	I0806 08:38:27.547929   79219 ssh_runner.go:195] Run: which crictl
	I0806 08:38:27.552639   79219 ssh_runner.go:195] Run: which crictl
	I0806 08:38:27.556993   79219 logs.go:123] Gathering logs for kube-proxy [612fc6a8f2c1208097a6d6287c5fecb20026608a278af5d0d4efc9662e065eb9] ...
	I0806 08:38:27.557022   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 612fc6a8f2c1208097a6d6287c5fecb20026608a278af5d0d4efc9662e065eb9"
	I0806 08:38:27.599317   79219 logs.go:123] Gathering logs for storage-provisioner [78abe2efa92d2d6c9171ed65bd4d941693be3207554f9318a7cefb91544eeb20] ...
	I0806 08:38:27.599345   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 78abe2efa92d2d6c9171ed65bd4d941693be3207554f9318a7cefb91544eeb20"
	I0806 08:38:27.643812   79219 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:38:27.643845   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:38:28.119873   79219 logs.go:123] Gathering logs for container status ...
	I0806 08:38:28.119909   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:38:28.160698   79219 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:38:28.160744   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 08:38:28.275895   79219 logs.go:123] Gathering logs for coredns [9e1c91bc3a920b6f7db4477b8219f13fa092c68ff53abe2390325a68dad253e5] ...
	I0806 08:38:28.275927   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e1c91bc3a920b6f7db4477b8219f13fa092c68ff53abe2390325a68dad253e5"
	I0806 08:38:28.324525   79219 logs.go:123] Gathering logs for kube-scheduler [a25c5660a3a2b7226bf308b44307a3912efa7a9a73e79ec63f83133b80a44683] ...
	I0806 08:38:28.324559   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a25c5660a3a2b7226bf308b44307a3912efa7a9a73e79ec63f83133b80a44683"
	I0806 08:38:28.372860   79219 logs.go:123] Gathering logs for etcd [df2efdb62a4af10f89a1b37599d4258ac3b72660353795f8c2141f3ef70c1f65] ...
	I0806 08:38:28.372891   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 df2efdb62a4af10f89a1b37599d4258ac3b72660353795f8c2141f3ef70c1f65"
	I0806 08:38:28.438909   79219 logs.go:123] Gathering logs for kube-controller-manager [d8651a7ed6615e826b950eab003841c2ec872b48160f207e77dd32a684125a23] ...
	I0806 08:38:28.438943   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8651a7ed6615e826b950eab003841c2ec872b48160f207e77dd32a684125a23"
	I0806 08:38:28.514238   79219 logs.go:123] Gathering logs for storage-provisioner [367eb6487187a37acafb1fa74364a2d9ec5d9241888ed2e871fb14f2af161975] ...
	I0806 08:38:28.514279   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 367eb6487187a37acafb1fa74364a2d9ec5d9241888ed2e871fb14f2af161975"
	I0806 08:38:28.552420   79219 logs.go:123] Gathering logs for kubelet ...
	I0806 08:38:28.552451   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:38:28.605414   79219 logs.go:123] Gathering logs for dmesg ...
	I0806 08:38:28.605452   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:38:28.621364   79219 logs.go:123] Gathering logs for kube-apiserver [f7edb6bece3347d88a263663a1d6549165dafdcb67ad723494b937766cba6da6] ...
	I0806 08:38:28.621404   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f7edb6bece3347d88a263663a1d6549165dafdcb67ad723494b937766cba6da6"
	I0806 08:38:27.469521   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:29.967734   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:28.812203   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:38:28.812237   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:38:31.327103   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:38:31.340745   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:38:31.340826   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:38:31.382470   79927 cri.go:89] found id: ""
	I0806 08:38:31.382503   79927 logs.go:276] 0 containers: []
	W0806 08:38:31.382515   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:38:31.382523   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:38:31.382593   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:38:31.423319   79927 cri.go:89] found id: ""
	I0806 08:38:31.423343   79927 logs.go:276] 0 containers: []
	W0806 08:38:31.423353   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:38:31.423359   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:38:31.423423   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:38:31.463073   79927 cri.go:89] found id: ""
	I0806 08:38:31.463099   79927 logs.go:276] 0 containers: []
	W0806 08:38:31.463108   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:38:31.463115   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:38:31.463165   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:38:31.499169   79927 cri.go:89] found id: ""
	I0806 08:38:31.499209   79927 logs.go:276] 0 containers: []
	W0806 08:38:31.499220   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:38:31.499228   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:38:31.499284   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:38:31.535030   79927 cri.go:89] found id: ""
	I0806 08:38:31.535060   79927 logs.go:276] 0 containers: []
	W0806 08:38:31.535071   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:38:31.535079   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:38:31.535145   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:38:31.575872   79927 cri.go:89] found id: ""
	I0806 08:38:31.575894   79927 logs.go:276] 0 containers: []
	W0806 08:38:31.575902   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:38:31.575907   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:38:31.575952   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:38:31.614766   79927 cri.go:89] found id: ""
	I0806 08:38:31.614787   79927 logs.go:276] 0 containers: []
	W0806 08:38:31.614797   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:38:31.614804   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:38:31.614860   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:38:31.655929   79927 cri.go:89] found id: ""
	I0806 08:38:31.655954   79927 logs.go:276] 0 containers: []
	W0806 08:38:31.655964   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:38:31.655992   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:38:31.656009   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:38:31.728563   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:38:31.728611   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:38:31.745445   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:38:31.745479   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:38:31.827965   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:38:31.827998   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:38:31.828014   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:38:31.935051   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:38:31.935093   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:38:30.788021   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:32.788768   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:31.179830   79219 api_server.go:253] Checking apiserver healthz at https://192.168.39.190:8443/healthz ...
	I0806 08:38:31.184262   79219 api_server.go:279] https://192.168.39.190:8443/healthz returned 200:
	ok
	I0806 08:38:31.185245   79219 api_server.go:141] control plane version: v1.30.3
	I0806 08:38:31.185267   79219 api_server.go:131] duration metric: took 3.997664057s to wait for apiserver health ...
	I0806 08:38:31.185274   79219 system_pods.go:43] waiting for kube-system pods to appear ...
	I0806 08:38:31.185297   79219 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:38:31.185339   79219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:38:31.222960   79219 cri.go:89] found id: "f7edb6bece3347d88a263663a1d6549165dafdcb67ad723494b937766cba6da6"
	I0806 08:38:31.222986   79219 cri.go:89] found id: ""
	I0806 08:38:31.222993   79219 logs.go:276] 1 containers: [f7edb6bece3347d88a263663a1d6549165dafdcb67ad723494b937766cba6da6]
	I0806 08:38:31.223042   79219 ssh_runner.go:195] Run: which crictl
	I0806 08:38:31.227448   79219 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:38:31.227514   79219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:38:31.265326   79219 cri.go:89] found id: "df2efdb62a4af10f89a1b37599d4258ac3b72660353795f8c2141f3ef70c1f65"
	I0806 08:38:31.265349   79219 cri.go:89] found id: ""
	I0806 08:38:31.265356   79219 logs.go:276] 1 containers: [df2efdb62a4af10f89a1b37599d4258ac3b72660353795f8c2141f3ef70c1f65]
	I0806 08:38:31.265402   79219 ssh_runner.go:195] Run: which crictl
	I0806 08:38:31.269672   79219 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:38:31.269732   79219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:38:31.310202   79219 cri.go:89] found id: "9e1c91bc3a920b6f7db4477b8219f13fa092c68ff53abe2390325a68dad253e5"
	I0806 08:38:31.310228   79219 cri.go:89] found id: ""
	I0806 08:38:31.310237   79219 logs.go:276] 1 containers: [9e1c91bc3a920b6f7db4477b8219f13fa092c68ff53abe2390325a68dad253e5]
	I0806 08:38:31.310283   79219 ssh_runner.go:195] Run: which crictl
	I0806 08:38:31.314486   79219 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:38:31.314540   79219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:38:31.352612   79219 cri.go:89] found id: "a25c5660a3a2b7226bf308b44307a3912efa7a9a73e79ec63f83133b80a44683"
	I0806 08:38:31.352637   79219 cri.go:89] found id: ""
	I0806 08:38:31.352647   79219 logs.go:276] 1 containers: [a25c5660a3a2b7226bf308b44307a3912efa7a9a73e79ec63f83133b80a44683]
	I0806 08:38:31.352707   79219 ssh_runner.go:195] Run: which crictl
	I0806 08:38:31.358230   79219 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:38:31.358300   79219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:38:31.402435   79219 cri.go:89] found id: "612fc6a8f2c1208097a6d6287c5fecb20026608a278af5d0d4efc9662e065eb9"
	I0806 08:38:31.402462   79219 cri.go:89] found id: ""
	I0806 08:38:31.402472   79219 logs.go:276] 1 containers: [612fc6a8f2c1208097a6d6287c5fecb20026608a278af5d0d4efc9662e065eb9]
	I0806 08:38:31.402530   79219 ssh_runner.go:195] Run: which crictl
	I0806 08:38:31.407723   79219 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:38:31.407793   79219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:38:31.455484   79219 cri.go:89] found id: "d8651a7ed6615e826b950eab003841c2ec872b48160f207e77dd32a684125a23"
	I0806 08:38:31.455511   79219 cri.go:89] found id: ""
	I0806 08:38:31.455520   79219 logs.go:276] 1 containers: [d8651a7ed6615e826b950eab003841c2ec872b48160f207e77dd32a684125a23]
	I0806 08:38:31.455582   79219 ssh_runner.go:195] Run: which crictl
	I0806 08:38:31.461155   79219 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:38:31.461231   79219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:38:31.501317   79219 cri.go:89] found id: ""
	I0806 08:38:31.501342   79219 logs.go:276] 0 containers: []
	W0806 08:38:31.501352   79219 logs.go:278] No container was found matching "kindnet"
	I0806 08:38:31.501359   79219 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0806 08:38:31.501418   79219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0806 08:38:31.552712   79219 cri.go:89] found id: "367eb6487187a37acafb1fa74364a2d9ec5d9241888ed2e871fb14f2af161975"
	I0806 08:38:31.552742   79219 cri.go:89] found id: "78abe2efa92d2d6c9171ed65bd4d941693be3207554f9318a7cefb91544eeb20"
	I0806 08:38:31.552748   79219 cri.go:89] found id: ""
	I0806 08:38:31.552758   79219 logs.go:276] 2 containers: [367eb6487187a37acafb1fa74364a2d9ec5d9241888ed2e871fb14f2af161975 78abe2efa92d2d6c9171ed65bd4d941693be3207554f9318a7cefb91544eeb20]
	I0806 08:38:31.552820   79219 ssh_runner.go:195] Run: which crictl
	I0806 08:38:31.561505   79219 ssh_runner.go:195] Run: which crictl
	I0806 08:38:31.566613   79219 logs.go:123] Gathering logs for storage-provisioner [367eb6487187a37acafb1fa74364a2d9ec5d9241888ed2e871fb14f2af161975] ...
	I0806 08:38:31.566640   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 367eb6487187a37acafb1fa74364a2d9ec5d9241888ed2e871fb14f2af161975"
	I0806 08:38:31.613033   79219 logs.go:123] Gathering logs for storage-provisioner [78abe2efa92d2d6c9171ed65bd4d941693be3207554f9318a7cefb91544eeb20] ...
	I0806 08:38:31.613073   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 78abe2efa92d2d6c9171ed65bd4d941693be3207554f9318a7cefb91544eeb20"
	I0806 08:38:31.655353   79219 logs.go:123] Gathering logs for kube-scheduler [a25c5660a3a2b7226bf308b44307a3912efa7a9a73e79ec63f83133b80a44683] ...
	I0806 08:38:31.655389   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a25c5660a3a2b7226bf308b44307a3912efa7a9a73e79ec63f83133b80a44683"
	I0806 08:38:31.703934   79219 logs.go:123] Gathering logs for kube-controller-manager [d8651a7ed6615e826b950eab003841c2ec872b48160f207e77dd32a684125a23] ...
	I0806 08:38:31.703960   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8651a7ed6615e826b950eab003841c2ec872b48160f207e77dd32a684125a23"
	I0806 08:38:31.768880   79219 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:38:31.768916   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 08:38:31.896045   79219 logs.go:123] Gathering logs for kube-apiserver [f7edb6bece3347d88a263663a1d6549165dafdcb67ad723494b937766cba6da6] ...
	I0806 08:38:31.896079   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f7edb6bece3347d88a263663a1d6549165dafdcb67ad723494b937766cba6da6"
	I0806 08:38:31.966646   79219 logs.go:123] Gathering logs for etcd [df2efdb62a4af10f89a1b37599d4258ac3b72660353795f8c2141f3ef70c1f65] ...
	I0806 08:38:31.966682   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 df2efdb62a4af10f89a1b37599d4258ac3b72660353795f8c2141f3ef70c1f65"
	I0806 08:38:32.029471   79219 logs.go:123] Gathering logs for coredns [9e1c91bc3a920b6f7db4477b8219f13fa092c68ff53abe2390325a68dad253e5] ...
	I0806 08:38:32.029509   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e1c91bc3a920b6f7db4477b8219f13fa092c68ff53abe2390325a68dad253e5"
	I0806 08:38:32.071406   79219 logs.go:123] Gathering logs for kube-proxy [612fc6a8f2c1208097a6d6287c5fecb20026608a278af5d0d4efc9662e065eb9] ...
	I0806 08:38:32.071435   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 612fc6a8f2c1208097a6d6287c5fecb20026608a278af5d0d4efc9662e065eb9"
	I0806 08:38:32.122333   79219 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:38:32.122368   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:38:32.478494   79219 logs.go:123] Gathering logs for kubelet ...
	I0806 08:38:32.478540   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:38:32.530649   79219 logs.go:123] Gathering logs for dmesg ...
	I0806 08:38:32.530696   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:38:32.545866   79219 logs.go:123] Gathering logs for container status ...
	I0806 08:38:32.545895   79219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:38:35.101370   79219 system_pods.go:59] 8 kube-system pods found
	I0806 08:38:35.101397   79219 system_pods.go:61] "coredns-7db6d8ff4d-4xxwm" [eeb47dea-32e9-458f-9ad9-2af6d4e68cc0] Running
	I0806 08:38:35.101402   79219 system_pods.go:61] "etcd-embed-certs-324808" [1e0a82cd-6704-4da3-8992-15c68a7aaf70] Running
	I0806 08:38:35.101405   79219 system_pods.go:61] "kube-apiserver-embed-certs-324808" [a33c54d9-1fc5-4243-8fe9-d535c40908c3] Running
	I0806 08:38:35.101409   79219 system_pods.go:61] "kube-controller-manager-embed-certs-324808" [b1c519b8-9c96-4c87-90f3-cc277cfe7763] Running
	I0806 08:38:35.101412   79219 system_pods.go:61] "kube-proxy-8lbzv" [0c45e6c0-9b7f-4c48-bff5-38d4a5949bec] Running
	I0806 08:38:35.101415   79219 system_pods.go:61] "kube-scheduler-embed-certs-324808" [78c2827f-341a-4441-bf7e-eff947fc3ea8] Running
	I0806 08:38:35.101421   79219 system_pods.go:61] "metrics-server-569cc877fc-8xb9k" [7a5fb326-11a7-48fa-bcde-a22026a1dccb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0806 08:38:35.101424   79219 system_pods.go:61] "storage-provisioner" [15c7d89a-e994-4cca-931c-f646157a15e8] Running
	I0806 08:38:35.101431   79219 system_pods.go:74] duration metric: took 3.916151772s to wait for pod list to return data ...
	I0806 08:38:35.101438   79219 default_sa.go:34] waiting for default service account to be created ...
	I0806 08:38:35.103518   79219 default_sa.go:45] found service account: "default"
	I0806 08:38:35.103536   79219 default_sa.go:55] duration metric: took 2.092582ms for default service account to be created ...
	I0806 08:38:35.103546   79219 system_pods.go:116] waiting for k8s-apps to be running ...
	I0806 08:38:35.108383   79219 system_pods.go:86] 8 kube-system pods found
	I0806 08:38:35.108409   79219 system_pods.go:89] "coredns-7db6d8ff4d-4xxwm" [eeb47dea-32e9-458f-9ad9-2af6d4e68cc0] Running
	I0806 08:38:35.108417   79219 system_pods.go:89] "etcd-embed-certs-324808" [1e0a82cd-6704-4da3-8992-15c68a7aaf70] Running
	I0806 08:38:35.108423   79219 system_pods.go:89] "kube-apiserver-embed-certs-324808" [a33c54d9-1fc5-4243-8fe9-d535c40908c3] Running
	I0806 08:38:35.108427   79219 system_pods.go:89] "kube-controller-manager-embed-certs-324808" [b1c519b8-9c96-4c87-90f3-cc277cfe7763] Running
	I0806 08:38:35.108431   79219 system_pods.go:89] "kube-proxy-8lbzv" [0c45e6c0-9b7f-4c48-bff5-38d4a5949bec] Running
	I0806 08:38:35.108436   79219 system_pods.go:89] "kube-scheduler-embed-certs-324808" [78c2827f-341a-4441-bf7e-eff947fc3ea8] Running
	I0806 08:38:35.108442   79219 system_pods.go:89] "metrics-server-569cc877fc-8xb9k" [7a5fb326-11a7-48fa-bcde-a22026a1dccb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0806 08:38:35.108447   79219 system_pods.go:89] "storage-provisioner" [15c7d89a-e994-4cca-931c-f646157a15e8] Running
	I0806 08:38:35.108453   79219 system_pods.go:126] duration metric: took 4.901954ms to wait for k8s-apps to be running ...
	I0806 08:38:35.108463   79219 system_svc.go:44] waiting for kubelet service to be running ....
	I0806 08:38:35.108505   79219 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0806 08:38:35.126018   79219 system_svc.go:56] duration metric: took 17.549117ms WaitForService to wait for kubelet
	I0806 08:38:35.126045   79219 kubeadm.go:582] duration metric: took 4m23.224415251s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0806 08:38:35.126065   79219 node_conditions.go:102] verifying NodePressure condition ...
	I0806 08:38:35.129214   79219 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0806 08:38:35.129231   79219 node_conditions.go:123] node cpu capacity is 2
	I0806 08:38:35.129241   79219 node_conditions.go:105] duration metric: took 3.172893ms to run NodePressure ...
	I0806 08:38:35.129252   79219 start.go:241] waiting for startup goroutines ...
	I0806 08:38:35.129259   79219 start.go:246] waiting for cluster config update ...
	I0806 08:38:35.129273   79219 start.go:255] writing updated cluster config ...
	I0806 08:38:35.129540   79219 ssh_runner.go:195] Run: rm -f paused
	I0806 08:38:35.178651   79219 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0806 08:38:35.180921   79219 out.go:177] * Done! kubectl is now configured to use "embed-certs-324808" cluster and "default" namespace by default
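Before the 79219 run declares the "embed-certs-324808" cluster ready, it polls the apiserver's /healthz endpoint (the api_server.go lines above) until it returns 200, then waits for the kube-system pods to appear. A minimal sketch of that health poll — not minikube source; the address is the one reported in the log, and TLS verification is skipped on the assumption that the test VM's apiserver certificate is self-signed:

    // Illustrative only: poll /healthz with a short timeout until it answers 200 "ok".
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        for attempt := 0; attempt < 10; attempt++ {
            resp, err := client.Get("https://192.168.39.190:8443/healthz")
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
                if resp.StatusCode == http.StatusOK {
                    return // control plane is serving
                }
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Println("apiserver never became healthy")
    }
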
	I0806 08:38:31.968170   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:34.468911   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:34.478890   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:38:34.493573   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:38:34.493650   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:38:34.529054   79927 cri.go:89] found id: ""
	I0806 08:38:34.529083   79927 logs.go:276] 0 containers: []
	W0806 08:38:34.529094   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:38:34.529101   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:38:34.529171   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:38:34.570214   79927 cri.go:89] found id: ""
	I0806 08:38:34.570245   79927 logs.go:276] 0 containers: []
	W0806 08:38:34.570256   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:38:34.570264   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:38:34.570325   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:38:34.607198   79927 cri.go:89] found id: ""
	I0806 08:38:34.607232   79927 logs.go:276] 0 containers: []
	W0806 08:38:34.607243   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:38:34.607250   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:38:34.607296   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:38:34.646238   79927 cri.go:89] found id: ""
	I0806 08:38:34.646266   79927 logs.go:276] 0 containers: []
	W0806 08:38:34.646278   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:38:34.646287   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:38:34.646354   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:38:34.682137   79927 cri.go:89] found id: ""
	I0806 08:38:34.682161   79927 logs.go:276] 0 containers: []
	W0806 08:38:34.682169   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:38:34.682175   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:38:34.682227   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:38:34.721211   79927 cri.go:89] found id: ""
	I0806 08:38:34.721238   79927 logs.go:276] 0 containers: []
	W0806 08:38:34.721246   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:38:34.721252   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:38:34.721306   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:38:34.759693   79927 cri.go:89] found id: ""
	I0806 08:38:34.759728   79927 logs.go:276] 0 containers: []
	W0806 08:38:34.759737   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:38:34.759743   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:38:34.759793   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:38:34.799224   79927 cri.go:89] found id: ""
	I0806 08:38:34.799246   79927 logs.go:276] 0 containers: []
	W0806 08:38:34.799254   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:38:34.799262   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:38:34.799273   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:38:34.852682   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:38:34.852727   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:38:34.866456   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:38:34.866494   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:38:34.933029   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:38:34.933055   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:38:34.933076   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:38:35.017215   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:38:35.017253   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:38:37.567662   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:38:37.582220   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:38:37.582284   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:38:37.622112   79927 cri.go:89] found id: ""
	I0806 08:38:37.622145   79927 logs.go:276] 0 containers: []
	W0806 08:38:37.622153   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:38:37.622164   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:38:37.622228   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:38:37.657767   79927 cri.go:89] found id: ""
	I0806 08:38:37.657791   79927 logs.go:276] 0 containers: []
	W0806 08:38:37.657798   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:38:37.657804   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:38:37.657849   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:38:37.691442   79927 cri.go:89] found id: ""
	I0806 08:38:37.691471   79927 logs.go:276] 0 containers: []
	W0806 08:38:37.691480   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:38:37.691485   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:38:37.691539   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:38:37.725810   79927 cri.go:89] found id: ""
	I0806 08:38:37.725837   79927 logs.go:276] 0 containers: []
	W0806 08:38:37.725859   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:38:37.725866   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:38:37.725922   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:38:37.764708   79927 cri.go:89] found id: ""
	I0806 08:38:37.764737   79927 logs.go:276] 0 containers: []
	W0806 08:38:37.764748   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:38:37.764755   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:38:37.764818   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:38:37.802915   79927 cri.go:89] found id: ""
	I0806 08:38:37.802939   79927 logs.go:276] 0 containers: []
	W0806 08:38:37.802947   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:38:37.802953   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:38:37.803008   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:38:37.838492   79927 cri.go:89] found id: ""
	I0806 08:38:37.838525   79927 logs.go:276] 0 containers: []
	W0806 08:38:37.838536   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:38:37.838544   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:38:37.838607   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:38:37.873937   79927 cri.go:89] found id: ""
	I0806 08:38:37.873969   79927 logs.go:276] 0 containers: []
	W0806 08:38:37.873979   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:38:37.873990   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:38:37.874005   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:38:37.912169   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:38:37.912193   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:38:37.968269   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:38:37.968305   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:38:37.983414   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:38:37.983440   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:38:38.055031   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:38:38.055049   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:38:38.055062   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:38:35.289081   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:37.789220   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:36.968687   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:39.468051   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:41.468488   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:40.635449   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:38:40.649962   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:38:40.650045   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:38:40.688975   79927 cri.go:89] found id: ""
	I0806 08:38:40.689006   79927 logs.go:276] 0 containers: []
	W0806 08:38:40.689017   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:38:40.689029   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:38:40.689093   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:38:40.726740   79927 cri.go:89] found id: ""
	I0806 08:38:40.726770   79927 logs.go:276] 0 containers: []
	W0806 08:38:40.726780   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:38:40.726788   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:38:40.726886   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:38:40.767402   79927 cri.go:89] found id: ""
	I0806 08:38:40.767432   79927 logs.go:276] 0 containers: []
	W0806 08:38:40.767445   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:38:40.767453   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:38:40.767510   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:38:40.809994   79927 cri.go:89] found id: ""
	I0806 08:38:40.810023   79927 logs.go:276] 0 containers: []
	W0806 08:38:40.810033   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:38:40.810042   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:38:40.810102   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:38:40.845624   79927 cri.go:89] found id: ""
	I0806 08:38:40.845653   79927 logs.go:276] 0 containers: []
	W0806 08:38:40.845662   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:38:40.845668   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:38:40.845715   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:38:40.883830   79927 cri.go:89] found id: ""
	I0806 08:38:40.883856   79927 logs.go:276] 0 containers: []
	W0806 08:38:40.883871   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:38:40.883879   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:38:40.883937   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:38:40.919930   79927 cri.go:89] found id: ""
	I0806 08:38:40.919958   79927 logs.go:276] 0 containers: []
	W0806 08:38:40.919966   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:38:40.919971   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:38:40.920018   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:38:40.954563   79927 cri.go:89] found id: ""
	I0806 08:38:40.954594   79927 logs.go:276] 0 containers: []
	W0806 08:38:40.954605   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:38:40.954616   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:38:40.954629   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:38:41.008780   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:38:41.008813   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:38:41.022292   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:38:41.022321   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:38:41.090241   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:38:41.090273   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:38:41.090294   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:38:41.171739   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:38:41.171779   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:38:43.712410   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:38:43.726676   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:38:43.726747   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:38:40.288192   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:42.289039   79614 pod_ready.go:102] pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:42.789083   79614 pod_ready.go:81] duration metric: took 4m0.00771712s for pod "metrics-server-569cc877fc-hnxd4" in "kube-system" namespace to be "Ready" ...
	E0806 08:38:42.789107   79614 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0806 08:38:42.789117   79614 pod_ready.go:38] duration metric: took 4m5.553596655s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
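The 79614 run gives the metrics-server pod roughly four minutes to report Ready and then records the failure as "context deadline exceeded" instead of aborting the start. A hedged sketch of that pattern, polling the Ready condition through kubectl under a hard deadline — the label selector and timeout below are illustrative placeholders, not the values minikube uses:

    // Illustrative only: wait for a pod's Ready condition with a context deadline.
    package main

    import (
        "context"
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
        defer cancel()
        for {
            out, err := exec.CommandContext(ctx, "kubectl", "-n", "kube-system",
                "get", "pod", "-l", "k8s-app=metrics-server",
                "-o", `jsonpath={.items[0].status.conditions[?(@.type=="Ready")].status}`).Output()
            if ctx.Err() != nil {
                fmt.Println("waitPodCondition:", ctx.Err()) // context deadline exceeded
                return
            }
            if err == nil && strings.TrimSpace(string(out)) == "True" {
                fmt.Println("pod is Ready")
                return
            }
            time.Sleep(2 * time.Second)
        }
    }
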
	I0806 08:38:42.789135   79614 api_server.go:52] waiting for apiserver process to appear ...
	I0806 08:38:42.789163   79614 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:38:42.789215   79614 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:38:42.837092   79614 cri.go:89] found id: "49843be03d16addb301ffd26e517770ff29b00f049efb524a2139b8b2153325c"
	I0806 08:38:42.837119   79614 cri.go:89] found id: ""
	I0806 08:38:42.837128   79614 logs.go:276] 1 containers: [49843be03d16addb301ffd26e517770ff29b00f049efb524a2139b8b2153325c]
	I0806 08:38:42.837188   79614 ssh_runner.go:195] Run: which crictl
	I0806 08:38:42.842270   79614 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:38:42.842334   79614 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:38:42.878018   79614 cri.go:89] found id: "b5fa6fe8f1db6b59f47ea637515efb2864e0408ce1df0e6851b3bda0e240f5d3"
	I0806 08:38:42.878044   79614 cri.go:89] found id: ""
	I0806 08:38:42.878053   79614 logs.go:276] 1 containers: [b5fa6fe8f1db6b59f47ea637515efb2864e0408ce1df0e6851b3bda0e240f5d3]
	I0806 08:38:42.878115   79614 ssh_runner.go:195] Run: which crictl
	I0806 08:38:42.882437   79614 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:38:42.882509   79614 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:38:42.925635   79614 cri.go:89] found id: "59f63eeae644f7c8944cdb7b1d05d9e49fe84eb28e9ccbad2d2dd79846d7eb52"
	I0806 08:38:42.925655   79614 cri.go:89] found id: ""
	I0806 08:38:42.925662   79614 logs.go:276] 1 containers: [59f63eeae644f7c8944cdb7b1d05d9e49fe84eb28e9ccbad2d2dd79846d7eb52]
	I0806 08:38:42.925704   79614 ssh_runner.go:195] Run: which crictl
	I0806 08:38:42.930525   79614 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:38:42.930583   79614 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:38:42.972716   79614 cri.go:89] found id: "e596abcad3dc13d27548701f515beebba051e04a9e20f8ce1d018731d880ff1c"
	I0806 08:38:42.972736   79614 cri.go:89] found id: ""
	I0806 08:38:42.972744   79614 logs.go:276] 1 containers: [e596abcad3dc13d27548701f515beebba051e04a9e20f8ce1d018731d880ff1c]
	I0806 08:38:42.972791   79614 ssh_runner.go:195] Run: which crictl
	I0806 08:38:42.977840   79614 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:38:42.977894   79614 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:38:43.017316   79614 cri.go:89] found id: "75990046864b4a4db9f8c02f44de4eefbb0cf45584880b053bce4ca0b677ce4e"
	I0806 08:38:43.017339   79614 cri.go:89] found id: ""
	I0806 08:38:43.017346   79614 logs.go:276] 1 containers: [75990046864b4a4db9f8c02f44de4eefbb0cf45584880b053bce4ca0b677ce4e]
	I0806 08:38:43.017396   79614 ssh_runner.go:195] Run: which crictl
	I0806 08:38:43.022341   79614 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:38:43.022416   79614 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:38:43.067113   79614 cri.go:89] found id: "05bbffa4c989e42bd020ebe6b943459ec1d7b31a6c5e5ba9a37056da15542658"
	I0806 08:38:43.067145   79614 cri.go:89] found id: ""
	I0806 08:38:43.067155   79614 logs.go:276] 1 containers: [05bbffa4c989e42bd020ebe6b943459ec1d7b31a6c5e5ba9a37056da15542658]
	I0806 08:38:43.067211   79614 ssh_runner.go:195] Run: which crictl
	I0806 08:38:43.071674   79614 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:38:43.071748   79614 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:38:43.114123   79614 cri.go:89] found id: ""
	I0806 08:38:43.114146   79614 logs.go:276] 0 containers: []
	W0806 08:38:43.114154   79614 logs.go:278] No container was found matching "kindnet"
	I0806 08:38:43.114159   79614 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0806 08:38:43.114211   79614 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0806 08:38:43.155927   79614 cri.go:89] found id: "e668e4a5fa59e279bc227b6110983cc4660b639d5b1e1291ae60d361f682d82e"
	I0806 08:38:43.155955   79614 cri.go:89] found id: "abd677ac30faba3fef9e9d562601ad0012d7f1b48852e08b8f2684efda6b0f2e"
	I0806 08:38:43.155961   79614 cri.go:89] found id: ""
	I0806 08:38:43.155969   79614 logs.go:276] 2 containers: [e668e4a5fa59e279bc227b6110983cc4660b639d5b1e1291ae60d361f682d82e abd677ac30faba3fef9e9d562601ad0012d7f1b48852e08b8f2684efda6b0f2e]
	I0806 08:38:43.156017   79614 ssh_runner.go:195] Run: which crictl
	I0806 08:38:43.165735   79614 ssh_runner.go:195] Run: which crictl
	I0806 08:38:43.171834   79614 logs.go:123] Gathering logs for kubelet ...
	I0806 08:38:43.171861   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:38:43.232588   79614 logs.go:123] Gathering logs for dmesg ...
	I0806 08:38:43.232628   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:38:43.248033   79614 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:38:43.248080   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 08:38:43.383729   79614 logs.go:123] Gathering logs for etcd [b5fa6fe8f1db6b59f47ea637515efb2864e0408ce1df0e6851b3bda0e240f5d3] ...
	I0806 08:38:43.383760   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b5fa6fe8f1db6b59f47ea637515efb2864e0408ce1df0e6851b3bda0e240f5d3"
	I0806 08:38:43.428082   79614 logs.go:123] Gathering logs for coredns [59f63eeae644f7c8944cdb7b1d05d9e49fe84eb28e9ccbad2d2dd79846d7eb52] ...
	I0806 08:38:43.428116   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 59f63eeae644f7c8944cdb7b1d05d9e49fe84eb28e9ccbad2d2dd79846d7eb52"
	I0806 08:38:43.467054   79614 logs.go:123] Gathering logs for storage-provisioner [e668e4a5fa59e279bc227b6110983cc4660b639d5b1e1291ae60d361f682d82e] ...
	I0806 08:38:43.467081   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e668e4a5fa59e279bc227b6110983cc4660b639d5b1e1291ae60d361f682d82e"
	I0806 08:38:43.502900   79614 logs.go:123] Gathering logs for storage-provisioner [abd677ac30faba3fef9e9d562601ad0012d7f1b48852e08b8f2684efda6b0f2e] ...
	I0806 08:38:43.502928   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 abd677ac30faba3fef9e9d562601ad0012d7f1b48852e08b8f2684efda6b0f2e"
	I0806 08:38:43.539338   79614 logs.go:123] Gathering logs for kube-apiserver [49843be03d16addb301ffd26e517770ff29b00f049efb524a2139b8b2153325c] ...
	I0806 08:38:43.539369   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 49843be03d16addb301ffd26e517770ff29b00f049efb524a2139b8b2153325c"
	I0806 08:38:43.585813   79614 logs.go:123] Gathering logs for kube-scheduler [e596abcad3dc13d27548701f515beebba051e04a9e20f8ce1d018731d880ff1c] ...
	I0806 08:38:43.585855   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e596abcad3dc13d27548701f515beebba051e04a9e20f8ce1d018731d880ff1c"
	I0806 08:38:43.627141   79614 logs.go:123] Gathering logs for kube-proxy [75990046864b4a4db9f8c02f44de4eefbb0cf45584880b053bce4ca0b677ce4e] ...
	I0806 08:38:43.627173   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 75990046864b4a4db9f8c02f44de4eefbb0cf45584880b053bce4ca0b677ce4e"
	I0806 08:38:43.663754   79614 logs.go:123] Gathering logs for kube-controller-manager [05bbffa4c989e42bd020ebe6b943459ec1d7b31a6c5e5ba9a37056da15542658] ...
	I0806 08:38:43.663783   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 05bbffa4c989e42bd020ebe6b943459ec1d7b31a6c5e5ba9a37056da15542658"
	I0806 08:38:43.717594   79614 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:38:43.717631   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:38:43.469134   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:45.967796   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:43.765343   79927 cri.go:89] found id: ""
	I0806 08:38:43.765379   79927 logs.go:276] 0 containers: []
	W0806 08:38:43.765390   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:38:43.765398   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:38:43.765457   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:38:43.803813   79927 cri.go:89] found id: ""
	I0806 08:38:43.803846   79927 logs.go:276] 0 containers: []
	W0806 08:38:43.803859   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:38:43.803868   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:38:43.803933   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:38:43.841307   79927 cri.go:89] found id: ""
	I0806 08:38:43.841339   79927 logs.go:276] 0 containers: []
	W0806 08:38:43.841349   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:38:43.841356   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:38:43.841418   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:38:43.876488   79927 cri.go:89] found id: ""
	I0806 08:38:43.876521   79927 logs.go:276] 0 containers: []
	W0806 08:38:43.876534   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:38:43.876544   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:38:43.876606   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:38:43.910972   79927 cri.go:89] found id: ""
	I0806 08:38:43.911000   79927 logs.go:276] 0 containers: []
	W0806 08:38:43.911010   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:38:43.911017   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:38:43.911077   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:38:43.945541   79927 cri.go:89] found id: ""
	I0806 08:38:43.945573   79927 logs.go:276] 0 containers: []
	W0806 08:38:43.945584   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:38:43.945592   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:38:43.945655   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:38:43.981366   79927 cri.go:89] found id: ""
	I0806 08:38:43.981399   79927 logs.go:276] 0 containers: []
	W0806 08:38:43.981408   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:38:43.981413   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:38:43.981463   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:38:44.015349   79927 cri.go:89] found id: ""
	I0806 08:38:44.015375   79927 logs.go:276] 0 containers: []
	W0806 08:38:44.015384   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:38:44.015394   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:38:44.015408   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:38:44.068089   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:38:44.068129   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:38:44.082496   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:38:44.082524   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:38:44.157765   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:38:44.157788   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:38:44.157805   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:38:44.242842   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:38:44.242885   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:38:46.789280   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:38:46.803490   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:38:46.803562   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:38:46.844164   79927 cri.go:89] found id: ""
	I0806 08:38:46.844192   79927 logs.go:276] 0 containers: []
	W0806 08:38:46.844200   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:38:46.844206   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:38:46.844247   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:38:46.880968   79927 cri.go:89] found id: ""
	I0806 08:38:46.881003   79927 logs.go:276] 0 containers: []
	W0806 08:38:46.881015   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:38:46.881022   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:38:46.881082   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:38:46.919291   79927 cri.go:89] found id: ""
	I0806 08:38:46.919311   79927 logs.go:276] 0 containers: []
	W0806 08:38:46.919319   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:38:46.919324   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:38:46.919369   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:38:46.960591   79927 cri.go:89] found id: ""
	I0806 08:38:46.960616   79927 logs.go:276] 0 containers: []
	W0806 08:38:46.960626   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:38:46.960632   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:38:46.960681   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:38:46.995913   79927 cri.go:89] found id: ""
	I0806 08:38:46.995943   79927 logs.go:276] 0 containers: []
	W0806 08:38:46.995954   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:38:46.995961   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:38:46.996045   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:38:47.035185   79927 cri.go:89] found id: ""
	I0806 08:38:47.035213   79927 logs.go:276] 0 containers: []
	W0806 08:38:47.035225   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:38:47.035236   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:38:47.035292   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:38:47.079605   79927 cri.go:89] found id: ""
	I0806 08:38:47.079634   79927 logs.go:276] 0 containers: []
	W0806 08:38:47.079646   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:38:47.079653   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:38:47.079701   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:38:47.118819   79927 cri.go:89] found id: ""
	I0806 08:38:47.118849   79927 logs.go:276] 0 containers: []
	W0806 08:38:47.118861   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:38:47.118873   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:38:47.118888   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:38:47.181357   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:38:47.181394   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:38:47.197676   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:38:47.197707   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:38:47.270304   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:38:47.270331   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:38:47.270348   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:38:47.354145   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:38:47.354176   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:38:44.291978   79614 logs.go:123] Gathering logs for container status ...
	I0806 08:38:44.292026   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:38:46.843357   79614 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:38:46.860793   79614 api_server.go:72] duration metric: took 4m14.882891742s to wait for apiserver process to appear ...
	I0806 08:38:46.860820   79614 api_server.go:88] waiting for apiserver healthz status ...
	I0806 08:38:46.860858   79614 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:38:46.860923   79614 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:38:46.909044   79614 cri.go:89] found id: "49843be03d16addb301ffd26e517770ff29b00f049efb524a2139b8b2153325c"
	I0806 08:38:46.909068   79614 cri.go:89] found id: ""
	I0806 08:38:46.909076   79614 logs.go:276] 1 containers: [49843be03d16addb301ffd26e517770ff29b00f049efb524a2139b8b2153325c]
	I0806 08:38:46.909134   79614 ssh_runner.go:195] Run: which crictl
	I0806 08:38:46.913443   79614 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:38:46.913522   79614 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:38:46.955246   79614 cri.go:89] found id: "b5fa6fe8f1db6b59f47ea637515efb2864e0408ce1df0e6851b3bda0e240f5d3"
	I0806 08:38:46.955273   79614 cri.go:89] found id: ""
	I0806 08:38:46.955281   79614 logs.go:276] 1 containers: [b5fa6fe8f1db6b59f47ea637515efb2864e0408ce1df0e6851b3bda0e240f5d3]
	I0806 08:38:46.955347   79614 ssh_runner.go:195] Run: which crictl
	I0806 08:38:46.959914   79614 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:38:46.959994   79614 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:38:47.001639   79614 cri.go:89] found id: "59f63eeae644f7c8944cdb7b1d05d9e49fe84eb28e9ccbad2d2dd79846d7eb52"
	I0806 08:38:47.001665   79614 cri.go:89] found id: ""
	I0806 08:38:47.001673   79614 logs.go:276] 1 containers: [59f63eeae644f7c8944cdb7b1d05d9e49fe84eb28e9ccbad2d2dd79846d7eb52]
	I0806 08:38:47.001717   79614 ssh_runner.go:195] Run: which crictl
	I0806 08:38:47.006260   79614 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:38:47.006344   79614 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:38:47.049226   79614 cri.go:89] found id: "e596abcad3dc13d27548701f515beebba051e04a9e20f8ce1d018731d880ff1c"
	I0806 08:38:47.049252   79614 cri.go:89] found id: ""
	I0806 08:38:47.049261   79614 logs.go:276] 1 containers: [e596abcad3dc13d27548701f515beebba051e04a9e20f8ce1d018731d880ff1c]
	I0806 08:38:47.049317   79614 ssh_runner.go:195] Run: which crictl
	I0806 08:38:47.053544   79614 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:38:47.053626   79614 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:38:47.095805   79614 cri.go:89] found id: "75990046864b4a4db9f8c02f44de4eefbb0cf45584880b053bce4ca0b677ce4e"
	I0806 08:38:47.095832   79614 cri.go:89] found id: ""
	I0806 08:38:47.095841   79614 logs.go:276] 1 containers: [75990046864b4a4db9f8c02f44de4eefbb0cf45584880b053bce4ca0b677ce4e]
	I0806 08:38:47.095895   79614 ssh_runner.go:195] Run: which crictl
	I0806 08:38:47.100272   79614 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:38:47.100329   79614 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:38:47.146108   79614 cri.go:89] found id: "05bbffa4c989e42bd020ebe6b943459ec1d7b31a6c5e5ba9a37056da15542658"
	I0806 08:38:47.146132   79614 cri.go:89] found id: ""
	I0806 08:38:47.146142   79614 logs.go:276] 1 containers: [05bbffa4c989e42bd020ebe6b943459ec1d7b31a6c5e5ba9a37056da15542658]
	I0806 08:38:47.146200   79614 ssh_runner.go:195] Run: which crictl
	I0806 08:38:47.150808   79614 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:38:47.150863   79614 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:38:47.194122   79614 cri.go:89] found id: ""
	I0806 08:38:47.194157   79614 logs.go:276] 0 containers: []
	W0806 08:38:47.194169   79614 logs.go:278] No container was found matching "kindnet"
	I0806 08:38:47.194177   79614 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0806 08:38:47.194241   79614 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0806 08:38:47.237872   79614 cri.go:89] found id: "e668e4a5fa59e279bc227b6110983cc4660b639d5b1e1291ae60d361f682d82e"
	I0806 08:38:47.237906   79614 cri.go:89] found id: "abd677ac30faba3fef9e9d562601ad0012d7f1b48852e08b8f2684efda6b0f2e"
	I0806 08:38:47.237915   79614 cri.go:89] found id: ""
	I0806 08:38:47.237925   79614 logs.go:276] 2 containers: [e668e4a5fa59e279bc227b6110983cc4660b639d5b1e1291ae60d361f682d82e abd677ac30faba3fef9e9d562601ad0012d7f1b48852e08b8f2684efda6b0f2e]
	I0806 08:38:47.237988   79614 ssh_runner.go:195] Run: which crictl
	I0806 08:38:47.242528   79614 ssh_runner.go:195] Run: which crictl
	I0806 08:38:47.247317   79614 logs.go:123] Gathering logs for kube-proxy [75990046864b4a4db9f8c02f44de4eefbb0cf45584880b053bce4ca0b677ce4e] ...
	I0806 08:38:47.247341   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 75990046864b4a4db9f8c02f44de4eefbb0cf45584880b053bce4ca0b677ce4e"
	I0806 08:38:47.289612   79614 logs.go:123] Gathering logs for kube-controller-manager [05bbffa4c989e42bd020ebe6b943459ec1d7b31a6c5e5ba9a37056da15542658] ...
	I0806 08:38:47.289644   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 05bbffa4c989e42bd020ebe6b943459ec1d7b31a6c5e5ba9a37056da15542658"
	I0806 08:38:47.345619   79614 logs.go:123] Gathering logs for storage-provisioner [e668e4a5fa59e279bc227b6110983cc4660b639d5b1e1291ae60d361f682d82e] ...
	I0806 08:38:47.345653   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e668e4a5fa59e279bc227b6110983cc4660b639d5b1e1291ae60d361f682d82e"
	I0806 08:38:47.401281   79614 logs.go:123] Gathering logs for storage-provisioner [abd677ac30faba3fef9e9d562601ad0012d7f1b48852e08b8f2684efda6b0f2e] ...
	I0806 08:38:47.401306   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 abd677ac30faba3fef9e9d562601ad0012d7f1b48852e08b8f2684efda6b0f2e"
	I0806 08:38:47.438335   79614 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:38:47.438364   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 08:38:47.537334   79614 logs.go:123] Gathering logs for etcd [b5fa6fe8f1db6b59f47ea637515efb2864e0408ce1df0e6851b3bda0e240f5d3] ...
	I0806 08:38:47.537360   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b5fa6fe8f1db6b59f47ea637515efb2864e0408ce1df0e6851b3bda0e240f5d3"
	I0806 08:38:47.592101   79614 logs.go:123] Gathering logs for coredns [59f63eeae644f7c8944cdb7b1d05d9e49fe84eb28e9ccbad2d2dd79846d7eb52] ...
	I0806 08:38:47.592132   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 59f63eeae644f7c8944cdb7b1d05d9e49fe84eb28e9ccbad2d2dd79846d7eb52"
	I0806 08:38:47.626714   79614 logs.go:123] Gathering logs for kube-scheduler [e596abcad3dc13d27548701f515beebba051e04a9e20f8ce1d018731d880ff1c] ...
	I0806 08:38:47.626738   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e596abcad3dc13d27548701f515beebba051e04a9e20f8ce1d018731d880ff1c"
	I0806 08:38:47.663232   79614 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:38:47.663262   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:38:48.133308   79614 logs.go:123] Gathering logs for kubelet ...
	I0806 08:38:48.133344   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:38:48.189669   79614 logs.go:123] Gathering logs for dmesg ...
	I0806 08:38:48.189709   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:38:48.208530   79614 logs.go:123] Gathering logs for kube-apiserver [49843be03d16addb301ffd26e517770ff29b00f049efb524a2139b8b2153325c] ...
	I0806 08:38:48.208557   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 49843be03d16addb301ffd26e517770ff29b00f049efb524a2139b8b2153325c"
	I0806 08:38:48.264499   79614 logs.go:123] Gathering logs for container status ...
	I0806 08:38:48.264535   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
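
The block above is minikube's diagnostic pass for the default-k8s-diff-port profile: it resolves crictl, lists the containers for each control-plane component by name, then tails the last 400 lines of every container log plus the crio and kubelet journals and dmesg. A minimal shell sketch of the same sequence, run by hand on the node (the component names, tail length, and dmesg flags are taken verbatim from the log; the loop itself is mine):

	#!/bin/bash
	# Gather per-component container logs the way the cri.go/logs.go lines above do.
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager storage-provisioner; do
	    for id in $(sudo crictl ps -a --quiet --name="$name"); do
	        echo "===== $name ($id) ====="
	        sudo crictl logs --tail 400 "$id"
	    done
	done
	# System-level sources, matching the journalctl/dmesg calls above.
	sudo journalctl -u crio -n 400
	sudo journalctl -u kubelet -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
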
	I0806 08:38:49.901481   79927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:38:49.914862   79927 kubeadm.go:597] duration metric: took 4m3.80396481s to restartPrimaryControlPlane
	W0806 08:38:49.914926   79927 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0806 08:38:49.914952   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0806 08:38:50.394829   79927 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0806 08:38:50.409425   79927 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0806 08:38:50.419349   79927 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0806 08:38:50.429688   79927 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0806 08:38:50.429711   79927 kubeadm.go:157] found existing configuration files:
	
	I0806 08:38:50.429756   79927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0806 08:38:50.440055   79927 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0806 08:38:50.440126   79927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0806 08:38:50.450734   79927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0806 08:38:50.461860   79927 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0806 08:38:50.461943   79927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0806 08:38:50.473324   79927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0806 08:38:50.483381   79927 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0806 08:38:50.483440   79927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0806 08:38:50.493247   79927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0806 08:38:50.502800   79927 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0806 08:38:50.502861   79927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
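
The run above is minikube's stale-kubeconfig cleanup: for each file under /etc/kubernetes it greps for the expected control-plane endpoint and removes the file when the grep fails (here every file is simply missing, so each grep exits with status 2 and the rm -f is a no-op). A hand-run equivalent, using the same endpoint and file list as the log:

	#!/bin/bash
	endpoint="https://control-plane.minikube.internal:8443"
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	    # grep exits non-zero both for "no match" and "no such file",
	    # which is the "Process exited with status 2" seen above.
	    if ! sudo grep -q "$endpoint" "/etc/kubernetes/$f"; then
	        sudo rm -f "/etc/kubernetes/$f"
	    fi
	done
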
	I0806 08:38:50.512897   79927 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0806 08:38:50.585658   79927 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0806 08:38:50.585795   79927 kubeadm.go:310] [preflight] Running pre-flight checks
	I0806 08:38:50.740936   79927 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0806 08:38:50.741079   79927 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0806 08:38:50.741200   79927 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0806 08:38:50.938086   79927 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0806 08:38:48.468228   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:50.968702   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:50.940205   79927 out.go:204]   - Generating certificates and keys ...
	I0806 08:38:50.940315   79927 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0806 08:38:50.940428   79927 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0806 08:38:50.940543   79927 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0806 08:38:50.940621   79927 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0806 08:38:50.940723   79927 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0806 08:38:50.940800   79927 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0806 08:38:50.940881   79927 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0806 08:38:50.941150   79927 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0806 08:38:50.941954   79927 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0806 08:38:50.942765   79927 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0806 08:38:50.942910   79927 kubeadm.go:310] [certs] Using the existing "sa" key
	I0806 08:38:50.942993   79927 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0806 08:38:51.039558   79927 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0806 08:38:51.265243   79927 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0806 08:38:51.348501   79927 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0806 08:38:51.509558   79927 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0806 08:38:51.530725   79927 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0806 08:38:51.534108   79927 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0806 08:38:51.534297   79927 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0806 08:38:51.680149   79927 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0806 08:38:51.682195   79927 out.go:204]   - Booting up control plane ...
	I0806 08:38:51.682331   79927 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0806 08:38:51.693736   79927 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0806 08:38:51.695315   79927 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0806 08:38:51.697107   79927 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0806 08:38:51.700906   79927 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0806 08:38:50.822580   79614 api_server.go:253] Checking apiserver healthz at https://192.168.50.235:8444/healthz ...
	I0806 08:38:50.827121   79614 api_server.go:279] https://192.168.50.235:8444/healthz returned 200:
	ok
	I0806 08:38:50.828327   79614 api_server.go:141] control plane version: v1.30.3
	I0806 08:38:50.828355   79614 api_server.go:131] duration metric: took 3.967524972s to wait for apiserver health ...
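
The health probe above can be repeated from outside the minikube binary with curl; -k is needed because the apiserver certificate is issued for cluster-internal names, and whether an unauthenticated request is accepted depends on the cluster's anonymous-auth settings, so treat this as a sketch rather than a guaranteed 200:

	# Probe the same endpoint minikube checks above; "ok" means healthy.
	curl -k https://192.168.50.235:8444/healthz
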
	I0806 08:38:50.828381   79614 system_pods.go:43] waiting for kube-system pods to appear ...
	I0806 08:38:50.828406   79614 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:38:50.828460   79614 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:38:50.875518   79614 cri.go:89] found id: "49843be03d16addb301ffd26e517770ff29b00f049efb524a2139b8b2153325c"
	I0806 08:38:50.875545   79614 cri.go:89] found id: ""
	I0806 08:38:50.875555   79614 logs.go:276] 1 containers: [49843be03d16addb301ffd26e517770ff29b00f049efb524a2139b8b2153325c]
	I0806 08:38:50.875611   79614 ssh_runner.go:195] Run: which crictl
	I0806 08:38:50.881260   79614 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:38:50.881321   79614 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:38:50.923252   79614 cri.go:89] found id: "b5fa6fe8f1db6b59f47ea637515efb2864e0408ce1df0e6851b3bda0e240f5d3"
	I0806 08:38:50.923281   79614 cri.go:89] found id: ""
	I0806 08:38:50.923292   79614 logs.go:276] 1 containers: [b5fa6fe8f1db6b59f47ea637515efb2864e0408ce1df0e6851b3bda0e240f5d3]
	I0806 08:38:50.923350   79614 ssh_runner.go:195] Run: which crictl
	I0806 08:38:50.928115   79614 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:38:50.928176   79614 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:38:50.971427   79614 cri.go:89] found id: "59f63eeae644f7c8944cdb7b1d05d9e49fe84eb28e9ccbad2d2dd79846d7eb52"
	I0806 08:38:50.971453   79614 cri.go:89] found id: ""
	I0806 08:38:50.971463   79614 logs.go:276] 1 containers: [59f63eeae644f7c8944cdb7b1d05d9e49fe84eb28e9ccbad2d2dd79846d7eb52]
	I0806 08:38:50.971520   79614 ssh_runner.go:195] Run: which crictl
	I0806 08:38:50.975665   79614 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:38:50.975789   79614 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:38:51.014601   79614 cri.go:89] found id: "e596abcad3dc13d27548701f515beebba051e04a9e20f8ce1d018731d880ff1c"
	I0806 08:38:51.014633   79614 cri.go:89] found id: ""
	I0806 08:38:51.014644   79614 logs.go:276] 1 containers: [e596abcad3dc13d27548701f515beebba051e04a9e20f8ce1d018731d880ff1c]
	I0806 08:38:51.014722   79614 ssh_runner.go:195] Run: which crictl
	I0806 08:38:51.019197   79614 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:38:51.019266   79614 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:38:51.057063   79614 cri.go:89] found id: "75990046864b4a4db9f8c02f44de4eefbb0cf45584880b053bce4ca0b677ce4e"
	I0806 08:38:51.057089   79614 cri.go:89] found id: ""
	I0806 08:38:51.057098   79614 logs.go:276] 1 containers: [75990046864b4a4db9f8c02f44de4eefbb0cf45584880b053bce4ca0b677ce4e]
	I0806 08:38:51.057159   79614 ssh_runner.go:195] Run: which crictl
	I0806 08:38:51.061504   79614 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:38:51.061571   79614 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:38:51.106496   79614 cri.go:89] found id: "05bbffa4c989e42bd020ebe6b943459ec1d7b31a6c5e5ba9a37056da15542658"
	I0806 08:38:51.106523   79614 cri.go:89] found id: ""
	I0806 08:38:51.106532   79614 logs.go:276] 1 containers: [05bbffa4c989e42bd020ebe6b943459ec1d7b31a6c5e5ba9a37056da15542658]
	I0806 08:38:51.106590   79614 ssh_runner.go:195] Run: which crictl
	I0806 08:38:51.111040   79614 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:38:51.111109   79614 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:38:51.156042   79614 cri.go:89] found id: ""
	I0806 08:38:51.156072   79614 logs.go:276] 0 containers: []
	W0806 08:38:51.156083   79614 logs.go:278] No container was found matching "kindnet"
	I0806 08:38:51.156090   79614 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0806 08:38:51.156152   79614 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0806 08:38:51.208712   79614 cri.go:89] found id: "e668e4a5fa59e279bc227b6110983cc4660b639d5b1e1291ae60d361f682d82e"
	I0806 08:38:51.208733   79614 cri.go:89] found id: "abd677ac30faba3fef9e9d562601ad0012d7f1b48852e08b8f2684efda6b0f2e"
	I0806 08:38:51.208737   79614 cri.go:89] found id: ""
	I0806 08:38:51.208744   79614 logs.go:276] 2 containers: [e668e4a5fa59e279bc227b6110983cc4660b639d5b1e1291ae60d361f682d82e abd677ac30faba3fef9e9d562601ad0012d7f1b48852e08b8f2684efda6b0f2e]
	I0806 08:38:51.208796   79614 ssh_runner.go:195] Run: which crictl
	I0806 08:38:51.213508   79614 ssh_runner.go:195] Run: which crictl
	I0806 08:38:51.219082   79614 logs.go:123] Gathering logs for etcd [b5fa6fe8f1db6b59f47ea637515efb2864e0408ce1df0e6851b3bda0e240f5d3] ...
	I0806 08:38:51.219104   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b5fa6fe8f1db6b59f47ea637515efb2864e0408ce1df0e6851b3bda0e240f5d3"
	I0806 08:38:51.266352   79614 logs.go:123] Gathering logs for storage-provisioner [e668e4a5fa59e279bc227b6110983cc4660b639d5b1e1291ae60d361f682d82e] ...
	I0806 08:38:51.266380   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e668e4a5fa59e279bc227b6110983cc4660b639d5b1e1291ae60d361f682d82e"
	I0806 08:38:51.315128   79614 logs.go:123] Gathering logs for storage-provisioner [abd677ac30faba3fef9e9d562601ad0012d7f1b48852e08b8f2684efda6b0f2e] ...
	I0806 08:38:51.315168   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 abd677ac30faba3fef9e9d562601ad0012d7f1b48852e08b8f2684efda6b0f2e"
	I0806 08:38:51.357872   79614 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:38:51.357914   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:38:51.768288   79614 logs.go:123] Gathering logs for container status ...
	I0806 08:38:51.768328   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 08:38:51.825304   79614 logs.go:123] Gathering logs for kubelet ...
	I0806 08:38:51.825341   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:38:51.888532   79614 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:38:51.888567   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 08:38:51.998271   79614 logs.go:123] Gathering logs for coredns [59f63eeae644f7c8944cdb7b1d05d9e49fe84eb28e9ccbad2d2dd79846d7eb52] ...
	I0806 08:38:51.998307   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 59f63eeae644f7c8944cdb7b1d05d9e49fe84eb28e9ccbad2d2dd79846d7eb52"
	I0806 08:38:52.034549   79614 logs.go:123] Gathering logs for kube-scheduler [e596abcad3dc13d27548701f515beebba051e04a9e20f8ce1d018731d880ff1c] ...
	I0806 08:38:52.034574   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e596abcad3dc13d27548701f515beebba051e04a9e20f8ce1d018731d880ff1c"
	I0806 08:38:52.079289   79614 logs.go:123] Gathering logs for kube-proxy [75990046864b4a4db9f8c02f44de4eefbb0cf45584880b053bce4ca0b677ce4e] ...
	I0806 08:38:52.079317   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 75990046864b4a4db9f8c02f44de4eefbb0cf45584880b053bce4ca0b677ce4e"
	I0806 08:38:52.124134   79614 logs.go:123] Gathering logs for kube-controller-manager [05bbffa4c989e42bd020ebe6b943459ec1d7b31a6c5e5ba9a37056da15542658] ...
	I0806 08:38:52.124163   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 05bbffa4c989e42bd020ebe6b943459ec1d7b31a6c5e5ba9a37056da15542658"
	I0806 08:38:52.191877   79614 logs.go:123] Gathering logs for dmesg ...
	I0806 08:38:52.191914   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:38:52.210712   79614 logs.go:123] Gathering logs for kube-apiserver [49843be03d16addb301ffd26e517770ff29b00f049efb524a2139b8b2153325c] ...
	I0806 08:38:52.210740   79614 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 49843be03d16addb301ffd26e517770ff29b00f049efb524a2139b8b2153325c"
	I0806 08:38:54.770330   79614 system_pods.go:59] 8 kube-system pods found
	I0806 08:38:54.770356   79614 system_pods.go:61] "coredns-7db6d8ff4d-gx8tt" [48f83ba8-a238-4baf-94af-88370fefcb02] Running
	I0806 08:38:54.770361   79614 system_pods.go:61] "etcd-default-k8s-diff-port-990751" [bf342ba4-1e11-40bf-80fd-02b402043ecf] Running
	I0806 08:38:54.770365   79614 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-990751" [11ee42c4-c9e9-41cf-ace2-deac0f30153a] Running
	I0806 08:38:54.770369   79614 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-990751" [f17fca30-0afc-4ed8-a8cc-cb64e69723f3] Running
	I0806 08:38:54.770373   79614 system_pods.go:61] "kube-proxy-lpf88" [866c10f0-2c44-4c03-b460-ff2194bd1ec6] Running
	I0806 08:38:54.770377   79614 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-990751" [2e591ea7-07bd-464f-bf0e-bc24bfcca016] Running
	I0806 08:38:54.770383   79614 system_pods.go:61] "metrics-server-569cc877fc-hnxd4" [f369ac70-49b7-448a-9852-89232fb66da9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0806 08:38:54.770388   79614 system_pods.go:61] "storage-provisioner" [0faff3fc-d34b-418e-972b-95a2e36b577a] Running
	I0806 08:38:54.770395   79614 system_pods.go:74] duration metric: took 3.942006529s to wait for pod list to return data ...
	I0806 08:38:54.770401   79614 default_sa.go:34] waiting for default service account to be created ...
	I0806 08:38:54.773935   79614 default_sa.go:45] found service account: "default"
	I0806 08:38:54.773956   79614 default_sa.go:55] duration metric: took 3.549307ms for default service account to be created ...
	I0806 08:38:54.773963   79614 system_pods.go:116] waiting for k8s-apps to be running ...
	I0806 08:38:54.779748   79614 system_pods.go:86] 8 kube-system pods found
	I0806 08:38:54.779773   79614 system_pods.go:89] "coredns-7db6d8ff4d-gx8tt" [48f83ba8-a238-4baf-94af-88370fefcb02] Running
	I0806 08:38:54.779780   79614 system_pods.go:89] "etcd-default-k8s-diff-port-990751" [bf342ba4-1e11-40bf-80fd-02b402043ecf] Running
	I0806 08:38:54.779787   79614 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-990751" [11ee42c4-c9e9-41cf-ace2-deac0f30153a] Running
	I0806 08:38:54.779794   79614 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-990751" [f17fca30-0afc-4ed8-a8cc-cb64e69723f3] Running
	I0806 08:38:54.779801   79614 system_pods.go:89] "kube-proxy-lpf88" [866c10f0-2c44-4c03-b460-ff2194bd1ec6] Running
	I0806 08:38:54.779807   79614 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-990751" [2e591ea7-07bd-464f-bf0e-bc24bfcca016] Running
	I0806 08:38:54.779818   79614 system_pods.go:89] "metrics-server-569cc877fc-hnxd4" [f369ac70-49b7-448a-9852-89232fb66da9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0806 08:38:54.779825   79614 system_pods.go:89] "storage-provisioner" [0faff3fc-d34b-418e-972b-95a2e36b577a] Running
	I0806 08:38:54.779834   79614 system_pods.go:126] duration metric: took 5.864748ms to wait for k8s-apps to be running ...
	I0806 08:38:54.779842   79614 system_svc.go:44] waiting for kubelet service to be running ....
	I0806 08:38:54.779898   79614 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0806 08:38:54.796325   79614 system_svc.go:56] duration metric: took 16.472895ms WaitForService to wait for kubelet
	I0806 08:38:54.796353   79614 kubeadm.go:582] duration metric: took 4m22.818456744s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0806 08:38:54.796397   79614 node_conditions.go:102] verifying NodePressure condition ...
	I0806 08:38:54.800536   79614 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0806 08:38:54.800564   79614 node_conditions.go:123] node cpu capacity is 2
	I0806 08:38:54.800577   79614 node_conditions.go:105] duration metric: took 4.173499ms to run NodePressure ...
	I0806 08:38:54.800591   79614 start.go:241] waiting for startup goroutines ...
	I0806 08:38:54.800600   79614 start.go:246] waiting for cluster config update ...
	I0806 08:38:54.800613   79614 start.go:255] writing updated cluster config ...
	I0806 08:38:54.800936   79614 ssh_runner.go:195] Run: rm -f paused
	I0806 08:38:54.850703   79614 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0806 08:38:54.852771   79614 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-990751" cluster and "default" namespace by default
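
Since the profile finished with the kubeconfig context pointed at this cluster, the result can be spot-checked directly with plain kubectl (these commands are a usage example, not part of the test run):

	# Confirm the active context and that the kube-system pods listed above are visible.
	kubectl config current-context
	kubectl get pods -A
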
	I0806 08:38:53.469241   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:55.968335   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:38:58.469143   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:39:00.470785   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:39:02.967983   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:39:05.468164   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:39:07.469643   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:39:09.971774   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:39:12.469590   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:39:14.968935   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:39:17.468202   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:39:19.968322   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:39:21.968455   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:39:24.467291   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:39:26.475005   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:39:28.968164   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:39:31.467876   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:39:31.701723   79927 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0806 08:39:31.702058   79927 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0806 08:39:31.702242   79927 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0806 08:39:33.468293   78922 pod_ready.go:102] pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace has status "Ready":"False"
	I0806 08:39:34.961921   78922 pod_ready.go:81] duration metric: took 4m0.000413299s for pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace to be "Ready" ...
	E0806 08:39:34.961944   78922 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-qz4vx" in "kube-system" namespace to be "Ready" (will not retry!)
	I0806 08:39:34.961967   78922 pod_ready.go:38] duration metric: took 4m12.510868918s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0806 08:39:34.962010   78922 kubeadm.go:597] duration metric: took 4m19.878821661s to restartPrimaryControlPlane
	W0806 08:39:34.962072   78922 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0806 08:39:34.962105   78922 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0806 08:39:36.702773   79927 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0806 08:39:36.703067   79927 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0806 08:39:46.703330   79927 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0806 08:39:46.703594   79927 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
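
The repeated [kubelet-check] failures above come from kubeadm polling the kubelet's local health endpoint; the kubeadm init run with the v1.20.0 binaries (process 79927) never gets past this point because the kubelet on that node is not serving. The probe can be repeated by hand on the node, and the systemctl/journalctl follow-up (my own suggestion, not part of the run) usually shows why the kubelet did not start:

	# The exact probe kubeadm reports in the [kubelet-check] lines above.
	curl -sSL http://localhost:10248/healthz
	# If the connection is refused, check the unit and read its recent output.
	sudo systemctl status kubelet --no-pager
	sudo journalctl -u kubelet -n 100 --no-pager
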
	I0806 08:40:01.383913   78922 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.421778806s)
	I0806 08:40:01.383994   78922 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0806 08:40:01.409726   78922 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0806 08:40:01.432284   78922 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0806 08:40:01.445162   78922 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0806 08:40:01.445187   78922 kubeadm.go:157] found existing configuration files:
	
	I0806 08:40:01.445242   78922 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0806 08:40:01.466287   78922 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0806 08:40:01.466344   78922 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0806 08:40:01.481355   78922 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0806 08:40:01.499933   78922 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0806 08:40:01.500003   78922 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0806 08:40:01.518094   78922 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0806 08:40:01.527345   78922 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0806 08:40:01.527408   78922 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0806 08:40:01.539328   78922 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0806 08:40:01.549752   78922 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0806 08:40:01.549863   78922 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0806 08:40:01.561309   78922 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0806 08:40:01.609660   78922 kubeadm.go:310] W0806 08:40:01.587147    2983 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0806 08:40:01.610476   78922 kubeadm.go:310] W0806 08:40:01.588016    2983 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0806 08:40:01.749335   78922 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0806 08:40:06.704889   79927 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0806 08:40:06.705136   79927 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0806 08:40:10.616781   78922 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0-rc.0
	I0806 08:40:10.616868   78922 kubeadm.go:310] [preflight] Running pre-flight checks
	I0806 08:40:10.616966   78922 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0806 08:40:10.617110   78922 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0806 08:40:10.617234   78922 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0806 08:40:10.617332   78922 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0806 08:40:10.618729   78922 out.go:204]   - Generating certificates and keys ...
	I0806 08:40:10.618805   78922 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0806 08:40:10.618886   78922 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0806 08:40:10.618961   78922 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0806 08:40:10.619025   78922 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0806 08:40:10.619127   78922 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0806 08:40:10.619200   78922 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0806 08:40:10.619289   78922 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0806 08:40:10.619373   78922 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0806 08:40:10.619485   78922 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0806 08:40:10.619596   78922 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0806 08:40:10.619644   78922 kubeadm.go:310] [certs] Using the existing "sa" key
	I0806 08:40:10.619721   78922 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0806 08:40:10.619787   78922 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0806 08:40:10.619871   78922 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0806 08:40:10.619916   78922 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0806 08:40:10.619968   78922 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0806 08:40:10.620028   78922 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0806 08:40:10.620097   78922 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0806 08:40:10.620153   78922 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0806 08:40:10.621538   78922 out.go:204]   - Booting up control plane ...
	I0806 08:40:10.621632   78922 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0806 08:40:10.621739   78922 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0806 08:40:10.621802   78922 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0806 08:40:10.621898   78922 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0806 08:40:10.621979   78922 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0806 08:40:10.622011   78922 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0806 08:40:10.622117   78922 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0806 08:40:10.622225   78922 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0806 08:40:10.622291   78922 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.639662ms
	I0806 08:40:10.622351   78922 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0806 08:40:10.622399   78922 kubeadm.go:310] [api-check] The API server is healthy after 5.503393689s
	I0806 08:40:10.622505   78922 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0806 08:40:10.622639   78922 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0806 08:40:10.622692   78922 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0806 08:40:10.622843   78922 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-703340 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0806 08:40:10.622895   78922 kubeadm.go:310] [bootstrap-token] Using token: 5r8726.myu0qo9tvx4sxjd3
	I0806 08:40:10.624254   78922 out.go:204]   - Configuring RBAC rules ...
	I0806 08:40:10.624381   78922 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0806 08:40:10.624463   78922 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0806 08:40:10.624613   78922 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0806 08:40:10.624813   78922 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0806 08:40:10.624990   78922 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0806 08:40:10.625114   78922 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0806 08:40:10.625253   78922 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0806 08:40:10.625312   78922 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0806 08:40:10.625378   78922 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0806 08:40:10.625387   78922 kubeadm.go:310] 
	I0806 08:40:10.625464   78922 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0806 08:40:10.625472   78922 kubeadm.go:310] 
	I0806 08:40:10.625580   78922 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0806 08:40:10.625595   78922 kubeadm.go:310] 
	I0806 08:40:10.625615   78922 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0806 08:40:10.625668   78922 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0806 08:40:10.625708   78922 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0806 08:40:10.625712   78922 kubeadm.go:310] 
	I0806 08:40:10.625822   78922 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0806 08:40:10.625846   78922 kubeadm.go:310] 
	I0806 08:40:10.625891   78922 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0806 08:40:10.625898   78922 kubeadm.go:310] 
	I0806 08:40:10.625940   78922 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0806 08:40:10.626007   78922 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0806 08:40:10.626068   78922 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0806 08:40:10.626074   78922 kubeadm.go:310] 
	I0806 08:40:10.626173   78922 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0806 08:40:10.626272   78922 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0806 08:40:10.626282   78922 kubeadm.go:310] 
	I0806 08:40:10.626381   78922 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 5r8726.myu0qo9tvx4sxjd3 \
	I0806 08:40:10.626499   78922 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d36e5c1dfe989c7d561c809d7c06e0fc13926ac8f6d664cc5d6865ea5f79931b \
	I0806 08:40:10.626529   78922 kubeadm.go:310] 	--control-plane 
	I0806 08:40:10.626538   78922 kubeadm.go:310] 
	I0806 08:40:10.626639   78922 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0806 08:40:10.626647   78922 kubeadm.go:310] 
	I0806 08:40:10.626745   78922 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 5r8726.myu0qo9tvx4sxjd3 \
	I0806 08:40:10.626893   78922 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d36e5c1dfe989c7d561c809d7c06e0fc13926ac8f6d664cc5d6865ea5f79931b 
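
The join commands printed above embed a CA-certificate hash; as documented for kubeadm, that value can be recomputed on the control-plane node from the cluster CA, which is a useful cross-check when keeping the join command for later. The CA path below uses the certificateDir reported in the [certs] phase above:

	# Recompute the --discovery-token-ca-cert-hash value from the cluster CA.
	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'
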
	I0806 08:40:10.626920   78922 cni.go:84] Creating CNI manager for ""
	I0806 08:40:10.626929   78922 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0806 08:40:10.628682   78922 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0806 08:40:10.630096   78922 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0806 08:40:10.642492   78922 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
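
Here minikube pushes a 496-byte bridge CNI config to /etc/cni/net.d/1-k8s.conflist. The exact file content is not reproduced in the log, so the following is only an illustrative conflist of the kind the standard bridge, host-local, and portmap CNI plugins accept, written as a heredoc so the step can be replayed by hand (the subnet is a placeholder, not taken from the log):

	sudo mkdir -p /etc/cni/net.d
	sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": {
	        "type": "host-local",
	        "subnet": "10.244.0.0/16"
	      }
	    },
	    {
	      "type": "portmap",
	      "capabilities": {"portMappings": true}
	    }
	  ]
	}
	EOF
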
	I0806 08:40:10.667673   78922 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0806 08:40:10.667763   78922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 08:40:10.667833   78922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-703340 minikube.k8s.io/updated_at=2024_08_06T08_40_10_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=e92cb06692f5ea1ba801d10d148e5e92e807f9c8 minikube.k8s.io/name=no-preload-703340 minikube.k8s.io/primary=true
	I0806 08:40:10.705177   78922 ops.go:34] apiserver oom_adj: -16
	I0806 08:40:10.901976   78922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 08:40:11.401990   78922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 08:40:11.902906   78922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 08:40:12.402461   78922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 08:40:12.902909   78922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 08:40:13.402018   78922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 08:40:13.902098   78922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 08:40:14.402869   78922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 08:40:14.902034   78922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 08:40:14.999571   78922 kubeadm.go:1113] duration metric: took 4.331874752s to wait for elevateKubeSystemPrivileges
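
The burst of identical "kubectl get sa default" runs above is a poll: minikube retries roughly every half second until the default service account exists (the controller-manager creates it shortly after startup), and the summary line puts the whole elevateKubeSystemPrivileges wait at about 4.3s. A hand-run equivalent of the same wait, using the binary and kubeconfig paths from the log (the 60-second cap is my own, not minikube's):

	#!/bin/bash
	# Poll until the "default" service account can be fetched, as in the
	# repeated "get sa default" lines above; give up after ~60 seconds.
	for i in $(seq 1 120); do
	    if sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default \
	        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; then
	        echo "default service account is present"
	        break
	    fi
	    sleep 0.5
	done
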
	I0806 08:40:14.999615   78922 kubeadm.go:394] duration metric: took 4m59.975766448s to StartCluster
	I0806 08:40:14.999636   78922 settings.go:142] acquiring lock: {Name:mkb0bfbcae3b4205ef43487a9de84cfd8afd2e5e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 08:40:14.999722   78922 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19370-13057/kubeconfig
	I0806 08:40:15.001429   78922 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-13057/kubeconfig: {Name:mk5c066914acf8e36556b29792f75b98076ca34a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 08:40:15.001696   78922 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.126 Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0806 08:40:15.001753   78922 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0806 08:40:15.001864   78922 addons.go:69] Setting storage-provisioner=true in profile "no-preload-703340"
	I0806 08:40:15.001909   78922 addons.go:234] Setting addon storage-provisioner=true in "no-preload-703340"
	W0806 08:40:15.001921   78922 addons.go:243] addon storage-provisioner should already be in state true
	I0806 08:40:15.001919   78922 addons.go:69] Setting default-storageclass=true in profile "no-preload-703340"
	I0806 08:40:15.001945   78922 addons.go:69] Setting metrics-server=true in profile "no-preload-703340"
	I0806 08:40:15.001968   78922 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-703340"
	I0806 08:40:15.001984   78922 addons.go:234] Setting addon metrics-server=true in "no-preload-703340"
	W0806 08:40:15.001997   78922 addons.go:243] addon metrics-server should already be in state true
	I0806 08:40:15.001958   78922 host.go:66] Checking if "no-preload-703340" exists ...
	I0806 08:40:15.002030   78922 host.go:66] Checking if "no-preload-703340" exists ...
	I0806 08:40:15.001918   78922 config.go:182] Loaded profile config "no-preload-703340": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-rc.0
	I0806 08:40:15.002284   78922 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:40:15.002295   78922 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:40:15.002310   78922 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:40:15.002319   78922 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:40:15.002354   78922 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:40:15.002312   78922 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:40:15.003644   78922 out.go:177] * Verifying Kubernetes components...
	I0806 08:40:15.005157   78922 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 08:40:15.018061   78922 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35339
	I0806 08:40:15.018548   78922 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:40:15.018866   78922 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44375
	I0806 08:40:15.019201   78922 main.go:141] libmachine: Using API Version  1
	I0806 08:40:15.019226   78922 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:40:15.019270   78922 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:40:15.019582   78922 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:40:15.019743   78922 main.go:141] libmachine: Using API Version  1
	I0806 08:40:15.019748   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetState
	I0806 08:40:15.019765   78922 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:40:15.020089   78922 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:40:15.020617   78922 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:40:15.020664   78922 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:40:15.020815   78922 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32773
	I0806 08:40:15.021303   78922 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:40:15.021775   78922 main.go:141] libmachine: Using API Version  1
	I0806 08:40:15.021796   78922 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:40:15.022125   78922 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:40:15.022549   78922 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:40:15.022577   78922 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:40:15.023653   78922 addons.go:234] Setting addon default-storageclass=true in "no-preload-703340"
	W0806 08:40:15.023678   78922 addons.go:243] addon default-storageclass should already be in state true
	I0806 08:40:15.023708   78922 host.go:66] Checking if "no-preload-703340" exists ...
	I0806 08:40:15.024066   78922 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:40:15.024120   78922 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:40:15.036138   78922 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44447
	I0806 08:40:15.036586   78922 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:40:15.037132   78922 main.go:141] libmachine: Using API Version  1
	I0806 08:40:15.037153   78922 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:40:15.037609   78922 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:40:15.037921   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetState
	I0806 08:40:15.038640   78922 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42341
	I0806 08:40:15.038777   78922 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34793
	I0806 08:40:15.039003   78922 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:40:15.039098   78922 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:40:15.039473   78922 main.go:141] libmachine: Using API Version  1
	I0806 08:40:15.039489   78922 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:40:15.039607   78922 main.go:141] libmachine: Using API Version  1
	I0806 08:40:15.039623   78922 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:40:15.039820   78922 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:40:15.040041   78922 main.go:141] libmachine: (no-preload-703340) Calling .DriverName
	I0806 08:40:15.040181   78922 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:40:15.040414   78922 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 08:40:15.040456   78922 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 08:40:15.040477   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetState
	I0806 08:40:15.042137   78922 main.go:141] libmachine: (no-preload-703340) Calling .DriverName
	I0806 08:40:15.042223   78922 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0806 08:40:15.043522   78922 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0806 08:40:15.043585   78922 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0806 08:40:15.043601   78922 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0806 08:40:15.043619   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHHostname
	I0806 08:40:15.044812   78922 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0806 08:40:15.044829   78922 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0806 08:40:15.044854   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHHostname
	I0806 08:40:15.047376   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:40:15.047978   78922 main.go:141] libmachine: (no-preload-703340) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fa:d3", ip: ""} in network mk-no-preload-703340: {Iface:virbr4 ExpiryTime:2024-08-06 09:34:49 +0000 UTC Type:0 Mac:52:54:00:a1:fa:d3 Iaid: IPaddr:192.168.72.126 Prefix:24 Hostname:no-preload-703340 Clientid:01:52:54:00:a1:fa:d3}
	I0806 08:40:15.048001   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined IP address 192.168.72.126 and MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:40:15.048228   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:40:15.048269   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHPort
	I0806 08:40:15.048485   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHKeyPath
	I0806 08:40:15.048617   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHUsername
	I0806 08:40:15.048667   78922 main.go:141] libmachine: (no-preload-703340) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fa:d3", ip: ""} in network mk-no-preload-703340: {Iface:virbr4 ExpiryTime:2024-08-06 09:34:49 +0000 UTC Type:0 Mac:52:54:00:a1:fa:d3 Iaid: IPaddr:192.168.72.126 Prefix:24 Hostname:no-preload-703340 Clientid:01:52:54:00:a1:fa:d3}
	I0806 08:40:15.048688   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined IP address 192.168.72.126 and MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:40:15.048800   78922 sshutil.go:53] new ssh client: &{IP:192.168.72.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/no-preload-703340/id_rsa Username:docker}
	I0806 08:40:15.049107   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHPort
	I0806 08:40:15.049267   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHKeyPath
	I0806 08:40:15.049403   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHUsername
	I0806 08:40:15.049518   78922 sshutil.go:53] new ssh client: &{IP:192.168.72.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/no-preload-703340/id_rsa Username:docker}
	I0806 08:40:15.059440   78922 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33377
	I0806 08:40:15.059910   78922 main.go:141] libmachine: () Calling .GetVersion
	I0806 08:40:15.060423   78922 main.go:141] libmachine: Using API Version  1
	I0806 08:40:15.060441   78922 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 08:40:15.060741   78922 main.go:141] libmachine: () Calling .GetMachineName
	I0806 08:40:15.060922   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetState
	I0806 08:40:15.062303   78922 main.go:141] libmachine: (no-preload-703340) Calling .DriverName
	I0806 08:40:15.062491   78922 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0806 08:40:15.062503   78922 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0806 08:40:15.062516   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHHostname
	I0806 08:40:15.065588   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:40:15.066008   78922 main.go:141] libmachine: (no-preload-703340) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fa:d3", ip: ""} in network mk-no-preload-703340: {Iface:virbr4 ExpiryTime:2024-08-06 09:34:49 +0000 UTC Type:0 Mac:52:54:00:a1:fa:d3 Iaid: IPaddr:192.168.72.126 Prefix:24 Hostname:no-preload-703340 Clientid:01:52:54:00:a1:fa:d3}
	I0806 08:40:15.066024   78922 main.go:141] libmachine: (no-preload-703340) DBG | domain no-preload-703340 has defined IP address 192.168.72.126 and MAC address 52:54:00:a1:fa:d3 in network mk-no-preload-703340
	I0806 08:40:15.066147   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHPort
	I0806 08:40:15.066302   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHKeyPath
	I0806 08:40:15.066407   78922 main.go:141] libmachine: (no-preload-703340) Calling .GetSSHUsername
	I0806 08:40:15.066500   78922 sshutil.go:53] new ssh client: &{IP:192.168.72.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/no-preload-703340/id_rsa Username:docker}
	I0806 08:40:15.209763   78922 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0806 08:40:15.231626   78922 node_ready.go:35] waiting up to 6m0s for node "no-preload-703340" to be "Ready" ...
	I0806 08:40:15.239924   78922 node_ready.go:49] node "no-preload-703340" has status "Ready":"True"
	I0806 08:40:15.239952   78922 node_ready.go:38] duration metric: took 8.292674ms for node "no-preload-703340" to be "Ready" ...
	I0806 08:40:15.239964   78922 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0806 08:40:15.247043   78922 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6f6b679f8f-9dt4q" in "kube-system" namespace to be "Ready" ...
	I0806 08:40:15.320295   78922 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0806 08:40:15.343811   78922 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0806 08:40:15.343838   78922 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0806 08:40:15.367466   78922 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0806 08:40:15.392170   78922 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0806 08:40:15.392200   78922 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0806 08:40:15.440711   78922 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0806 08:40:15.440741   78922 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0806 08:40:15.483982   78922 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0806 08:40:16.105714   78922 main.go:141] libmachine: Making call to close driver server
	I0806 08:40:16.105739   78922 main.go:141] libmachine: (no-preload-703340) Calling .Close
	I0806 08:40:16.105768   78922 main.go:141] libmachine: Making call to close driver server
	I0806 08:40:16.105789   78922 main.go:141] libmachine: (no-preload-703340) Calling .Close
	I0806 08:40:16.106051   78922 main.go:141] libmachine: (no-preload-703340) DBG | Closing plugin on server side
	I0806 08:40:16.106086   78922 main.go:141] libmachine: (no-preload-703340) DBG | Closing plugin on server side
	I0806 08:40:16.106097   78922 main.go:141] libmachine: Successfully made call to close driver server
	I0806 08:40:16.106113   78922 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 08:40:16.106133   78922 main.go:141] libmachine: Making call to close driver server
	I0806 08:40:16.106144   78922 main.go:141] libmachine: (no-preload-703340) Calling .Close
	I0806 08:40:16.106176   78922 main.go:141] libmachine: Successfully made call to close driver server
	I0806 08:40:16.106254   78922 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 08:40:16.106280   78922 main.go:141] libmachine: Making call to close driver server
	I0806 08:40:16.106287   78922 main.go:141] libmachine: (no-preload-703340) Calling .Close
	I0806 08:40:16.107761   78922 main.go:141] libmachine: Successfully made call to close driver server
	I0806 08:40:16.107779   78922 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 08:40:16.107767   78922 main.go:141] libmachine: (no-preload-703340) DBG | Closing plugin on server side
	I0806 08:40:16.107805   78922 main.go:141] libmachine: Successfully made call to close driver server
	I0806 08:40:16.107820   78922 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 08:40:16.107874   78922 main.go:141] libmachine: (no-preload-703340) DBG | Closing plugin on server side
	I0806 08:40:16.133295   78922 main.go:141] libmachine: Making call to close driver server
	I0806 08:40:16.133318   78922 main.go:141] libmachine: (no-preload-703340) Calling .Close
	I0806 08:40:16.133642   78922 main.go:141] libmachine: Successfully made call to close driver server
	I0806 08:40:16.133660   78922 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 08:40:16.133680   78922 main.go:141] libmachine: (no-preload-703340) DBG | Closing plugin on server side
	I0806 08:40:16.628763   78922 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.144732374s)
	I0806 08:40:16.628864   78922 main.go:141] libmachine: Making call to close driver server
	I0806 08:40:16.628891   78922 main.go:141] libmachine: (no-preload-703340) Calling .Close
	I0806 08:40:16.629348   78922 main.go:141] libmachine: Successfully made call to close driver server
	I0806 08:40:16.629375   78922 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 08:40:16.629385   78922 main.go:141] libmachine: Making call to close driver server
	I0806 08:40:16.629396   78922 main.go:141] libmachine: (no-preload-703340) Calling .Close
	I0806 08:40:16.629406   78922 main.go:141] libmachine: (no-preload-703340) DBG | Closing plugin on server side
	I0806 08:40:16.629673   78922 main.go:141] libmachine: Successfully made call to close driver server
	I0806 08:40:16.629690   78922 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 08:40:16.629702   78922 addons.go:475] Verifying addon metrics-server=true in "no-preload-703340"
	I0806 08:40:16.631683   78922 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0806 08:40:16.632723   78922 addons.go:510] duration metric: took 1.630970284s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0806 08:40:17.264829   78922 pod_ready.go:102] pod "coredns-6f6b679f8f-9dt4q" in "kube-system" namespace has status "Ready":"False"
	I0806 08:40:19.754652   78922 pod_ready.go:102] pod "coredns-6f6b679f8f-9dt4q" in "kube-system" namespace has status "Ready":"False"
	I0806 08:40:21.753116   78922 pod_ready.go:92] pod "coredns-6f6b679f8f-9dt4q" in "kube-system" namespace has status "Ready":"True"
	I0806 08:40:21.753139   78922 pod_ready.go:81] duration metric: took 6.506065964s for pod "coredns-6f6b679f8f-9dt4q" in "kube-system" namespace to be "Ready" ...
	I0806 08:40:21.753149   78922 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6f6b679f8f-mpfwj" in "kube-system" namespace to be "Ready" ...
	I0806 08:40:21.757521   78922 pod_ready.go:92] pod "coredns-6f6b679f8f-mpfwj" in "kube-system" namespace has status "Ready":"True"
	I0806 08:40:21.757542   78922 pod_ready.go:81] duration metric: took 4.3867ms for pod "coredns-6f6b679f8f-mpfwj" in "kube-system" namespace to be "Ready" ...
	I0806 08:40:21.757553   78922 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-703340" in "kube-system" namespace to be "Ready" ...
	I0806 08:40:21.762161   78922 pod_ready.go:92] pod "etcd-no-preload-703340" in "kube-system" namespace has status "Ready":"True"
	I0806 08:40:21.762179   78922 pod_ready.go:81] duration metric: took 4.619377ms for pod "etcd-no-preload-703340" in "kube-system" namespace to be "Ready" ...
	I0806 08:40:21.762188   78922 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-703340" in "kube-system" namespace to be "Ready" ...
	I0806 08:40:21.766394   78922 pod_ready.go:92] pod "kube-apiserver-no-preload-703340" in "kube-system" namespace has status "Ready":"True"
	I0806 08:40:21.766411   78922 pod_ready.go:81] duration metric: took 4.216869ms for pod "kube-apiserver-no-preload-703340" in "kube-system" namespace to be "Ready" ...
	I0806 08:40:21.766421   78922 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-703340" in "kube-system" namespace to be "Ready" ...
	I0806 08:40:21.770569   78922 pod_ready.go:92] pod "kube-controller-manager-no-preload-703340" in "kube-system" namespace has status "Ready":"True"
	I0806 08:40:21.770592   78922 pod_ready.go:81] duration metric: took 4.160997ms for pod "kube-controller-manager-no-preload-703340" in "kube-system" namespace to be "Ready" ...
	I0806 08:40:21.770600   78922 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zsv9x" in "kube-system" namespace to be "Ready" ...
	I0806 08:40:22.151795   78922 pod_ready.go:92] pod "kube-proxy-zsv9x" in "kube-system" namespace has status "Ready":"True"
	I0806 08:40:22.151819   78922 pod_ready.go:81] duration metric: took 381.212951ms for pod "kube-proxy-zsv9x" in "kube-system" namespace to be "Ready" ...
	I0806 08:40:22.151829   78922 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-703340" in "kube-system" namespace to be "Ready" ...
	I0806 08:40:22.550798   78922 pod_ready.go:92] pod "kube-scheduler-no-preload-703340" in "kube-system" namespace has status "Ready":"True"
	I0806 08:40:22.550821   78922 pod_ready.go:81] duration metric: took 398.985533ms for pod "kube-scheduler-no-preload-703340" in "kube-system" namespace to be "Ready" ...
	I0806 08:40:22.550830   78922 pod_ready.go:38] duration metric: took 7.310852309s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0806 08:40:22.550842   78922 api_server.go:52] waiting for apiserver process to appear ...
	I0806 08:40:22.550888   78922 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 08:40:22.567410   78922 api_server.go:72] duration metric: took 7.565683505s to wait for apiserver process to appear ...
	I0806 08:40:22.567441   78922 api_server.go:88] waiting for apiserver healthz status ...
	I0806 08:40:22.567464   78922 api_server.go:253] Checking apiserver healthz at https://192.168.72.126:8443/healthz ...
	I0806 08:40:22.572514   78922 api_server.go:279] https://192.168.72.126:8443/healthz returned 200:
	ok
	I0806 08:40:22.573572   78922 api_server.go:141] control plane version: v1.31.0-rc.0
	I0806 08:40:22.573596   78922 api_server.go:131] duration metric: took 6.147703ms to wait for apiserver health ...
	I0806 08:40:22.573606   78922 system_pods.go:43] waiting for kube-system pods to appear ...
	I0806 08:40:22.754910   78922 system_pods.go:59] 9 kube-system pods found
	I0806 08:40:22.754937   78922 system_pods.go:61] "coredns-6f6b679f8f-9dt4q" [a62f9c28-2e06-4072-b459-3a879cba65d6] Running
	I0806 08:40:22.754942   78922 system_pods.go:61] "coredns-6f6b679f8f-mpfwj" [9da40ee8-73c2-4479-a2aa-32a2dce7e21d] Running
	I0806 08:40:22.754945   78922 system_pods.go:61] "etcd-no-preload-703340" [2b89a18a-69f6-429b-8616-b84bf9e666b4] Running
	I0806 08:40:22.754950   78922 system_pods.go:61] "kube-apiserver-no-preload-703340" [8b863a60-03ad-4d18-855a-12d29dc63612] Running
	I0806 08:40:22.754953   78922 system_pods.go:61] "kube-controller-manager-no-preload-703340" [8490ec19-da74-4c39-9a69-17918506ea10] Running
	I0806 08:40:22.754956   78922 system_pods.go:61] "kube-proxy-zsv9x" [2859a7f2-f880-4be4-806e-163eb9caf535] Running
	I0806 08:40:22.754959   78922 system_pods.go:61] "kube-scheduler-no-preload-703340" [e003ccd3-48e7-4e4e-9f05-166003935575] Running
	I0806 08:40:22.754964   78922 system_pods.go:61] "metrics-server-6867b74b74-tqnjm" [4ab2bdd5-bd70-445a-84ac-6a45bdaf06a7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0806 08:40:22.754972   78922 system_pods.go:61] "storage-provisioner" [5497ddbc-8ed5-4b42-80e8-81aa7c1b8fa7] Running
	I0806 08:40:22.754980   78922 system_pods.go:74] duration metric: took 181.367063ms to wait for pod list to return data ...
	I0806 08:40:22.754990   78922 default_sa.go:34] waiting for default service account to be created ...
	I0806 08:40:22.951215   78922 default_sa.go:45] found service account: "default"
	I0806 08:40:22.951242   78922 default_sa.go:55] duration metric: took 196.244443ms for default service account to be created ...
	I0806 08:40:22.951250   78922 system_pods.go:116] waiting for k8s-apps to be running ...
	I0806 08:40:23.154822   78922 system_pods.go:86] 9 kube-system pods found
	I0806 08:40:23.154852   78922 system_pods.go:89] "coredns-6f6b679f8f-9dt4q" [a62f9c28-2e06-4072-b459-3a879cba65d6] Running
	I0806 08:40:23.154858   78922 system_pods.go:89] "coredns-6f6b679f8f-mpfwj" [9da40ee8-73c2-4479-a2aa-32a2dce7e21d] Running
	I0806 08:40:23.154862   78922 system_pods.go:89] "etcd-no-preload-703340" [2b89a18a-69f6-429b-8616-b84bf9e666b4] Running
	I0806 08:40:23.154865   78922 system_pods.go:89] "kube-apiserver-no-preload-703340" [8b863a60-03ad-4d18-855a-12d29dc63612] Running
	I0806 08:40:23.154870   78922 system_pods.go:89] "kube-controller-manager-no-preload-703340" [8490ec19-da74-4c39-9a69-17918506ea10] Running
	I0806 08:40:23.154875   78922 system_pods.go:89] "kube-proxy-zsv9x" [2859a7f2-f880-4be4-806e-163eb9caf535] Running
	I0806 08:40:23.154879   78922 system_pods.go:89] "kube-scheduler-no-preload-703340" [e003ccd3-48e7-4e4e-9f05-166003935575] Running
	I0806 08:40:23.154886   78922 system_pods.go:89] "metrics-server-6867b74b74-tqnjm" [4ab2bdd5-bd70-445a-84ac-6a45bdaf06a7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0806 08:40:23.154890   78922 system_pods.go:89] "storage-provisioner" [5497ddbc-8ed5-4b42-80e8-81aa7c1b8fa7] Running
	I0806 08:40:23.154898   78922 system_pods.go:126] duration metric: took 203.642884ms to wait for k8s-apps to be running ...
	I0806 08:40:23.154905   78922 system_svc.go:44] waiting for kubelet service to be running ....
	I0806 08:40:23.154948   78922 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0806 08:40:23.170577   78922 system_svc.go:56] duration metric: took 15.661926ms WaitForService to wait for kubelet
	I0806 08:40:23.170609   78922 kubeadm.go:582] duration metric: took 8.168885934s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0806 08:40:23.170631   78922 node_conditions.go:102] verifying NodePressure condition ...
	I0806 08:40:23.352727   78922 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0806 08:40:23.352753   78922 node_conditions.go:123] node cpu capacity is 2
	I0806 08:40:23.352763   78922 node_conditions.go:105] duration metric: took 182.12736ms to run NodePressure ...
	I0806 08:40:23.352774   78922 start.go:241] waiting for startup goroutines ...
	I0806 08:40:23.352780   78922 start.go:246] waiting for cluster config update ...
	I0806 08:40:23.352790   78922 start.go:255] writing updated cluster config ...
	I0806 08:40:23.353072   78922 ssh_runner.go:195] Run: rm -f paused
	I0806 08:40:23.401609   78922 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-rc.0 (minor skew: 1)
	I0806 08:40:23.403639   78922 out.go:177] * Done! kubectl is now configured to use "no-preload-703340" cluster and "default" namespace by default
	I0806 08:40:46.706828   79927 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0806 08:40:46.707130   79927 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0806 08:40:46.707156   79927 kubeadm.go:310] 
	I0806 08:40:46.707218   79927 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0806 08:40:46.707278   79927 kubeadm.go:310] 		timed out waiting for the condition
	I0806 08:40:46.707304   79927 kubeadm.go:310] 
	I0806 08:40:46.707381   79927 kubeadm.go:310] 	This error is likely caused by:
	I0806 08:40:46.707439   79927 kubeadm.go:310] 		- The kubelet is not running
	I0806 08:40:46.707569   79927 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0806 08:40:46.707580   79927 kubeadm.go:310] 
	I0806 08:40:46.707705   79927 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0806 08:40:46.707756   79927 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0806 08:40:46.707805   79927 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0806 08:40:46.707814   79927 kubeadm.go:310] 
	I0806 08:40:46.707974   79927 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0806 08:40:46.708118   79927 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0806 08:40:46.708130   79927 kubeadm.go:310] 
	I0806 08:40:46.708297   79927 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0806 08:40:46.708448   79927 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0806 08:40:46.708577   79927 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0806 08:40:46.708673   79927 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0806 08:40:46.708684   79927 kubeadm.go:310] 
	I0806 08:40:46.709441   79927 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0806 08:40:46.709593   79927 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0806 08:40:46.709791   79927 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0806 08:40:46.709862   79927 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0806 08:40:46.709928   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0806 08:40:47.169443   79927 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0806 08:40:47.187784   79927 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0806 08:40:47.199239   79927 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0806 08:40:47.199267   79927 kubeadm.go:157] found existing configuration files:
	
	I0806 08:40:47.199325   79927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0806 08:40:47.209294   79927 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0806 08:40:47.209361   79927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0806 08:40:47.219483   79927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0806 08:40:47.229674   79927 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0806 08:40:47.229732   79927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0806 08:40:47.240729   79927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0806 08:40:47.250964   79927 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0806 08:40:47.251018   79927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0806 08:40:47.261322   79927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0806 08:40:47.271411   79927 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0806 08:40:47.271502   79927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0806 08:40:47.281986   79927 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0806 08:40:47.514409   79927 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0806 08:42:43.760446   79927 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0806 08:42:43.760586   79927 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0806 08:42:43.762233   79927 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0806 08:42:43.762301   79927 kubeadm.go:310] [preflight] Running pre-flight checks
	I0806 08:42:43.762391   79927 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0806 08:42:43.762504   79927 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0806 08:42:43.762602   79927 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0806 08:42:43.762653   79927 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0806 08:42:43.764435   79927 out.go:204]   - Generating certificates and keys ...
	I0806 08:42:43.764523   79927 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0806 08:42:43.764587   79927 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0806 08:42:43.764684   79927 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0806 08:42:43.764761   79927 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0806 08:42:43.764833   79927 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0806 08:42:43.764905   79927 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0806 08:42:43.765002   79927 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0806 08:42:43.765100   79927 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0806 08:42:43.765202   79927 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0806 08:42:43.765338   79927 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0806 08:42:43.765400   79927 kubeadm.go:310] [certs] Using the existing "sa" key
	I0806 08:42:43.765516   79927 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0806 08:42:43.765601   79927 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0806 08:42:43.765680   79927 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0806 08:42:43.765774   79927 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0806 08:42:43.765854   79927 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0806 08:42:43.766002   79927 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0806 08:42:43.766133   79927 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0806 08:42:43.766194   79927 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0806 08:42:43.766273   79927 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0806 08:42:43.767861   79927 out.go:204]   - Booting up control plane ...
	I0806 08:42:43.767957   79927 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0806 08:42:43.768051   79927 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0806 08:42:43.768136   79927 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0806 08:42:43.768232   79927 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0806 08:42:43.768487   79927 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0806 08:42:43.768559   79927 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0806 08:42:43.768635   79927 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0806 08:42:43.768862   79927 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0806 08:42:43.768945   79927 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0806 08:42:43.769124   79927 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0806 08:42:43.769195   79927 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0806 08:42:43.769379   79927 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0806 08:42:43.769463   79927 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0806 08:42:43.769721   79927 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0806 08:42:43.769823   79927 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0806 08:42:43.770089   79927 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0806 08:42:43.770102   79927 kubeadm.go:310] 
	I0806 08:42:43.770161   79927 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0806 08:42:43.770223   79927 kubeadm.go:310] 		timed out waiting for the condition
	I0806 08:42:43.770234   79927 kubeadm.go:310] 
	I0806 08:42:43.770274   79927 kubeadm.go:310] 	This error is likely caused by:
	I0806 08:42:43.770303   79927 kubeadm.go:310] 		- The kubelet is not running
	I0806 08:42:43.770389   79927 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0806 08:42:43.770399   79927 kubeadm.go:310] 
	I0806 08:42:43.770487   79927 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0806 08:42:43.770519   79927 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0806 08:42:43.770547   79927 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0806 08:42:43.770554   79927 kubeadm.go:310] 
	I0806 08:42:43.770657   79927 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0806 08:42:43.770756   79927 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0806 08:42:43.770769   79927 kubeadm.go:310] 
	I0806 08:42:43.770932   79927 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0806 08:42:43.771079   79927 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0806 08:42:43.771178   79927 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0806 08:42:43.771266   79927 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0806 08:42:43.771315   79927 kubeadm.go:310] 
	I0806 08:42:43.771336   79927 kubeadm.go:394] duration metric: took 7m57.719144545s to StartCluster
	I0806 08:42:43.771381   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 08:42:43.771451   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 08:42:43.817981   79927 cri.go:89] found id: ""
	I0806 08:42:43.818010   79927 logs.go:276] 0 containers: []
	W0806 08:42:43.818021   79927 logs.go:278] No container was found matching "kube-apiserver"
	I0806 08:42:43.818028   79927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 08:42:43.818087   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 08:42:43.855481   79927 cri.go:89] found id: ""
	I0806 08:42:43.855506   79927 logs.go:276] 0 containers: []
	W0806 08:42:43.855517   79927 logs.go:278] No container was found matching "etcd"
	I0806 08:42:43.855524   79927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 08:42:43.855584   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 08:42:43.893110   79927 cri.go:89] found id: ""
	I0806 08:42:43.893141   79927 logs.go:276] 0 containers: []
	W0806 08:42:43.893152   79927 logs.go:278] No container was found matching "coredns"
	I0806 08:42:43.893172   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 08:42:43.893255   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 08:42:43.929785   79927 cri.go:89] found id: ""
	I0806 08:42:43.929844   79927 logs.go:276] 0 containers: []
	W0806 08:42:43.929853   79927 logs.go:278] No container was found matching "kube-scheduler"
	I0806 08:42:43.929858   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 08:42:43.929921   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 08:42:43.970501   79927 cri.go:89] found id: ""
	I0806 08:42:43.970533   79927 logs.go:276] 0 containers: []
	W0806 08:42:43.970541   79927 logs.go:278] No container was found matching "kube-proxy"
	I0806 08:42:43.970546   79927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 08:42:43.970590   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 08:42:44.007344   79927 cri.go:89] found id: ""
	I0806 08:42:44.007373   79927 logs.go:276] 0 containers: []
	W0806 08:42:44.007382   79927 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 08:42:44.007388   79927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 08:42:44.007434   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 08:42:44.043724   79927 cri.go:89] found id: ""
	I0806 08:42:44.043747   79927 logs.go:276] 0 containers: []
	W0806 08:42:44.043755   79927 logs.go:278] No container was found matching "kindnet"
	I0806 08:42:44.043761   79927 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0806 08:42:44.043810   79927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0806 08:42:44.084642   79927 cri.go:89] found id: ""
	I0806 08:42:44.084682   79927 logs.go:276] 0 containers: []
	W0806 08:42:44.084694   79927 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0806 08:42:44.084709   79927 logs.go:123] Gathering logs for kubelet ...
	I0806 08:42:44.084724   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 08:42:44.157081   79927 logs.go:123] Gathering logs for dmesg ...
	I0806 08:42:44.157125   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 08:42:44.175703   79927 logs.go:123] Gathering logs for describe nodes ...
	I0806 08:42:44.175746   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 08:42:44.275665   79927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 08:42:44.275699   79927 logs.go:123] Gathering logs for CRI-O ...
	I0806 08:42:44.275717   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 08:42:44.385031   79927 logs.go:123] Gathering logs for container status ...
	I0806 08:42:44.385076   79927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0806 08:42:44.425573   79927 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0806 08:42:44.425616   79927 out.go:239] * 
	W0806 08:42:44.425683   79927 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0806 08:42:44.425713   79927 out.go:239] * 
	W0806 08:42:44.426633   79927 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0806 08:42:44.429961   79927 out.go:177] 
	W0806 08:42:44.431352   79927 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0806 08:42:44.431398   79927 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0806 08:42:44.431414   79927 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0806 08:42:44.432963   79927 out.go:177] 
	
	
	==> CRI-O <==
	Aug 06 08:54:15 old-k8s-version-409903 crio[649]: time="2024-08-06 08:54:15.277133741Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722934455277093307,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fff4a4b9-e2d1-41dd-92e4-3022d4f5746a name=/runtime.v1.ImageService/ImageFsInfo
	Aug 06 08:54:15 old-k8s-version-409903 crio[649]: time="2024-08-06 08:54:15.277808439Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=29a53cac-ca33-41fa-9b6c-9753a7fcf23e name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 08:54:15 old-k8s-version-409903 crio[649]: time="2024-08-06 08:54:15.277892008Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=29a53cac-ca33-41fa-9b6c-9753a7fcf23e name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 08:54:15 old-k8s-version-409903 crio[649]: time="2024-08-06 08:54:15.277950251Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=29a53cac-ca33-41fa-9b6c-9753a7fcf23e name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 08:54:15 old-k8s-version-409903 crio[649]: time="2024-08-06 08:54:15.316916699Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2f8f722d-10d6-4c08-af7e-7ee9b3987f22 name=/runtime.v1.RuntimeService/Version
	Aug 06 08:54:15 old-k8s-version-409903 crio[649]: time="2024-08-06 08:54:15.317000914Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2f8f722d-10d6-4c08-af7e-7ee9b3987f22 name=/runtime.v1.RuntimeService/Version
	Aug 06 08:54:15 old-k8s-version-409903 crio[649]: time="2024-08-06 08:54:15.318307840Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8a89772d-a647-4713-a8b8-c8db7bbf8fca name=/runtime.v1.ImageService/ImageFsInfo
	Aug 06 08:54:15 old-k8s-version-409903 crio[649]: time="2024-08-06 08:54:15.318921445Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722934455318889325,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8a89772d-a647-4713-a8b8-c8db7bbf8fca name=/runtime.v1.ImageService/ImageFsInfo
	Aug 06 08:54:15 old-k8s-version-409903 crio[649]: time="2024-08-06 08:54:15.319876106Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5d484610-889b-4784-9830-3865f16ec624 name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 08:54:15 old-k8s-version-409903 crio[649]: time="2024-08-06 08:54:15.319953936Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5d484610-889b-4784-9830-3865f16ec624 name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 08:54:15 old-k8s-version-409903 crio[649]: time="2024-08-06 08:54:15.319996955Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=5d484610-889b-4784-9830-3865f16ec624 name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 08:54:15 old-k8s-version-409903 crio[649]: time="2024-08-06 08:54:15.353951265Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f29d4ae2-98b5-4773-9f41-519ae3b0ea06 name=/runtime.v1.RuntimeService/Version
	Aug 06 08:54:15 old-k8s-version-409903 crio[649]: time="2024-08-06 08:54:15.354039976Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f29d4ae2-98b5-4773-9f41-519ae3b0ea06 name=/runtime.v1.RuntimeService/Version
	Aug 06 08:54:15 old-k8s-version-409903 crio[649]: time="2024-08-06 08:54:15.355064329Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6f064bcd-853f-44f5-904c-e93e44cb8d8a name=/runtime.v1.ImageService/ImageFsInfo
	Aug 06 08:54:15 old-k8s-version-409903 crio[649]: time="2024-08-06 08:54:15.355485081Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722934455355447217,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6f064bcd-853f-44f5-904c-e93e44cb8d8a name=/runtime.v1.ImageService/ImageFsInfo
	Aug 06 08:54:15 old-k8s-version-409903 crio[649]: time="2024-08-06 08:54:15.356163275Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=85f5a6a0-bd1b-4a5d-be3c-a487aa558443 name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 08:54:15 old-k8s-version-409903 crio[649]: time="2024-08-06 08:54:15.356221886Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=85f5a6a0-bd1b-4a5d-be3c-a487aa558443 name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 08:54:15 old-k8s-version-409903 crio[649]: time="2024-08-06 08:54:15.356309835Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=85f5a6a0-bd1b-4a5d-be3c-a487aa558443 name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 08:54:15 old-k8s-version-409903 crio[649]: time="2024-08-06 08:54:15.395786607Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=864d2f75-0c01-41b6-b22b-e559651487af name=/runtime.v1.RuntimeService/Version
	Aug 06 08:54:15 old-k8s-version-409903 crio[649]: time="2024-08-06 08:54:15.395897478Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=864d2f75-0c01-41b6-b22b-e559651487af name=/runtime.v1.RuntimeService/Version
	Aug 06 08:54:15 old-k8s-version-409903 crio[649]: time="2024-08-06 08:54:15.397431933Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4b7db6d3-a196-42e9-b19a-f4e74a178efe name=/runtime.v1.ImageService/ImageFsInfo
	Aug 06 08:54:15 old-k8s-version-409903 crio[649]: time="2024-08-06 08:54:15.398099869Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722934455398067608,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4b7db6d3-a196-42e9-b19a-f4e74a178efe name=/runtime.v1.ImageService/ImageFsInfo
	Aug 06 08:54:15 old-k8s-version-409903 crio[649]: time="2024-08-06 08:54:15.399002172Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d8bb899d-bee2-4548-a425-e1748e390250 name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 08:54:15 old-k8s-version-409903 crio[649]: time="2024-08-06 08:54:15.399077749Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d8bb899d-bee2-4548-a425-e1748e390250 name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 08:54:15 old-k8s-version-409903 crio[649]: time="2024-08-06 08:54:15.399276914Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=d8bb899d-bee2-4548-a425-e1748e390250 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Aug 6 08:34] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.055941] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041778] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.077438] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.582101] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.479002] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000016] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.372783] systemd-fstab-generator[571]: Ignoring "noauto" option for root device
	[  +0.066842] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.064276] systemd-fstab-generator[583]: Ignoring "noauto" option for root device
	[  +0.187929] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.204236] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.277743] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +6.753482] systemd-fstab-generator[840]: Ignoring "noauto" option for root device
	[  +0.064437] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.801648] systemd-fstab-generator[963]: Ignoring "noauto" option for root device
	[ +10.357148] kauditd_printk_skb: 46 callbacks suppressed
	[Aug 6 08:38] systemd-fstab-generator[5042]: Ignoring "noauto" option for root device
	[Aug 6 08:40] systemd-fstab-generator[5323]: Ignoring "noauto" option for root device
	[  +0.070161] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 08:54:15 up 19 min,  0 users,  load average: 0.05, 0.05, 0.02
	Linux old-k8s-version-409903 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Aug 06 08:54:14 old-k8s-version-409903 kubelet[6822]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Aug 06 08:54:14 old-k8s-version-409903 kubelet[6822]: goroutine 158 [chan receive]:
	Aug 06 08:54:14 old-k8s-version-409903 kubelet[6822]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run.func1(0xc0001000c0, 0xc000c1e750)
	Aug 06 08:54:14 old-k8s-version-409903 kubelet[6822]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:130 +0x34
	Aug 06 08:54:14 old-k8s-version-409903 kubelet[6822]: created by k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run
	Aug 06 08:54:14 old-k8s-version-409903 kubelet[6822]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:129 +0xa5
	Aug 06 08:54:14 old-k8s-version-409903 kubelet[6822]: goroutine 159 [select]:
	Aug 06 08:54:14 old-k8s-version-409903 kubelet[6822]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000d29ef0, 0x4f0ac20, 0xc000c20a00, 0x1, 0xc0001000c0)
	Aug 06 08:54:14 old-k8s-version-409903 kubelet[6822]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x149
	Aug 06 08:54:14 old-k8s-version-409903 kubelet[6822]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc0000e9340, 0xc0001000c0)
	Aug 06 08:54:14 old-k8s-version-409903 kubelet[6822]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Aug 06 08:54:14 old-k8s-version-409903 kubelet[6822]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Aug 06 08:54:14 old-k8s-version-409903 kubelet[6822]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Aug 06 08:54:14 old-k8s-version-409903 kubelet[6822]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000d1c250, 0xc000d0d3c0)
	Aug 06 08:54:14 old-k8s-version-409903 kubelet[6822]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Aug 06 08:54:14 old-k8s-version-409903 kubelet[6822]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Aug 06 08:54:14 old-k8s-version-409903 kubelet[6822]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Aug 06 08:54:14 old-k8s-version-409903 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 140.
	Aug 06 08:54:14 old-k8s-version-409903 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Aug 06 08:54:14 old-k8s-version-409903 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Aug 06 08:54:14 old-k8s-version-409903 kubelet[6850]: I0806 08:54:14.974258    6850 server.go:416] Version: v1.20.0
	Aug 06 08:54:14 old-k8s-version-409903 kubelet[6850]: I0806 08:54:14.974963    6850 server.go:837] Client rotation is on, will bootstrap in background
	Aug 06 08:54:14 old-k8s-version-409903 kubelet[6850]: I0806 08:54:14.978024    6850 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Aug 06 08:54:14 old-k8s-version-409903 kubelet[6850]: I0806 08:54:14.979353    6850 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Aug 06 08:54:14 old-k8s-version-409903 kubelet[6850]: W0806 08:54:14.979489    6850 manager.go:159] Cannot detect current cgroup on cgroup v2
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-409903 -n old-k8s-version-409903
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-409903 -n old-k8s-version-409903: exit status 2 (230.650003ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-409903" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (145.28s)
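A follow-up sketch, not part of the recorded run: the log above ends with minikube's own hint about the kubelet cgroup driver. Assuming the old-k8s-version-409903 profile were still present, the checks the kubeadm output and the "Suggestion:" line recommend could be replayed roughly as follows (the cgroup-driver flag is only the hint quoted from the log, not a confirmed fix):

	# inspect the kubelet on the node, as the kubeadm output suggests
	out/minikube-linux-amd64 -p old-k8s-version-409903 ssh "sudo systemctl status kubelet"
	out/minikube-linux-amd64 -p old-k8s-version-409903 ssh "sudo journalctl -xeu kubelet"
	# look for crashed control-plane containers via crictl, as quoted in the log
	out/minikube-linux-amd64 -p old-k8s-version-409903 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"
	# retry with the cgroup driver hint from the Suggestion line
	out/minikube-linux-amd64 start -p old-k8s-version-409903 --extra-config=kubelet.cgroup-driver=systemd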

                                                
                                    

Test pass (257/326)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 57.83
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.13
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.12
12 TestDownloadOnly/v1.30.3/json-events 16.67
13 TestDownloadOnly/v1.30.3/preload-exists 0
17 TestDownloadOnly/v1.30.3/LogsDuration 0.06
18 TestDownloadOnly/v1.30.3/DeleteAll 0.13
19 TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds 0.12
21 TestDownloadOnly/v1.31.0-rc.0/json-events 52.48
22 TestDownloadOnly/v1.31.0-rc.0/preload-exists 0
26 TestDownloadOnly/v1.31.0-rc.0/LogsDuration 0.06
27 TestDownloadOnly/v1.31.0-rc.0/DeleteAll 0.13
28 TestDownloadOnly/v1.31.0-rc.0/DeleteAlwaysSucceeds 0.12
30 TestBinaryMirror 0.57
31 TestOffline 93.16
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
36 TestAddons/Setup 147.17
40 TestAddons/serial/GCPAuth/Namespaces 0.14
42 TestAddons/parallel/Registry 16.72
44 TestAddons/parallel/InspektorGadget 11.77
46 TestAddons/parallel/HelmTiller 12.5
48 TestAddons/parallel/CSI 82.37
49 TestAddons/parallel/Headlamp 17.7
50 TestAddons/parallel/CloudSpanner 5.59
51 TestAddons/parallel/LocalPath 57
52 TestAddons/parallel/NvidiaDevicePlugin 6.51
53 TestAddons/parallel/Yakd 11.87
55 TestCertOptions 70
56 TestCertExpiration 334.77
58 TestForceSystemdFlag 102.58
59 TestForceSystemdEnv 46.46
61 TestKVMDriverInstallOrUpdate 4.51
65 TestErrorSpam/setup 39.69
66 TestErrorSpam/start 0.34
67 TestErrorSpam/status 0.72
68 TestErrorSpam/pause 1.58
69 TestErrorSpam/unpause 1.62
70 TestErrorSpam/stop 5.51
73 TestFunctional/serial/CopySyncFile 0
74 TestFunctional/serial/StartWithProxy 59.54
75 TestFunctional/serial/AuditLog 0
76 TestFunctional/serial/SoftStart 48.21
77 TestFunctional/serial/KubeContext 0.04
78 TestFunctional/serial/KubectlGetPods 0.07
81 TestFunctional/serial/CacheCmd/cache/add_remote 3.11
82 TestFunctional/serial/CacheCmd/cache/add_local 2.2
83 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
84 TestFunctional/serial/CacheCmd/cache/list 0.04
85 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.23
86 TestFunctional/serial/CacheCmd/cache/cache_reload 1.58
87 TestFunctional/serial/CacheCmd/cache/delete 0.09
88 TestFunctional/serial/MinikubeKubectlCmd 0.1
89 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
90 TestFunctional/serial/ExtraConfig 81.52
91 TestFunctional/serial/ComponentHealth 0.07
92 TestFunctional/serial/LogsCmd 1.42
93 TestFunctional/serial/LogsFileCmd 1.43
94 TestFunctional/serial/InvalidService 3.54
96 TestFunctional/parallel/ConfigCmd 0.29
97 TestFunctional/parallel/DashboardCmd 11.13
98 TestFunctional/parallel/DryRun 0.26
99 TestFunctional/parallel/InternationalLanguage 0.13
100 TestFunctional/parallel/StatusCmd 1
104 TestFunctional/parallel/ServiceCmdConnect 37.44
105 TestFunctional/parallel/AddonsCmd 0.11
106 TestFunctional/parallel/PersistentVolumeClaim 50.19
108 TestFunctional/parallel/SSHCmd 0.38
109 TestFunctional/parallel/CpCmd 1.23
110 TestFunctional/parallel/MySQL 36.58
111 TestFunctional/parallel/FileSync 0.2
112 TestFunctional/parallel/CertSync 1.22
116 TestFunctional/parallel/NodeLabels 0.08
118 TestFunctional/parallel/NonActiveRuntimeDisabled 0.43
120 TestFunctional/parallel/License 0.62
121 TestFunctional/parallel/ImageCommands/ImageListShort 0.22
122 TestFunctional/parallel/ImageCommands/ImageListTable 0.29
123 TestFunctional/parallel/ImageCommands/ImageListJson 0.27
124 TestFunctional/parallel/ImageCommands/ImageListYaml 0.24
125 TestFunctional/parallel/ImageCommands/ImageBuild 6
126 TestFunctional/parallel/ImageCommands/Setup 1.99
127 TestFunctional/parallel/Version/short 0.04
128 TestFunctional/parallel/Version/components 0.51
129 TestFunctional/parallel/UpdateContextCmd/no_changes 0.09
130 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.09
131 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.09
132 TestFunctional/parallel/ServiceCmd/DeployApp 45.15
133 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.25
143 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.88
144 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.77
145 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.49
146 TestFunctional/parallel/ImageCommands/ImageRemove 0.44
147 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.89
148 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.56
149 TestFunctional/parallel/ProfileCmd/profile_not_create 0.33
150 TestFunctional/parallel/ProfileCmd/profile_list 0.27
151 TestFunctional/parallel/ProfileCmd/profile_json_output 0.32
152 TestFunctional/parallel/MountCmd/any-port 8.46
153 TestFunctional/parallel/ServiceCmd/List 0.43
154 TestFunctional/parallel/ServiceCmd/JSONOutput 0.47
155 TestFunctional/parallel/ServiceCmd/HTTPS 0.47
156 TestFunctional/parallel/ServiceCmd/Format 0.4
157 TestFunctional/parallel/MountCmd/specific-port 1.6
158 TestFunctional/parallel/ServiceCmd/URL 0.32
159 TestFunctional/parallel/MountCmd/VerifyCleanup 1.4
160 TestFunctional/delete_echo-server_images 0.04
161 TestFunctional/delete_my-image_image 0.02
162 TestFunctional/delete_minikube_cached_images 0.02
166 TestMultiControlPlane/serial/StartCluster 277.49
167 TestMultiControlPlane/serial/DeployApp 8.09
168 TestMultiControlPlane/serial/PingHostFromPods 1.23
169 TestMultiControlPlane/serial/AddWorkerNode 58.79
170 TestMultiControlPlane/serial/NodeLabels 0.07
171 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.51
172 TestMultiControlPlane/serial/CopyFile 12.54
174 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 3.47
176 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.39
178 TestMultiControlPlane/serial/DeleteSecondaryNode 17.22
179 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.36
181 TestMultiControlPlane/serial/RestartCluster 277.52
182 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.38
183 TestMultiControlPlane/serial/AddSecondaryNode 79.68
184 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.52
188 TestJSONOutput/start/Command 58.09
189 TestJSONOutput/start/Audit 0
191 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
192 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
194 TestJSONOutput/pause/Command 0.76
195 TestJSONOutput/pause/Audit 0
197 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
198 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
200 TestJSONOutput/unpause/Command 0.64
201 TestJSONOutput/unpause/Audit 0
203 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
204 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
206 TestJSONOutput/stop/Command 9.35
207 TestJSONOutput/stop/Audit 0
209 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
210 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
211 TestErrorJSONOutput 0.19
216 TestMainNoArgs 0.04
217 TestMinikubeProfile 89.71
220 TestMountStart/serial/StartWithMountFirst 29.95
221 TestMountStart/serial/VerifyMountFirst 0.37
222 TestMountStart/serial/StartWithMountSecond 28.48
223 TestMountStart/serial/VerifyMountSecond 0.37
224 TestMountStart/serial/DeleteFirst 0.72
225 TestMountStart/serial/VerifyMountPostDelete 0.37
226 TestMountStart/serial/Stop 1.28
227 TestMountStart/serial/RestartStopped 23.57
228 TestMountStart/serial/VerifyMountPostStop 0.36
231 TestMultiNode/serial/FreshStart2Nodes 121.79
232 TestMultiNode/serial/DeployApp2Nodes 5.7
233 TestMultiNode/serial/PingHostFrom2Pods 0.8
234 TestMultiNode/serial/AddNode 53.53
235 TestMultiNode/serial/MultiNodeLabels 0.06
236 TestMultiNode/serial/ProfileList 0.21
237 TestMultiNode/serial/CopyFile 6.9
238 TestMultiNode/serial/StopNode 2.36
239 TestMultiNode/serial/StartAfterStop 39.83
241 TestMultiNode/serial/DeleteNode 2.47
243 TestMultiNode/serial/RestartMultiNode 185.18
244 TestMultiNode/serial/ValidateNameConflict 46.04
251 TestScheduledStopUnix 114.63
255 TestRunningBinaryUpgrade 204.07
260 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
261 TestNoKubernetes/serial/StartWithK8s 120.52
269 TestNetworkPlugins/group/false 2.74
273 TestNoKubernetes/serial/StartWithStopK8s 42.92
274 TestNoKubernetes/serial/Start 31.91
275 TestNoKubernetes/serial/VerifyK8sNotRunning 0.2
276 TestNoKubernetes/serial/ProfileList 0.8
277 TestNoKubernetes/serial/Stop 2.48
278 TestNoKubernetes/serial/StartNoArgs 60.64
279 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.19
280 TestStoppedBinaryUpgrade/Setup 2.59
281 TestStoppedBinaryUpgrade/Upgrade 107.86
283 TestPause/serial/Start 75.5
291 TestNetworkPlugins/group/auto/Start 82.82
292 TestStoppedBinaryUpgrade/MinikubeLogs 0.87
293 TestNetworkPlugins/group/kindnet/Start 104.75
294 TestPause/serial/SecondStartNoReconfiguration 127.9
295 TestNetworkPlugins/group/auto/KubeletFlags 0.23
296 TestNetworkPlugins/group/auto/NetCatPod 10.26
297 TestNetworkPlugins/group/auto/DNS 0.16
298 TestNetworkPlugins/group/auto/Localhost 0.14
299 TestNetworkPlugins/group/auto/HairPin 0.13
300 TestNetworkPlugins/group/calico/Start 156.99
301 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
302 TestNetworkPlugins/group/kindnet/KubeletFlags 0.2
303 TestNetworkPlugins/group/kindnet/NetCatPod 11.23
304 TestNetworkPlugins/group/kindnet/DNS 0.15
305 TestNetworkPlugins/group/kindnet/Localhost 0.14
306 TestNetworkPlugins/group/kindnet/HairPin 0.13
307 TestNetworkPlugins/group/custom-flannel/Start 134.49
308 TestPause/serial/Pause 0.78
309 TestPause/serial/VerifyStatus 0.27
310 TestPause/serial/Unpause 0.88
311 TestPause/serial/PauseAgain 1.13
312 TestPause/serial/DeletePaused 1.08
313 TestPause/serial/VerifyDeletedResources 1.71
314 TestNetworkPlugins/group/enable-default-cni/Start 142.14
315 TestNetworkPlugins/group/calico/ControllerPod 6.01
316 TestNetworkPlugins/group/calico/KubeletFlags 0.22
317 TestNetworkPlugins/group/calico/NetCatPod 11.3
318 TestNetworkPlugins/group/flannel/Start 85.31
319 TestNetworkPlugins/group/calico/DNS 0.18
320 TestNetworkPlugins/group/calico/Localhost 0.15
321 TestNetworkPlugins/group/calico/HairPin 0.15
322 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.23
323 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.28
324 TestNetworkPlugins/group/custom-flannel/DNS 0.18
325 TestNetworkPlugins/group/custom-flannel/Localhost 0.15
326 TestNetworkPlugins/group/custom-flannel/HairPin 0.15
327 TestNetworkPlugins/group/bridge/Start 66.73
330 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.22
331 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.27
332 TestNetworkPlugins/group/enable-default-cni/DNS 0.19
333 TestNetworkPlugins/group/enable-default-cni/Localhost 0.19
334 TestNetworkPlugins/group/enable-default-cni/HairPin 0.18
336 TestStartStop/group/no-preload/serial/FirstStart 78.08
337 TestNetworkPlugins/group/flannel/ControllerPod 6.01
338 TestNetworkPlugins/group/bridge/KubeletFlags 0.2
339 TestNetworkPlugins/group/bridge/NetCatPod 11.23
340 TestNetworkPlugins/group/flannel/KubeletFlags 0.22
341 TestNetworkPlugins/group/flannel/NetCatPod 11.3
342 TestNetworkPlugins/group/bridge/DNS 0.17
343 TestNetworkPlugins/group/bridge/Localhost 0.13
344 TestNetworkPlugins/group/bridge/HairPin 0.13
345 TestNetworkPlugins/group/flannel/DNS 0.16
346 TestNetworkPlugins/group/flannel/Localhost 0.12
347 TestNetworkPlugins/group/flannel/HairPin 0.14
349 TestStartStop/group/embed-certs/serial/FirstStart 60.21
351 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 117.22
352 TestStartStop/group/no-preload/serial/DeployApp 11.33
353 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.21
355 TestStartStop/group/embed-certs/serial/DeployApp 9.3
356 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.96
358 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 11.32
359 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.95
362 TestStartStop/group/no-preload/serial/SecondStart 687.05
366 TestStartStop/group/embed-certs/serial/SecondStart 561.17
368 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 516.32
369 TestStartStop/group/old-k8s-version/serial/Stop 4.29
370 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.18
381 TestStartStop/group/newest-cni/serial/FirstStart 49.02
382 TestStartStop/group/newest-cni/serial/DeployApp 0
383 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.16
384 TestStartStop/group/newest-cni/serial/Stop 10.38
385 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.18
386 TestStartStop/group/newest-cni/serial/SecondStart 36.68
387 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
388 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
389 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.22
390 TestStartStop/group/newest-cni/serial/Pause 2.38
x
+
TestDownloadOnly/v1.20.0/json-events (57.83s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-284489 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-284489 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (57.833419253s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (57.83s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-284489
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-284489: exit status 85 (58.722504ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-284489 | jenkins | v1.33.1 | 06 Aug 24 07:04 UTC |          |
	|         | -p download-only-284489        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/06 07:04:36
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0806 07:04:36.846650   20258 out.go:291] Setting OutFile to fd 1 ...
	I0806 07:04:36.846753   20258 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 07:04:36.846763   20258 out.go:304] Setting ErrFile to fd 2...
	I0806 07:04:36.846767   20258 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 07:04:36.846941   20258 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19370-13057/.minikube/bin
	W0806 07:04:36.847051   20258 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19370-13057/.minikube/config/config.json: open /home/jenkins/minikube-integration/19370-13057/.minikube/config/config.json: no such file or directory
	I0806 07:04:36.847604   20258 out.go:298] Setting JSON to true
	I0806 07:04:36.848540   20258 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2823,"bootTime":1722925054,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0806 07:04:36.848599   20258 start.go:139] virtualization: kvm guest
	I0806 07:04:36.851204   20258 out.go:97] [download-only-284489] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	W0806 07:04:36.851304   20258 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19370-13057/.minikube/cache/preloaded-tarball: no such file or directory
	I0806 07:04:36.851396   20258 notify.go:220] Checking for updates...
	I0806 07:04:36.852978   20258 out.go:169] MINIKUBE_LOCATION=19370
	I0806 07:04:36.854803   20258 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0806 07:04:36.856297   20258 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19370-13057/kubeconfig
	I0806 07:04:36.857874   20258 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19370-13057/.minikube
	I0806 07:04:36.859279   20258 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0806 07:04:36.861744   20258 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0806 07:04:36.862004   20258 driver.go:392] Setting default libvirt URI to qemu:///system
	I0806 07:04:36.966651   20258 out.go:97] Using the kvm2 driver based on user configuration
	I0806 07:04:36.966680   20258 start.go:297] selected driver: kvm2
	I0806 07:04:36.966686   20258 start.go:901] validating driver "kvm2" against <nil>
	I0806 07:04:36.967022   20258 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 07:04:36.967147   20258 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19370-13057/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0806 07:04:36.981877   20258 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0806 07:04:36.981925   20258 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0806 07:04:36.982414   20258 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0806 07:04:36.982564   20258 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0806 07:04:36.982588   20258 cni.go:84] Creating CNI manager for ""
	I0806 07:04:36.982595   20258 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0806 07:04:36.982606   20258 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0806 07:04:36.982654   20258 start.go:340] cluster config:
	{Name:download-only-284489 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-284489 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 07:04:36.982831   20258 iso.go:125] acquiring lock: {Name:mke58d71de28babe305c5eb0c31c274e9713bb3f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 07:04:36.985117   20258 out.go:97] Downloading VM boot image ...
	I0806 07:04:36.985173   20258 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso.sha256 -> /home/jenkins/minikube-integration/19370-13057/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
	I0806 07:04:47.753522   20258 out.go:97] Starting "download-only-284489" primary control-plane node in "download-only-284489" cluster
	I0806 07:04:47.753540   20258 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0806 07:04:47.862200   20258 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0806 07:04:47.862225   20258 cache.go:56] Caching tarball of preloaded images
	I0806 07:04:47.862402   20258 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0806 07:04:47.864222   20258 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0806 07:04:47.864242   20258 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0806 07:04:47.974022   20258 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/19370-13057/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0806 07:05:04.950861   20258 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0806 07:05:04.950949   20258 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19370-13057/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0806 07:05:05.852324   20258 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0806 07:05:05.852700   20258 profile.go:143] Saving config to /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/download-only-284489/config.json ...
	I0806 07:05:05.852731   20258 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/download-only-284489/config.json: {Name:mk91b74eaf0c9da17f6a26600947adfea98d96c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 07:05:05.852889   20258 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0806 07:05:05.853078   20258 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/19370-13057/.minikube/cache/linux/amd64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-284489 host does not exist
	  To start a cluster, run: "minikube start -p download-only-284489"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-284489
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/json-events (16.67s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-257964 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-257964 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (16.669873068s)
--- PASS: TestDownloadOnly/v1.30.3/json-events (16.67s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/preload-exists
--- PASS: TestDownloadOnly/v1.30.3/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-257964
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-257964: exit status 85 (57.8202ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-284489 | jenkins | v1.33.1 | 06 Aug 24 07:04 UTC |                     |
	|         | -p download-only-284489        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 06 Aug 24 07:05 UTC | 06 Aug 24 07:05 UTC |
	| delete  | -p download-only-284489        | download-only-284489 | jenkins | v1.33.1 | 06 Aug 24 07:05 UTC | 06 Aug 24 07:05 UTC |
	| start   | -o=json --download-only        | download-only-257964 | jenkins | v1.33.1 | 06 Aug 24 07:05 UTC |                     |
	|         | -p download-only-257964        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/06 07:05:34
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0806 07:05:34.991902   20627 out.go:291] Setting OutFile to fd 1 ...
	I0806 07:05:34.991997   20627 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 07:05:34.992004   20627 out.go:304] Setting ErrFile to fd 2...
	I0806 07:05:34.992008   20627 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 07:05:34.992194   20627 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19370-13057/.minikube/bin
	I0806 07:05:34.992755   20627 out.go:298] Setting JSON to true
	I0806 07:05:34.993570   20627 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2881,"bootTime":1722925054,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0806 07:05:34.993625   20627 start.go:139] virtualization: kvm guest
	I0806 07:05:34.995666   20627 out.go:97] [download-only-257964] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0806 07:05:34.995808   20627 notify.go:220] Checking for updates...
	I0806 07:05:34.997257   20627 out.go:169] MINIKUBE_LOCATION=19370
	I0806 07:05:34.998765   20627 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0806 07:05:35.000235   20627 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19370-13057/kubeconfig
	I0806 07:05:35.002134   20627 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19370-13057/.minikube
	I0806 07:05:35.003447   20627 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0806 07:05:35.006036   20627 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0806 07:05:35.006290   20627 driver.go:392] Setting default libvirt URI to qemu:///system
	I0806 07:05:35.039150   20627 out.go:97] Using the kvm2 driver based on user configuration
	I0806 07:05:35.039177   20627 start.go:297] selected driver: kvm2
	I0806 07:05:35.039183   20627 start.go:901] validating driver "kvm2" against <nil>
	I0806 07:05:35.039492   20627 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 07:05:35.039566   20627 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19370-13057/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0806 07:05:35.054906   20627 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0806 07:05:35.054963   20627 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0806 07:05:35.055438   20627 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0806 07:05:35.055567   20627 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0806 07:05:35.055615   20627 cni.go:84] Creating CNI manager for ""
	I0806 07:05:35.055626   20627 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0806 07:05:35.055637   20627 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0806 07:05:35.055690   20627 start.go:340] cluster config:
	{Name:download-only-257964 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:download-only-257964 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 07:05:35.055772   20627 iso.go:125] acquiring lock: {Name:mke58d71de28babe305c5eb0c31c274e9713bb3f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 07:05:35.057483   20627 out.go:97] Starting "download-only-257964" primary control-plane node in "download-only-257964" cluster
	I0806 07:05:35.057504   20627 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0806 07:05:35.217770   20627 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0806 07:05:35.217797   20627 cache.go:56] Caching tarball of preloaded images
	I0806 07:05:35.217973   20627 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0806 07:05:35.219851   20627 out.go:97] Downloading Kubernetes v1.30.3 preload ...
	I0806 07:05:35.219880   20627 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 ...
	I0806 07:05:35.325589   20627 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4?checksum=md5:15191286f02471d9b3ea0b587fcafc39 -> /home/jenkins/minikube-integration/19370-13057/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0806 07:05:49.956187   20627 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 ...
	I0806 07:05:49.956282   20627 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19370-13057/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 ...
	
	
	* The control-plane node download-only-257964 host does not exist
	  To start a cluster, run: "minikube start -p download-only-257964"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.3/LogsDuration (0.06s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.30.3/DeleteAll (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-257964
--- PASS: TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-rc.0/json-events (52.48s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-rc.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-460250 --force --alsologtostderr --kubernetes-version=v1.31.0-rc.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-460250 --force --alsologtostderr --kubernetes-version=v1.31.0-rc.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (52.479429168s)
--- PASS: TestDownloadOnly/v1.31.0-rc.0/json-events (52.48s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-rc.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-rc.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0-rc.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-rc.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-rc.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-460250
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-460250: exit status 85 (59.819563ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-284489 | jenkins | v1.33.1 | 06 Aug 24 07:04 UTC |                     |
	|         | -p download-only-284489           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0      |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|         | --driver=kvm2                     |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.33.1 | 06 Aug 24 07:05 UTC | 06 Aug 24 07:05 UTC |
	| delete  | -p download-only-284489           | download-only-284489 | jenkins | v1.33.1 | 06 Aug 24 07:05 UTC | 06 Aug 24 07:05 UTC |
	| start   | -o=json --download-only           | download-only-257964 | jenkins | v1.33.1 | 06 Aug 24 07:05 UTC |                     |
	|         | -p download-only-257964           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3      |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|         | --driver=kvm2                     |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.33.1 | 06 Aug 24 07:05 UTC | 06 Aug 24 07:05 UTC |
	| delete  | -p download-only-257964           | download-only-257964 | jenkins | v1.33.1 | 06 Aug 24 07:05 UTC | 06 Aug 24 07:05 UTC |
	| start   | -o=json --download-only           | download-only-460250 | jenkins | v1.33.1 | 06 Aug 24 07:05 UTC |                     |
	|         | -p download-only-460250           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0 |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|         | --driver=kvm2                     |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/06 07:05:51
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0806 07:05:51.971841   20848 out.go:291] Setting OutFile to fd 1 ...
	I0806 07:05:51.971969   20848 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 07:05:51.971979   20848 out.go:304] Setting ErrFile to fd 2...
	I0806 07:05:51.971986   20848 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 07:05:51.972201   20848 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19370-13057/.minikube/bin
	I0806 07:05:51.972771   20848 out.go:298] Setting JSON to true
	I0806 07:05:51.973602   20848 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2898,"bootTime":1722925054,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0806 07:05:51.973659   20848 start.go:139] virtualization: kvm guest
	I0806 07:05:51.975784   20848 out.go:97] [download-only-460250] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0806 07:05:51.975903   20848 notify.go:220] Checking for updates...
	I0806 07:05:51.977494   20848 out.go:169] MINIKUBE_LOCATION=19370
	I0806 07:05:51.979015   20848 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0806 07:05:51.980359   20848 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19370-13057/kubeconfig
	I0806 07:05:51.981873   20848 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19370-13057/.minikube
	I0806 07:05:51.983175   20848 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0806 07:05:51.985525   20848 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0806 07:05:51.985720   20848 driver.go:392] Setting default libvirt URI to qemu:///system
	I0806 07:05:52.017841   20848 out.go:97] Using the kvm2 driver based on user configuration
	I0806 07:05:52.017870   20848 start.go:297] selected driver: kvm2
	I0806 07:05:52.017876   20848 start.go:901] validating driver "kvm2" against <nil>
	I0806 07:05:52.018217   20848 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 07:05:52.018298   20848 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19370-13057/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0806 07:05:52.032519   20848 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0806 07:05:52.032566   20848 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0806 07:05:52.033023   20848 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0806 07:05:52.033140   20848 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0806 07:05:52.033173   20848 cni.go:84] Creating CNI manager for ""
	I0806 07:05:52.033180   20848 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0806 07:05:52.033190   20848 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0806 07:05:52.033240   20848 start.go:340] cluster config:
	{Name:download-only-460250 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:download-only-460250 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 07:05:52.033319   20848 iso.go:125] acquiring lock: {Name:mke58d71de28babe305c5eb0c31c274e9713bb3f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 07:05:52.035210   20848 out.go:97] Starting "download-only-460250" primary control-plane node in "download-only-460250" cluster
	I0806 07:05:52.035230   20848 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime crio
	I0806 07:05:52.141858   20848 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-rc.0/preloaded-images-k8s-v18-v1.31.0-rc.0-cri-o-overlay-amd64.tar.lz4
	I0806 07:05:52.141907   20848 cache.go:56] Caching tarball of preloaded images
	I0806 07:05:52.142084   20848 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime crio
	I0806 07:05:52.144216   20848 out.go:97] Downloading Kubernetes v1.31.0-rc.0 preload ...
	I0806 07:05:52.144234   20848 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-rc.0-cri-o-overlay-amd64.tar.lz4 ...
	I0806 07:05:52.257308   20848 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-rc.0/preloaded-images-k8s-v18-v1.31.0-rc.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:89b2d75682ccec9e5b50b57ad7b65741 -> /home/jenkins/minikube-integration/19370-13057/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-rc.0-cri-o-overlay-amd64.tar.lz4
	I0806 07:06:03.155740   20848 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.0-rc.0-cri-o-overlay-amd64.tar.lz4 ...
	I0806 07:06:03.155832   20848 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19370-13057/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-rc.0-cri-o-overlay-amd64.tar.lz4 ...
	I0806 07:06:03.899601   20848 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-rc.0 on crio
	I0806 07:06:03.899967   20848 profile.go:143] Saving config to /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/download-only-460250/config.json ...
	I0806 07:06:03.900001   20848 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/download-only-460250/config.json: {Name:mke003e8aec7bbcd5ed464c51503129f892116f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 07:06:03.900167   20848 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime crio
	I0806 07:06:03.900304   20848 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/19370-13057/.minikube/cache/linux/amd64/v1.31.0-rc.0/kubectl
	
	
	* The control-plane node download-only-460250 host does not exist
	  To start a cluster, run: "minikube start -p download-only-460250"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0-rc.0/LogsDuration (0.06s)
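
The download-only runs above fetch the preload tarball from a URL that carries a "checksum=md5:" query parameter, then save and verify the checksum of the downloaded file. Below is a minimal Go sketch of that download-and-verify pattern; the URL, destination path, and digest are illustrative placeholders, not minikube's actual implementation.

package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
)

// downloadAndVerify fetches url into dest and compares the file's MD5 digest
// against wantMD5 (hex-encoded), mirroring the checksum step logged by
// preload.go above.
func downloadAndVerify(url, dest, wantMD5 string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	out, err := os.Create(dest)
	if err != nil {
		return err
	}
	defer out.Close()

	h := md5.New()
	// Stream to the file and the hash in one pass.
	if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
		return err
	}

	got := hex.EncodeToString(h.Sum(nil))
	if got != wantMD5 {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantMD5)
	}
	return nil
}

func main() {
	// Hypothetical values for illustration only.
	err := downloadAndVerify(
		"https://example.com/preloaded-images.tar.lz4",
		"/tmp/preloaded-images.tar.lz4",
		"15191286f02471d9b3ea0b587fcafc39",
	)
	fmt.Println(err)
}
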

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-rc.0/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-rc.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.0-rc.0/DeleteAll (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-rc.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-rc.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-460250
--- PASS: TestDownloadOnly/v1.31.0-rc.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
x
+
TestBinaryMirror (0.57s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-030771 --alsologtostderr --binary-mirror http://127.0.0.1:41953 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-030771" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-030771
--- PASS: TestBinaryMirror (0.57s)
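
TestBinaryMirror points --binary-mirror at a local HTTP endpoint so Kubernetes binaries are fetched from it instead of the default release host. A minimal sketch of standing up such a mirror from a local directory follows; the directory layout and port are assumptions for illustration, not what the test harness actually serves.

package main

import (
	"log"
	"net/http"
)

func main() {
	// Serve a directory laid out like an upstream release tree, e.g.
	//   ./mirror/release/v1.30.3/bin/linux/amd64/kubectl
	// Then point minikube at it:
	//   minikube start --download-only --binary-mirror http://127.0.0.1:41953
	fs := http.FileServer(http.Dir("./mirror"))
	log.Fatal(http.ListenAndServe("127.0.0.1:41953", fs))
}
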

                                                
                                    
x
+
TestOffline (93.16s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-662151 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-662151 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m32.33619794s)
helpers_test.go:175: Cleaning up "offline-crio-662151" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-662151
--- PASS: TestOffline (93.16s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-505457
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-505457: exit status 85 (48.175761ms)

                                                
                                                
-- stdout --
	* Profile "addons-505457" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-505457"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-505457
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-505457: exit status 85 (47.906091ms)

                                                
                                                
-- stdout --
	* Profile "addons-505457" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-505457"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
x
+
TestAddons/Setup (147.17s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-505457 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-505457 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m27.166168916s)
--- PASS: TestAddons/Setup (147.17s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.14s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-505457 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-505457 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.14s)

                                                
                                    
x
+
TestAddons/parallel/Registry (16.72s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 3.045535ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-698f998955-4xwg8" [d2496fc4-46dd-4a87-b8e6-51cc90641be4] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.004706168s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-rpkh9" [273a082a-adbe-45cf-93b4-da0d67d0c071] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004551993s
addons_test.go:342: (dbg) Run:  kubectl --context addons-505457 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-505457 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-505457 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.961016554s)
addons_test.go:361: (dbg) Run:  out/minikube-linux-amd64 -p addons-505457 ip
2024/08/06 07:09:48 [DEBUG] GET http://192.168.39.6:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-amd64 -p addons-505457 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.72s)
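
The Registry test probes the in-cluster registry Service by launching a throwaway busybox pod and running wget --spider against the cluster-internal DNS name. A hedged Go sketch of the same probe, shelling out to kubectl (assumed to be on PATH); the context name is taken from the log and the probe succeeds only if the command exits zero.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Equivalent to the probe in the log: succeed only if the registry
	// Service answers over HTTP from inside the cluster.
	cmd := exec.Command("kubectl", "--context", "addons-505457",
		"run", "--rm", "registry-test", "--restart=Never",
		"--image=gcr.io/k8s-minikube/busybox", "-i", "--",
		"sh", "-c", "wget --spider -S http://registry.kube-system.svc.cluster.local")
	out, err := cmd.CombinedOutput()
	fmt.Println(string(out))
	if err != nil {
		fmt.Println("registry probe failed:", err)
	}
}
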

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (11.77s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-2cjx4" [7df3d23d-3d36-4d48-a87e-d1460b5c96b7] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.007437507s
addons_test.go:851: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-505457
addons_test.go:851: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-505457: (5.764868157s)
--- PASS: TestAddons/parallel/InspektorGadget (11.77s)

                                                
                                    
x
+
TestAddons/parallel/HelmTiller (12.5s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 3.994513ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-6677d64bcd-n2x6k" [d1db42b6-6642-4121-ae56-573e9f26e727] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 6.004574069s
addons_test.go:475: (dbg) Run:  kubectl --context addons-505457 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-505457 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (5.922928939s)
addons_test.go:492: (dbg) Run:  out/minikube-linux-amd64 -p addons-505457 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (12.50s)

                                                
                                    
x
+
TestAddons/parallel/CSI (82.37s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 7.951936ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-505457 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-505457 get pvc hpvc -o jsonpath={.status.phase} -n default
(the same "kubectl --context addons-505457 get pvc hpvc -o jsonpath={.status.phase} -n default" poll from helpers_test.go:394 was repeated 46 more times while the test waited on the claim)
addons_test.go:580: (dbg) Run:  kubectl --context addons-505457 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [7e93c97f-840c-4ee8-a839-68e9782030dd] Pending
helpers_test.go:344: "task-pv-pod" [7e93c97f-840c-4ee8-a839-68e9782030dd] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [7e93c97f-840c-4ee8-a839-68e9782030dd] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 15.003929742s
addons_test.go:590: (dbg) Run:  kubectl --context addons-505457 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-505457 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-505457 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-505457 delete pod task-pv-pod
addons_test.go:606: (dbg) Run:  kubectl --context addons-505457 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-505457 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-505457 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-505457 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-505457 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [d479fa44-d93e-403f-9311-65544ff86a4f] Pending
helpers_test.go:344: "task-pv-pod-restore" [d479fa44-d93e-403f-9311-65544ff86a4f] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [d479fa44-d93e-403f-9311-65544ff86a4f] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 9.004403212s
addons_test.go:632: (dbg) Run:  kubectl --context addons-505457 delete pod task-pv-pod-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-505457 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-505457 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-linux-amd64 -p addons-505457 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-linux-amd64 -p addons-505457 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.862082069s)
addons_test.go:648: (dbg) Run:  out/minikube-linux-amd64 -p addons-505457 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (82.37s)
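
The long run of "get pvc ... -o jsonpath={.status.phase}" calls above is a simple poll loop: the helper re-queries the claim until it reports the expected phase or the timeout elapses. A hedged Go sketch of that loop; the context, namespace, and claim names come from the log, while the polling interval is an assumption.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForPVCPhase polls `kubectl get pvc` until the claim reports wantPhase
// or the timeout elapses, mirroring the repeated jsonpath queries above.
func waitForPVCPhase(kubeContext, ns, name, wantPhase string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", kubeContext,
			"get", "pvc", name, "-n", ns,
			"-o", "jsonpath={.status.phase}").Output()
		if err == nil && strings.TrimSpace(string(out)) == wantPhase {
			return nil
		}
		time.Sleep(2 * time.Second) // assumed interval
	}
	return fmt.Errorf("pvc %s/%s did not reach phase %q within %v", ns, name, wantPhase, timeout)
}

func main() {
	fmt.Println(waitForPVCPhase("addons-505457", "default", "hpvc", "Bound", 6*time.Minute))
}
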

                                                
                                    
x
+
TestAddons/parallel/Headlamp (17.7s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-505457 --alsologtostderr -v=1
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-9d868696f-5cjw5" [7c380ed2-c9c3-4bb8-bc87-93140e9aa99d] Pending
helpers_test.go:344: "headlamp-9d868696f-5cjw5" [7c380ed2-c9c3-4bb8-bc87-93140e9aa99d] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-9d868696f-5cjw5" [7c380ed2-c9c3-4bb8-bc87-93140e9aa99d] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.004360926s
addons_test.go:839: (dbg) Run:  out/minikube-linux-amd64 -p addons-505457 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-linux-amd64 -p addons-505457 addons disable headlamp --alsologtostderr -v=1: (5.741371185s)
--- PASS: TestAddons/parallel/Headlamp (17.70s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (5.59s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5455fb9b69-9mmwf" [a558ffd9-2c0f-4418-a235-b1bc88046de6] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004806107s
addons_test.go:870: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-505457
--- PASS: TestAddons/parallel/CloudSpanner (5.59s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (57s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-505457 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-505457 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-505457 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-505457 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-505457 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-505457 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-505457 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-505457 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-505457 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-505457 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [30acf24f-529d-4896-96b0-2ae53e61eeb6] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [30acf24f-529d-4896-96b0-2ae53e61eeb6] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [30acf24f-529d-4896-96b0-2ae53e61eeb6] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 6.0052763s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-505457 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-linux-amd64 -p addons-505457 ssh "cat /opt/local-path-provisioner/pvc-20492753-e965-4561-96c4-8676548fb006_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-505457 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-505457 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-linux-amd64 -p addons-505457 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1029: (dbg) Done: out/minikube-linux-amd64 -p addons-505457 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.23879673s)
--- PASS: TestAddons/parallel/LocalPath (57.00s)

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (6.51s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-bxmng" [84dbfa29-9673-477b-a4b1-2e17cc5b745b] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.005140281s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-505457
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.51s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (11.87s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-799879c74f-f6l97" [945e530b-c3be-4c97-90e0-606846962678] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004398867s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-amd64 -p addons-505457 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-amd64 -p addons-505457 addons disable yakd --alsologtostderr -v=1: (5.86354099s)
--- PASS: TestAddons/parallel/Yakd (11.87s)

                                                
                                    
x
+
TestCertOptions (70s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-144526 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-144526 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m8.493670163s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-144526 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-144526 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-144526 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-144526" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-144526
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-144526: (1.026608794s)
--- PASS: TestCertOptions (70.00s)
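
TestCertOptions asserts that the extra --apiserver-ips and --apiserver-names values show up as SANs in apiserver.crt, which the log inspects with "openssl x509 -text -noout". A sketch of the same check using Go's crypto/x509, assuming the certificate has already been copied to a local file; the path is a placeholder.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Hypothetical local copy of /var/lib/minikube/certs/apiserver.crt.
	data, err := os.ReadFile("apiserver.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// The test expects 192.168.15.15 among the IP SANs and
	// www.google.com among the DNS SANs.
	fmt.Println("IP SANs: ", cert.IPAddresses)
	fmt.Println("DNS SANs:", cert.DNSNames)
	fmt.Println("NotAfter:", cert.NotAfter) // the field exercised by TestCertExpiration below
}
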

                                                
                                    
x
+
TestCertExpiration (334.77s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-739400 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-739400 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m33.654388607s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-739400 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-739400 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (1m0.109192775s)
helpers_test.go:175: Cleaning up "cert-expiration-739400" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-739400
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-739400: (1.007140235s)
--- PASS: TestCertExpiration (334.77s)

                                                
                                    
x
+
TestForceSystemdFlag (102.58s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-261275 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
E0806 08:14:13.203605   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/addons-505457/client.crt: no such file or directory
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-261275 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m41.36229254s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-261275 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-261275" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-261275
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-261275: (1.015032676s)
--- PASS: TestForceSystemdFlag (102.58s)
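
TestForceSystemdFlag starts the node with --force-systemd and then reads /etc/crio/crio.conf.d/02-crio.conf to confirm the cgroup manager setting took effect. A sketch of that check against a locally copied conf file; the exact key name (cgroup_manager) is an assumption based on CRI-O's configuration format, not quoted from the test.

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	// Hypothetical local copy of the drop-in, fetched with something like:
	//   minikube -p <profile> ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile("02-crio.conf")
	if err != nil {
		panic(err)
	}
	for _, raw := range strings.Split(string(data), "\n") {
		line := strings.TrimSpace(raw)
		if strings.HasPrefix(line, "cgroup_manager") {
			fmt.Println("found:", line)
			if strings.Contains(line, "systemd") {
				fmt.Println("force-systemd took effect")
			}
			return
		}
	}
	fmt.Println("cgroup_manager not set in this drop-in")
}
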

                                                
                                    
x
+
TestForceSystemdEnv (46.46s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-706451 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-706451 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (45.696161017s)
helpers_test.go:175: Cleaning up "force-systemd-env-706451" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-706451
--- PASS: TestForceSystemdEnv (46.46s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (4.51s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (4.51s)

                                                
                                    
x
+
TestErrorSpam/setup (39.69s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-184622 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-184622 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-184622 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-184622 --driver=kvm2  --container-runtime=crio: (39.686757967s)
--- PASS: TestErrorSpam/setup (39.69s)

                                                
                                    
x
+
TestErrorSpam/start (0.34s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-184622 --log_dir /tmp/nospam-184622 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-184622 --log_dir /tmp/nospam-184622 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-184622 --log_dir /tmp/nospam-184622 start --dry-run
--- PASS: TestErrorSpam/start (0.34s)

                                                
                                    
x
+
TestErrorSpam/status (0.72s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-184622 --log_dir /tmp/nospam-184622 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-184622 --log_dir /tmp/nospam-184622 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-184622 --log_dir /tmp/nospam-184622 status
--- PASS: TestErrorSpam/status (0.72s)

TestErrorSpam/pause (1.58s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-184622 --log_dir /tmp/nospam-184622 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-184622 --log_dir /tmp/nospam-184622 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-184622 --log_dir /tmp/nospam-184622 pause
--- PASS: TestErrorSpam/pause (1.58s)

TestErrorSpam/unpause (1.62s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-184622 --log_dir /tmp/nospam-184622 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-184622 --log_dir /tmp/nospam-184622 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-184622 --log_dir /tmp/nospam-184622 unpause
--- PASS: TestErrorSpam/unpause (1.62s)

TestErrorSpam/stop (5.51s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-184622 --log_dir /tmp/nospam-184622 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-184622 --log_dir /tmp/nospam-184622 stop: (2.298629606s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-184622 --log_dir /tmp/nospam-184622 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-184622 --log_dir /tmp/nospam-184622 stop: (1.711736128s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-184622 --log_dir /tmp/nospam-184622 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-184622 --log_dir /tmp/nospam-184622 stop: (1.50046326s)
--- PASS: TestErrorSpam/stop (5.51s)

TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/19370-13057/.minikube/files/etc/test/nested/copy/20246/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (59.54s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-811691 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-811691 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (59.543321476s)
--- PASS: TestFunctional/serial/StartWithProxy (59.54s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (48.21s)
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-811691 --alsologtostderr -v=8
E0806 07:19:13.204328   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/addons-505457/client.crt: no such file or directory
E0806 07:19:13.210206   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/addons-505457/client.crt: no such file or directory
E0806 07:19:13.220485   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/addons-505457/client.crt: no such file or directory
E0806 07:19:13.240834   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/addons-505457/client.crt: no such file or directory
E0806 07:19:13.281182   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/addons-505457/client.crt: no such file or directory
E0806 07:19:13.361528   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/addons-505457/client.crt: no such file or directory
E0806 07:19:13.522119   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/addons-505457/client.crt: no such file or directory
E0806 07:19:13.842796   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/addons-505457/client.crt: no such file or directory
E0806 07:19:14.483779   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/addons-505457/client.crt: no such file or directory
E0806 07:19:15.764416   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/addons-505457/client.crt: no such file or directory
E0806 07:19:18.325218   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/addons-505457/client.crt: no such file or directory
E0806 07:19:23.446361   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/addons-505457/client.crt: no such file or directory
E0806 07:19:33.687278   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/addons-505457/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-811691 --alsologtostderr -v=8: (48.205131402s)
functional_test.go:659: soft start took 48.205790441s for "functional-811691" cluster.
--- PASS: TestFunctional/serial/SoftStart (48.21s)

TestFunctional/serial/KubeContext (0.04s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.07s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-811691 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.11s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-811691 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-811691 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-811691 cache add registry.k8s.io/pause:3.3: (1.10132828s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-811691 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-811691 cache add registry.k8s.io/pause:latest: (1.069887953s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.11s)

TestFunctional/serial/CacheCmd/cache/add_local (2.2s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-811691 /tmp/TestFunctionalserialCacheCmdcacheadd_local19304583/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-811691 cache add minikube-local-cache-test:functional-811691
functional_test.go:1085: (dbg) Done: out/minikube-linux-amd64 -p functional-811691 cache add minikube-local-cache-test:functional-811691: (1.867729741s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-811691 cache delete minikube-local-cache-test:functional-811691
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-811691
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.20s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.04s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-811691 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.58s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-811691 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-811691 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-811691 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (206.491332ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-811691 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-811691 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.58s)
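For reference, the cache reload sequence exercised above can be replayed by hand against the same profile; a minimal sketch using only the commands shown in this run (expected outcomes noted as comments, not re-verified here):

    # drop the cached image from the node, confirm it is gone, then restore it from minikube's local cache
    out/minikube-linux-amd64 -p functional-811691 ssh sudo crictl rmi registry.k8s.io/pause:latest
    out/minikube-linux-amd64 -p functional-811691 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exits 1: "no such image ... present"
    out/minikube-linux-amd64 -p functional-811691 cache reload
    out/minikube-linux-amd64 -p functional-811691 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds once the image is back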
TestFunctional/serial/CacheCmd/cache/delete (0.09s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

TestFunctional/serial/MinikubeKubectlCmd (0.1s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-811691 kubectl -- --context functional-811691 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-811691 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

TestFunctional/serial/ExtraConfig (81.52s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-811691 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0806 07:19:54.167830   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/addons-505457/client.crt: no such file or directory
E0806 07:20:35.129188   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/addons-505457/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-811691 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (1m21.518405266s)
functional_test.go:757: restart took 1m21.518534311s for "functional-811691" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (81.52s)

TestFunctional/serial/ComponentHealth (0.07s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-811691 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1.42s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-811691 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-811691 logs: (1.421209516s)
--- PASS: TestFunctional/serial/LogsCmd (1.42s)

TestFunctional/serial/LogsFileCmd (1.43s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-811691 logs --file /tmp/TestFunctionalserialLogsFileCmd1602642361/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-811691 logs --file /tmp/TestFunctionalserialLogsFileCmd1602642361/001/logs.txt: (1.42481529s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.43s)

TestFunctional/serial/InvalidService (3.54s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-811691 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-811691
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-811691: exit status 115 (277.011085ms)
-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.203:30190 |
	|-----------|-------------|-------------|-----------------------------|
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-811691 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.54s)
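The failure path checked above can be reproduced by hand with the same manifest; a minimal sketch (exit code and error taken from the run above):

    kubectl --context functional-811691 apply -f testdata/invalidsvc.yaml
    out/minikube-linux-amd64 service invalid-svc -p functional-811691   # exits 115: SVC_UNREACHABLE, no running pod backs the service
    kubectl --context functional-811691 delete -f testdata/invalidsvc.yaml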
TestFunctional/parallel/ConfigCmd (0.29s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-811691 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-811691 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-811691 config get cpus: exit status 14 (44.74951ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-811691 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-811691 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-811691 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-811691 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-811691 config get cpus: exit status 14 (47.224875ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.29s)
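The config round trip above reduces to the following; a minimal sketch (exit code 14 for a missing key is taken from the run above):

    out/minikube-linux-amd64 -p functional-811691 config set cpus 2
    out/minikube-linux-amd64 -p functional-811691 config get cpus     # expected to return the stored value
    out/minikube-linux-amd64 -p functional-811691 config unset cpus
    out/minikube-linux-amd64 -p functional-811691 config get cpus     # exits 14: specified key could not be found in config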
TestFunctional/parallel/DashboardCmd (11.13s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-811691 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-811691 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 30943: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (11.13s)

TestFunctional/parallel/DryRun (0.26s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-811691 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-811691 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (133.527072ms)
-- stdout --
	* [functional-811691] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19370
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19370-13057/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19370-13057/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I0806 07:22:06.400306   30622 out.go:291] Setting OutFile to fd 1 ...
	I0806 07:22:06.400457   30622 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 07:22:06.400470   30622 out.go:304] Setting ErrFile to fd 2...
	I0806 07:22:06.400476   30622 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 07:22:06.400700   30622 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19370-13057/.minikube/bin
	I0806 07:22:06.401251   30622 out.go:298] Setting JSON to false
	I0806 07:22:06.402195   30622 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3872,"bootTime":1722925054,"procs":210,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0806 07:22:06.402249   30622 start.go:139] virtualization: kvm guest
	I0806 07:22:06.404227   30622 out.go:177] * [functional-811691] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0806 07:22:06.405297   30622 notify.go:220] Checking for updates...
	I0806 07:22:06.405323   30622 out.go:177]   - MINIKUBE_LOCATION=19370
	I0806 07:22:06.406528   30622 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0806 07:22:06.407649   30622 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19370-13057/kubeconfig
	I0806 07:22:06.409102   30622 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19370-13057/.minikube
	I0806 07:22:06.410437   30622 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0806 07:22:06.411655   30622 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0806 07:22:06.413197   30622 config.go:182] Loaded profile config "functional-811691": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0806 07:22:06.413613   30622 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:22:06.413678   30622 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:22:06.428968   30622 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40725
	I0806 07:22:06.429343   30622 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:22:06.429884   30622 main.go:141] libmachine: Using API Version  1
	I0806 07:22:06.429905   30622 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:22:06.430201   30622 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:22:06.430368   30622 main.go:141] libmachine: (functional-811691) Calling .DriverName
	I0806 07:22:06.430595   30622 driver.go:392] Setting default libvirt URI to qemu:///system
	I0806 07:22:06.430869   30622 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:22:06.430900   30622 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:22:06.445818   30622 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35757
	I0806 07:22:06.446214   30622 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:22:06.446763   30622 main.go:141] libmachine: Using API Version  1
	I0806 07:22:06.446785   30622 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:22:06.447119   30622 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:22:06.447350   30622 main.go:141] libmachine: (functional-811691) Calling .DriverName
	I0806 07:22:06.487695   30622 out.go:177] * Using the kvm2 driver based on existing profile
	I0806 07:22:06.489023   30622 start.go:297] selected driver: kvm2
	I0806 07:22:06.489039   30622 start.go:901] validating driver "kvm2" against &{Name:functional-811691 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.3 ClusterName:functional-811691 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.203 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
unt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 07:22:06.489156   30622 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0806 07:22:06.491757   30622 out.go:177] 
	W0806 07:22:06.492710   30622 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0806 07:22:06.493869   30622 out.go:177] 
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-811691 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.26s)
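The dry-run check above boils down to one invalid and one valid invocation against the existing profile; a minimal sketch (exit code 23 is what the run above reports):

    # requesting less than the 1800MB minimum fails validation without touching the cluster
    out/minikube-linux-amd64 start -p functional-811691 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 --container-runtime=crio   # exits 23: RSRC_INSUFFICIENT_REQ_MEMORY
    out/minikube-linux-amd64 start -p functional-811691 --dry-run --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio             # validates cleanly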
TestFunctional/parallel/InternationalLanguage (0.13s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-811691 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-811691 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (131.361996ms)
-- stdout --
	* [functional-811691] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19370
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19370-13057/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19370-13057/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I0806 07:22:06.666592   30677 out.go:291] Setting OutFile to fd 1 ...
	I0806 07:22:06.666694   30677 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 07:22:06.666701   30677 out.go:304] Setting ErrFile to fd 2...
	I0806 07:22:06.666705   30677 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 07:22:06.666970   30677 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19370-13057/.minikube/bin
	I0806 07:22:06.667522   30677 out.go:298] Setting JSON to false
	I0806 07:22:06.668489   30677 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3873,"bootTime":1722925054,"procs":210,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0806 07:22:06.668551   30677 start.go:139] virtualization: kvm guest
	I0806 07:22:06.670589   30677 out.go:177] * [functional-811691] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	I0806 07:22:06.671964   30677 out.go:177]   - MINIKUBE_LOCATION=19370
	I0806 07:22:06.671988   30677 notify.go:220] Checking for updates...
	I0806 07:22:06.674473   30677 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0806 07:22:06.675751   30677 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19370-13057/kubeconfig
	I0806 07:22:06.676953   30677 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19370-13057/.minikube
	I0806 07:22:06.678311   30677 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0806 07:22:06.679580   30677 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0806 07:22:06.681282   30677 config.go:182] Loaded profile config "functional-811691": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0806 07:22:06.681676   30677 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:22:06.681736   30677 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:22:06.696093   30677 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46657
	I0806 07:22:06.696483   30677 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:22:06.696978   30677 main.go:141] libmachine: Using API Version  1
	I0806 07:22:06.696997   30677 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:22:06.697302   30677 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:22:06.697464   30677 main.go:141] libmachine: (functional-811691) Calling .DriverName
	I0806 07:22:06.697676   30677 driver.go:392] Setting default libvirt URI to qemu:///system
	I0806 07:22:06.697943   30677 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:22:06.697978   30677 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:22:06.712743   30677 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35043
	I0806 07:22:06.713239   30677 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:22:06.713696   30677 main.go:141] libmachine: Using API Version  1
	I0806 07:22:06.713718   30677 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:22:06.714043   30677 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:22:06.714220   30677 main.go:141] libmachine: (functional-811691) Calling .DriverName
	I0806 07:22:06.746658   30677 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0806 07:22:06.748011   30677 start.go:297] selected driver: kvm2
	I0806 07:22:06.748024   30677 start.go:901] validating driver "kvm2" against &{Name:functional-811691 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.3 ClusterName:functional-811691 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.203 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
unt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 07:22:06.748129   30677 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0806 07:22:06.750589   30677 out.go:177] 
	W0806 07:22:06.751838   30677 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0806 07:22:06.753088   30677 out.go:177] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.13s)

TestFunctional/parallel/StatusCmd (1s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-811691 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-811691 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-811691 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.00s)

TestFunctional/parallel/ServiceCmdConnect (37.44s)
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-811691 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-811691 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-57b4589c47-zrmbv" [425606e1-6ed8-488a-9ed3-a2e006bc4873] Pending
helpers_test.go:344: "hello-node-connect-57b4589c47-zrmbv" [425606e1-6ed8-488a-9ed3-a2e006bc4873] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-57b4589c47-zrmbv" [425606e1-6ed8-488a-9ed3-a2e006bc4873] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 37.006183431s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 -p functional-811691 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.39.203:32651
functional_test.go:1671: http://192.168.39.203:32651: success! body:
Hostname: hello-node-connect-57b4589c47-zrmbv

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.203:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.39.203:32651
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-
--- PASS: TestFunctional/parallel/ServiceCmdConnect (37.44s)
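The NodePort round trip above can be reproduced with the same image; a minimal sketch (the final curl is an assumption standing in for the HTTP check the test performs internally):

    kubectl --context functional-811691 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
    kubectl --context functional-811691 expose deployment hello-node-connect --type=NodePort --port=8080
    URL=$(out/minikube-linux-amd64 -p functional-811691 service hello-node-connect --url)   # e.g. http://192.168.39.203:32651
    curl "$URL"   # assumption: echoes the hostname and request details shown above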
TestFunctional/parallel/AddonsCmd (0.11s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-amd64 -p functional-811691 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-amd64 -p functional-811691 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.11s)

TestFunctional/parallel/PersistentVolumeClaim (50.19s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [2267e478-e601-449d-a90c-78c5aea99916] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004610249s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-811691 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-811691 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-811691 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-811691 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-811691 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [6b1eed3e-0af7-42c7-99a3-ee52d48422a1] Pending
helpers_test.go:344: "sp-pod" [6b1eed3e-0af7-42c7-99a3-ee52d48422a1] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [6b1eed3e-0af7-42c7-99a3-ee52d48422a1] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 32.004051219s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-811691 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-811691 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-811691 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [847cd248-feaf-42d7-b5d4-6cc88f0fb745] Pending
helpers_test.go:344: "sp-pod" [847cd248-feaf-42d7-b5d4-6cc88f0fb745] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [847cd248-feaf-42d7-b5d4-6cc88f0fb745] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 10.004513007s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-811691 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (50.19s)
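The persistence check above follows this sequence with the bundled manifests; a minimal sketch (a file written before the pod is deleted must still be visible in the re-created pod):

    kubectl --context functional-811691 apply -f testdata/storage-provisioner/pvc.yaml
    kubectl --context functional-811691 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-811691 exec sp-pod -- touch /tmp/mount/foo
    kubectl --context functional-811691 delete -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-811691 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-811691 exec sp-pod -- ls /tmp/mount   # foo should survive the pod re-creation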
TestFunctional/parallel/SSHCmd (0.38s)
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-amd64 -p functional-811691 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-amd64 -p functional-811691 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.38s)

TestFunctional/parallel/CpCmd (1.23s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-811691 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-811691 ssh -n functional-811691 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-811691 cp functional-811691:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2097547664/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-811691 ssh -n functional-811691 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-811691 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-811691 ssh -n functional-811691 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.23s)

TestFunctional/parallel/MySQL (36.58s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-811691 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-64454c8b5c-fqd5v" [2e64f12e-b997-462a-9da4-d0dc46877042] Pending
helpers_test.go:344: "mysql-64454c8b5c-fqd5v" [2e64f12e-b997-462a-9da4-d0dc46877042] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-64454c8b5c-fqd5v" [2e64f12e-b997-462a-9da4-d0dc46877042] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 33.006092928s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-811691 exec mysql-64454c8b5c-fqd5v -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-811691 exec mysql-64454c8b5c-fqd5v -- mysql -ppassword -e "show databases;": exit status 1 (296.886594ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-811691 exec mysql-64454c8b5c-fqd5v -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-811691 exec mysql-64454c8b5c-fqd5v -- mysql -ppassword -e "show databases;": exit status 1 (321.650579ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
E0806 07:21:57.050258   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/addons-505457/client.crt: no such file or directory
functional_test.go:1803: (dbg) Run:  kubectl --context functional-811691 exec mysql-64454c8b5c-fqd5v -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (36.58s)
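The two failed attempts above are expected while mysqld is still initialising inside the container; once the pod is ready the same query succeeds. A minimal sketch (the pod name is whatever the mysql deployment produced in this run):

    kubectl --context functional-811691 exec mysql-64454c8b5c-fqd5v -- mysql -ppassword -e "show databases;"   # retry on ERROR 1045/2002 until mysqld is up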
TestFunctional/parallel/FileSync (0.2s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/20246/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-811691 ssh "sudo cat /etc/test/nested/copy/20246/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.20s)

TestFunctional/parallel/CertSync (1.22s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/20246.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-811691 ssh "sudo cat /etc/ssl/certs/20246.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/20246.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-811691 ssh "sudo cat /usr/share/ca-certificates/20246.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-811691 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/202462.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-811691 ssh "sudo cat /etc/ssl/certs/202462.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/202462.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-811691 ssh "sudo cat /usr/share/ca-certificates/202462.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-811691 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.22s)
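Note: the hash-named entries checked above (51391683.0, 3ec20f2e.0) are OpenSSL subject-hash links that sit next to the synced .pem copies under /etc/ssl/certs. The hash part of the link name can be recomputed from the certificate itself; a quick check, assuming a local copy of the synced cert (the exact value depends on the certificate contents):

    # Print the subject hash that the .0 link name is derived from
    openssl x509 -noout -subject_hash -in 20246.pem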

                                                
                                    
TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-811691 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-811691 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-811691 ssh "sudo systemctl is-active docker": exit status 1 (222.214059ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-811691 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-811691 ssh "sudo systemctl is-active containerd": exit status 1 (208.320465ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.43s)
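Note: the non-zero exits above are the expected result on a crio node. systemctl is-active prints "inactive" and exits with status 3 for a unit that is not running (that is the "ssh: Process exited with status 3" in the stderr), and the ssh wrapper propagates it as a failure even though it is exactly the condition the test checks for. A manual spot-check using the same command form as the test:

    # On a crio profile, only crio should be active; docker and containerd
    # are expected to print "inactive" and exit non-zero.
    minikube -p functional-811691 ssh "sudo systemctl is-active crio"
    minikube -p functional-811691 ssh "sudo systemctl is-active docker"
    minikube -p functional-811691 ssh "sudo systemctl is-active containerd"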

                                                
                                    
TestFunctional/parallel/License (0.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.62s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-811691 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-811691 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.3
registry.k8s.io/kube-proxy:v1.30.3
registry.k8s.io/kube-controller-manager:v1.30.3
registry.k8s.io/kube-apiserver:v1.30.3
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
localhost/minikube-local-cache-test:functional-811691
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20240715-585640e9
docker.io/kicbase/echo-server:functional-811691
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-811691 image ls --format short --alsologtostderr:
I0806 07:22:09.943126   31233 out.go:291] Setting OutFile to fd 1 ...
I0806 07:22:09.943262   31233 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0806 07:22:09.943281   31233 out.go:304] Setting ErrFile to fd 2...
I0806 07:22:09.943291   31233 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0806 07:22:09.943587   31233 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19370-13057/.minikube/bin
I0806 07:22:09.944317   31233 config.go:182] Loaded profile config "functional-811691": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0806 07:22:09.944479   31233 config.go:182] Loaded profile config "functional-811691": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0806 07:22:09.944896   31233 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0806 07:22:09.944940   31233 main.go:141] libmachine: Launching plugin server for driver kvm2
I0806 07:22:09.960617   31233 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35451
I0806 07:22:09.961080   31233 main.go:141] libmachine: () Calling .GetVersion
I0806 07:22:09.961650   31233 main.go:141] libmachine: Using API Version  1
I0806 07:22:09.961676   31233 main.go:141] libmachine: () Calling .SetConfigRaw
I0806 07:22:09.961999   31233 main.go:141] libmachine: () Calling .GetMachineName
I0806 07:22:09.962226   31233 main.go:141] libmachine: (functional-811691) Calling .GetState
I0806 07:22:09.964405   31233 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0806 07:22:09.964454   31233 main.go:141] libmachine: Launching plugin server for driver kvm2
I0806 07:22:09.979777   31233 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46403
I0806 07:22:09.980204   31233 main.go:141] libmachine: () Calling .GetVersion
I0806 07:22:09.980653   31233 main.go:141] libmachine: Using API Version  1
I0806 07:22:09.980677   31233 main.go:141] libmachine: () Calling .SetConfigRaw
I0806 07:22:09.980941   31233 main.go:141] libmachine: () Calling .GetMachineName
I0806 07:22:09.981069   31233 main.go:141] libmachine: (functional-811691) Calling .DriverName
I0806 07:22:09.981294   31233 ssh_runner.go:195] Run: systemctl --version
I0806 07:22:09.981350   31233 main.go:141] libmachine: (functional-811691) Calling .GetSSHHostname
I0806 07:22:09.984149   31233 main.go:141] libmachine: (functional-811691) DBG | domain functional-811691 has defined MAC address 52:54:00:a5:a1:0b in network mk-functional-811691
I0806 07:22:09.984556   31233 main.go:141] libmachine: (functional-811691) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:a1:0b", ip: ""} in network mk-functional-811691: {Iface:virbr1 ExpiryTime:2024-08-06 08:18:11 +0000 UTC Type:0 Mac:52:54:00:a5:a1:0b Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:functional-811691 Clientid:01:52:54:00:a5:a1:0b}
I0806 07:22:09.984585   31233 main.go:141] libmachine: (functional-811691) DBG | domain functional-811691 has defined IP address 192.168.39.203 and MAC address 52:54:00:a5:a1:0b in network mk-functional-811691
I0806 07:22:09.984725   31233 main.go:141] libmachine: (functional-811691) Calling .GetSSHPort
I0806 07:22:09.984959   31233 main.go:141] libmachine: (functional-811691) Calling .GetSSHKeyPath
I0806 07:22:09.985083   31233 main.go:141] libmachine: (functional-811691) Calling .GetSSHUsername
I0806 07:22:09.985197   31233 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/functional-811691/id_rsa Username:docker}
I0806 07:22:10.067212   31233 ssh_runner.go:195] Run: sudo crictl images --output json
I0806 07:22:10.113201   31233 main.go:141] libmachine: Making call to close driver server
I0806 07:22:10.113213   31233 main.go:141] libmachine: (functional-811691) Calling .Close
I0806 07:22:10.113501   31233 main.go:141] libmachine: (functional-811691) DBG | Closing plugin on server side
I0806 07:22:10.113501   31233 main.go:141] libmachine: Successfully made call to close driver server
I0806 07:22:10.113537   31233 main.go:141] libmachine: Making call to close connection to plugin binary
I0806 07:22:10.113548   31233 main.go:141] libmachine: Making call to close driver server
I0806 07:22:10.113554   31233 main.go:141] libmachine: (functional-811691) Calling .Close
I0806 07:22:10.113781   31233 main.go:141] libmachine: Successfully made call to close driver server
I0806 07:22:10.113795   31233 main.go:141] libmachine: (functional-811691) DBG | Closing plugin on server side
I0806 07:22:10.113807   31233 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)
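Note: as the stderr trace shows, image ls is served by sshing into the node and reading the CRI image store; the table, json, and yaml variants below render the same crictl data. The raw listing can be pulled directly with the command the trace runs:

    # Same query the image ls subcommand performs on the node
    minikube -p functional-811691 ssh "sudo crictl images --output json"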

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-811691 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-811691 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/kube-scheduler          | v1.30.3            | 3edc18e7b7672 | 63.1MB |
| registry.k8s.io/pause                   | 3.9                | e6f1816883972 | 750kB  |
| registry.k8s.io/kube-controller-manager | v1.30.3            | 76932a3b37d7e | 112MB  |
| registry.k8s.io/kube-proxy              | v1.30.3            | 55bb025d2cfa5 | 86MB   |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| docker.io/library/nginx                 | latest             | a72860cb95fd5 | 192MB  |
| registry.k8s.io/etcd                    | 3.5.12-0           | 3861cfcd7c04c | 151MB  |
| registry.k8s.io/coredns/coredns         | v1.11.1            | cbb01a7bd410d | 61.2MB |
| registry.k8s.io/kube-apiserver          | v1.30.3            | 1f6d574d502f3 | 118MB  |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| localhost/minikube-local-cache-test     | functional-811691  | d43d575c3b7c2 | 3.33kB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| docker.io/kicbase/echo-server           | functional-811691  | 9056ab77afb8e | 4.94MB |
| docker.io/kindest/kindnetd              | v20240715-585640e9 | 5cc3abe5717db | 87.2MB |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-811691 image ls --format table --alsologtostderr:
I0806 07:22:11.919604   31604 out.go:291] Setting OutFile to fd 1 ...
I0806 07:22:11.919705   31604 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0806 07:22:11.919716   31604 out.go:304] Setting ErrFile to fd 2...
I0806 07:22:11.919724   31604 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0806 07:22:11.919924   31604 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19370-13057/.minikube/bin
I0806 07:22:11.920527   31604 config.go:182] Loaded profile config "functional-811691": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0806 07:22:11.920695   31604 config.go:182] Loaded profile config "functional-811691": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0806 07:22:11.921112   31604 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0806 07:22:11.921155   31604 main.go:141] libmachine: Launching plugin server for driver kvm2
I0806 07:22:11.936849   31604 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34945
I0806 07:22:11.937298   31604 main.go:141] libmachine: () Calling .GetVersion
I0806 07:22:11.937814   31604 main.go:141] libmachine: Using API Version  1
I0806 07:22:11.937840   31604 main.go:141] libmachine: () Calling .SetConfigRaw
I0806 07:22:11.938245   31604 main.go:141] libmachine: () Calling .GetMachineName
I0806 07:22:11.938470   31604 main.go:141] libmachine: (functional-811691) Calling .GetState
I0806 07:22:11.940436   31604 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0806 07:22:11.940473   31604 main.go:141] libmachine: Launching plugin server for driver kvm2
I0806 07:22:11.955252   31604 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40329
I0806 07:22:11.955731   31604 main.go:141] libmachine: () Calling .GetVersion
I0806 07:22:11.956253   31604 main.go:141] libmachine: Using API Version  1
I0806 07:22:11.956273   31604 main.go:141] libmachine: () Calling .SetConfigRaw
I0806 07:22:11.956602   31604 main.go:141] libmachine: () Calling .GetMachineName
I0806 07:22:11.956788   31604 main.go:141] libmachine: (functional-811691) Calling .DriverName
I0806 07:22:11.957010   31604 ssh_runner.go:195] Run: systemctl --version
I0806 07:22:11.957030   31604 main.go:141] libmachine: (functional-811691) Calling .GetSSHHostname
I0806 07:22:11.959849   31604 main.go:141] libmachine: (functional-811691) DBG | domain functional-811691 has defined MAC address 52:54:00:a5:a1:0b in network mk-functional-811691
I0806 07:22:11.960411   31604 main.go:141] libmachine: (functional-811691) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:a1:0b", ip: ""} in network mk-functional-811691: {Iface:virbr1 ExpiryTime:2024-08-06 08:18:11 +0000 UTC Type:0 Mac:52:54:00:a5:a1:0b Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:functional-811691 Clientid:01:52:54:00:a5:a1:0b}
I0806 07:22:11.960437   31604 main.go:141] libmachine: (functional-811691) DBG | domain functional-811691 has defined IP address 192.168.39.203 and MAC address 52:54:00:a5:a1:0b in network mk-functional-811691
I0806 07:22:11.960590   31604 main.go:141] libmachine: (functional-811691) Calling .GetSSHPort
I0806 07:22:11.960758   31604 main.go:141] libmachine: (functional-811691) Calling .GetSSHKeyPath
I0806 07:22:11.960930   31604 main.go:141] libmachine: (functional-811691) Calling .GetSSHUsername
I0806 07:22:11.961075   31604 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/functional-811691/id_rsa Username:docker}
I0806 07:22:12.085346   31604 ssh_runner.go:195] Run: sudo crictl images --output json
I0806 07:22:12.166003   31604 main.go:141] libmachine: Making call to close driver server
I0806 07:22:12.166022   31604 main.go:141] libmachine: (functional-811691) Calling .Close
I0806 07:22:12.166313   31604 main.go:141] libmachine: Successfully made call to close driver server
I0806 07:22:12.166333   31604 main.go:141] libmachine: Making call to close connection to plugin binary
I0806 07:22:12.166333   31604 main.go:141] libmachine: (functional-811691) DBG | Closing plugin on server side
I0806 07:22:12.166365   31604 main.go:141] libmachine: Making call to close driver server
I0806 07:22:12.166381   31604 main.go:141] libmachine: (functional-811691) Calling .Close
I0806 07:22:12.166621   31604 main.go:141] libmachine: Successfully made call to close driver server
I0806 07:22:12.166632   31604 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.29s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-811691 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-811691 image ls --format json --alsologtostderr:
[{"id":"a72860cb95fd59e9c696c66441c64f18e66915fa26b249911e83c3854477ed9a","repoDigests":["docker.io/library/nginx@sha256:6af79ae5de407283dcea8b00d5c37ace95441fd58a8b1d2aa1ed93f5511bb18c","docker.io/library/nginx@sha256:baa881b012a49e3c2cd6ab9d80f9fcd2962a98af8ede947d0ef930a427b28afc"],"repoTags":["docker.io/library/nginx:latest"],"size":"191750286"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1","registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb
027320eb185c6110e0850b997870"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"61245718"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097","registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"750414"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f","repoDigests":["do
cker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115","docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"],"repoTags":["docker.io/kindest/kindnetd:v20240715-585640e9"],"size":"87165492"},{"id":"d43d575c3b7c2b14d76fc1604be7bf8f8ed86a0e39d968fd6d19ec3bce1d59b9","repoDigests":["localhost/minikube-local-cache-test@sha256:ba4e8fe06898a2b700b5a64d0ec2526f888ac189a513111dccce317a93572066"],"repoTags":["localhost/minikube-local-cache-test:functional-811691"],"size":"3330"},{"id":"3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","repoDigests":["registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62","registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"150779692"},{"id":"3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2","repoDigests":["registry.k8s.io/kube-schedul
er@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266","registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.3"],"size":"63051080"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d","repoDigests":["registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605
b8dfbc81e088f58221796631e107966c","registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.3"],"size":"117609954"},{"id":"76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7","registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.3"],"size":"112198984"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["docker.io/kicbase/echo-server:functional-811691"],"size":"4943877"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443
453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1","repoDigests":["registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80","registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"],"repoTags":["registry.k8s.io/kube-proxy:v1.30.3"],"size":"85953945"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-811691 image ls --format json --alsologtostderr:
I0806 07:22:11.649058   31581 out.go:291] Setting OutFile to fd 1 ...
I0806 07:22:11.649200   31581 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0806 07:22:11.649210   31581 out.go:304] Setting ErrFile to fd 2...
I0806 07:22:11.649218   31581 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0806 07:22:11.649390   31581 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19370-13057/.minikube/bin
I0806 07:22:11.649964   31581 config.go:182] Loaded profile config "functional-811691": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0806 07:22:11.650077   31581 config.go:182] Loaded profile config "functional-811691": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0806 07:22:11.650441   31581 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0806 07:22:11.650494   31581 main.go:141] libmachine: Launching plugin server for driver kvm2
I0806 07:22:11.667320   31581 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38215
I0806 07:22:11.667803   31581 main.go:141] libmachine: () Calling .GetVersion
I0806 07:22:11.668499   31581 main.go:141] libmachine: Using API Version  1
I0806 07:22:11.668526   31581 main.go:141] libmachine: () Calling .SetConfigRaw
I0806 07:22:11.668927   31581 main.go:141] libmachine: () Calling .GetMachineName
I0806 07:22:11.669133   31581 main.go:141] libmachine: (functional-811691) Calling .GetState
I0806 07:22:11.671370   31581 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0806 07:22:11.671448   31581 main.go:141] libmachine: Launching plugin server for driver kvm2
I0806 07:22:11.686699   31581 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43811
I0806 07:22:11.687162   31581 main.go:141] libmachine: () Calling .GetVersion
I0806 07:22:11.687688   31581 main.go:141] libmachine: Using API Version  1
I0806 07:22:11.687708   31581 main.go:141] libmachine: () Calling .SetConfigRaw
I0806 07:22:11.688020   31581 main.go:141] libmachine: () Calling .GetMachineName
I0806 07:22:11.688213   31581 main.go:141] libmachine: (functional-811691) Calling .DriverName
I0806 07:22:11.688410   31581 ssh_runner.go:195] Run: systemctl --version
I0806 07:22:11.688436   31581 main.go:141] libmachine: (functional-811691) Calling .GetSSHHostname
I0806 07:22:11.691317   31581 main.go:141] libmachine: (functional-811691) DBG | domain functional-811691 has defined MAC address 52:54:00:a5:a1:0b in network mk-functional-811691
I0806 07:22:11.691712   31581 main.go:141] libmachine: (functional-811691) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:a1:0b", ip: ""} in network mk-functional-811691: {Iface:virbr1 ExpiryTime:2024-08-06 08:18:11 +0000 UTC Type:0 Mac:52:54:00:a5:a1:0b Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:functional-811691 Clientid:01:52:54:00:a5:a1:0b}
I0806 07:22:11.691740   31581 main.go:141] libmachine: (functional-811691) DBG | domain functional-811691 has defined IP address 192.168.39.203 and MAC address 52:54:00:a5:a1:0b in network mk-functional-811691
I0806 07:22:11.691862   31581 main.go:141] libmachine: (functional-811691) Calling .GetSSHPort
I0806 07:22:11.692049   31581 main.go:141] libmachine: (functional-811691) Calling .GetSSHKeyPath
I0806 07:22:11.692228   31581 main.go:141] libmachine: (functional-811691) Calling .GetSSHUsername
I0806 07:22:11.692402   31581 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/functional-811691/id_rsa Username:docker}
I0806 07:22:11.798228   31581 ssh_runner.go:195] Run: sudo crictl images --output json
I0806 07:22:11.873107   31581 main.go:141] libmachine: Making call to close driver server
I0806 07:22:11.873130   31581 main.go:141] libmachine: (functional-811691) Calling .Close
I0806 07:22:11.873444   31581 main.go:141] libmachine: Successfully made call to close driver server
I0806 07:22:11.873464   31581 main.go:141] libmachine: Making call to close connection to plugin binary
I0806 07:22:11.873480   31581 main.go:141] libmachine: Making call to close driver server
I0806 07:22:11.873490   31581 main.go:141] libmachine: (functional-811691) Calling .Close
I0806 07:22:11.873689   31581 main.go:141] libmachine: Successfully made call to close driver server
I0806 07:22:11.873702   31581 main.go:141] libmachine: Making call to close connection to plugin binary
I0806 07:22:11.873786   31581 main.go:141] libmachine: (functional-811691) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-811691 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-811691 image ls --format yaml --alsologtostderr:
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899
repoDigests:
- registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62
- registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "150779692"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- docker.io/kicbase/echo-server:functional-811691
size: "4943877"
- id: 5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f
repoDigests:
- docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115
- docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493
repoTags:
- docker.io/kindest/kindnetd:v20240715-585640e9
size: "87165492"
- id: a72860cb95fd59e9c696c66441c64f18e66915fa26b249911e83c3854477ed9a
repoDigests:
- docker.io/library/nginx@sha256:6af79ae5de407283dcea8b00d5c37ace95441fd58a8b1d2aa1ed93f5511bb18c
- docker.io/library/nginx@sha256:baa881b012a49e3c2cd6ab9d80f9fcd2962a98af8ede947d0ef930a427b28afc
repoTags:
- docker.io/library/nginx:latest
size: "191750286"
- id: 1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c
- registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.3
size: "117609954"
- id: 55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1
repoDigests:
- registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80
- registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65
repoTags:
- registry.k8s.io/kube-proxy:v1.30.3
size: "85953945"
- id: 3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266
- registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.3
size: "63051080"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: d43d575c3b7c2b14d76fc1604be7bf8f8ed86a0e39d968fd6d19ec3bce1d59b9
repoDigests:
- localhost/minikube-local-cache-test@sha256:ba4e8fe06898a2b700b5a64d0ec2526f888ac189a513111dccce317a93572066
repoTags:
- localhost/minikube-local-cache-test:functional-811691
size: "3330"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
- registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "61245718"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7
- registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.3
size: "112198984"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
- registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10
repoTags:
- registry.k8s.io/pause:3.9
size: "750414"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-811691 image ls --format yaml --alsologtostderr:
I0806 07:22:10.159770   31286 out.go:291] Setting OutFile to fd 1 ...
I0806 07:22:10.159902   31286 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0806 07:22:10.159913   31286 out.go:304] Setting ErrFile to fd 2...
I0806 07:22:10.159919   31286 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0806 07:22:10.160105   31286 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19370-13057/.minikube/bin
I0806 07:22:10.160685   31286 config.go:182] Loaded profile config "functional-811691": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0806 07:22:10.160800   31286 config.go:182] Loaded profile config "functional-811691": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0806 07:22:10.161195   31286 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0806 07:22:10.161251   31286 main.go:141] libmachine: Launching plugin server for driver kvm2
I0806 07:22:10.176083   31286 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40575
I0806 07:22:10.176605   31286 main.go:141] libmachine: () Calling .GetVersion
I0806 07:22:10.177190   31286 main.go:141] libmachine: Using API Version  1
I0806 07:22:10.177222   31286 main.go:141] libmachine: () Calling .SetConfigRaw
I0806 07:22:10.177537   31286 main.go:141] libmachine: () Calling .GetMachineName
I0806 07:22:10.177709   31286 main.go:141] libmachine: (functional-811691) Calling .GetState
I0806 07:22:10.179718   31286 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0806 07:22:10.179763   31286 main.go:141] libmachine: Launching plugin server for driver kvm2
I0806 07:22:10.194602   31286 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44659
I0806 07:22:10.194991   31286 main.go:141] libmachine: () Calling .GetVersion
I0806 07:22:10.195463   31286 main.go:141] libmachine: Using API Version  1
I0806 07:22:10.195486   31286 main.go:141] libmachine: () Calling .SetConfigRaw
I0806 07:22:10.195805   31286 main.go:141] libmachine: () Calling .GetMachineName
I0806 07:22:10.195961   31286 main.go:141] libmachine: (functional-811691) Calling .DriverName
I0806 07:22:10.196154   31286 ssh_runner.go:195] Run: systemctl --version
I0806 07:22:10.196181   31286 main.go:141] libmachine: (functional-811691) Calling .GetSSHHostname
I0806 07:22:10.198878   31286 main.go:141] libmachine: (functional-811691) DBG | domain functional-811691 has defined MAC address 52:54:00:a5:a1:0b in network mk-functional-811691
I0806 07:22:10.199275   31286 main.go:141] libmachine: (functional-811691) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:a1:0b", ip: ""} in network mk-functional-811691: {Iface:virbr1 ExpiryTime:2024-08-06 08:18:11 +0000 UTC Type:0 Mac:52:54:00:a5:a1:0b Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:functional-811691 Clientid:01:52:54:00:a5:a1:0b}
I0806 07:22:10.199302   31286 main.go:141] libmachine: (functional-811691) DBG | domain functional-811691 has defined IP address 192.168.39.203 and MAC address 52:54:00:a5:a1:0b in network mk-functional-811691
I0806 07:22:10.199440   31286 main.go:141] libmachine: (functional-811691) Calling .GetSSHPort
I0806 07:22:10.199609   31286 main.go:141] libmachine: (functional-811691) Calling .GetSSHKeyPath
I0806 07:22:10.199761   31286 main.go:141] libmachine: (functional-811691) Calling .GetSSHUsername
I0806 07:22:10.199873   31286 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/functional-811691/id_rsa Username:docker}
I0806 07:22:10.291141   31286 ssh_runner.go:195] Run: sudo crictl images --output json
I0806 07:22:10.355704   31286 main.go:141] libmachine: Making call to close driver server
I0806 07:22:10.355716   31286 main.go:141] libmachine: (functional-811691) Calling .Close
I0806 07:22:10.355945   31286 main.go:141] libmachine: Successfully made call to close driver server
I0806 07:22:10.355961   31286 main.go:141] libmachine: Making call to close connection to plugin binary
I0806 07:22:10.355970   31286 main.go:141] libmachine: (functional-811691) DBG | Closing plugin on server side
I0806 07:22:10.355981   31286 main.go:141] libmachine: Making call to close driver server
I0806 07:22:10.355989   31286 main.go:141] libmachine: (functional-811691) Calling .Close
I0806 07:22:10.356250   31286 main.go:141] libmachine: (functional-811691) DBG | Closing plugin on server side
I0806 07:22:10.356235   31286 main.go:141] libmachine: Successfully made call to close driver server
I0806 07:22:10.356289   31286 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (6s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-811691 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-811691 ssh pgrep buildkitd: exit status 1 (252.652894ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-811691 image build -t localhost/my-image:functional-811691 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-811691 image build -t localhost/my-image:functional-811691 testdata/build --alsologtostderr: (5.534941688s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-811691 image build -t localhost/my-image:functional-811691 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 728b864047b
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-811691
--> a0f308227c3
Successfully tagged localhost/my-image:functional-811691
a0f308227c3b99331f0f59ced220537ef5b80bb8532296114a543077a84b584c
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-811691 image build -t localhost/my-image:functional-811691 testdata/build --alsologtostderr:
I0806 07:22:10.653790   31439 out.go:291] Setting OutFile to fd 1 ...
I0806 07:22:10.654169   31439 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0806 07:22:10.654191   31439 out.go:304] Setting ErrFile to fd 2...
I0806 07:22:10.654198   31439 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0806 07:22:10.654696   31439 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19370-13057/.minikube/bin
I0806 07:22:10.655322   31439 config.go:182] Loaded profile config "functional-811691": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0806 07:22:10.655860   31439 config.go:182] Loaded profile config "functional-811691": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0806 07:22:10.656216   31439 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0806 07:22:10.656271   31439 main.go:141] libmachine: Launching plugin server for driver kvm2
I0806 07:22:10.671626   31439 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34103
I0806 07:22:10.672145   31439 main.go:141] libmachine: () Calling .GetVersion
I0806 07:22:10.672704   31439 main.go:141] libmachine: Using API Version  1
I0806 07:22:10.672730   31439 main.go:141] libmachine: () Calling .SetConfigRaw
I0806 07:22:10.673095   31439 main.go:141] libmachine: () Calling .GetMachineName
I0806 07:22:10.673329   31439 main.go:141] libmachine: (functional-811691) Calling .GetState
I0806 07:22:10.675419   31439 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0806 07:22:10.675469   31439 main.go:141] libmachine: Launching plugin server for driver kvm2
I0806 07:22:10.690398   31439 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42189
I0806 07:22:10.690808   31439 main.go:141] libmachine: () Calling .GetVersion
I0806 07:22:10.691269   31439 main.go:141] libmachine: Using API Version  1
I0806 07:22:10.691288   31439 main.go:141] libmachine: () Calling .SetConfigRaw
I0806 07:22:10.691611   31439 main.go:141] libmachine: () Calling .GetMachineName
I0806 07:22:10.691794   31439 main.go:141] libmachine: (functional-811691) Calling .DriverName
I0806 07:22:10.691993   31439 ssh_runner.go:195] Run: systemctl --version
I0806 07:22:10.692021   31439 main.go:141] libmachine: (functional-811691) Calling .GetSSHHostname
I0806 07:22:10.694647   31439 main.go:141] libmachine: (functional-811691) DBG | domain functional-811691 has defined MAC address 52:54:00:a5:a1:0b in network mk-functional-811691
I0806 07:22:10.695101   31439 main.go:141] libmachine: (functional-811691) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:a1:0b", ip: ""} in network mk-functional-811691: {Iface:virbr1 ExpiryTime:2024-08-06 08:18:11 +0000 UTC Type:0 Mac:52:54:00:a5:a1:0b Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:functional-811691 Clientid:01:52:54:00:a5:a1:0b}
I0806 07:22:10.695126   31439 main.go:141] libmachine: (functional-811691) DBG | domain functional-811691 has defined IP address 192.168.39.203 and MAC address 52:54:00:a5:a1:0b in network mk-functional-811691
I0806 07:22:10.695228   31439 main.go:141] libmachine: (functional-811691) Calling .GetSSHPort
I0806 07:22:10.695392   31439 main.go:141] libmachine: (functional-811691) Calling .GetSSHKeyPath
I0806 07:22:10.695519   31439 main.go:141] libmachine: (functional-811691) Calling .GetSSHUsername
I0806 07:22:10.695645   31439 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/functional-811691/id_rsa Username:docker}
I0806 07:22:10.779506   31439 build_images.go:161] Building image from path: /tmp/build.2399843359.tar
I0806 07:22:10.779572   31439 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0806 07:22:10.790847   31439 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2399843359.tar
I0806 07:22:10.795677   31439 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2399843359.tar: stat -c "%s %y" /var/lib/minikube/build/build.2399843359.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2399843359.tar': No such file or directory
I0806 07:22:10.795707   31439 ssh_runner.go:362] scp /tmp/build.2399843359.tar --> /var/lib/minikube/build/build.2399843359.tar (3072 bytes)
I0806 07:22:10.821027   31439 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2399843359
I0806 07:22:10.831026   31439 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2399843359 -xf /var/lib/minikube/build/build.2399843359.tar
I0806 07:22:10.841939   31439 crio.go:315] Building image: /var/lib/minikube/build/build.2399843359
I0806 07:22:10.842016   31439 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-811691 /var/lib/minikube/build/build.2399843359 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0806 07:22:16.121714   31439 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-811691 /var/lib/minikube/build/build.2399843359 --cgroup-manager=cgroupfs: (5.279671897s)
I0806 07:22:16.121778   31439 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2399843359
I0806 07:22:16.132575   31439 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2399843359.tar
I0806 07:22:16.144156   31439 build_images.go:217] Built localhost/my-image:functional-811691 from /tmp/build.2399843359.tar
I0806 07:22:16.144198   31439 build_images.go:133] succeeded building to: functional-811691
I0806 07:22:16.144205   31439 build_images.go:134] failed building to: 
I0806 07:22:16.144230   31439 main.go:141] libmachine: Making call to close driver server
I0806 07:22:16.144243   31439 main.go:141] libmachine: (functional-811691) Calling .Close
I0806 07:22:16.144597   31439 main.go:141] libmachine: Successfully made call to close driver server
I0806 07:22:16.144616   31439 main.go:141] libmachine: Making call to close connection to plugin binary
I0806 07:22:16.144625   31439 main.go:141] libmachine: Making call to close driver server
I0806 07:22:16.144633   31439 main.go:141] libmachine: (functional-811691) Calling .Close
I0806 07:22:16.144690   31439 main.go:141] libmachine: (functional-811691) DBG | Closing plugin on server side
I0806 07:22:16.144888   31439 main.go:141] libmachine: Successfully made call to close driver server
I0806 07:22:16.144905   31439 main.go:141] libmachine: Making call to close connection to plugin binary
I0806 07:22:16.144907   31439 main.go:141] libmachine: (functional-811691) DBG | Closing plugin on server side
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-811691 image ls
2024/08/06 07:22:17 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (6.00s)
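Note: the STEP 1/3 through 3/3 lines above come from a three-instruction build context in testdata/build. Reconstructed from those steps (the real file contents are not shown in the log, so the content.txt placeholder and paths below are an approximation), a local repro would look like:

    # Recreate the build context from the STEP lines and build it the same way
    mkdir -p build && cd build
    touch content.txt
    printf 'FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n' > Dockerfile
    minikube -p functional-811691 image build -t localhost/my-image:functional-811691 .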

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.99s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull docker.io/kicbase/echo-server:1.0
functional_test.go:341: (dbg) Done: docker pull docker.io/kicbase/echo-server:1.0: (1.968086s)
functional_test.go:346: (dbg) Run:  docker tag docker.io/kicbase/echo-server:1.0 docker.io/kicbase/echo-server:functional-811691
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.99s)

                                                
                                    
TestFunctional/parallel/Version/short (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-811691 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

                                                
                                    
TestFunctional/parallel/Version/components (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-811691 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.51s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-811691 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-811691 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-811691 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (45.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-811691 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-811691 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6d85cfcfd8-dsl5l" [c09bf737-99ef-417c-a449-86427582b9e9] Pending
helpers_test.go:344: "hello-node-6d85cfcfd8-dsl5l" [c09bf737-99ef-417c-a449-86427582b9e9] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6d85cfcfd8-dsl5l" [c09bf737-99ef-417c-a449-86427582b9e9] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 45.004888879s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (45.15s)
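
For reference, the flow this subtest exercises can be reproduced by hand against the same profile; the first two commands mirror the log above, and the final wait is only a sketch of one way to watch for readiness (it is not part of the test):

    kubectl --context functional-811691 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
    kubectl --context functional-811691 expose deployment hello-node --type=NodePort --port=8080
    kubectl --context functional-811691 wait --for=condition=ready pod -l app=hello-node --timeout=600s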

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-811691 image load --daemon docker.io/kicbase/echo-server:functional-811691 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-811691 image load --daemon docker.io/kicbase/echo-server:functional-811691 --alsologtostderr: (1.04619449s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-811691 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.25s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.88s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-811691 image load --daemon docker.io/kicbase/echo-server:functional-811691 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-811691 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.88s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.77s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull docker.io/kicbase/echo-server:latest
functional_test.go:239: (dbg) Run:  docker tag docker.io/kicbase/echo-server:latest docker.io/kicbase/echo-server:functional-811691
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-811691 image load --daemon docker.io/kicbase/echo-server:functional-811691 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-811691 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.77s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-811691 image save docker.io/kicbase/echo-server:functional-811691 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.49s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-811691 image rm docker.io/kicbase/echo-server:functional-811691 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-811691 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.44s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.89s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-811691 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-811691 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.89s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi docker.io/kicbase/echo-server:functional-811691
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-811691 image save --daemon docker.io/kicbase/echo-server:functional-811691 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect docker.io/kicbase/echo-server:functional-811691
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.56s)
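
Taken together, the ImageCommands subtests above cover a full save/load round trip. A condensed sketch of that cycle, using only flags that appear in this log (the tar path is the one this run used and is specific to this workspace):

    out/minikube-linux-amd64 -p functional-811691 image save docker.io/kicbase/echo-server:functional-811691 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar
    out/minikube-linux-amd64 -p functional-811691 image rm docker.io/kicbase/echo-server:functional-811691
    out/minikube-linux-amd64 -p functional-811691 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar
    out/minikube-linux-amd64 -p functional-811691 image ls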

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.33s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1311: Took "220.219333ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1325: Took "45.603284ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.27s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1362: Took "269.90115ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1375: Took "51.559107ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.32s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (8.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-811691 /tmp/TestFunctionalparallelMountCmdany-port2375760563/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1722928920146229145" to /tmp/TestFunctionalparallelMountCmdany-port2375760563/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1722928920146229145" to /tmp/TestFunctionalparallelMountCmdany-port2375760563/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1722928920146229145" to /tmp/TestFunctionalparallelMountCmdany-port2375760563/001/test-1722928920146229145
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-811691 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-811691 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (193.756692ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-811691 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-811691 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Aug  6 07:22 created-by-test
-rw-r--r-- 1 docker docker 24 Aug  6 07:22 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Aug  6 07:22 test-1722928920146229145
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-811691 ssh cat /mount-9p/test-1722928920146229145
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-811691 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [4c8086e4-5378-47ee-af50-28402a5e88ef] Pending
helpers_test.go:344: "busybox-mount" [4c8086e4-5378-47ee-af50-28402a5e88ef] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [4c8086e4-5378-47ee-af50-28402a5e88ef] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [4c8086e4-5378-47ee-af50-28402a5e88ef] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.003913524s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-811691 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-811691 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-811691 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-811691 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-811691 /tmp/TestFunctionalparallelMountCmdany-port2375760563/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.46s)
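
The any-port subtest is essentially a 9p mount round trip. A condensed sketch of the same steps, with the host directory swapped for a placeholder (/tmp/hostdir below is illustrative, not the path from this run):

    out/minikube-linux-amd64 mount -p functional-811691 /tmp/hostdir:/mount-9p --alsologtostderr -v=1 &
    out/minikube-linux-amd64 -p functional-811691 ssh "findmnt -T /mount-9p | grep 9p"
    out/minikube-linux-amd64 -p functional-811691 ssh -- ls -la /mount-9p
    out/minikube-linux-amd64 -p functional-811691 ssh "sudo umount -f /mount-9p"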

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-amd64 -p functional-811691 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.43s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-amd64 -p functional-811691 service list -o json
functional_test.go:1490: Took "474.419315ms" to run "out/minikube-linux-amd64 -p functional-811691 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.47s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-amd64 -p functional-811691 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.39.203:32481
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.47s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-amd64 -p functional-811691 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.40s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-811691 /tmp/TestFunctionalparallelMountCmdspecific-port1784362682/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-811691 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-811691 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (217.101871ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-811691 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-811691 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-811691 /tmp/TestFunctionalparallelMountCmdspecific-port1784362682/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-811691 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-811691 ssh "sudo umount -f /mount-9p": exit status 1 (194.116774ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-811691 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-811691 /tmp/TestFunctionalparallelMountCmdspecific-port1784362682/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.60s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-amd64 -p functional-811691 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.39.203:32481
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.32s)
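
The HTTPS, Format, and URL subtests differ only in output shape; the three forms, reusing the hello-node service created earlier in this run, are:

    out/minikube-linux-amd64 -p functional-811691 service hello-node --url                              # plain HTTP URL
    out/minikube-linux-amd64 -p functional-811691 service --namespace=default --https --url hello-node  # HTTPS URL
    out/minikube-linux-amd64 -p functional-811691 service hello-node --url --format={{.IP}}             # node IP only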

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-811691 /tmp/TestFunctionalparallelMountCmdVerifyCleanup102088402/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-811691 /tmp/TestFunctionalparallelMountCmdVerifyCleanup102088402/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-811691 /tmp/TestFunctionalparallelMountCmdVerifyCleanup102088402/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-811691 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-811691 ssh "findmnt -T" /mount1: exit status 1 (257.086311ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-811691 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-811691 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-811691 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-811691 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-811691 /tmp/TestFunctionalparallelMountCmdVerifyCleanup102088402/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-811691 /tmp/TestFunctionalparallelMountCmdVerifyCleanup102088402/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-811691 /tmp/TestFunctionalparallelMountCmdVerifyCleanup102088402/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.40s)
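
VerifyCleanup leans on the --kill flag to tear down every outstanding mount process for the profile in one call, which appears to be why the three stop operations afterwards report no surviving parent process:

    out/minikube-linux-amd64 mount -p functional-811691 --kill=true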

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:1.0
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:functional-811691
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-811691
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-811691
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (277.49s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-118173 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0806 07:24:13.203568   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/addons-505457/client.crt: no such file or directory
E0806 07:24:40.890724   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/addons-505457/client.crt: no such file or directory
E0806 07:26:21.824644   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/functional-811691/client.crt: no such file or directory
E0806 07:26:21.829943   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/functional-811691/client.crt: no such file or directory
E0806 07:26:21.840254   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/functional-811691/client.crt: no such file or directory
E0806 07:26:21.860562   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/functional-811691/client.crt: no such file or directory
E0806 07:26:21.900916   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/functional-811691/client.crt: no such file or directory
E0806 07:26:21.981262   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/functional-811691/client.crt: no such file or directory
E0806 07:26:22.141540   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/functional-811691/client.crt: no such file or directory
E0806 07:26:22.462149   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/functional-811691/client.crt: no such file or directory
E0806 07:26:23.103148   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/functional-811691/client.crt: no such file or directory
E0806 07:26:24.383712   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/functional-811691/client.crt: no such file or directory
E0806 07:26:26.944823   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/functional-811691/client.crt: no such file or directory
E0806 07:26:32.065732   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/functional-811691/client.crt: no such file or directory
E0806 07:26:42.306547   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/functional-811691/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-118173 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (4m36.833071202s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-118173 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (277.49s)

                                                
                                    
TestMultiControlPlane/serial/DeployApp (8.09s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-118173 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-118173 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-118173 -- rollout status deployment/busybox: (5.907837059s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-118173 -- get pods -o jsonpath='{.items[*].status.podIP}'
E0806 07:27:02.787034   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/functional-811691/client.crt: no such file or directory
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-118173 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-118173 -- exec busybox-fc5497c4f-852mp -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-118173 -- exec busybox-fc5497c4f-j9d92 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-118173 -- exec busybox-fc5497c4f-rml5d -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-118173 -- exec busybox-fc5497c4f-852mp -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-118173 -- exec busybox-fc5497c4f-j9d92 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-118173 -- exec busybox-fc5497c4f-rml5d -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-118173 -- exec busybox-fc5497c4f-852mp -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-118173 -- exec busybox-fc5497c4f-j9d92 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-118173 -- exec busybox-fc5497c4f-rml5d -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (8.09s)
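
DeployApp drives its DNS checks through the busybox pods created by the manifest; the pattern for a single pod, as logged above, is (the pod name is whichever busybox replica kubectl reports):

    out/minikube-linux-amd64 kubectl -p ha-118173 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
    out/minikube-linux-amd64 kubectl -p ha-118173 -- rollout status deployment/busybox
    out/minikube-linux-amd64 kubectl -p ha-118173 -- exec <busybox-pod> -- nslookup kubernetes.default.svc.cluster.local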

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.23s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-118173 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-118173 -- exec busybox-fc5497c4f-852mp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-118173 -- exec busybox-fc5497c4f-852mp -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-118173 -- exec busybox-fc5497c4f-j9d92 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-118173 -- exec busybox-fc5497c4f-j9d92 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-118173 -- exec busybox-fc5497c4f-rml5d -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-118173 -- exec busybox-fc5497c4f-rml5d -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.23s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (58.79s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-118173 -v=7 --alsologtostderr
E0806 07:27:43.747879   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/functional-811691/client.crt: no such file or directory
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-118173 -v=7 --alsologtostderr: (57.971552936s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-118173 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (58.79s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-118173 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.51s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.51s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (12.54s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-118173 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-118173 cp testdata/cp-test.txt ha-118173:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-118173 ssh -n ha-118173 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-118173 cp ha-118173:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4204706009/001/cp-test_ha-118173.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-118173 ssh -n ha-118173 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-118173 cp ha-118173:/home/docker/cp-test.txt ha-118173-m02:/home/docker/cp-test_ha-118173_ha-118173-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-118173 ssh -n ha-118173 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-118173 ssh -n ha-118173-m02 "sudo cat /home/docker/cp-test_ha-118173_ha-118173-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-118173 cp ha-118173:/home/docker/cp-test.txt ha-118173-m03:/home/docker/cp-test_ha-118173_ha-118173-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-118173 ssh -n ha-118173 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-118173 ssh -n ha-118173-m03 "sudo cat /home/docker/cp-test_ha-118173_ha-118173-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-118173 cp ha-118173:/home/docker/cp-test.txt ha-118173-m04:/home/docker/cp-test_ha-118173_ha-118173-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-118173 ssh -n ha-118173 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-118173 ssh -n ha-118173-m04 "sudo cat /home/docker/cp-test_ha-118173_ha-118173-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-118173 cp testdata/cp-test.txt ha-118173-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-118173 ssh -n ha-118173-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-118173 cp ha-118173-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4204706009/001/cp-test_ha-118173-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-118173 ssh -n ha-118173-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-118173 cp ha-118173-m02:/home/docker/cp-test.txt ha-118173:/home/docker/cp-test_ha-118173-m02_ha-118173.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-118173 ssh -n ha-118173-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-118173 ssh -n ha-118173 "sudo cat /home/docker/cp-test_ha-118173-m02_ha-118173.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-118173 cp ha-118173-m02:/home/docker/cp-test.txt ha-118173-m03:/home/docker/cp-test_ha-118173-m02_ha-118173-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-118173 ssh -n ha-118173-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-118173 ssh -n ha-118173-m03 "sudo cat /home/docker/cp-test_ha-118173-m02_ha-118173-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-118173 cp ha-118173-m02:/home/docker/cp-test.txt ha-118173-m04:/home/docker/cp-test_ha-118173-m02_ha-118173-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-118173 ssh -n ha-118173-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-118173 ssh -n ha-118173-m04 "sudo cat /home/docker/cp-test_ha-118173-m02_ha-118173-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-118173 cp testdata/cp-test.txt ha-118173-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-118173 ssh -n ha-118173-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-118173 cp ha-118173-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4204706009/001/cp-test_ha-118173-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-118173 ssh -n ha-118173-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-118173 cp ha-118173-m03:/home/docker/cp-test.txt ha-118173:/home/docker/cp-test_ha-118173-m03_ha-118173.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-118173 ssh -n ha-118173-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-118173 ssh -n ha-118173 "sudo cat /home/docker/cp-test_ha-118173-m03_ha-118173.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-118173 cp ha-118173-m03:/home/docker/cp-test.txt ha-118173-m02:/home/docker/cp-test_ha-118173-m03_ha-118173-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-118173 ssh -n ha-118173-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-118173 ssh -n ha-118173-m02 "sudo cat /home/docker/cp-test_ha-118173-m03_ha-118173-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-118173 cp ha-118173-m03:/home/docker/cp-test.txt ha-118173-m04:/home/docker/cp-test_ha-118173-m03_ha-118173-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-118173 ssh -n ha-118173-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-118173 ssh -n ha-118173-m04 "sudo cat /home/docker/cp-test_ha-118173-m03_ha-118173-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-118173 cp testdata/cp-test.txt ha-118173-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-118173 ssh -n ha-118173-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-118173 cp ha-118173-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4204706009/001/cp-test_ha-118173-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-118173 ssh -n ha-118173-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-118173 cp ha-118173-m04:/home/docker/cp-test.txt ha-118173:/home/docker/cp-test_ha-118173-m04_ha-118173.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-118173 ssh -n ha-118173-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-118173 ssh -n ha-118173 "sudo cat /home/docker/cp-test_ha-118173-m04_ha-118173.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-118173 cp ha-118173-m04:/home/docker/cp-test.txt ha-118173-m02:/home/docker/cp-test_ha-118173-m04_ha-118173-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-118173 ssh -n ha-118173-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-118173 ssh -n ha-118173-m02 "sudo cat /home/docker/cp-test_ha-118173-m04_ha-118173-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-118173 cp ha-118173-m04:/home/docker/cp-test.txt ha-118173-m03:/home/docker/cp-test_ha-118173-m04_ha-118173-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-118173 ssh -n ha-118173-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-118173 ssh -n ha-118173-m03 "sudo cat /home/docker/cp-test_ha-118173-m04_ha-118173-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (12.54s)
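
CopyFile repeats the same two commands over every source/destination pairing of the four nodes; one hop, as logged above, looks like:

    out/minikube-linux-amd64 -p ha-118173 cp testdata/cp-test.txt ha-118173-m02:/home/docker/cp-test.txt
    out/minikube-linux-amd64 -p ha-118173 ssh -n ha-118173-m02 "sudo cat /home/docker/cp-test.txt"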

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.47s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.46531306s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.47s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.39s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.39s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (17.22s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-118173 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-118173 node delete m03 -v=7 --alsologtostderr: (16.486721645s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-118173 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (17.22s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.36s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.36s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (277.52s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-118173 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0806 07:41:21.825959   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/functional-811691/client.crt: no such file or directory
E0806 07:42:44.869999   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/functional-811691/client.crt: no such file or directory
E0806 07:44:13.203003   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/addons-505457/client.crt: no such file or directory
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-118173 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (4m36.77679266s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-118173 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (277.52s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.38s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.38s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (79.68s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-118173 --control-plane -v=7 --alsologtostderr
E0806 07:46:21.824940   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/functional-811691/client.crt: no such file or directory
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-118173 --control-plane -v=7 --alsologtostderr: (1m18.853262483s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-118173 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (79.68s)
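
Adding a node is the same command for workers and control-plane members, differing only in the --control-plane flag; both invocations appear in this run:

    out/minikube-linux-amd64 node add -p ha-118173 -v=7 --alsologtostderr                  # worker (AddWorkerNode)
    out/minikube-linux-amd64 node add -p ha-118173 --control-plane -v=7 --alsologtostderr  # control plane (this subtest)
    out/minikube-linux-amd64 -p ha-118173 status -v=7 --alsologtostderr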

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.52s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.52s)

                                                
                                    
TestJSONOutput/start/Command (58.09s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-783198 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-783198 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (58.08496108s)
--- PASS: TestJSONOutput/start/Command (58.09s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.76s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-783198 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.76s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.64s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-783198 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.64s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (9.35s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-783198 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-783198 --output=json --user=testUser: (9.350445313s)
--- PASS: TestJSONOutput/stop/Command (9.35s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.19s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-568149 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-568149 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (61.104879ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"2833e53f-f356-478f-9732-d0d098700a2f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-568149] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"2aed7de3-1ac7-41fa-b257-657317d69b76","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19370"}}
	{"specversion":"1.0","id":"8ccf3967-ad19-4a83-ba76-0450ca7af7b2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"fbf85341-a3ab-4a7c-9bdd-50ed3475132e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19370-13057/kubeconfig"}}
	{"specversion":"1.0","id":"2903b77f-b59d-4275-8388-3831d05a944c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19370-13057/.minikube"}}
	{"specversion":"1.0","id":"13d26da0-be71-499c-a103-428570bad119","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"15b1a6aa-19ca-4db5-b062-020776202229","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"e57392e4-e1b6-449b-9194-fd9f45b9727b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-568149" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-568149
--- PASS: TestErrorJSONOutput (0.19s)
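
With --output=json, minikube emits one CloudEvents-style object per line (specversion, type, data), as the stdout block above shows. A sketch of pulling just the error message out of such a stream, assuming jq is available on the host (jq is not used by the test itself):

    out/minikube-linux-amd64 start -p json-output-error-568149 --memory=2200 --output=json --wait=true --driver=fail \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message'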

                                                
                                    
TestMainNoArgs (0.04s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

                                                
                                    
x
+
TestMinikubeProfile (89.71s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-351454 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-351454 --driver=kvm2  --container-runtime=crio: (42.357085067s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-354764 --driver=kvm2  --container-runtime=crio
E0806 07:49:13.203850   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/addons-505457/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-354764 --driver=kvm2  --container-runtime=crio: (44.717203415s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-351454
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-354764
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-354764" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-354764
helpers_test.go:175: Cleaning up "first-351454" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-351454
--- PASS: TestMinikubeProfile (89.71s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (29.95s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-890742 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-890742 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (28.9503768s)
--- PASS: TestMountStart/serial/StartWithMountFirst (29.95s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-890742 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-890742 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.37s)
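The two commands above are the verification pattern reused throughout TestMountStart: ssh into the guest, list the mounted host directory, and confirm a 9p filesystem is present. A minimal Go sketch of the same check, assuming `minikube` is on PATH and reusing the profile name from this run:

package main

import (
	"bytes"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	const profile = "mount-start-1-890742" // profile name taken from this run
	out, err := exec.Command("minikube", "-p", profile, "ssh", "--", "mount").CombinedOutput()
	if err != nil {
		log.Fatalf("minikube ssh: %v\n%s", err, out)
	}
	// The test mounts the host directory at /minikube-host over 9p.
	if bytes.Contains(out, []byte("9p")) && bytes.Contains(out, []byte("/minikube-host")) {
		fmt.Println("9p mount of /minikube-host is present")
	} else {
		log.Fatal("no 9p mount found in guest")
	}
}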

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (28.48s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-906562 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-906562 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (27.480369636s)
--- PASS: TestMountStart/serial/StartWithMountSecond (28.48s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-906562 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-906562 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.37s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (0.72s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-890742 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.72s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-906562 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-906562 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.37s)

                                                
                                    
x
+
TestMountStart/serial/Stop (1.28s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-906562
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-906562: (1.276256025s)
--- PASS: TestMountStart/serial/Stop (1.28s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (23.57s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-906562
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-906562: (22.571175996s)
--- PASS: TestMountStart/serial/RestartStopped (23.57s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (0.36s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-906562 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-906562 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.36s)

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (121.79s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-551471 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0806 07:51:21.824535   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/functional-811691/client.crt: no such file or directory
E0806 07:52:16.252635   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/addons-505457/client.crt: no such file or directory
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-551471 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (2m1.388369342s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-551471 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (121.79s)

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (5.7s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-551471 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-551471 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-551471 -- rollout status deployment/busybox: (4.266888839s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-551471 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-551471 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-551471 -- exec busybox-fc5497c4f-kjt5w -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-551471 -- exec busybox-fc5497c4f-rkhtz -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-551471 -- exec busybox-fc5497c4f-kjt5w -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-551471 -- exec busybox-fc5497c4f-rkhtz -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-551471 -- exec busybox-fc5497c4f-kjt5w -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-551471 -- exec busybox-fc5497c4f-rkhtz -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.70s)
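The DNS checks above exec `nslookup` inside every busybox replica so resolution is exercised from both nodes. A hedged sketch of the same loop using kubectl directly, assuming the context name from this run and selecting the replicas by the "busybox-" name prefix seen in the log:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	const ctx = "multinode-551471" // context name taken from this run
	// List pod names the same way the test does, then keep the busybox replicas.
	out, err := exec.Command("kubectl", "--context", ctx, "get", "pods",
		"-o", "jsonpath={.items[*].metadata.name}").Output()
	if err != nil {
		log.Fatal(err)
	}
	hosts := []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"}
	for _, pod := range strings.Fields(string(out)) {
		if !strings.HasPrefix(pod, "busybox-") {
			continue
		}
		for _, host := range hosts {
			res, err := exec.Command("kubectl", "--context", ctx, "exec", pod, "--",
				"nslookup", host).CombinedOutput()
			if err != nil {
				log.Fatalf("%s could not resolve %s: %v\n%s", pod, host, err, res)
			}
			fmt.Printf("%s resolved %s\n", pod, host)
		}
	}
}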

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (0.8s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-551471 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-551471 -- exec busybox-fc5497c4f-kjt5w -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-551471 -- exec busybox-fc5497c4f-kjt5w -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-551471 -- exec busybox-fc5497c4f-rkhtz -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-551471 -- exec busybox-fc5497c4f-rkhtz -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.80s)
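The ping check above first recovers the host's gateway address from inside a pod by taking line 5, field 3 of `nslookup host.minikube.internal`, then pings it. A small sketch of that two-step flow with the same shell pipeline, assuming the context and pod names captured in this run:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	const ctx = "multinode-551471"
	const pod = "busybox-fc5497c4f-kjt5w" // pod name taken from this run
	// Same pipeline the test uses to extract the host.minikube.internal address.
	ipOut, err := exec.Command("kubectl", "--context", ctx, "exec", pod, "--",
		"sh", "-c", "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3").Output()
	if err != nil {
		log.Fatal(err)
	}
	ip := strings.TrimSpace(string(ipOut))
	if res, err := exec.Command("kubectl", "--context", ctx, "exec", pod, "--",
		"sh", "-c", "ping -c 1 "+ip).CombinedOutput(); err != nil {
		log.Fatalf("ping %s failed: %v\n%s", ip, err, res)
	}
	fmt.Printf("%s reached host.minikube.internal at %s\n", pod, ip)
}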

                                                
                                    
x
+
TestMultiNode/serial/AddNode (53.53s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-551471 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-551471 -v 3 --alsologtostderr: (52.973634389s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-551471 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (53.53s)

                                                
                                    
x
+
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-551471 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
x
+
TestMultiNode/serial/ProfileList (0.21s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.21s)

                                                
                                    
x
+
TestMultiNode/serial/CopyFile (6.9s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-551471 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-551471 cp testdata/cp-test.txt multinode-551471:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-551471 ssh -n multinode-551471 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-551471 cp multinode-551471:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2280163446/001/cp-test_multinode-551471.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-551471 ssh -n multinode-551471 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-551471 cp multinode-551471:/home/docker/cp-test.txt multinode-551471-m02:/home/docker/cp-test_multinode-551471_multinode-551471-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-551471 ssh -n multinode-551471 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-551471 ssh -n multinode-551471-m02 "sudo cat /home/docker/cp-test_multinode-551471_multinode-551471-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-551471 cp multinode-551471:/home/docker/cp-test.txt multinode-551471-m03:/home/docker/cp-test_multinode-551471_multinode-551471-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-551471 ssh -n multinode-551471 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-551471 ssh -n multinode-551471-m03 "sudo cat /home/docker/cp-test_multinode-551471_multinode-551471-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-551471 cp testdata/cp-test.txt multinode-551471-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-551471 ssh -n multinode-551471-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-551471 cp multinode-551471-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2280163446/001/cp-test_multinode-551471-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-551471 ssh -n multinode-551471-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-551471 cp multinode-551471-m02:/home/docker/cp-test.txt multinode-551471:/home/docker/cp-test_multinode-551471-m02_multinode-551471.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-551471 ssh -n multinode-551471-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-551471 ssh -n multinode-551471 "sudo cat /home/docker/cp-test_multinode-551471-m02_multinode-551471.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-551471 cp multinode-551471-m02:/home/docker/cp-test.txt multinode-551471-m03:/home/docker/cp-test_multinode-551471-m02_multinode-551471-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-551471 ssh -n multinode-551471-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-551471 ssh -n multinode-551471-m03 "sudo cat /home/docker/cp-test_multinode-551471-m02_multinode-551471-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-551471 cp testdata/cp-test.txt multinode-551471-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-551471 ssh -n multinode-551471-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-551471 cp multinode-551471-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2280163446/001/cp-test_multinode-551471-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-551471 ssh -n multinode-551471-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-551471 cp multinode-551471-m03:/home/docker/cp-test.txt multinode-551471:/home/docker/cp-test_multinode-551471-m03_multinode-551471.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-551471 ssh -n multinode-551471-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-551471 ssh -n multinode-551471 "sudo cat /home/docker/cp-test_multinode-551471-m03_multinode-551471.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-551471 cp multinode-551471-m03:/home/docker/cp-test.txt multinode-551471-m02:/home/docker/cp-test_multinode-551471-m03_multinode-551471-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-551471 ssh -n multinode-551471-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-551471 ssh -n multinode-551471-m02 "sudo cat /home/docker/cp-test_multinode-551471-m03_multinode-551471-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (6.90s)
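The CopyFile matrix above repeats one pattern for every source/destination pair: `minikube cp`, then `ssh -n <node> "sudo cat ..."` to confirm the bytes arrived intact. A condensed sketch of a single round trip, assuming the profile and node names from this run and that testdata/cp-test.txt exists locally:

package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
)

func run(args ...string) []byte {
	out, err := exec.Command("minikube", args...).CombinedOutput()
	if err != nil {
		log.Fatalf("minikube %v: %v\n%s", args, err, out)
	}
	return out
}

func main() {
	want, err := os.ReadFile("testdata/cp-test.txt")
	if err != nil {
		log.Fatal(err)
	}
	// Copy into the second node, then read it back over ssh, as the test does.
	run("-p", "multinode-551471", "cp", "testdata/cp-test.txt",
		"multinode-551471-m02:/home/docker/cp-test.txt")
	got := run("-p", "multinode-551471", "ssh", "-n", "multinode-551471-m02",
		"sudo cat /home/docker/cp-test.txt")
	if string(got) != string(want) {
		log.Fatalf("round trip mismatch:\n got: %q\nwant: %q", got, want)
	}
	fmt.Println("cp round trip verified on multinode-551471-m02")
}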

                                                
                                    
x
+
TestMultiNode/serial/StopNode (2.36s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-551471 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-551471 node stop m03: (1.531695718s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-551471 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-551471 status: exit status 7 (412.749598ms)

                                                
                                                
-- stdout --
	multinode-551471
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-551471-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-551471-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-551471 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-551471 status --alsologtostderr: exit status 7 (413.284503ms)

                                                
                                                
-- stdout --
	multinode-551471
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-551471-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-551471-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0806 07:53:59.563386   48849 out.go:291] Setting OutFile to fd 1 ...
	I0806 07:53:59.563496   48849 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 07:53:59.563505   48849 out.go:304] Setting ErrFile to fd 2...
	I0806 07:53:59.563509   48849 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 07:53:59.563678   48849 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19370-13057/.minikube/bin
	I0806 07:53:59.563841   48849 out.go:298] Setting JSON to false
	I0806 07:53:59.563868   48849 mustload.go:65] Loading cluster: multinode-551471
	I0806 07:53:59.563942   48849 notify.go:220] Checking for updates...
	I0806 07:53:59.564415   48849 config.go:182] Loaded profile config "multinode-551471": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0806 07:53:59.564435   48849 status.go:255] checking status of multinode-551471 ...
	I0806 07:53:59.564911   48849 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:53:59.564962   48849 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:53:59.584830   48849 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40659
	I0806 07:53:59.585219   48849 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:53:59.585825   48849 main.go:141] libmachine: Using API Version  1
	I0806 07:53:59.585842   48849 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:53:59.586143   48849 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:53:59.586332   48849 main.go:141] libmachine: (multinode-551471) Calling .GetState
	I0806 07:53:59.587927   48849 status.go:330] multinode-551471 host status = "Running" (err=<nil>)
	I0806 07:53:59.587945   48849 host.go:66] Checking if "multinode-551471" exists ...
	I0806 07:53:59.588360   48849 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:53:59.588431   48849 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:53:59.603209   48849 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34467
	I0806 07:53:59.603621   48849 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:53:59.604071   48849 main.go:141] libmachine: Using API Version  1
	I0806 07:53:59.604092   48849 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:53:59.604447   48849 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:53:59.604637   48849 main.go:141] libmachine: (multinode-551471) Calling .GetIP
	I0806 07:53:59.607173   48849 main.go:141] libmachine: (multinode-551471) DBG | domain multinode-551471 has defined MAC address 52:54:00:e9:96:6b in network mk-multinode-551471
	I0806 07:53:59.607600   48849 main.go:141] libmachine: (multinode-551471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:96:6b", ip: ""} in network mk-multinode-551471: {Iface:virbr1 ExpiryTime:2024-08-06 08:51:03 +0000 UTC Type:0 Mac:52:54:00:e9:96:6b Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:multinode-551471 Clientid:01:52:54:00:e9:96:6b}
	I0806 07:53:59.607637   48849 main.go:141] libmachine: (multinode-551471) DBG | domain multinode-551471 has defined IP address 192.168.39.186 and MAC address 52:54:00:e9:96:6b in network mk-multinode-551471
	I0806 07:53:59.607722   48849 host.go:66] Checking if "multinode-551471" exists ...
	I0806 07:53:59.608155   48849 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:53:59.608194   48849 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:53:59.622639   48849 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37361
	I0806 07:53:59.623100   48849 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:53:59.623722   48849 main.go:141] libmachine: Using API Version  1
	I0806 07:53:59.623752   48849 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:53:59.624070   48849 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:53:59.624261   48849 main.go:141] libmachine: (multinode-551471) Calling .DriverName
	I0806 07:53:59.624466   48849 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0806 07:53:59.624487   48849 main.go:141] libmachine: (multinode-551471) Calling .GetSSHHostname
	I0806 07:53:59.626922   48849 main.go:141] libmachine: (multinode-551471) DBG | domain multinode-551471 has defined MAC address 52:54:00:e9:96:6b in network mk-multinode-551471
	I0806 07:53:59.627313   48849 main.go:141] libmachine: (multinode-551471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:96:6b", ip: ""} in network mk-multinode-551471: {Iface:virbr1 ExpiryTime:2024-08-06 08:51:03 +0000 UTC Type:0 Mac:52:54:00:e9:96:6b Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:multinode-551471 Clientid:01:52:54:00:e9:96:6b}
	I0806 07:53:59.627344   48849 main.go:141] libmachine: (multinode-551471) DBG | domain multinode-551471 has defined IP address 192.168.39.186 and MAC address 52:54:00:e9:96:6b in network mk-multinode-551471
	I0806 07:53:59.627517   48849 main.go:141] libmachine: (multinode-551471) Calling .GetSSHPort
	I0806 07:53:59.627673   48849 main.go:141] libmachine: (multinode-551471) Calling .GetSSHKeyPath
	I0806 07:53:59.627820   48849 main.go:141] libmachine: (multinode-551471) Calling .GetSSHUsername
	I0806 07:53:59.627948   48849 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/multinode-551471/id_rsa Username:docker}
	I0806 07:53:59.707776   48849 ssh_runner.go:195] Run: systemctl --version
	I0806 07:53:59.714731   48849 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0806 07:53:59.729557   48849 kubeconfig.go:125] found "multinode-551471" server: "https://192.168.39.186:8443"
	I0806 07:53:59.729584   48849 api_server.go:166] Checking apiserver status ...
	I0806 07:53:59.729614   48849 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 07:53:59.743271   48849 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1130/cgroup
	W0806 07:53:59.752803   48849 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1130/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0806 07:53:59.752876   48849 ssh_runner.go:195] Run: ls
	I0806 07:53:59.757706   48849 api_server.go:253] Checking apiserver healthz at https://192.168.39.186:8443/healthz ...
	I0806 07:53:59.762997   48849 api_server.go:279] https://192.168.39.186:8443/healthz returned 200:
	ok
	I0806 07:53:59.763027   48849 status.go:422] multinode-551471 apiserver status = Running (err=<nil>)
	I0806 07:53:59.763039   48849 status.go:257] multinode-551471 status: &{Name:multinode-551471 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0806 07:53:59.763059   48849 status.go:255] checking status of multinode-551471-m02 ...
	I0806 07:53:59.763403   48849 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:53:59.763448   48849 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:53:59.778643   48849 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34241
	I0806 07:53:59.779056   48849 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:53:59.779560   48849 main.go:141] libmachine: Using API Version  1
	I0806 07:53:59.779581   48849 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:53:59.779904   48849 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:53:59.780124   48849 main.go:141] libmachine: (multinode-551471-m02) Calling .GetState
	I0806 07:53:59.781975   48849 status.go:330] multinode-551471-m02 host status = "Running" (err=<nil>)
	I0806 07:53:59.781991   48849 host.go:66] Checking if "multinode-551471-m02" exists ...
	I0806 07:53:59.782425   48849 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:53:59.782472   48849 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:53:59.797681   48849 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38163
	I0806 07:53:59.798102   48849 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:53:59.798536   48849 main.go:141] libmachine: Using API Version  1
	I0806 07:53:59.798562   48849 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:53:59.798850   48849 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:53:59.799054   48849 main.go:141] libmachine: (multinode-551471-m02) Calling .GetIP
	I0806 07:53:59.801858   48849 main.go:141] libmachine: (multinode-551471-m02) DBG | domain multinode-551471-m02 has defined MAC address 52:54:00:50:e9:ef in network mk-multinode-551471
	I0806 07:53:59.802286   48849 main.go:141] libmachine: (multinode-551471-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:e9:ef", ip: ""} in network mk-multinode-551471: {Iface:virbr1 ExpiryTime:2024-08-06 08:52:17 +0000 UTC Type:0 Mac:52:54:00:50:e9:ef Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:multinode-551471-m02 Clientid:01:52:54:00:50:e9:ef}
	I0806 07:53:59.802305   48849 main.go:141] libmachine: (multinode-551471-m02) DBG | domain multinode-551471-m02 has defined IP address 192.168.39.134 and MAC address 52:54:00:50:e9:ef in network mk-multinode-551471
	I0806 07:53:59.802474   48849 host.go:66] Checking if "multinode-551471-m02" exists ...
	I0806 07:53:59.802773   48849 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:53:59.802807   48849 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:53:59.817810   48849 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38295
	I0806 07:53:59.818321   48849 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:53:59.818898   48849 main.go:141] libmachine: Using API Version  1
	I0806 07:53:59.818925   48849 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:53:59.819186   48849 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:53:59.819343   48849 main.go:141] libmachine: (multinode-551471-m02) Calling .DriverName
	I0806 07:53:59.819564   48849 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0806 07:53:59.819585   48849 main.go:141] libmachine: (multinode-551471-m02) Calling .GetSSHHostname
	I0806 07:53:59.822083   48849 main.go:141] libmachine: (multinode-551471-m02) DBG | domain multinode-551471-m02 has defined MAC address 52:54:00:50:e9:ef in network mk-multinode-551471
	I0806 07:53:59.822544   48849 main.go:141] libmachine: (multinode-551471-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:e9:ef", ip: ""} in network mk-multinode-551471: {Iface:virbr1 ExpiryTime:2024-08-06 08:52:17 +0000 UTC Type:0 Mac:52:54:00:50:e9:ef Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:multinode-551471-m02 Clientid:01:52:54:00:50:e9:ef}
	I0806 07:53:59.822571   48849 main.go:141] libmachine: (multinode-551471-m02) DBG | domain multinode-551471-m02 has defined IP address 192.168.39.134 and MAC address 52:54:00:50:e9:ef in network mk-multinode-551471
	I0806 07:53:59.822724   48849 main.go:141] libmachine: (multinode-551471-m02) Calling .GetSSHPort
	I0806 07:53:59.822894   48849 main.go:141] libmachine: (multinode-551471-m02) Calling .GetSSHKeyPath
	I0806 07:53:59.823036   48849 main.go:141] libmachine: (multinode-551471-m02) Calling .GetSSHUsername
	I0806 07:53:59.823163   48849 sshutil.go:53] new ssh client: &{IP:192.168.39.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-13057/.minikube/machines/multinode-551471-m02/id_rsa Username:docker}
	I0806 07:53:59.903484   48849 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0806 07:53:59.917596   48849 status.go:257] multinode-551471-m02 status: &{Name:multinode-551471-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0806 07:53:59.917638   48849 status.go:255] checking status of multinode-551471-m03 ...
	I0806 07:53:59.918031   48849 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 07:53:59.918099   48849 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 07:53:59.934211   48849 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34865
	I0806 07:53:59.934711   48849 main.go:141] libmachine: () Calling .GetVersion
	I0806 07:53:59.935247   48849 main.go:141] libmachine: Using API Version  1
	I0806 07:53:59.935266   48849 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 07:53:59.935592   48849 main.go:141] libmachine: () Calling .GetMachineName
	I0806 07:53:59.935823   48849 main.go:141] libmachine: (multinode-551471-m03) Calling .GetState
	I0806 07:53:59.937415   48849 status.go:330] multinode-551471-m03 host status = "Stopped" (err=<nil>)
	I0806 07:53:59.937430   48849 status.go:343] host is not running, skipping remaining checks
	I0806 07:53:59.937436   48849 status.go:257] multinode-551471-m03 status: &{Name:multinode-551471-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.36s)
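In this run `minikube status` exits with status 7 once a node is stopped while still printing the per-node table on stdout, and the test treats that exit code as expected rather than as a failure. A minimal sketch of reading both the table and the exit code, assuming the profile name from this run:

package main

import (
	"errors"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	cmd := exec.Command("minikube", "-p", "multinode-551471", "status")
	out, err := cmd.Output()
	code := 0
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		code = exitErr.ExitCode() // non-zero when any node is not fully running
	} else if err != nil {
		log.Fatal(err) // e.g. binary not found
	}
	fmt.Printf("exit status %d\n%s", code, out)
	if code == 7 {
		fmt.Println("at least one node reported Stopped (see table on stdout)")
	}
}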

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (39.83s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-551471 node start m03 -v=7 --alsologtostderr
E0806 07:54:13.203533   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/addons-505457/client.crt: no such file or directory
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-551471 node start m03 -v=7 --alsologtostderr: (39.207646482s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-551471 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (39.83s)

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (2.47s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-551471 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-551471 node delete m03: (1.940438753s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-551471 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.47s)

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (185.18s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-551471 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0806 08:04:13.203272   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/addons-505457/client.crt: no such file or directory
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-551471 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m4.660837927s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-551471 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (185.18s)

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (46.04s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-551471
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-551471-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-551471-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (62.812392ms)

                                                
                                                
-- stdout --
	* [multinode-551471-m02] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19370
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19370-13057/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19370-13057/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-551471-m02' is duplicated with machine name 'multinode-551471-m02' in profile 'multinode-551471'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-551471-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-551471-m03 --driver=kvm2  --container-runtime=crio: (44.729164764s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-551471
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-551471: exit status 80 (207.828162ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-551471 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-551471-m03 already exists in multinode-551471-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-551471-m03
E0806 08:06:21.825026   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/functional-811691/client.crt: no such file or directory
--- PASS: TestMultiNode/serial/ValidateNameConflict (46.04s)

                                                
                                    
x
+
TestScheduledStopUnix (114.63s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-071544 --memory=2048 --driver=kvm2  --container-runtime=crio
E0806 08:11:21.825310   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/functional-811691/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-071544 --memory=2048 --driver=kvm2  --container-runtime=crio: (43.054767242s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-071544 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-071544 -n scheduled-stop-071544
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-071544 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-071544 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-071544 -n scheduled-stop-071544
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-071544
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-071544 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-071544
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-071544: exit status 7 (64.641415ms)

                                                
                                                
-- stdout --
	scheduled-stop-071544
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-071544 -n scheduled-stop-071544
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-071544 -n scheduled-stop-071544: exit status 7 (63.883932ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-071544" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-071544
--- PASS: TestScheduledStopUnix (114.63s)
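The sequence above schedules a stop, cancels it, reschedules with a 15s delay, and then polls `status --format={{.Host}}` until the host reports Stopped (at which point status exits 7, which the test notes "may be ok"). A sketch of the same schedule-then-poll flow, with the profile name from this run assumed:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
	"time"
)

func main() {
	const profile = "scheduled-stop-071544" // profile name taken from this run
	if out, err := exec.Command("minikube", "stop", "-p", profile,
		"--schedule", "15s").CombinedOutput(); err != nil {
		log.Fatalf("scheduling stop failed: %v\n%s", err, out)
	}
	// Poll the host field until the scheduled stop has taken effect.
	for i := 0; i < 12; i++ {
		out, _ := exec.Command("minikube", "status",
			"--format={{.Host}}", "-p", profile).Output()
		state := strings.TrimSpace(string(out))
		fmt.Println("host:", state)
		if state == "Stopped" {
			return
		}
		time.Sleep(10 * time.Second)
	}
	log.Fatal("host did not reach Stopped within the polling window")
}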

                                                
                                    
x
+
TestRunningBinaryUpgrade (204.07s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.3635382971 start -p running-upgrade-763145 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.3635382971 start -p running-upgrade-763145 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m49.968463738s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-763145 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-763145 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m30.318611018s)
helpers_test.go:175: Cleaning up "running-upgrade-763145" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-763145
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-763145: (1.189191278s)
--- PASS: TestRunningBinaryUpgrade (204.07s)
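The upgrade test above starts a cluster with an older released minikube binary, then runs `start` again on the same profile with the binary under test, upgrading it in place. A sketch of that two-step flow, assuming the old release has already been downloaded to the temporary path captured in the log and reusing the exact flags from this run:

package main

import (
	"log"
	"os/exec"
)

// run invokes a given minikube binary and fails loudly on any error,
// mirroring the upgrade-in-place pattern of the test above.
func run(bin string, args ...string) {
	if out, err := exec.Command(bin, args...).CombinedOutput(); err != nil {
		log.Fatalf("%s %v: %v\n%s", bin, args, err, out)
	}
}

func main() {
	const profile = "running-upgrade-763145" // profile name taken from this run
	// Old release first (path as captured above), then the freshly built binary.
	run("/tmp/minikube-v1.26.0.3635382971", "start", "-p", profile,
		"--memory=2200", "--vm-driver=kvm2", "--container-runtime=crio")
	run("out/minikube-linux-amd64", "start", "-p", profile,
		"--memory=2200", "--alsologtostderr", "-v=1", "--driver=kvm2", "--container-runtime=crio")
}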

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-681138 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-681138 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (76.042175ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-681138] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19370
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19370-13057/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19370-13057/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (120.52s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-681138 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-681138 --driver=kvm2  --container-runtime=crio: (2m0.264298665s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-681138 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (120.52s)

                                                
                                    
x
+
TestNetworkPlugins/group/false (2.74s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-405163 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-405163 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (102.533444ms)

                                                
                                                
-- stdout --
	* [false-405163] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19370
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19370-13057/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19370-13057/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0806 08:14:35.528621   57672 out.go:291] Setting OutFile to fd 1 ...
	I0806 08:14:35.528860   57672 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 08:14:35.528869   57672 out.go:304] Setting ErrFile to fd 2...
	I0806 08:14:35.528873   57672 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 08:14:35.529049   57672 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19370-13057/.minikube/bin
	I0806 08:14:35.529626   57672 out.go:298] Setting JSON to false
	I0806 08:14:35.530555   57672 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":7021,"bootTime":1722925054,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0806 08:14:35.530624   57672 start.go:139] virtualization: kvm guest
	I0806 08:14:35.532966   57672 out.go:177] * [false-405163] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0806 08:14:35.534383   57672 notify.go:220] Checking for updates...
	I0806 08:14:35.534415   57672 out.go:177]   - MINIKUBE_LOCATION=19370
	I0806 08:14:35.536027   57672 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0806 08:14:35.537551   57672 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19370-13057/kubeconfig
	I0806 08:14:35.538750   57672 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19370-13057/.minikube
	I0806 08:14:35.540131   57672 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0806 08:14:35.541532   57672 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0806 08:14:35.543530   57672 config.go:182] Loaded profile config "NoKubernetes-681138": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0806 08:14:35.543649   57672 config.go:182] Loaded profile config "cert-expiration-739400": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0806 08:14:35.543746   57672 config.go:182] Loaded profile config "force-systemd-flag-261275": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0806 08:14:35.543860   57672 driver.go:392] Setting default libvirt URI to qemu:///system
	I0806 08:14:35.582149   57672 out.go:177] * Using the kvm2 driver based on user configuration
	I0806 08:14:35.583419   57672 start.go:297] selected driver: kvm2
	I0806 08:14:35.583438   57672 start.go:901] validating driver "kvm2" against <nil>
	I0806 08:14:35.583453   57672 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0806 08:14:35.585535   57672 out.go:177] 
	W0806 08:14:35.586887   57672 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0806 08:14:35.588161   57672 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-405163 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-405163

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-405163

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-405163

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-405163

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-405163

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-405163

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-405163

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-405163

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-405163

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-405163

>>> host: /etc/nsswitch.conf:
* Profile "false-405163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-405163"

>>> host: /etc/hosts:
* Profile "false-405163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-405163"

>>> host: /etc/resolv.conf:
* Profile "false-405163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-405163"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-405163

>>> host: crictl pods:
* Profile "false-405163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-405163"

>>> host: crictl containers:
* Profile "false-405163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-405163"

>>> k8s: describe netcat deployment:
error: context "false-405163" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-405163" does not exist

>>> k8s: netcat logs:
error: context "false-405163" does not exist

>>> k8s: describe coredns deployment:
error: context "false-405163" does not exist

>>> k8s: describe coredns pods:
error: context "false-405163" does not exist

>>> k8s: coredns logs:
error: context "false-405163" does not exist

>>> k8s: describe api server pod(s):
error: context "false-405163" does not exist

>>> k8s: api server logs:
error: context "false-405163" does not exist

>>> host: /etc/cni:
* Profile "false-405163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-405163"

>>> host: ip a s:
* Profile "false-405163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-405163"

>>> host: ip r s:
* Profile "false-405163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-405163"

>>> host: iptables-save:
* Profile "false-405163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-405163"

>>> host: iptables table nat:
* Profile "false-405163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-405163"

>>> k8s: describe kube-proxy daemon set:
error: context "false-405163" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-405163" does not exist

>>> k8s: kube-proxy logs:
error: context "false-405163" does not exist

>>> host: kubelet daemon status:
* Profile "false-405163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-405163"

>>> host: kubelet daemon config:
* Profile "false-405163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-405163"

>>> k8s: kubelet logs:
* Profile "false-405163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-405163"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-405163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-405163"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-405163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-405163"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19370-13057/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 06 Aug 2024 08:14:32 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: cluster_info
    server: https://192.168.61.227:8443
  name: cert-expiration-739400
contexts:
- context:
    cluster: cert-expiration-739400
    extensions:
    - extension:
        last-update: Tue, 06 Aug 2024 08:14:32 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: context_info
    namespace: default
    user: cert-expiration-739400
  name: cert-expiration-739400
current-context: cert-expiration-739400
kind: Config
preferences: {}
users:
- name: cert-expiration-739400
  user:
    client-certificate: /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/cert-expiration-739400/client.crt
    client-key: /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/cert-expiration-739400/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-405163

>>> host: docker daemon status:
* Profile "false-405163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-405163"

>>> host: docker daemon config:
* Profile "false-405163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-405163"

>>> host: /etc/docker/daemon.json:
* Profile "false-405163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-405163"

>>> host: docker system info:
* Profile "false-405163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-405163"

>>> host: cri-docker daemon status:
* Profile "false-405163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-405163"

>>> host: cri-docker daemon config:
* Profile "false-405163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-405163"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-405163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-405163"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-405163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-405163"

>>> host: cri-dockerd version:
* Profile "false-405163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-405163"

>>> host: containerd daemon status:
* Profile "false-405163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-405163"

>>> host: containerd daemon config:
* Profile "false-405163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-405163"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-405163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-405163"

>>> host: /etc/containerd/config.toml:
* Profile "false-405163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-405163"

>>> host: containerd config dump:
* Profile "false-405163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-405163"

>>> host: crio daemon status:
* Profile "false-405163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-405163"

>>> host: crio daemon config:
* Profile "false-405163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-405163"

>>> host: /etc/crio:
* Profile "false-405163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-405163"

>>> host: crio config:
* Profile "false-405163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-405163"

----------------------- debugLogs end: false-405163 [took: 2.501117657s] --------------------------------
helpers_test.go:175: Cleaning up "false-405163" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-405163
--- PASS: TestNetworkPlugins/group/false (2.74s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (42.92s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-681138 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-681138 --no-kubernetes --driver=kvm2  --container-runtime=crio: (41.636312926s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-681138 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-681138 status -o json: exit status 2 (255.320326ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-681138","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-681138
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-681138: (1.025517668s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (42.92s)

                                                
                                    
x
+
TestNoKubernetes/serial/Start (31.91s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-681138 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-681138 --no-kubernetes --driver=kvm2  --container-runtime=crio: (31.905203266s)
--- PASS: TestNoKubernetes/serial/Start (31.91s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-681138 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-681138 "sudo systemctl is-active --quiet service kubelet": exit status 1 (196.658853ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.20s)

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (0.8s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.80s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (2.48s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-681138
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-681138: (2.482613591s)
--- PASS: TestNoKubernetes/serial/Stop (2.48s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (60.64s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-681138 --driver=kvm2  --container-runtime=crio
E0806 08:16:21.824906   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/functional-811691/client.crt: no such file or directory
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-681138 --driver=kvm2  --container-runtime=crio: (1m0.643187133s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (60.64s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.19s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-681138 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-681138 "sudo systemctl is-active --quiet service kubelet": exit status 1 (191.475613ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.19s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (2.59s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.59s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (107.86s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.2394440559 start -p stopped-upgrade-909164 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.2394440559 start -p stopped-upgrade-909164 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m1.863176886s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.2394440559 -p stopped-upgrade-909164 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.2394440559 -p stopped-upgrade-909164 stop: (2.138031543s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-909164 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-909164 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (43.861708453s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (107.86s)

                                                
                                    
x
+
TestPause/serial/Start (75.5s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-728784 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-728784 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m15.496707797s)
--- PASS: TestPause/serial/Start (75.50s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (82.82s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-405163 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-405163 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m22.821650834s)
--- PASS: TestNetworkPlugins/group/auto/Start (82.82s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (0.87s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-909164
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.87s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (104.75s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-405163 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
E0806 08:19:13.202822   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/addons-505457/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-405163 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m44.746781683s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (104.75s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (127.9s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-728784 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-728784 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (2m7.878474033s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (127.90s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-405163 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (10.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-405163 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-ckcff" [4480567e-272a-4f50-a7d0-f3a68ed1e913] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-ckcff" [4480567e-272a-4f50-a7d0-f3a68ed1e913] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.004600676s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-405163 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-405163 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-405163 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (156.99s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-405163 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-405163 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (2m36.993643845s)
--- PASS: TestNetworkPlugins/group/calico/Start (156.99s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-tsjkt" [c7d60312-301b-4201-ad20-4bf17c350c67] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004247768s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-405163 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (11.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-405163 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-xp7tr" [ebdc6080-29c1-42d0-b328-ce8db5018438] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-xp7tr" [ebdc6080-29c1-42d0-b328-ce8db5018438] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.003696052s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-405163 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-405163 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-405163 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (134.49s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-405163 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-405163 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (2m14.489602963s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (134.49s)

                                                
                                    
x
+
TestPause/serial/Pause (0.78s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-728784 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.78s)

                                                
                                    
x
+
TestPause/serial/VerifyStatus (0.27s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-728784 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-728784 --output=json --layout=cluster: exit status 2 (267.825185ms)

                                                
                                                
-- stdout --
	{"Name":"pause-728784","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-728784","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.27s)

                                                
                                    
x
+
TestPause/serial/Unpause (0.88s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-728784 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.88s)

                                                
                                    
x
+
TestPause/serial/PauseAgain (1.13s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-728784 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-amd64 pause -p pause-728784 --alsologtostderr -v=5: (1.126404785s)
--- PASS: TestPause/serial/PauseAgain (1.13s)

                                                
                                    
x
+
TestPause/serial/DeletePaused (1.08s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-728784 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-728784 --alsologtostderr -v=5: (1.079443454s)
--- PASS: TestPause/serial/DeletePaused (1.08s)

                                                
                                    
x
+
TestPause/serial/VerifyDeletedResources (1.71s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (1.71235335s)
--- PASS: TestPause/serial/VerifyDeletedResources (1.71s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (142.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-405163 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-405163 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (2m22.139266398s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (142.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-b2mgn" [5064842a-a460-4023-83ab-138a9298bfb6] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.00681269s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-405163 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (11.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-405163 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-zgg5p" [f9e5eca9-481e-4435-8590-953236a2f33a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-zgg5p" [f9e5eca9-481e-4435-8590-953236a2f33a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.004077444s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (85.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-405163 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-405163 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m25.308144581s)
--- PASS: TestNetworkPlugins/group/flannel/Start (85.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-405163 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-405163 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-405163 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-405163 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (10.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-405163 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-ttrxh" [25e720da-bc9a-459c-848a-c9e8c9038dac] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-ttrxh" [25e720da-bc9a-459c-848a-c9e8c9038dac] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.004579469s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-405163 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-405163 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-405163 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (66.73s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-405163 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-405163 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m6.727702788s)
--- PASS: TestNetworkPlugins/group/bridge/Start (66.73s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-405163 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-405163 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-s5fqx" [c38883bc-8e20-4654-a982-9ec7d63d79db] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-s5fqx" [c38883bc-8e20-4654-a982-9ec7d63d79db] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.004123409s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-405163 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-405163 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-405163 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.18s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (78.08s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-703340 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-rc.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-703340 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-rc.0: (1m18.084497452s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (78.08s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-t9w2p" [bc63f003-3c6e-42db-8a8b-61cf2dbcaefe] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.006864227s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-405163 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (11.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-405163 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-xkfrb" [c0167e0b-43bf-44e8-a3dd-4ff5e9df721c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-xkfrb" [c0167e0b-43bf-44e8-a3dd-4ff5e9df721c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.004887181s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-405163 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (11.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-405163 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-gt7wj" [dfc2dd7d-e6e9-4e17-80c2-770a854b1042] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-gt7wj" [dfc2dd7d-e6e9-4e17-80c2-770a854b1042] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.003904752s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-405163 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-405163 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-405163 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-405163 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-405163 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-405163 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.14s)
E0806 08:55:01.379989   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/flannel-405163/client.crt: no such file or directory
E0806 08:55:04.748595   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/bridge-405163/client.crt: no such file or directory

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (60.21s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-324808 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3
E0806 08:25:36.254694   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/addons-505457/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-324808 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3: (1m0.207346125s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (60.21s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (117.22s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-990751 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3
E0806 08:25:41.077860   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/auto-405163/client.crt: no such file or directory
E0806 08:25:57.500558   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/kindnet-405163/client.crt: no such file or directory
E0806 08:25:57.506685   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/kindnet-405163/client.crt: no such file or directory
E0806 08:25:57.517010   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/kindnet-405163/client.crt: no such file or directory
E0806 08:25:57.537370   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/kindnet-405163/client.crt: no such file or directory
E0806 08:25:57.577671   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/kindnet-405163/client.crt: no such file or directory
E0806 08:25:57.658632   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/kindnet-405163/client.crt: no such file or directory
E0806 08:25:57.818868   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/kindnet-405163/client.crt: no such file or directory
E0806 08:25:58.139476   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/kindnet-405163/client.crt: no such file or directory
E0806 08:25:58.779878   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/kindnet-405163/client.crt: no such file or directory
E0806 08:26:00.060359   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/kindnet-405163/client.crt: no such file or directory
E0806 08:26:01.558752   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/auto-405163/client.crt: no such file or directory
E0806 08:26:02.621356   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/kindnet-405163/client.crt: no such file or directory
E0806 08:26:07.742454   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/kindnet-405163/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-990751 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3: (1m57.220482683s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (117.22s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (11.33s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-703340 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [826b3f8f-7900-4496-b9ac-cff2dcf2d599] Pending
helpers_test.go:344: "busybox" [826b3f8f-7900-4496-b9ac-cff2dcf2d599] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [826b3f8f-7900-4496-b9ac-cff2dcf2d599] Running
E0806 08:26:17.982917   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/kindnet-405163/client.crt: no such file or directory
E0806 08:26:21.824446   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/functional-811691/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 11.005253235s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-703340 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (11.33s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.21s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-703340 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-703340 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.121344001s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-703340 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.21s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (9.3s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-324808 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [41718f5d-fd79-4af8-9464-04a00be13504] Pending
helpers_test.go:344: "busybox" [41718f5d-fd79-4af8-9464-04a00be13504] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [41718f5d-fd79-4af8-9464-04a00be13504] Running
E0806 08:26:38.463524   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/kindnet-405163/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.003902895s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-324808 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.30s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.96s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-324808 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0806 08:26:42.519727   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/auto-405163/client.crt: no such file or directory
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-324808 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.96s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.32s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-990751 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [0c9c181e-0ac8-4463-bcdd-90536fa7ace0] Pending
helpers_test.go:344: "busybox" [0c9c181e-0ac8-4463-bcdd-90536fa7ace0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [0c9c181e-0ac8-4463-bcdd-90536fa7ace0] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 11.00358858s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-990751 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.32s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.95s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-990751 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-990751 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.95s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (687.05s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-703340 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-rc.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-703340 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-rc.0: (11m26.803400888s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-703340 -n no-preload-703340
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (687.05s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (561.17s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-324808 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3
E0806 08:29:26.229845   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/custom-flannel-405163/client.crt: no such file or directory
E0806 08:29:26.450334   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/enable-default-cni-405163/client.crt: no such file or directory
E0806 08:29:26.455632   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/enable-default-cni-405163/client.crt: no such file or directory
E0806 08:29:26.465887   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/enable-default-cni-405163/client.crt: no such file or directory
E0806 08:29:26.486178   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/enable-default-cni-405163/client.crt: no such file or directory
E0806 08:29:26.526461   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/enable-default-cni-405163/client.crt: no such file or directory
E0806 08:29:26.606822   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/enable-default-cni-405163/client.crt: no such file or directory
E0806 08:29:26.767733   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/enable-default-cni-405163/client.crt: no such file or directory
E0806 08:29:27.088479   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/enable-default-cni-405163/client.crt: no such file or directory
E0806 08:29:27.729429   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/enable-default-cni-405163/client.crt: no such file or directory
E0806 08:29:29.010027   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/enable-default-cni-405163/client.crt: no such file or directory
E0806 08:29:31.570259   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/enable-default-cni-405163/client.crt: no such file or directory
E0806 08:29:36.691170   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/enable-default-cni-405163/client.crt: no such file or directory
E0806 08:29:44.964038   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/calico-405163/client.crt: no such file or directory
E0806 08:29:46.932119   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/enable-default-cni-405163/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-324808 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3: (9m20.917385191s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-324808 -n embed-certs-324808
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (561.17s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (516.32s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-990751 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3
E0806 08:30:20.597557   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/auto-405163/client.crt: no such file or directory
E0806 08:30:21.862335   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/flannel-405163/client.crt: no such file or directory
E0806 08:30:25.229276   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/bridge-405163/client.crt: no such file or directory
E0806 08:30:42.342789   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/flannel-405163/client.crt: no such file or directory
E0806 08:30:45.710219   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/bridge-405163/client.crt: no such file or directory
E0806 08:30:48.281286   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/auto-405163/client.crt: no such file or directory
E0806 08:30:48.373603   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/enable-default-cni-405163/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-990751 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3: (8m36.076665254s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-990751 -n default-k8s-diff-port-990751
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (516.32s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (4.29s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-409903 --alsologtostderr -v=3
E0806 08:30:57.500575   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/kindnet-405163/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-409903 --alsologtostderr -v=3: (4.28737429s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (4.29s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-409903 -n old-k8s-version-409903
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-409903 -n old-k8s-version-409903: exit status 7 (61.06796ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-409903 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (49.02s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-384471 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-rc.0
E0806 08:54:26.449824   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/enable-default-cni-405163/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-384471 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-rc.0: (49.016080974s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (49.02s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.16s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-384471 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-384471 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.156996416s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.16s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (10.38s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-384471 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-384471 --alsologtostderr -v=3: (10.3785995s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (10.38s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-384471 -n newest-cni-384471
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-384471 -n newest-cni-384471: exit status 7 (64.138692ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-384471 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (36.68s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-384471 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-rc.0
E0806 08:55:20.597950   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/auto-405163/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-384471 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-rc.0: (36.423048277s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-384471 -n newest-cni-384471
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (36.68s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-384471 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240719-e7903573
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.22s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (2.38s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-384471 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-384471 -n newest-cni-384471
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-384471 -n newest-cni-384471: exit status 2 (234.489773ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-384471 -n newest-cni-384471
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-384471 -n newest-cni-384471: exit status 2 (236.766237ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-384471 --alsologtostderr -v=1
E0806 08:55:57.500958   20246 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/kindnet-405163/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-384471 -n newest-cni-384471
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-384471 -n newest-cni-384471
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.38s)

                                                
                                    

Test skip (40/326)

Order skipped test Duration
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.30.3/cached-images 0
15 TestDownloadOnly/v1.30.3/binaries 0
16 TestDownloadOnly/v1.30.3/kubectl 0
23 TestDownloadOnly/v1.31.0-rc.0/cached-images 0
24 TestDownloadOnly/v1.31.0-rc.0/binaries 0
25 TestDownloadOnly/v1.31.0-rc.0/kubectl 0
29 TestDownloadOnlyKic 0
38 TestAddons/serial/Volcano 0
47 TestAddons/parallel/Olm 0
57 TestDockerFlags 0
60 TestDockerEnvContainerd 0
62 TestHyperKitDriverInstallOrUpdate 0
63 TestHyperkitDriverSkipUpgrade 0
114 TestFunctional/parallel/DockerEnv 0
115 TestFunctional/parallel/PodmanEnv 0
135 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
136 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
137 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
138 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
139 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
140 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
141 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
142 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
163 TestGvisorAddon 0
185 TestImageBuild 0
212 TestKicCustomNetwork 0
213 TestKicExistingNetwork 0
214 TestKicCustomSubnet 0
215 TestKicStaticIP 0
247 TestChangeNoneUser 0
250 TestScheduledStopWindows 0
252 TestSkaffold 0
254 TestInsufficientStorage 0
258 TestMissingContainerUpgrade 0
264 TestNetworkPlugins/group/kubenet 2.65
272 TestNetworkPlugins/group/cilium 3.3
289 TestStartStop/group/disable-driver-mounts 0.16
x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.3/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.3/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.30.3/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-rc.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-rc.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0-rc.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-rc.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-rc.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0-rc.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-rc.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-rc.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.0-rc.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/serial/Volcano (0s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:879: skipping: crio not supported
--- SKIP: TestAddons/serial/Volcano (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only runs with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (2.65s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the crio container runtime requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-405163 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-405163

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-405163

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-405163

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-405163

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-405163

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-405163

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-405163

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-405163

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-405163

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-405163

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-405163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-405163"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-405163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-405163"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-405163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-405163"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods:
Error in configuration: context was not found for specified context: kubenet-405163

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-405163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-405163"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-405163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-405163"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-405163" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-405163" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-405163" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-405163" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-405163" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-405163" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-405163" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-405163" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-405163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-405163"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-405163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-405163"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-405163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-405163"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-405163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-405163"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-405163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-405163"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-405163" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-405163" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-405163" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-405163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-405163"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-405163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-405163"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-405163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-405163"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-405163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-405163"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-405163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-405163"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
certificate-authority: /home/jenkins/minikube-integration/19370-13057/.minikube/ca.crt
extensions:
- extension:
last-update: Tue, 06 Aug 2024 08:14:32 UTC
provider: minikube.sigs.k8s.io
version: v1.33.1
name: cluster_info
server: https://192.168.61.227:8443
name: cert-expiration-739400
contexts:
- context:
cluster: cert-expiration-739400
extensions:
- extension:
last-update: Tue, 06 Aug 2024 08:14:32 UTC
provider: minikube.sigs.k8s.io
version: v1.33.1
name: context_info
namespace: default
user: cert-expiration-739400
name: cert-expiration-739400
current-context: cert-expiration-739400
kind: Config
preferences: {}
users:
- name: cert-expiration-739400
user:
client-certificate: /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/cert-expiration-739400/client.crt
client-key: /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/cert-expiration-739400/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-405163

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-405163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-405163"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-405163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-405163"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-405163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-405163"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-405163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-405163"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-405163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-405163"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-405163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-405163"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-405163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-405163"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-405163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-405163"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-405163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-405163"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-405163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-405163"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-405163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-405163"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-405163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-405163"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-405163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-405163"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-405163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-405163"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-405163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-405163"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-405163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-405163"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-405163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-405163"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-405163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-405163"

                                                
                                                
----------------------- debugLogs end: kubenet-405163 [took: 2.512825076s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-405163" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-405163
--- SKIP: TestNetworkPlugins/group/kubenet (2.65s)

                                                
                                    
TestNetworkPlugins/group/cilium (3.3s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626:
----------------------- debugLogs start: cilium-405163 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-405163

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-405163

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-405163

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-405163

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-405163

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-405163

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-405163

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-405163

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-405163

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-405163

>>> host: /etc/nsswitch.conf:
* Profile "cilium-405163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-405163"

>>> host: /etc/hosts:
* Profile "cilium-405163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-405163"

>>> host: /etc/resolv.conf:
* Profile "cilium-405163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-405163"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-405163

>>> host: crictl pods:
* Profile "cilium-405163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-405163"

>>> host: crictl containers:
* Profile "cilium-405163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-405163"

>>> k8s: describe netcat deployment:
error: context "cilium-405163" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-405163" does not exist

>>> k8s: netcat logs:
error: context "cilium-405163" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-405163" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-405163" does not exist

>>> k8s: coredns logs:
error: context "cilium-405163" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-405163" does not exist

>>> k8s: api server logs:
error: context "cilium-405163" does not exist

>>> host: /etc/cni:
* Profile "cilium-405163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-405163"

>>> host: ip a s:
* Profile "cilium-405163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-405163"

>>> host: ip r s:
* Profile "cilium-405163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-405163"

>>> host: iptables-save:
* Profile "cilium-405163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-405163"

>>> host: iptables table nat:
* Profile "cilium-405163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-405163"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-405163

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-405163

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-405163" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-405163" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-405163

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-405163

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-405163" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-405163" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-405163" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-405163" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-405163" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-405163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-405163"

>>> host: kubelet daemon config:
* Profile "cilium-405163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-405163"

>>> k8s: kubelet logs:
* Profile "cilium-405163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-405163"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-405163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-405163"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-405163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-405163"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19370-13057/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 06 Aug 2024 08:14:32 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: cluster_info
    server: https://192.168.61.227:8443
  name: cert-expiration-739400
contexts:
- context:
    cluster: cert-expiration-739400
    extensions:
    - extension:
        last-update: Tue, 06 Aug 2024 08:14:32 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: context_info
    namespace: default
    user: cert-expiration-739400
  name: cert-expiration-739400
current-context: cert-expiration-739400
kind: Config
preferences: {}
users:
- name: cert-expiration-739400
  user:
    client-certificate: /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/cert-expiration-739400/client.crt
    client-key: /home/jenkins/minikube-integration/19370-13057/.minikube/profiles/cert-expiration-739400/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-405163

>>> host: docker daemon status:
* Profile "cilium-405163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-405163"

>>> host: docker daemon config:
* Profile "cilium-405163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-405163"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-405163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-405163"

>>> host: docker system info:
* Profile "cilium-405163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-405163"

>>> host: cri-docker daemon status:
* Profile "cilium-405163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-405163"

>>> host: cri-docker daemon config:
* Profile "cilium-405163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-405163"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-405163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-405163"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-405163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-405163"

>>> host: cri-dockerd version:
* Profile "cilium-405163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-405163"

>>> host: containerd daemon status:
* Profile "cilium-405163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-405163"

>>> host: containerd daemon config:
* Profile "cilium-405163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-405163"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-405163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-405163"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-405163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-405163"

>>> host: containerd config dump:
* Profile "cilium-405163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-405163"

>>> host: crio daemon status:
* Profile "cilium-405163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-405163"

>>> host: crio daemon config:
* Profile "cilium-405163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-405163"

>>> host: /etc/crio:
* Profile "cilium-405163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-405163"

>>> host: crio config:
* Profile "cilium-405163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-405163"

----------------------- debugLogs end: cilium-405163 [took: 3.13728346s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-405163" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-405163
--- SKIP: TestNetworkPlugins/group/cilium (3.30s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.16s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-607762" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-607762
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)

                                                
                                    